hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
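The header rows above define the table schema: per-file metadata (repo paths, licenses, star/issue/fork event windows) alongside `qsc_*` quality signals. Below is a minimal sketch of how such a table might be consumed with pandas — the column names come from the schema, but the toy values and the 0.5 threshold are illustrative assumptions, not part of the dataset:

```python
import pandas as pd

# A toy frame with a few of the schema's columns (values are illustrative).
df = pd.DataFrame({
    "hexsha": ["c95be8ee", "c95c9870", "a3413417"],
    "lang": ["Python", "Python", "Python"],
    "size": [199, 12032, 109],
    "qsc_code_frac_chars_dupe_10grams_quality_signal": [0.0, 0.65, 0.0],
})

# Example filter: drop files whose characters are dominated by
# duplicated 10-grams (a common near-duplicate quality signal).
keep = df[df["qsc_code_frac_chars_dupe_10grams_quality_signal"] < 0.5]
print(keep["hexsha"].tolist())  # → ['c95be8ee', 'a3413417']
```

In the real table the same filter would run against the full set of `qsc_*` columns; combining several signals (duplicate n-gram fractions, AST validity, whitespace fraction) is a typical way to score file quality.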
c95be8eeeddc0694e1a154c8db192b8dc45abd1e | 199 | py | Python | restler/unit_tests/log_baseline_test_files/unit_test_server_auth.py | mkleshchenok/restler-fuzzer | 1bd7bc68a6c4de997e9fda9a9db5ffb0504b864c | [
"MIT"
] | 1,539 | 2020-11-16T19:20:55.000Z | 2022-03-30T16:36:49.000Z | restler/unit_tests/log_baseline_test_files/unit_test_server_auth.py | mkleshchenok/restler-fuzzer | 1bd7bc68a6c4de997e9fda9a9db5ffb0504b864c | [
"MIT"
] | 282 | 2020-11-17T04:53:38.000Z | 2022-03-31T13:16:25.000Z | restler/unit_tests/log_baseline_test_files/unit_test_server_auth.py | mkleshchenok/restler-fuzzer | 1bd7bc68a6c4de997e9fda9a9db5ffb0504b864c | [
"MIT"
] | 171 | 2020-11-16T21:55:59.000Z | 2022-03-28T12:56:26.000Z | # Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
print("{'user1':{}, 'user2':{}}")
print("Authorization: valid_unit_test_token")
print("Authorization: shadow_unit_test_token") | 39.8 | 46 | 0.753769 | 24 | 199 | 6 | 0.75 | 0.25 | 0.180556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010929 | 0.080402 | 199 | 5 | 46 | 39.8 | 0.775956 | 0.341709 | 0 | 0 | 0 | 0 | 0.751938 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
c95c9870f690e05f7b05f115efc52b57f751b4d0 | 12,032 | py | Python | test/unit/test_filters.py | RahmanTeamDevelopment/DeNovoFilter | f433d7efc0818386dea2fb944e7d8aceb331cb05 | [
"MIT"
] | null | null | null | test/unit/test_filters.py | RahmanTeamDevelopment/DeNovoFilter | f433d7efc0818386dea2fb944e7d8aceb331cb05 | [
"MIT"
] | null | null | null | test/unit/test_filters.py | RahmanTeamDevelopment/DeNovoFilter | f433d7efc0818386dea2fb944e7d8aceb331cb05 | [
"MIT"
] | null | null | null | """Unit tests for the filters module"""
from unittest import TestCase
from main import filters
class TestFilters(TestCase):
def setUp(self):
self.filt = filters.Filters(False)
def assert_fails_check(self, errmsg='', fun=lambda: None, argv=None):
with self.assertRaises(ValueError) as cm:
fun(*argv)
err = cm.exception
self.assertEqual(str(err), errmsg)
def assert_passes_check(self, fun=lambda: None, argv=None):
try:
fun(*argv)
except ValueError:
            self.fail("{}() raised ValueError unexpectedly".format(fun.__name__))
def test_check_if_multiallelic(self):
multi_calls = [('7', 41231134, 'GGG'), ('12', 12345678, 'AC')]
self.assert_fails_check(
errmsg='multi_allele_call',
fun=self.filt.check_if_multiallelic,
argv=[
{'REMOVE_MULTI_ALLELE_CALLS': True},
{'multiallelic_calls': multi_calls},
('12', 12345678, 'AC', 'T')
]
)
self.assert_passes_check(
fun=self.filt.check_if_multiallelic,
argv=[
{'REMOVE_MULTI_ALLELE_CALLS': True},
{'multiallelic_calls': multi_calls},
('2', 12345678, 'AC', 'T')
]
)
self.assert_passes_check(
fun=self.filt.check_if_multiallelic,
argv=[
{'REMOVE_MULTI_ALLELE_CALLS': True},
{'multiallelic_calls': []},
('12', 12345678, 'AC', 'T')
]
)
def test_check_if_called_in_parent(self):
data = {
'mother_var': {
('12', 12345678, 'AC', 'T'): {},
('7', 41231134, 'TTT', 'A'): {},
},
'father_var': {
('5', 4779121, 'G', 'C'): {}
}
}
self.assert_fails_check(
errmsg='called_in_parent',
fun=self.filt.check_if_called_in_parent,
argv=[
data,
('12', 12345678, 'AC', 'T')
]
)
self.assert_fails_check(
errmsg='called_in_parent',
fun=self.filt.check_if_called_in_parent,
argv=[
data,
('5', 4779121, 'G', 'C')
]
)
self.assert_passes_check(
fun=self.filt.check_if_called_in_parent,
argv=[
data,
('9', 24681012, 'A', 'T')
]
)
def test_check_if_low_quality(self):
self.assert_fails_check(
errmsg='low_quality',
fun=self.filt.check_if_low_quality,
argv=[{'quality': 'low'}]
)
self.assert_passes_check(
fun=self.filt.check_if_low_quality,
argv=[{'quality': 'else'}]
)
def test_check_if_outside_splice_side_boundary(self):
self.assert_passes_check(
fun=self.filt.check_if_outside_splice_side_boundary,
argv=[
{'SPLICE_SITE_BOUNDARY': 10},
{'csn': 'c.111A>C_p.='}
]
)
# TODO: Add more asserts here!
self.fail('Need to add more asserts here')
def test_check_tr_in_child(self):
self.assert_fails_check(
errmsg='low_child_tr ({})'.format(1),
fun=self.filt.check_tr_in_child,
argv=[
{'CHILD_MIN_TR': 3},
{'TR': 1}
]
)
self.assert_passes_check(
fun=self.filt.check_tr_in_child,
argv=[
{'CHILD_MIN_TR': 3},
{'TR': 4}
]
)
self.assert_passes_check(
fun=self.filt.check_tr_in_child,
argv=[
{'CHILD_MIN_TR': 3},
{'TR': 3}
]
)
def test_check_tc_in_child(self):
self.assert_fails_check(
errmsg='low_child_tc ({})'.format(11),
fun=self.filt.check_tc_in_child,
argv=[
{'CHILD_MIN_TC': 15},
{'TC': 11}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_in_child,
argv=[
{'CHILD_MIN_TC': 15},
{'TC': 18}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_in_child,
argv=[
{'CHILD_MIN_TC': 15},
{'TC': 15}
]
)
def test_check_tr_per_tc_in_child(self):
self.assert_fails_check(
errmsg='low_child_tr_per_tc ({})'.format(0.1),
fun=self.filt.check_tr_per_tc_in_child,
argv=[
{'CHILD_MIN_TR_PER_TC': 0.2},
{'TR': 1, 'TC': 10}
]
)
self.assert_passes_check(
fun=self.filt.check_tr_per_tc_in_child,
argv=[
{'CHILD_MIN_TR_PER_TC': 0.2},
{'TR': 3, 'TC': 10}
]
)
self.assert_passes_check(
fun=self.filt.check_tr_per_tc_in_child,
argv=[
{'CHILD_MIN_TR_PER_TC': 0.2},
{'TR': 2, 'TC': 10}
]
)
def test_check_control_frequency(self):
self.assert_fails_check(
errmsg='high_control_frequency ({})'.format(0.15),
fun=self.filt.check_control_frequency,
argv=[
{'CONTROL_MAX_FREQUENCY': 0.1},
0.15
]
)
self.assert_passes_check(
fun=self.filt.check_control_frequency,
argv=[
{'CONTROL_MAX_FREQUENCY': 0.1},
0.09
]
)
self.assert_passes_check(
fun=self.filt.check_control_frequency,
argv=[
{'CONTROL_MAX_FREQUENCY': 0.1},
0.1
]
)
def test_check_gnomad_exomes_frequency(self):
self.assert_fails_check(
errmsg='high_gnomad_exomes_frequency ({})'.format(0.15),
fun=self.filt.check_gnomad_exomes_frequency,
argv=[
{'GNOMAD_MAX_FREQUENCY': 0.1},
0.15
]
)
self.assert_passes_check(
fun=self.filt.check_gnomad_exomes_frequency,
argv=[
{'GNOMAD_MAX_FREQUENCY': 0.1},
0.09
]
)
self.assert_passes_check(
fun=self.filt.check_gnomad_exomes_frequency,
argv=[
{'GNOMAD_MAX_FREQUENCY': 0.1},
0.1
]
)
def test_check_gnomad_genomes_frequency(self):
self.assert_fails_check(
errmsg='high_gnomad_genomes_frequency ({})'.format(0.15),
fun=self.filt.check_gnomad_genomes_frequency,
argv=[
{'GNOMAD_MAX_FREQUENCY': 0.1},
0.15
]
)
self.assert_passes_check(
fun=self.filt.check_gnomad_genomes_frequency,
argv=[
{'GNOMAD_MAX_FREQUENCY': 0.1},
0.09
]
)
self.assert_passes_check(
fun=self.filt.check_gnomad_genomes_frequency,
argv=[
{'GNOMAD_MAX_FREQUENCY': 0.1},
0.1
]
)
def test_check_tc_and_tr_in_mother(self):
self.assert_fails_check(
errmsg='low_mother_tc ({})'.format(4),
fun=self.filt.check_tc_and_tr_in_mother,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'mother_tc': 4, 'mother_tr': 0}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_mother,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'mother_tc': 7, 'mother_tr': 0}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_mother,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'mother_tc': 6, 'mother_tr': 0}
]
)
self.assert_fails_check(
errmsg='high_mother_tr ({})'.format(2),
fun=self.filt.check_tc_and_tr_in_mother,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'mother_tc': 7, 'mother_tr': 2}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_mother,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'mother_tc': 7, 'mother_tr': 0}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_mother,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'mother_tc': 7, 'mother_tr': 1}
]
)
def test_check_tc_and_tr_in_father(self):
self.assert_fails_check(
errmsg='low_father_tc ({})'.format(4),
fun=self.filt.check_tc_and_tr_in_father,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'father_tc': 4, 'father_tr': 0}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_father,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'father_tc': 7, 'father_tr': 0}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_father,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'father_tc': 6, 'father_tr': 0}
]
)
self.assert_fails_check(
errmsg='high_father_tr ({})'.format(2),
fun=self.filt.check_tc_and_tr_in_father,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'father_tc': 7, 'father_tr': 2}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_father,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'father_tc': 7, 'father_tr': 0}
]
)
self.assert_passes_check(
fun=self.filt.check_tc_and_tr_in_father,
argv=[
{'PARENT_MIN_COVERAGE': 6, 'PARENT_MAX_ALT_ALLELE_COUNT': 1},
{'father_tc': 7, 'father_tr': 1}
]
)
def test_apply(self):
self.filt.detailed = False
self.filt.filter = []
with self.assertRaises(ValueError) as cm:
self.filt._apply(True, 'some message')
err = cm.exception
self.assertEqual(str(err), 'some message')
        self.assertEqual(len(self.filt.filter), 0)
try:
self.filt._apply(False, 'some message')
except ValueError:
self.fail("_apply() raised ValueError unexpectedly")
        self.assertEqual(len(self.filt.filter), 0)
self.filt.detailed = True
self.filt.filter = []
try:
self.filt._apply(True, 'some message')
except ValueError:
self.fail("_apply() raised ValueError unexpectedly")
        self.assertEqual(self.filt.filter[0], 'some message')
self.filt.filter = []
try:
self.filt._apply(False, 'some message')
except ValueError:
self.fail("_apply() raised ValueError unexpectedly")
        self.assertEqual(len(self.filt.filter), 0)
def test_apply_filters(self):
self.fail('Test not yet implemented') | 27.099099 | 82 | 0.498088 | 1,301 | 12,032 | 4.235204 | 0.099923 | 0.076951 | 0.077858 | 0.113249 | 0.842831 | 0.818149 | 0.778947 | 0.73412 | 0.705445 | 0.652269 | 0 | 0.034091 | 0.385638 | 12,032 | 444 | 83 | 27.099099 | 0.71131 | 0.005236 | 0 | 0.562857 | 0 | 0 | 0.160314 | 0.045219 | 0 | 0 | 0 | 0.002252 | 0.142857 | 1 | 0.048571 | false | 0.074286 | 0.005714 | 0 | 0.057143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
a3413417361b6a20432918164f3ac32762d9863c | 109 | py | Python | trainer/models/sisr/__init__.py | jason-zl190/sisr | 2415d28333c94602c52be9c314a8044165d992cf | [
"Apache-2.0"
] | 2 | 2019-12-15T17:12:46.000Z | 2019-12-15T21:09:31.000Z | trainer/models/sisr/__init__.py | jason-zl190/sisr | 2415d28333c94602c52be9c314a8044165d992cf | [
"Apache-2.0"
] | null | null | null | trainer/models/sisr/__init__.py | jason-zl190/sisr | 2415d28333c94602c52be9c314a8044165d992cf | [
"Apache-2.0"
] | 1 | 2020-12-15T15:30:12.000Z | 2020-12-15T15:30:12.000Z | from trainer.models.sisr.models import MySRResNet, Discriminator
from trainer.models.sisr.gan import MySRGAN
| 36.333333 | 64 | 0.853211 | 15 | 109 | 6.2 | 0.6 | 0.236559 | 0.365591 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082569 | 109 | 2 | 65 | 54.5 | 0.93 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a36a465692e940872634de5cdacd71a2af50e507 | 342 | py | Python | main/CompuCellPythonTutorial/clusterSurface/Simulation/clusterSurface.py | JulianoGianlupi/nh-cc3d-4x-base-tool | c0f4aceebd4c5bf3ec39e831ef851e419b161259 | [
"CC0-1.0"
] | null | null | null | main/CompuCellPythonTutorial/clusterSurface/Simulation/clusterSurface.py | JulianoGianlupi/nh-cc3d-4x-base-tool | c0f4aceebd4c5bf3ec39e831ef851e419b161259 | [
"CC0-1.0"
] | null | null | null | main/CompuCellPythonTutorial/clusterSurface/Simulation/clusterSurface.py | JulianoGianlupi/nh-cc3d-4x-base-tool | c0f4aceebd4c5bf3ec39e831ef851e419b161259 | [
"CC0-1.0"
] | 1 | 2021-02-26T21:50:29.000Z | 2021-02-26T21:50:29.000Z | from cc3d import CompuCellSetup
from .clusterSurfaceSteppables import VolumeParamSteppable
from .clusterSurfaceSteppables import MitosisSteppableClusters
CompuCellSetup.register_steppable(steppable=VolumeParamSteppable(frequency=10))
CompuCellSetup.register_steppable(steppable=MitosisSteppableClusters(frequency=10))
CompuCellSetup.run()
| 34.2 | 83 | 0.891813 | 28 | 342 | 10.821429 | 0.428571 | 0.184818 | 0.224422 | 0.264026 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015432 | 0.052632 | 342 | 9 | 84 | 38 | 0.919753 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
6e712622966ff80a9acbd8efc225c943f02eee4b | 4,900 | py | Python | hubcheck/pageobjects/widgets/groups_wiki_menu.py | codedsk/hubcheck | 2ff506eb56ba00f035300862f8848e4168452a17 | [
"MIT"
] | 1 | 2016-02-13T13:42:23.000Z | 2016-02-13T13:42:23.000Z | hubcheck/pageobjects/widgets/groups_wiki_menu.py | codedsk/hubcheck | 2ff506eb56ba00f035300862f8848e4168452a17 | [
"MIT"
] | null | null | null | hubcheck/pageobjects/widgets/groups_wiki_menu.py | codedsk/hubcheck | 2ff506eb56ba00f035300862f8848e4168452a17 | [
"MIT"
] | null | null | null | from hubcheck.pageobjects.basepagewidget import BasePageWidget
from hubcheck.pageobjects.basepageelement import Link
class GroupsWikiMenu1(BasePageWidget):
def __init__(self, owner, locatordict={}):
super(GroupsWikiMenu1,self).__init__(owner,locatordict)
# load hub's classes
GroupsWikiMenu_Locators = self.load_class('GroupsWikiMenu_Locators')
# update this object's locator
self.locators.update(GroupsWikiMenu_Locators.locators)
# update the locators with those from the owner
self.update_locators_from_owner()
# setup page object's components
self.article = Link(self,{'base':'article'})
self.edit = Link(self,{'base':'edit'})
self.comments = Link(self,{'base':'comments'})
self.history = Link(self,{'base':'history'})
self.delete = Link(self,{'base':'delete'})
self.mainpage = Link(self,{'base':'mainpage'})
self.index = Link(self,{'base':'index'})
# update the component's locators with this objects overrides
self._updateLocators()
def goto_article(self):
"""click the article menu link"""
self.article.click()
def goto_edit(self):
"""click the edit menu link"""
self.edit.click()
def goto_comments(self):
"""click the comments menu link"""
self.comments.click()
def goto_history(self):
"""click the history menu link"""
self.history.click()
def goto_delete(self):
"""click the delete menu link"""
self.delete.click()
def goto_mainpage(self):
"""click the main page menu link"""
self.mainpage.click()
def goto_index(self):
"""click the index menu link"""
self.index.click()
class GroupsWikiMenu1_Locators_Base(object):
"""locators for GroupsWikiMenu object"""
locators = {
'base' : "css=#sub-menu",
'article' : "css=.page-text",
'edit' : "css=.page-edit",
'comments' : "css=.page-comments",
'history' : "css=.page-history",
'delete' : "css=.page-delete",
'mainpage' : "css=.page-main",
'index' : "css=.page-index",
}
class GroupsWikiMenu1_Locators_Base_2(object):
"""
locators for GroupsWikiMenu object
used by nees62
"""
locators = {
'base' : "css=#page_content",
'article' : "css=.page-text",
'edit' : "css=.page-edit",
'comments' : "css=.page-comments",
'history' : "css=.page-history",
'delete' : "css=.page-delete",
'mainpage' : "css=.home",
'index' : "css=.page-index",
}
class GroupsWikiMenu2(BasePageWidget):
"""
removed mainpage link
added search link
"""
def __init__(self, owner, locatordict={}):
super(GroupsWikiMenu2,self).__init__(owner,locatordict)
# load hub's classes
GroupsWikiMenu_Locators = self.load_class('GroupsWikiMenu_Locators')
# update this object's locator
self.locators.update(GroupsWikiMenu_Locators.locators)
# update the locators with those from the owner
self.update_locators_from_owner()
# setup page object's components
self.article = Link(self,{'base':'article'})
self.edit = Link(self,{'base':'edit'})
self.comments = Link(self,{'base':'comments'})
self.history = Link(self,{'base':'history'})
self.delete = Link(self,{'base':'delete'})
self.search = Link(self,{'base':'search'})
self.index = Link(self,{'base':'index'})
# update the component's locators with this objects overrides
self._updateLocators()
def goto_article(self):
"""click the article menu link"""
self.article.click()
def goto_edit(self):
"""click the edit menu link"""
self.edit.click()
def goto_comments(self):
"""click the comments menu link"""
self.comments.click()
def goto_history(self):
"""click the history menu link"""
self.history.click()
def goto_delete(self):
"""click the delete menu link"""
self.delete.click()
def goto_search(self):
"""click the search menu link"""
self.search.click()
def goto_index(self):
"""click the index menu link"""
self.index.click()
class GroupsWikiMenu2_Locators_Base(object):
"""locators for GroupsWikiMenu object"""
locators = {
'base' : "css=#sub-menu",
'article' : "css=.page-text",
'edit' : "css=.page-edit",
'comments' : "css=.page-comments",
'history' : "css=.page-history",
'delete' : "css=.page-delete",
'search' : "css=.page-search",
'index' : "css=.page-index",
}
| 25.128205 | 76 | 0.583878 | 530 | 4,900 | 5.296226 | 0.115094 | 0.079801 | 0.05985 | 0.033131 | 0.815461 | 0.786605 | 0.763805 | 0.763805 | 0.763805 | 0.763805 | 0 | 0.00281 | 0.273673 | 4,900 | 194 | 77 | 25.257732 | 0.785895 | 0.186735 | 0 | 0.758242 | 0 | 0 | 0.182126 | 0.011985 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175824 | false | 0 | 0.021978 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ebc15f317917fdd76851a59a2de876897c7f4d1 | 3,077 | py | Python | clasificador/test/testwiktionary.py | pln-fing-udelar/pghumor | d1a352bc72fba45cffd2af10c584e4ac43febf70 | [
"Apache-2.0"
] | 52 | 2015-06-17T04:51:22.000Z | 2022-03-25T09:19:00.000Z | clasificador/test/testwiktionary.py | PONCHARELO12/pghumor | c5b552d3799858c74616367c1c68531c20163285 | [
"Apache-2.0"
] | 6 | 2016-12-03T05:05:26.000Z | 2019-10-18T17:52:42.000Z | clasificador/test/testwiktionary.py | PONCHARELO12/pghumor | c5b552d3799858c74616367c1c68531c20163285 | [
"Apache-2.0"
] | 17 | 2015-06-17T04:55:47.000Z | 2021-10-16T14:30:27.000Z | # coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
import unittest
from clasificador.herramientas.wiktionary import Wiktionary
class TestWiktionary(unittest.TestCase):
    def test_esta_en_wiktionary_consulta_palabra_comun(self):
        texto = "hola"
        self.assertTrue(Wiktionary.pertenece_consulta(texto), "The text \"" + texto + "\" should be in wiktionary")
    def test_esta_en_wiktionary_consulta_palabra_comun_con_acento(self):
        texto = "árbol"
        self.assertTrue(Wiktionary.pertenece_consulta(texto), "The text \"" + texto + "\" should be in wiktionary")
    def test_esta_en_wiktionary_consulta_error_de_tipeo(self):
        texto = "holaa"
        self.assertFalse(Wiktionary.pertenece_consulta(texto), "The text \"" + texto + "\" should not be in wiktionary")
    def test_esta_en_wiktionary_consulta_palabra_inexistente(self):
        texto = "jajajajaaaaaaaa"
        self.assertFalse(Wiktionary.pertenece_consulta(texto), "The text \"" + texto + "\" should not be in wiktionary")
    def test_esta_en_wiktionary_consulta_palabra_inexistente2(self):
        texto = "aldnkvnvrbyweruvnrhuvhuirbv"
        self.assertFalse(Wiktionary.pertenece_consulta(texto), "The text \"" + texto + "\" should not be in wiktionary")
    def test_esta_en_wiktionary_consulta_palabra_cotidiana(self):
        texto = "Suárez"
        self.assertTrue(Wiktionary.pertenece_consulta(texto), "The text \"" + texto + "\" should be in wiktionary")
    def test_esta_en_wiktionary_palabra_comun(self):
        texto = "hola"
        self.assertTrue(Wiktionary.pertenece(texto),
                        "The text \"" + texto + "\" should be in the wiktionary dictionary")
    def test_esta_en_wiktionary_palabra_comun_con_acento(self):
        texto = "árbol"
        self.assertTrue(Wiktionary.pertenece(texto),
                        "The text \"" + texto + "\" should be in the wiktionary dictionary")
    def test_esta_en_wiktionary_error_de_tipeo(self):
        texto = "holaa"
        self.assertFalse(Wiktionary.pertenece(texto),
                         "The text \"" + texto + "\" should not be in the wiktionary dictionary")
    def test_esta_en_wiktionary_palabra_inexistente(self):
        texto = "jajajajaaaaaaaa"
        self.assertFalse(Wiktionary.pertenece(texto),
                         "The text \"" + texto + "\" should not be in the wiktionary dictionary")
    def test_esta_en_wiktionary_palabra_inexistente2(self):
        texto = "aldnkvnvrbyweruvnrhuvhuirbv"
        self.assertFalse(Wiktionary.pertenece(texto),
                         "The text \"" + texto + "\" should not be in the wiktionary dictionary")
    def test_esta_en_wiktionary_palabra_cotidiana(self):
        texto = "Suárez"
        self.assertTrue(Wiktionary.pertenece(texto), "The text \"" + texto + "\" should be in wiktionary")
unittest.main()
| 44.594203 | 119 | 0.674358 | 333 | 3,077 | 5.951952 | 0.159159 | 0.133199 | 0.066599 | 0.078708 | 0.898587 | 0.898587 | 0.898587 | 0.898587 | 0.86226 | 0.718466 | 0 | 0.001265 | 0.229444 | 3,077 | 68 | 120 | 45.25 | 0.834669 | 0.0039 | 0 | 0.62 | 0 | 0 | 0.260529 | 0.01763 | 0 | 0 | 0 | 0 | 0.24 | 1 | 0.24 | false | 0 | 0.06 | 0 | 0.32 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
42c07c7d7f88f3454dc4d75f0cdc0b33c0141ceb | 4,399 | py | Python | test/pyaz/snapshot/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | null | null | null | test/pyaz/snapshot/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | 9 | 2021-09-24T16:37:24.000Z | 2021-12-24T00:39:19.000Z | test/pyaz/snapshot/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | null | null | null | import json, subprocess
from .. pyaz_utils import get_cli_name, get_params
def create(resource_group, name, location=None, size_gb=None, sku=None, source=None, for_upload=None, incremental=None, __SOURCE_BLOB_URI=None, __SOURCE_DISK=None, __SOURCE_SNAPSHOT=None, source_storage_account_id=None, hyper_v_generation=None, tags=None, disk_encryption_set=None, encryption_type=None, network_access_policy=None, disk_access=None, edge_zone=None, no_wait=None):
params = get_params(locals())
command = "az snapshot create " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def delete(resource_group, name):
params = get_params(locals())
command = "az snapshot delete " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def grant_access(resource_group, name, duration_in_seconds, access_level=None):
params = get_params(locals())
command = "az snapshot grant-access " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def list(resource_group=None):
params = get_params(locals())
command = "az snapshot list " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def revoke_access(resource_group, name):
params = get_params(locals())
command = "az snapshot revoke-access " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def show(resource_group, name):
params = get_params(locals())
command = "az snapshot show " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def update(resource_group, name, sku=None, disk_encryption_set=None, encryption_type=None, network_access_policy=None, disk_access=None, set=None, add=None, remove=None, force_string=None, no_wait=None):
params = get_params(locals())
command = "az snapshot update " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
def wait(resource_group, name, timeout=None, interval=None, deleted=None, created=None, updated=None, exists=None, custom=None):
params = get_params(locals())
command = "az snapshot wait " + params
print(command)
output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = output.stdout.decode("utf-8")
stderr = output.stderr.decode("utf-8")
    if stdout:
        return json.loads(stdout)
    else:
        raise Exception(stderr)
# cave/com.raytheon.viz.gfe/python/autotest/ExpireTime_TestScript.py (srcarter3/awips2, Apache-2.0)
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
# ----------------------------------------------------------------------------
# This software is in the public domain, furnished "as is", without technical
# support, and with no warranty, express or implied, as to its usefulness for
# any purpose.
#
# Expire Time Test Case
#
# Author:
# ----------------------------------------------------------------------------
def1 = """#Definition["state_IDs"] = ["ST"]"""
def2 = """Definition["state_IDs"] = ["FL"]"""
pfm1 = """Definition["defaultEditAreas"] = [('FLZ050','FLZ050\\nGFE TEST\\n35.00N 90.00W\\n35'),]"""
scripts = [
{
"commentary": "Clear out all Hazards Table and Grids.",
"name": "Expire_0",
"productType": None,
"clearHazardsTable": 1,
"checkStrings": [],
},
{
"name":"ExpireAFM_am",
"productType":"AFM",
"commentary": "Checking product expire time for AFM, with Morning issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FOUS52 KTBW 010800",
"AFMTBW",
"Area Forecast Matrices",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-012100-",
"Pinellas-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireAFM_pm",
"productType":"AFM",
"commentary": "Checking product expire time for AFM, with Afternoon issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FOUS52 KTBW 010800",
"AFMTBW",
"Area Forecast Matrices",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-020900-",
"Pinellas-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireCWF_Morning",
"productType":"CWF",
"commentary": "Checking product expire time for CWF, with Morning issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireCWF_Morning Update",
"productType":"CWF",
"commentary": "Checking product expire time for CWF, with Morning Update AM issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"Synopsis for Bonita Beach to Suwannee River FL out 60 NM",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
"$$",
],
},
{
"name":"ExpireCWF_AfternoonUpdate",
"productType":"CWF",
"commentary": "Checking product expire time for CWF, with Afternoon Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"Synopsis for Bonita Beach to Suwannee River FL out 60 NM",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
"$$",
],
},
{
"name":"ExpireCWF_EveningUpdate",
"productType":"CWF",
"commentary": "Checking product expire time for CWF, with Evening Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Evening Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW ",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"Synopsis for Bonita Beach to Suwannee River FL out 60 NM",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
"$$",
],
},
{
"name":"ExpireCWFPac_Morning",
"productType":"CWF_Pacific",
"commentary": "Checking product expire time for CWF_Pacific, with Morning issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireCWFPac_MorningUpdate",
"productType":"CWF_Pacific",
"commentary": "Checking product expire time for CWF_Pacific, with Morning Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"Synopsis for Bonita Beach to Suwannee River FL out 60 NM",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
"$$",
],
},
{
"name":"ExpireCWFPac_AfternoonUpdate",
"productType":"CWF_Pacific",
"commentary": "Checking product expire time for CWF_Pacific, with Afternoon Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"Synopsis for Bonita Beach to Suwannee River FL out 60 NM",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
"$$",
],
},
{
"name":"ExpireCWFPac_EveningUpdate",
"productType":"CWF_Pacific",
"commentary": "Checking product expire time for CWF_Pacific, with Evening Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Evening Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FZUS52 KTBW 010800",
"CWFTBW ",
"Coastal Waters Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"GMZ800-012100-",
"300 AM EST Mon Feb 1 2010",
"Synopsis for Bonita Beach to Suwannee River FL out 60 NM",
"$$",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
"$$",
],
},
{
"name":"ExpireFWF_am",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Morning issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-012100-",
"Pinellas-",
],
},
{
"name":"ExpireFWF_amU",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Morning Update issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-012100-",
"Pinellas-",
],
},
{
"name":"ExpireFWF_pmU",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Afternoon Update issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-012100-",
"Pinellas-",
],
},
{
"name":"ExpireFWF_pm",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Afternoon issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-020900-",
"Pinellas-",
],
},
{
"name":"ExpireFWF_eU",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Evening Update issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Evening Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-020900-",
"Pinellas-",
],
},
{
"name":"ExpireFWF_emu",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Early Morning Update issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Early Morning Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-010900-",
"Pinellas-",
],
},
{
"name":"ExpireFWFTab_am",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Morning issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
".DISCUSSION...",
"FLZ050-012100-",
"Pinellas-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireFWFTab_pm",
"productType":"FWF",
"commentary": "Checking product expire time for FWF, with Afternoon issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FNUS52 KTBW 010800",
"FWFTBW",
"Fire Weather Planning Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
".DISCUSSION...",
"FLZ050-020900-",
"Pinellas-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireGLF_4am",
"productType":"GLF",
"commentary": "Checking product expire time for GLF, with 400 AM issuance.",
"comboFlag": 0,
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '400 AM', ('Groupings', 'groupings'): 'West 1/2:East 1/2'}",
"checkStrings": [
"UFUS42 KTBW 010800",
"GLFABC",
"LSZ260-012100-",
"Open Lakes Forecast for Statename",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"LSZ261-012100-",
],
},
{
"name":"ExpireGLF_10am",
"productType":"GLF",
"commentary": "Checking product expire time for GLF, with 1000 AM issuance.",
"comboFlag": 0,
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '1000 AM', ('Groupings', 'groupings'): 'West 1/2:East 1/2'}",
"checkStrings": [
"UFUS42 KTBW 010800",
"GLFABC",
"LSZ260-012100-",
"Open Lakes Forecast for Statename",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"LSZ261-012100-",
],
},
{
"name":"ExpireGLF_4pm",
"productType":"GLF",
"commentary": "Checking product expire time for GLF, with 400 PM issuance.",
"comboFlag": 0,
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '400 PM', ('Groupings', 'groupings'): 'West 1/2:East 1/2'}",
"checkStrings": [
"UFUS42 KTBW 010800",
"GLFABC",
"LSZ260-020900-",
"Open Lakes Forecast for Statename",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"LSZ261-020900-",
],
},
{
"name":"ExpireGLF_10pm",
"productType":"GLF",
"commentary": "Checking product expire time for GLF, with 1000 PM issuance.",
"comboFlag": 0,
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '1000 PM', ('Groupings', 'groupings'): 'West 1/2:East 1/2'}",
"checkStrings": [
"UFUS42 KTBW 010800",
"GLFABC",
"LSZ260-020900-",
"Open Lakes Forecast for Statename",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"LSZ261-020900-",
],
},
{
"name":"ExpireNSH_430am",
"productType":"NSH",
"commentary": "Checking product expire time for NSH, with 430 AM issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '430 AM', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"UFUS42 KTBW 010800",
"NSHABC",
"Nearshore Marine Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"For waters within five nautical miles of shore on Lake (name)",
"GMZ870-011600-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireNSH_amU",
"productType":"NSH",
"commentary": "Checking product expire time for NSH, with Morning Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"UFUS42 KTBW 010800",
"NSHABC",
"Nearshore Marine Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"For waters within five nautical miles of shore on Lake (name)",
"GMZ870-012200-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireNSH_430pm",
"productType":"NSH",
"commentary": "Checking product expire time for NSH, with 430 PM issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '430 PM', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"UFUS42 KTBW 010800",
"NSHABC",
"Nearshore Marine Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"For waters within five nautical miles of shore on Lake (name)",
"GMZ870-020400-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireNSH_pmU",
"productType":"NSH",
"commentary": "Checking product expire time for NSH, with Evening Update issuance.",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Evening Update', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"UFUS42 KTBW 010800",
"NSHABC",
"Nearshore Marine Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"For waters within five nautical miles of shore on Lake (name)",
"GMZ870-021000-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireOFF_4am",
"productType":"OFF",
"commentary": "Checking product expire time for OFF, with 400 AM issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '400 AM', ('Issued By', 'issuedBy'): None}",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"checkStrings": [
"UFUS42 KTBW 010800",
"OFFABC",
"Offshore Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"400 AM EST Mon Feb 1 2010",
"-012100-",
"400 AM EST Mon Feb 1 2010",
".SYNOPSIS...",
"GMZ870-012100-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"400 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireOFF_4pm",
"productType":"OFF",
"commentary": "Checking product expire time for OFF, with 400 PM issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): '400 PM', ('Issued By', 'issuedBy'): None}",
"comboFlag": 1,
"combinations": [(["GMZ870"],"")],
"checkStrings": [
"UFUS42 KTBW 010800",
"OFFABC",
"Offshore Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"400 PM EST Mon Feb 1 2010",
"-020900-",
"400 PM EST Mon Feb 1 2010",
".SYNOPSIS...",
"GMZ870-020900-",
"Waters from Tarpon Springs to Suwannee River FL out 20 to 60 NM-",
"400 PM EST Mon Feb 1 2010",
],
},
{
"name":"ExpirePFM_am",
"productType":"PFM",
"commentary": "Checking product expire time for PFM, with Morning issuance.",
"comboFlag": 0,
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"internalStrip": 0,
"fileChanges": [
("PFM_<site>_Definition", "TextUtility", "add", pfm1, "delete"),
],
"checkStrings": [
"FOUS52 KTBW 010800",
"PFMTBW",
"Point Forecast Matrices",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-012100-",
"GFE TEST",
"35.00N 90.00W Elev. 35 ft",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpirePFM_pm",
"productType":"PFM",
"commentary": "Checking product expire time for PFM, with Afternoon issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon', ('Issued By', 'issuedBy'): None }",
"internalStrip": 0,
"fileChanges": [
("PFM_<site>_Definition", "TextUtility", "add", pfm1, "delete"),
],
"checkStrings": [
"FOUS52 KTBW 010800",
"PFMTBW",
"Point Forecast Matrices",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ050-020900-",
"GFE TEST",
"35.00N 90.00W Elev. 35 ft",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireSFT_am",
"productType":"SFT",
"commentary": "Checking product expire time for SFT, with Morning issuance.",
"comboFlag": 0,
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FPUS62 KTBW 010800",
"SFTTBW",
"STZ000-012200-",
"Tabular State Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name":"ExpireSFT_pm",
"productType":"SFT",
"commentary": "Checking product expire time for SFT, with Afternoon issuance.",
"comboFlag": 1,
"combinations": [(["FLZ050"],"")],
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon', ('Issued By', 'issuedBy'): None }",
"checkStrings": [
"FPUS62 KTBW 010800",
"SFTTBW",
"STZ000-021000-",
"Tabular State Forecast for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_am",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Morning issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-012100-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_amP1st",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Morning with Pre-1st Period issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning with Pre-1st Period', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-012100-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_amU",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Morning Update issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Morning Update', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-012100-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_pmU",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Afternoon Update issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon Update', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-012100-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_pm",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Afternoon issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-020900-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_pmP1st",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Afternoon with Pre-1st Period issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Afternoon with Pre-1st Period', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-020900-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_eU",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Evening Update issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Evening Update', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-020900-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"name": "ExpireZFP_emU",
"productType":"ZFP",
"commentary": "Checking product expire time for ZFP, with Early Morning Update issuance.",
"cmdLineVars": "{('Product Issuance', 'productIssuance'): 'Early Morning Update', ('Issued By', 'issuedBy'): None }",
"comboFlag": 1,
"combinations": [(["FLZ139"],"")],
"checkStrings": [
"FPUS52 KTBW 010800",
"ZFPTBW",
"Zone Forecast Product for Florida",
"National Weather Service Tampa Bay Ruskin FL",
"300 AM EST Mon Feb 1 2010",
"FLZ139-010900-",
"Levy-",
"300 AM EST Mon Feb 1 2010",
],
},
{
"commentary": "Deleting hazard grids.",
"name": "Cleanup",
"productType": None,
"checkStrings": [],
"clearHazardsTable": 1,
},
]
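Each entry in `scripts` pairs a product type and issuance variables with the `checkStrings` the generated product must contain. A minimal sketch of an in-order substring check (an assumption about how the real `TestScript` harness validates output; the `"orderStrings": 1` default below suggests ordered matching):

```python
# Hypothetical sketch: verify that every expected string appears in the
# generated product text, in the order listed in checkStrings.
def check_strings(product_text, check_strings):
    pos = 0
    for s in check_strings:
        idx = product_text.find(s, pos)
        if idx < 0:
            return False  # string missing or out of order
        pos = idx + len(s)
    return True
```

For example, a test like `ExpireAFM_am` would pass only if `"FOUS52 KTBW 010800"` precedes `"FLZ050-012100-"` in the issued product.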
import TestScript

def testScript(self, dataMgr):
    # Note: these computed times are shadowed by the hard-coded
    # "gridsStartTime"/"drtTime" strings in defaults below.
    gridsStartTime = self.getAbsFromLocal(2010, 1, 1, 0, 0)
    drtTime = self.getAbsFromLocal(2010, 1, 1, 4, 0)
    defaults = {
        "gridsStartTime": "20100116_0500",
        "drtTime": "20100201_0800",
        "database": "<site>_GRID__Fcst_00000000_0000",
        "createGrids": [],
        "deleteGrids": [("Fcst", "Hazards", "SFC", "all", "all")],
        "publishGrids": 0,
        "decodeVTEC": 0,
        "orderStrings": 1,
    }
    return TestScript.generalTestScript(self, dataMgr, scripts, defaults)
# NlpHub/__init__.py (shahid017/NlpHub, MIT)
from NlpHub.NlpHub import Transformers
from NlpHub.NlpHub import Annotators
from NlpHub.NlpHub import Analyzers
from NlpHub.NlpHub import Convertors
# u8g2/luRS12_te.py (tve/mpy-lib, MIT)
] | 3 | 2020-05-16T08:15:16.000Z | 2021-09-30T10:39:37.000Z | data=b'\xbe\x00\x03\x03\x05\x05\x05\x05\x06\x1c\x19\xf5\xfa\x0c\xfd\x0c\xfd\x02\x0e\x04\x5e\x0d\x0a\x20\x06\x00\xc0\x58\x02\x21\x08\x82\x49\x58\xe2\x81\x44\x22\x08\x84\x44\x6c\xe2\x41\x00\x23\x1c\x8b\x41\xb8\x52\x24\x95\x88\x25\xb2\xc3\xe4\x30\x93\x88\x25\xb2\xc3\xe4\x30\x93\x88\x25\x52\x91\x10\x00\x24\x12\xc6\xc9\xb7\x4a\x76\x90\x94\xd8\x86\x33\x12\xe9\x70\x88\x89\x00\x25\x1c\x8c\x49\xb8\x6a\x38\x29\xc9\x24\x92\x59\x45\x3a\x91\x83\xca\x57\x8a\x50\xa2\x36\x51\x13\x55\x86\x23\x00\x26\x16\x89\x45\xc8\x6e\x5a\x94\x08\x25\x42\x46\x0b\x49\x42\xa2\x8c\x46\x95\xd8\x48\x00\x27\x07\x82\x44\x4c\xe2\x10\x28\x0f\xe4\xc5\x66\x2e\x24\x0a\x89\x42\xfa\x26\x92\xc9\x02\x29\x11\xe4\xc5\x66\x22\x26\x8b\xc9\x62\x22\xbd\x88\x24\xa2\x18\x00\x2a\x0d\xc6\x44\x8b\x4a\x1a\x32\x45\x42\xa4\x14\x00\x2b\x15\x4a\x41\xb8\x52\x0e\x91\x43\xe4\x10\xe1\xe1\x60\x94\x43\xe4\x10\x39\x44\x08\x2c\x09\xa3\xc4\x56\xc2\x22\xa1\x00\x2d\x07\x44\x44\x6a\xe2\x10\x2e\x07\x42\x48\x58\x82\x00\x2f\x13\xe8\xc1\x86\x5a\x55\x2c\x15\x4b\xa7\xd2\xa9\x58\x2a\x96\x8a\xc5\x00\x30\x11\x89\x45\xb8\xaa\x76\x92\x49\xa4\x7c\x95\xc8\x44\xb7\x12\x00\x31\x09\x84\x49\xb8\xe2\x10\xd2\x7f\x32\x11\x88\x45\xb8\xa6\x74\x10\x8e\x55\x67\xb4\xe1\x54\x7a\x38\x08\x33\x12\x87\x49\xb8\xa6\x72\x88\x4d\xa5\x21\x5a\x59\x3a\x3b\x48\x4a\x00\x34\x16\x8a\x41\xb8\x76\xcc\x2a\x11\x4e\x64\x23\xd1\x4c\x74\x38\x98\xe5\x10\x39\x44\x04\x35\x0f\x86\x49\xb8\xe2\x51\x48\xb2\x0d\xd5\x0e\x11\x12\x00\x36\x15\x88\x45\xb8\x8e\x64\x92\x45\xa6\x92\xd1\x65\x54\x24\x4a\x44\x13\x1b\x09\x00\x37\x11\x88\x49\xb8\xe2\x41\x3a\x95\x0e\xa7\xd2\xa9\x74\x2a\x96\x02\x38\x14\x88\x49\xb8\xaa\x72\x18\x12\x25\x22\x19\xcd\x22\x2b\x96\x0e\x33\x12\x00\x39\x15\x88\x45\xb8\x8a\x66\x19\x49\x84\xc4\xd2\xe4\x34\x11\x4b\x62\xa2\x1a\x0d\x00\x3a\x08\x22\x49\x58\x82\x0e\x23\x3b\x0a\x82\xc9\x56\x82\x0e\x3b\x48\x00\x3c\x14\x4b\x45\xb8\x1e\x24\x07\x50\x8b\x35\x73\x1d\x52\x87\xd4\x21\x74\x90\x00\x3d\x0b\xaa\x40\xb9\xe2\xc1\x0e\x3b\x1c\x0c\x3
e\x15\x4b\x41\xb8\x42\x0e\xa2\x43\xea\x90\x3a\xa4\x6c\x2b\x56\xe9\x00\x39\x08\x00\x3f\x10\x87\x41\x78\x86\x74\x88\x4d\x85\xb3\x45\xa9\x1c\xaa\x06\x40\x1e\x8e\x49\xf8\xd6\x7c\x98\x11\x47\x23\x92\x44\x54\x22\x89\x62\x24\x09\xd3\x88\x32\x91\xad\x86\x0f\x71\x48\x15\x00\x41\x1b\x8b\x41\xc8\x72\x0e\x99\x03\xe8\x00\x89\x58\x22\x15\x0d\x65\xb2\x43\xec\x20\x92\x4a\xc4\x75\x80\x00\x42\x12\x87\x49\xa8\xc2\x72\x90\x31\x49\x4a\x16\x51\x8d\x76\x38\x44\x00\x43\x19\x8a\x45\xc8\xce\x76\x88\x4c\xe9\x00\x39\x44\x0e\x91\x43\xe4\x90\x39\x64\x1a\x3a\xc4\x2c\x00\x44\x12\x8a\x49\xd8\xe2\x76\x08\x09\x27\xd2\x32\x5f\x25\x87\xd0\x0d\x00\x45\x0e\x87\x49\x98\xe2\x41\xaa\x6a\xb1\x48\x55\x0f\x07\x46\x0d\x87\x49\x98\xe2\x41\xaa\x6a\xb1\x48\x75\x05\x47\x17\x8a\x45\xc8\xce\x76\x88\x4c\xe9\x00\x39\x44\x0e\x11\x93\xab\x92\xa1\xe8\x10\xb3\x00\x48\x0d\x89\x49\xc8\x42\xca\xeb\xe1\x21\xca\xab\x00\x49\x08\x82\x45\x58\xe2\xc3\x00\x4a\x0b\xe6\xbd\x56\x52\xff\xc7\x0b\x09\x00\x4b\x17\x89\x49\xb8\x42\x58\x13\x89\x46\x92\x99\x44\xd8\x28\x99\x89\x64\x4a\xb2\x89\x70\x00\x4c\x0b\x88\x45\x98\x42\xac\x7f\x3e\x1c\x04\x4d\x19\x8b\x49\xf8\x62\x6a\xbd\x1d\x62\x87\x98\x45\x85\xa2\x42\x51\x21\x51\x48\x23\xd2\x88\x0e\x10\x4e\x12\x89\x49\xd8\x42\x5a\xb4\xd9\x4e\x14\x11\x49\x42\xba\x39\x56\x05\x4f\x16\x8b\x45\xd8\xae\x7a\x5b\x99\xd6\x01\x74\x00\x1d\x40\x07\x54\x27\xb3\xd9\xb5\x06\x50\x0f\x88\x45\x98\xe2\x72\x18\x72\x3b\x4c\x6a\x62\xcd\x00\x51\x21\xec\xc5\xd6\xae\x7c\x9c\x8d\xa6\x13\x39\x40\x22\x07\x48\xe4\x00\x89\x1c\x20\x99\x8e\x66\xc3\x73\x1d\x38\x87\xd1\x61\x11\x00\x52\x16\x89\x49\xb8\xc2\x76\x92\x89\x64\x22\x99\x48\xad\x58\x14\xc9\x94\x64\x13\xe1\x00\x53\x10\x87\x45\x98\xa6\x62\x91\x45\xa4\x53\xea\xb6\xc3\x84\x04\x54\x19\x8a\x41\xb8\xe2\xc1\x28\x87\xc8\x21\x72\x88\x1c\x22\x87\xc8\x21\x72\x88\x1c\x22\x87\x08\x01\x55\x0d\x89\x45\xc8\x42\xca\xbf\xd6\x26\xb7\x12\x00\x56\x1b\x8b\x41\xb8\x42\x0e\x28\x4b\xa4\x22\xa9\x4c\x36\x53\x1c\x49\x25\x62\x89\x1c\x40\x07\xcc\x21\x43\x00\x57\x1f\x8f\x41\xf8\x42\x38\x24\x0e\x25\xb2\x15\x11\x4
"""!
] | 262 | 2015-03-19T07:28:12.000Z | 2022-03-30T07:28:24.000Z | """!
@brief Unit-tests for clustering result representation.
@authors Andrei Novikov (pyclustering@yandex.ru)
@date 2014-2020
@copyright BSD-3-Clause
"""
import unittest
import math
import matplotlib
matplotlib.use('Agg')
from pyclustering.cluster.encoder import cluster_encoder
from pyclustering.cluster.encoder import type_encoding
class Test(unittest.TestCase):
def getIndexRepresentor(self):
clusters = [ [0, 1, 2, 3], [4, 5, 6, 7] ]
data = [10, 11, 13, 12, 64, 65, 65, 68]
return cluster_encoder(type_encoding.CLUSTER_INDEX_LIST_SEPARATION, clusters, data)
def testIndexToLabel(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def testIndexToObject(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
        assert 2 == len(representor.get_clusters())
        assert [ [10, 11, 13, 12], [64, 65, 65, 68] ] == representor.get_clusters()
def testObjectToIndex(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [0, 1, 2, 3], [4, 5, 6, 7] ] == representor.get_clusters()
def testObjectToLabel(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def testLabelToIndex(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [0, 1, 2, 3], [4, 5, 6, 7] ] == representor.get_clusters()
def testLabelToObject(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [10, 11, 13, 12 ], [64, 65, 65, 68] ] == representor.get_clusters()
def testLabelToLabel(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def testObjectToObject(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [10, 11, 13, 12 ], [64, 65, 65, 68] ] == representor.get_clusters()
def testIndexToIndex(self):
representor = self.getIndexRepresentor()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [0, 1, 2, 3], [4, 5, 6, 7] ] == representor.get_clusters()
def getIndexRepresentorDoubleData(self):
clusters = [ [0, 1, 2, 3], [4, 5, 6, 7] ]
data = [5.4562, 5.1235, 4.9235, 4.8712, 8.3451, 8.4215, 8.6535, 8.7345]
return cluster_encoder(type_encoding.CLUSTER_INDEX_LIST_SEPARATION, clusters, data)
def testDoubleObjectToIndex(self):
representor = self.getIndexRepresentorDoubleData()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
        assert 2 == len(representor.get_clusters())
assert [ [0, 1, 2, 3], [4, 5, 6, 7] ] == representor.get_clusters()
def testDoubleObjectToLabel(self):
representor = self.getIndexRepresentorDoubleData()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def testOverAllTypes(self):
representor = self.getIndexRepresentorDoubleData()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def getIndexRepresentorTwoDimensionData(self):
clusters = [ [0, 1, 2, 3], [4, 5, 6, 7] ]
data = [ [5.1, 5.2], [5.2, 5.1], [5.4, 5.2], [5.1, 5.0], [8.1, 8.0], [8.4, 8.2], [8.3, 8.4], [8.5, 8.5]]
return cluster_encoder(type_encoding.CLUSTER_INDEX_LIST_SEPARATION, clusters, data)
def testIndexToLabelTwoDimension(self):
representor = self.getIndexRepresentorTwoDimensionData()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def testIndexToObjectTwoDimension(self):
representor = self.getIndexRepresentorTwoDimensionData()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [[5.1, 5.2], [5.2, 5.1], [5.4, 5.2], [5.1, 5.0]], [[8.1, 8.0], [8.4, 8.2], [8.3, 8.4], [8.5, 8.5]] ] == representor.get_clusters()
def testObjectToIndexTwoDimension(self):
representor = self.getIndexRepresentorTwoDimensionData()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [0, 1, 2, 3], [4, 5, 6, 7] ] == representor.get_clusters()
def testObjectToLabelTwoDimension(self):
representor = self.getIndexRepresentorTwoDimensionData()
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
assert 8 == len(representor.get_clusters())
assert [0, 0, 0, 0, 1, 1, 1, 1] == representor.get_clusters()
def testLabelToIndexTwoDimension(self):
representor = self.getIndexRepresentorTwoDimensionData()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
representor.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [[0, 1, 2, 3], [4, 5, 6, 7]] == representor.get_clusters()
def testLabelToObjectTwoDimension(self):
representor = self.getIndexRepresentorTwoDimensionData()
representor.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
representor.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
assert 2 == len(representor.get_clusters())
assert [ [[5.1, 5.2], [5.2, 5.1], [5.4, 5.2], [5.1, 5.0]], [[8.1, 8.0], [8.4, 8.2], [8.3, 8.4], [8.5, 8.5]] ] == representor.get_clusters()
def testIndexListToLabelsMissedPoint(self):
clusters = [[0, 1, 2, 3], [4, 5, 6]] # the last point is missed
data = [[5.1, 5.2], [5.2, 5.1], [5.4, 5.2], [5.1, 5.0], [8.1, 8.0], [8.4, 8.2], [8.3, 8.4], [8.5, 8.5]]
encoder = cluster_encoder(type_encoding.CLUSTER_INDEX_LIST_SEPARATION, clusters, data)
encoder.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
expected = [0, 0, 0, 0, 1, 1, 1, float('NaN')]
actual = encoder.get_clusters()
self.assertEqual(len(expected), len(actual))
for i in range(len(expected)):
            if math.isnan(expected[i]):
self.assertTrue(math.isnan(actual[i]))
else:
self.assertEqual(expected[i], actual[i])
def testObjectListToLabelsMissedPoint(self):
clusters = [[[5.1, 5.2], [5.2, 5.1]], [[8.1, 8.0], [8.4, 8.2]]]
data = [[5.1, 5.2], [5.2, 5.1], [14.1, 76.0], [8.1, 8.0], [8.4, 8.2]]
encoder = cluster_encoder(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION, clusters, data)
encoder.set_encoding(type_encoding.CLUSTER_INDEX_LABELING)
expected = [0, 0, float('NaN'), 1, 1]
actual = encoder.get_clusters()
self.assertEqual(len(expected), len(actual))
for i in range(len(expected)):
            if math.isnan(expected[i]):
self.assertTrue(math.isnan(actual[i]))
else:
self.assertEqual(expected[i], actual[i])
def testLabelsToIndexListAndObjectListMissedPoint(self):
clusters = [0, 0, float('NaN'), 1, 1]
data = [[5.1, 5.2], [5.2, 5.1], [14.1, 76.0], [8.1, 8.0], [8.4, 8.2]]
encoder = cluster_encoder(type_encoding.CLUSTER_INDEX_LABELING, clusters, data)
encoder.set_encoding(type_encoding.CLUSTER_INDEX_LIST_SEPARATION)
expected = [[0, 1], [3, 4]]
actual = encoder.get_clusters()
self.assertEqual(len(expected), len(actual))
self.assertEqual(expected, actual)
encoder = cluster_encoder(type_encoding.CLUSTER_INDEX_LABELING, clusters, data)
encoder.set_encoding(type_encoding.CLUSTER_OBJECT_LIST_SEPARATION)
expected = [[[5.1, 5.2], [5.2, 5.1]], [[8.1, 8.0], [8.4, 8.2]]]
actual = encoder.get_clusters()
self.assertEqual(len(expected), len(actual))
self.assertEqual(expected, actual)
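The conversions exercised by these tests can also be written by hand; a minimal sketch of two of them, with no pyclustering dependency (the function names here are illustrative, not part of the library):

```python
def index_lists_to_labels(clusters, data):
    """CLUSTER_INDEX_LIST_SEPARATION -> CLUSTER_INDEX_LABELING."""
    labels = [float('nan')] * len(data)   # points missing from every cluster stay NaN
    for cluster_id, indexes in enumerate(clusters):
        for index in indexes:
            labels[index] = cluster_id
    return labels

def index_lists_to_objects(clusters, data):
    """CLUSTER_INDEX_LIST_SEPARATION -> CLUSTER_OBJECT_LIST_SEPARATION."""
    return [[data[index] for index in indexes] for indexes in clusters]

clusters = [[0, 1, 2, 3], [4, 5, 6, 7]]
data = [10, 11, 13, 12, 64, 65, 65, 68]
print(index_lists_to_labels(clusters, data))   # [0, 0, 0, 0, 1, 1, 1, 1]
print(index_lists_to_objects(clusters, data))  # [[10, 11, 13, 12], [64, 65, 65, 68]]
```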
| 40.435606 | 147 | 0.659672 | 1,302 | 10,675 | 5.208909 | 0.089094 | 0.084931 | 0.131672 | 0.135653 | 0.864937 | 0.85432 | 0.850192 | 0.848422 | 0.84105 | 0.837511 | 0 | 0.058298 | 0.214239 | 10,675 | 263 | 148 | 40.589354 | 0.750238 | 0.0163 | 0 | 0.738095 | 0 | 0 | 0.001144 | 0 | 0 | 0 | 0 | 0 | 0.27381 | 1 | 0.142857 | false | 0 | 0.029762 | 0 | 0.196429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
95b4e2281a555639e17c72240f9240171663fa67 | 1,142 | py | Python | RLBotPack/Beast from the East/util/zone.py | RLMarvin/RLBotPack | c88c4111bf67d324b471ad87ad962e7bc8c2a202 | [
"MIT"
] | null | null | null | RLBotPack/Beast from the East/util/zone.py | RLMarvin/RLBotPack | c88c4111bf67d324b471ad87ad962e7bc8c2a202 | [
"MIT"
] | null | null | null | RLBotPack/Beast from the East/util/zone.py | RLMarvin/RLBotPack | c88c4111bf67d324b471ad87ad962e7bc8c2a202 | [
"MIT"
] | null | null | null | from util.vec import Vec3
class Zone:
def __contains__(self, point: Vec3) -> bool:
raise NotImplementedError
class Zone2d(Zone):
def __init__(self, corner_a: Vec3, corner_b: Vec3):
self.corner_min = Vec3(min(corner_a.x, corner_b.x), min(corner_a.y, corner_b.y), 0)
self.corner_max = Vec3(max(corner_a.x, corner_b.x), max(corner_a.y, corner_b.y), 0)
def __contains__(self, point: Vec3) -> bool:
return self.corner_min.x <= point.x <= self.corner_max.x \
and self.corner_min.y <= point.y <= self.corner_max.y
class Zone3d(Zone):
def __init__(self, corner_a: Vec3, corner_b: Vec3):
self.corner_min = Vec3(min(corner_a.x, corner_b.x), min(corner_a.y, corner_b.y), min(corner_a.z, corner_b.z))
self.corner_max = Vec3(max(corner_a.x, corner_b.x), max(corner_a.y, corner_b.y), max(corner_a.z, corner_b.z))
def __contains__(self, point: Vec3) -> bool:
return self.corner_min.x <= point.x <= self.corner_max.x \
and self.corner_min.y <= point.y <= self.corner_max.y \
and self.corner_min.z <= point.z <= self.corner_max.z
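A quick usage sketch of the zones above, with a minimal namedtuple stand-in for `util.vec.Vec3` (assumption: only the `.x`/`.y`/`.z` attributes are needed):

```python
from collections import namedtuple

# Hypothetical minimal Vec3; the real class lives in util.vec.
Vec3 = namedtuple("Vec3", "x y z")

class Zone2d:
    def __init__(self, corner_a, corner_b):
        self.corner_min = Vec3(min(corner_a.x, corner_b.x), min(corner_a.y, corner_b.y), 0)
        self.corner_max = Vec3(max(corner_a.x, corner_b.x), max(corner_a.y, corner_b.y), 0)

    def __contains__(self, point):
        return (self.corner_min.x <= point.x <= self.corner_max.x
                and self.corner_min.y <= point.y <= self.corner_max.y)

# __contains__ lets zones be queried with the `in` operator:
box = Zone2d(Vec3(0, 0, 0), Vec3(10, 10, 0))
print(Vec3(5, 5, 0) in box)    # True
print(Vec3(11, 5, 0) in box)   # False
```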
| 40.785714 | 117 | 0.650613 | 194 | 1,142 | 3.530928 | 0.14433 | 0.233577 | 0.132847 | 0.081752 | 0.826277 | 0.826277 | 0.738686 | 0.735766 | 0.735766 | 0.735766 | 0 | 0.017621 | 0.204904 | 1,142 | 27 | 118 | 42.296296 | 0.736784 | 0 | 0 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.052632 | 0.105263 | 0.578947 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 10 |
95f2f268bde8986d2ad9b3042e9feee5cf82b29f | 26 | py | Python | deedee.py | castlemas/DDDDDDDDDD | f3de60c0b84e71e5aabf6cb4a3c57b43e3f1ba9e | [
"MIT"
] | null | null | null | deedee.py | castlemas/DDDDDDDDDD | f3de60c0b84e71e5aabf6cb4a3c57b43e3f1ba9e | [
"MIT"
] | null | null | null | deedee.py | castlemas/DDDDDDDDDD | f3de60c0b84e71e5aabf6cb4a3c57b43e3f1ba9e | [
"MIT"
] | null | null | null | print("king dee dee dee")
| 13 | 25 | 0.692308 | 5 | 26 | 3.6 | 0.6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
c25a0ce56488167cb3e41d8f81ad57fa4da5ba2f | 90 | py | Python | ryu/app/graphql-app/resolvers/version.py | Zowder/ryu | 5dc20aa412f57a6f9ff2b8c05964f792ead5ae00 | [
"Apache-2.0"
] | null | null | null | ryu/app/graphql-app/resolvers/version.py | Zowder/ryu | 5dc20aa412f57a6f9ff2b8c05964f792ead5ae00 | [
"Apache-2.0"
] | null | null | null | ryu/app/graphql-app/resolvers/version.py | Zowder/ryu | 5dc20aa412f57a6f9ff2b8c05964f792ead5ae00 | [
"Apache-2.0"
] | null | null | null | RYU_LATEST_VERSION="4.31"
def _RyuVersionResolver():
return str(RYU_LATEST_VERSION)
| 15 | 34 | 0.777778 | 12 | 90 | 5.416667 | 0.75 | 0.276923 | 0.492308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037975 | 0.122222 | 90 | 5 | 35 | 18 | 0.78481 | 0 | 0 | 0 | 0 | 0 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
c2a8bae881e43fe99614d94687a6d4c30d59ac43 | 129 | py | Python | geophires_api/service/parsing/__init__.py | tuw-eeg/GEOPHIRES-api | bdc422bc1eedcb6fef858e0a2a07aa605eae5b18 | [
"Apache-2.0"
] | null | null | null | geophires_api/service/parsing/__init__.py | tuw-eeg/GEOPHIRES-api | bdc422bc1eedcb6fef858e0a2a07aa605eae5b18 | [
"Apache-2.0"
] | null | null | null | geophires_api/service/parsing/__init__.py | tuw-eeg/GEOPHIRES-api | bdc422bc1eedcb6fef858e0a2a07aa605eae5b18 | [
"Apache-2.0"
] | null | null | null | from .default_output_parsing_service import DefaultOutputParsingService
from .output_parsing_service import OutputParsingService
| 43 | 71 | 0.922481 | 13 | 129 | 8.769231 | 0.615385 | 0.22807 | 0.350877 | 0.45614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062016 | 129 | 2 | 72 | 64.5 | 0.942149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
c2b6725fd30df7d49527b469dc74280864708d45 | 2,730 | py | Python | spectrogram.py | animolopez/module | 588b8de7211bef29b85282a33c9313f90a505f71 | [
"MIT"
] | null | null | null | spectrogram.py | animolopez/module | 588b8de7211bef29b85282a33c9313f90a505f71 | [
"MIT"
] | null | null | null | spectrogram.py | animolopez/module | 588b8de7211bef29b85282a33c9313f90a505f71 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
import scipy.fft
def showSpectrogram(x, N=(512), fs=(44100), frange=(8000), Window=('hamming')):
if x.ndim != 1:
error = "dim of signal must be 1."
return print(error)
    if Window == 'hamming':
        Win = np.hamming(N)
    elif Window == 'hanning':
        Win = np.hanning(N)
    else:
        error = "Window must be 'hamming' or 'hanning'."
        return print(error)
length = x.size / fs
plt.specgram(x, NFFT=N, Fs=fs, window=Win)
plt.axis([0, length, 0, frange])
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.show()
def showFFT(x, N=(None), start=(0), fs=(44100), frange=(8000), amprange=(None)):
if x.ndim != 1:
error = "dim of signal must be 1."
return print(error)
    if N is None:
N = 2
while len(x) > N:
N *= 2
info = "N = %s" % N
print(info)
X = scipy.fft.fft(x, n=N)
freqList = scipy.fft.fftfreq(N, d = 1.0/ fs)
amplitudeSpectrum = [np.sqrt(c.real ** 2 + c.imag ** 2) for c in X]
    phaseSpectrum = [np.arctan2(c.imag, c.real) for c in X]
    # Plot the amplitude spectrum
    if amprange is None:
amprange = np.max(amplitudeSpectrum)
plt.subplot(211)
plt.plot(freqList, amplitudeSpectrum, linestyle='-')
plt.axis([0, frange, 0, amprange])
plt.xlabel("frequency [Hz]")
plt.ylabel("amplitude spectrum")
    # Plot the phase spectrum
plt.subplot(212)
plt.plot(freqList, phaseSpectrum, linestyle='-')
plt.axis([0, frange, -np.pi, np.pi])
plt.xlabel("frequency [Hz]")
plt.ylabel("phase spectrum")
plt.show()
def showSTFT(x, N=(512), start=(0), fs=(44100), frange=(8000), amprange=(None), window=('hamming')):
if x.ndim != 1:
error = "dim of signal must be 1."
return print(error)
    if N is None:
N = 2
while len(x) > N:
N *= 2
info = "N = %s" % N
print(info)
    if window == 'hamming':
        Win = np.hamming(N)
    elif window == 'hanning':
        Win = np.hanning(N)
    else:
        error = "window must be 'hamming' or 'hanning'."
        return print(error)
x = Win * x[start:start+N]
X = scipy.fft.fft(x, n=N)
freqList = scipy.fft.fftfreq(N, d = 1.0/ fs)
amplitudeSpectrum = [np.sqrt(c.real ** 2 + c.imag ** 2) for c in X]
    phaseSpectrum = [np.arctan2(c.imag, c.real) for c in X]
    # Plot the amplitude spectrum
    if amprange is None:
amprange = np.max(amplitudeSpectrum)
plt.subplot(211)
plt.plot(freqList, amplitudeSpectrum, linestyle='-')
plt.axis([0, frange, 0, amprange])
plt.xlabel("frequency [Hz]")
plt.ylabel("amplitude spectrum")
    # Plot the phase spectrum
plt.subplot(212)
plt.plot(freqList, phaseSpectrum, linestyle='-')
plt.axis([0, frange, -np.pi, np.pi])
plt.xlabel("frequency [Hz]")
plt.ylabel("phase spectrum")
plt.show()
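The amplitude-spectrum computation used by `showFFT`/`showSTFT` can be sketched without any plotting; this assumes only numpy and a synthetic 440 Hz sine sampled at 44100 Hz:

```python
import numpy as np

fs = 44100
N = 4096
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 440.0 * t)

# Window, transform, and take magnitudes, as the functions above do.
X = np.fft.rfft(x * np.hamming(N))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
amplitude = np.abs(X)             # same as sqrt(c.real**2 + c.imag**2)

peak_hz = freqs[np.argmax(amplitude)]
print(round(peak_hz))             # nearest FFT bin to 440 Hz
```

The peak lands on the FFT bin closest to 440 Hz; bin spacing here is fs/N ≈ 10.8 Hz.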
| 27.575758 | 100 | 0.569231 | 388 | 2,730 | 4.005155 | 0.208763 | 0.009009 | 0.02574 | 0.018018 | 0.839768 | 0.839768 | 0.839768 | 0.839768 | 0.794723 | 0.794723 | 0 | 0.037202 | 0.261538 | 2,730 | 98 | 101 | 27.857143 | 0.733631 | 0.015751 | 0 | 0.789474 | 0 | 0 | 0.101417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.039474 | 0 | 0.118421 | 0.065789 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6c575a8475b01ebcdf2cd67cd74754b1a20a8d7e | 241,288 | py | Python | tests/end_to_end/__main__.py | Natureshadow/biboumi | b70136b96e579e8d38a30a298f885899cb80514c | [
"Zlib"
] | null | null | null | tests/end_to_end/__main__.py | Natureshadow/biboumi | b70136b96e579e8d38a30a298f885899cb80514c | [
"Zlib"
] | null | null | null | tests/end_to_end/__main__.py | Natureshadow/biboumi | b70136b96e579e8d38a30a298f885899cb80514c | [
"Zlib"
] | null | null | null | #!/usr/bin/env python3
import collections.abc
import lxml.etree
import datetime
import slixmpp
import asyncio
import logging
import signal
import atexit
import time
import sys
import io
import os
from functools import partial
from slixmpp.xmlstream.matcher.base import MatcherBase
if not hasattr(asyncio, "ensure_future"):
asyncio.ensure_future = getattr(asyncio, "async")
class MatchAll(MatcherBase):
"""match everything"""
def match(self, xml):
return True
class StanzaError(Exception):
"""
Raised when a step fails.
"""
pass
class SkipStepError(Exception):
"""
    Raised by a step when it needs to be skipped, by running
the next available step immediately.
"""
pass
class XMPPComponent(slixmpp.BaseXMPP):
"""
XMPPComponent sending a “scenario” of stanzas, checking that the responses
match the expected results.
"""
def __init__(self, scenario, biboumi):
super().__init__(jid="biboumi.localhost", default_ns="jabber:component:accept")
self.is_component = True
self.auto_authorize = None # Do not accept or reject subscribe requests automatically
self.auto_subscribe = False
self.stream_header = '<stream:stream %s %s from="%s" id="%s">' % (
'xmlns="jabber:component:accept"',
'xmlns:stream="%s"' % self.stream_ns,
self.boundjid, self.get_id())
self.stream_footer = "</stream:stream>"
self.register_handler(slixmpp.Callback('Match All',
MatchAll(None),
self.handle_incoming_stanza))
self.add_event_handler("session_end", self.on_end_session)
asyncio.ensure_future(self.accept_routine())
self.scenario = scenario
self.biboumi = biboumi
# A callable, taking a stanza as argument and raising a StanzaError
# exception if the test should fail.
self.stanza_checker = None
self.failed = False
self.accepting_server = None
self.saved_values = {}
def error(self, message):
        print("\x1b[31;1mFailure\x1b[0m: %s" % (message,))
self.scenario.steps = []
self.failed = True
def on_end_session(self, _):
self.loop.stop()
def handle_incoming_stanza(self, stanza):
if self.stanza_checker:
try:
self.stanza_checker(stanza)
except StanzaError as e:
self.error(e)
except SkipStepError:
# Run the next step and then re-handle this same stanza
self.run_scenario()
return self.handle_incoming_stanza(stanza)
self.stanza_checker = None
self.run_scenario()
def run_scenario(self):
if self.scenario.steps:
step = self.scenario.steps.pop(0)
try:
step(self, self.biboumi)
except Exception as e:
self.error(e)
self.run_scenario()
else:
if self.biboumi:
self.biboumi.stop()
    async def accept_routine(self):
        self.accepting_server = await self.loop.create_server(lambda: self,
                                                              "127.0.0.1", 8811, reuse_address=True)
def check_stanza_against_all_expected_xpaths(self):
pass
def match(stanza, xpath):
tree = lxml.etree.parse(io.StringIO(str(stanza)))
matched = tree.xpath(xpath, namespaces={'re': 'http://exslt.org/regular-expressions',
'muc_user': 'http://jabber.org/protocol/muc#user',
'muc_owner': 'http://jabber.org/protocol/muc#owner',
'muc': 'http://jabber.org/protocol/muc',
'disco_info': 'http://jabber.org/protocol/disco#info',
'muc_traffic': 'http://jabber.org/protocol/muc#traffic',
'disco_items': 'http://jabber.org/protocol/disco#items',
'commands': 'http://jabber.org/protocol/commands',
'dataform': 'jabber:x:data',
'version': 'jabber:iq:version',
'mam': 'urn:xmpp:mam:2',
'rms': 'http://jabber.org/protocol/rsm',
'delay': 'urn:xmpp:delay',
'forward': 'urn:xmpp:forward:0',
'client': 'jabber:client',
'rsm': 'http://jabber.org/protocol/rsm',
'carbon': 'urn:xmpp:carbons:2',
'hints': 'urn:xmpp:hints',
'stanza': 'urn:ietf:params:xml:ns:xmpp-stanzas',
'stable_id': 'urn:xmpp:sid:0'})
return matched
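The namespaced XPath lookup that `match` performs with lxml can be sketched with the stdlib `xml.etree.ElementTree` (hypothetical stanza; not biboumi's actual matching code, which also needs lxml's `re:test` extension):

```python
import xml.etree.ElementTree as ET

# A presence stanza carrying a muc#user payload, as biboumi would send.
stanza = ("<presence xmlns='jabber:client'>"
          "<x xmlns='http://jabber.org/protocol/muc#user'/></presence>")
root = ET.fromstring(stanza)

# The prefix in the path is resolved through the namespaces mapping,
# exactly like the prefix table passed to tree.xpath() above.
found = root.find("muc_user:x",
                  {"muc_user": "http://jabber.org/protocol/muc#user"})
print(found is not None)  # True
```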
def check_xpath(xpaths, xmpp, after, stanza):
for xpath in xpaths:
expected = True
real_xpath = xpath
# We can check that a stanza DOESN’T match, by adding a ! before it.
if xpath.startswith('!'):
expected = False
xpath = xpath[1:]
matched = match(stanza, xpath)
if (expected and not matched) or (not expected and matched):
raise StanzaError("Received stanza\n%s\ndid not match expected xpath\n%s" % (stanza, real_xpath))
if after:
        if isinstance(after, collections.abc.Iterable):
for af in after:
af(stanza, xmpp)
else:
after(stanza, xmpp)
def all_xpaths_match(stanza, xpaths):
for xpath in xpaths:
matched = match(stanza, xpath)
if not matched:
return False
return True
def check_list_of_xpath(list_of_xpaths, xmpp, stanza):
found = None
for i, xpaths in enumerate(list_of_xpaths):
if all_xpaths_match(stanza, xpaths):
found = True
list_of_xpaths.pop(i)
break
if not found:
raise StanzaError("Received stanza “%s” did not match any of the expected xpaths:\n%s" % (stanza, list_of_xpaths))
if list_of_xpaths:
step = partial(expect_unordered_already_formatted, list_of_xpaths)
xmpp.scenario.steps.insert(0, step)
def check_xpath_optional(xpaths, xmpp, after, stanza):
try:
check_xpath(xpaths, xmpp, after, stanza)
except StanzaError:
raise SkipStepError()
class Scenario:
"""Defines a list of actions that are executed in sequence, until one of
them throws an exception, or until the end. An action can be something
like “send a stanza”, “receive the next stanza and check that it matches
the given XPath”, “send a signal”, “wait for the end of the process”,
etc
"""
def __init__(self, name, steps, conf="basic"):
"""
Steps is a list of 2-tuple:
[(action, answer), (action, answer)]
"""
self.name = name
self.steps = []
self.conf = conf
for elem in steps:
            if isinstance(elem, collections.abc.Iterable):
for step in elem:
self.steps.append(step)
else:
self.steps.append(elem)
class ProcessRunner:
def __init__(self):
self.process = None
self.signal_sent = False
self.create = None
    async def start(self):
        self.process = await self.create

    async def wait(self):
        code = await self.process.wait()
        return code
def stop(self):
if not self.signal_sent:
self.signal_sent = True
if self.process:
self.process.send_signal(signal.SIGINT)
def __del__(self):
self.stop()
class BiboumiRunner(ProcessRunner):
def __init__(self, name):
super().__init__()
self.name = name
self.fd = open("biboumi_%s_output.txt" % (name,), "w")
with_valgrind = os.environ.get("E2E_WITH_VALGRIND") is not None
if with_valgrind:
self.create = asyncio.create_subprocess_exec("valgrind", "--suppressions=" + (os.environ.get("E2E_BIBOUMI_SUPP_DIR") or "") + "biboumi.supp", "--leak-check=full", "--show-leak-kinds=all",
"--errors-for-leak-kinds=all", "--error-exitcode=16",
"./biboumi", "test.conf", stdin=None, stdout=self.fd,
stderr=self.fd, loop=None, limit=None)
else:
self.create = asyncio.create_subprocess_exec("./biboumi", "test.conf", stdin=None, stdout=self.fd,
stderr=self.fd, loop=None, limit=None)
class IrcServerRunner(ProcessRunner):
def __init__(self):
super().__init__()
self.create = asyncio.create_subprocess_exec("charybdis", "-foreground", "-configfile", os.getcwd() + "/../tests/end_to_end/ircd.conf",
stderr=asyncio.subprocess.PIPE)
def send_stanza(stanza, xmpp, biboumi):
replacements = common_replacements
replacements.update(xmpp.saved_values)
xmpp.send_raw(stanza.format_map(replacements))
asyncio.get_event_loop().call_soon(xmpp.run_scenario)
def expect_stanza(xpaths, xmpp, biboumi, optional=False, after=None):
replacements = common_replacements
replacements.update(xmpp.saved_values)
check_func = check_xpath if not optional else check_xpath_optional
if isinstance(xpaths, str):
xmpp.stanza_checker = partial(check_func, [xpaths.format_map(replacements)], xmpp, after)
elif isinstance(xpaths, tuple):
xmpp.stanza_checker = partial(check_func, [xpath.format_map(replacements) for xpath in xpaths], xmpp, after)
else:
        print("Warning: wrong argument type passed to expect_stanza: %s" % (type(xpaths)))
def save_current_timestamp_plus_delta(key, delta, message, xmpp):
now_plus_delta = datetime.datetime.utcnow() + delta
xmpp.saved_values[key] = now_plus_delta.strftime("%FT%T.967Z")
def sleep_for(duration, xmpp, biboumi):
time.sleep(duration)
asyncio.get_event_loop().call_soon(xmpp.run_scenario)
# list_of_xpaths: [(xpath, xpath), (xpath, xpath), (xpath)]
def expect_unordered(list_of_xpaths, xmpp, biboumi):
formatted_list_of_xpaths = []
for xpaths in list_of_xpaths:
formatted_xpaths = []
for xpath in xpaths:
formatted_xpath = xpath.format_map(common_replacements)
formatted_xpaths.append(formatted_xpath)
formatted_list_of_xpaths.append(tuple(formatted_xpaths))
expect_unordered_already_formatted(formatted_list_of_xpaths, xmpp, biboumi)
def expect_unordered_already_formatted(formatted_list_of_xpaths, xmpp, biboumi):
xmpp.stanza_checker = partial(check_list_of_xpath, formatted_list_of_xpaths, xmpp)
class BiboumiTest:
"""
Spawns a biboumi process and a fake XMPP Component that will run a
Scenario. It redirects the outputs of the subprocess into separated
files, and detects any failure in the running of the scenario.
"""
def __init__(self, scenario, expected_code=0):
self.scenario = scenario
self.expected_code = expected_code
def run(self):
with_valgrind = os.environ.get("E2E_WITH_VALGRIND") is not None
        print("Running scenario: \x1b[33;1m%s\x1b[0m%s" % (self.scenario.name, " (with valgrind)" if with_valgrind else ''))
# Redirect the slixmpp logging into a specific file
output_filename = "slixmpp_%s_output.txt" % (self.scenario.name,)
with open(output_filename, "w"):
pass
logging.basicConfig(level=logging.DEBUG,
format='%(levelname)-8s %(message)s',
filename=output_filename)
with open("test.conf", "w") as fd:
fd.write(confs[self.scenario.conf])
try:
os.remove("e2e_test.sqlite")
except FileNotFoundError:
pass
# Start the XMPP component and biboumi
biboumi = BiboumiRunner(self.scenario.name)
xmpp = XMPPComponent(self.scenario, biboumi)
asyncio.get_event_loop().run_until_complete(biboumi.start())
asyncio.get_event_loop().call_soon(xmpp.run_scenario)
xmpp.process()
code = asyncio.get_event_loop().run_until_complete(biboumi.wait())
xmpp.biboumi = None
self.scenario.steps.clear()
failed = False
if not xmpp.failed:
if code != self.expected_code:
xmpp.error("Wrong return code from biboumi's process: %d" % (code,))
failed = True
else:
                print("\x1b[32;1mSuccess!\x1b[0m")
else:
failed = True
xmpp.saved_values.clear()
        if xmpp.accepting_server:
xmpp.accepting_server.close()
return not failed
confs = {
'basic':
"""hostname=biboumi.localhost
password=coucou
db_name=e2e_test.sqlite
port=8811
admin=admin@example.com
identd_port=1113
outgoing_bind=127.0.0.1""",
'fixed_server':
"""hostname=biboumi.localhost
password=coucou
db_name=e2e_test.sqlite
port=8811
fixed_irc_server=irc.localhost
admin=admin@example.com
identd_port=1113
""",
'persistent_by_default':
"""hostname=biboumi.localhost
password=coucou
db_name=e2e_test.sqlite
port=8811
persistent_by_default=true
""",}
common_replacements = {
'irc_server_one': 'irc.localhost@biboumi.localhost',
'irc_server_two': 'localhost@biboumi.localhost',
'irc_host_one': 'irc.localhost',
'irc_host_two': 'localhost',
'biboumi_host': 'biboumi.localhost',
'resource_one': 'resource1',
'resource_two': 'resource2',
'nick_one': 'Nick',
'jid_one': 'first@example.com',
'jid_two': 'second@example.com',
'jid_admin': 'admin@example.com',
'nick_two': 'Bobby',
'nick_three': 'Bernard',
'lower_nick_one': 'nick',
'lower_nick_two': 'bobby',
}
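These placeholders are substituted into stanzas and xpaths with `str.format_map`; a minimal sketch of that substitution:

```python
# A template using the same {placeholder} names as the scenarios above.
template = "/message[@to='{jid_one}/{resource_one}'][@from='{irc_server_one}']"
filled = template.format_map({'jid_one': 'first@example.com',
                              'resource_one': 'resource1',
                              'irc_server_one': 'irc.localhost@biboumi.localhost'})
print(filled)
# /message[@to='first@example.com/resource1'][@from='irc.localhost@biboumi.localhost']
```

Unlike `str.format(**d)`, `format_map` takes the mapping directly, which is why `send_stanza` can merge `common_replacements` with `xmpp.saved_values` and pass the result straight through.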
def handshake_sequence():
return (partial(expect_stanza, "//handshake"),
partial(send_stanza, "<handshake xmlns='jabber:component:accept'/>"))
def connection_begin_sequence(irc_host, jid, expected_irc_presence=False, fixed_irc_server=False):
jid = jid.format_map(common_replacements)
if fixed_irc_server:
xpath = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[text()='%s']"
xpath_re = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[re:test(text(), '%s')]"
else:
xpath = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[text()='%s']"
xpath_re = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[re:test(text(), '%s')]"
result = (
partial(expect_stanza,
(xpath % ('Connecting to %s:6697 (encrypted)' % irc_host),
"/message/hints:no-copy",
"/message/carbon:private"
)
),
partial(expect_stanza,
xpath % 'Connection failed: Connection refused'),
partial(expect_stanza,
xpath % ('Connecting to %s:6670 (encrypted)' % irc_host)),
partial(expect_stanza,
xpath % 'Connection failed: Connection refused'),
partial(expect_stanza,
xpath % ('Connecting to %s:6667 (not encrypted)' % irc_host)),
partial(expect_stanza,
xpath % 'Connected to IRC server.'))
if expected_irc_presence:
result += (partial(expect_stanza, "/presence[@from='" + irc_host + "@biboumi.localhost']"),)
    # These five messages can be received in any order
result += (
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % 'irc.localhost')),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % 'irc.localhost')),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % 'irc.localhost')),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % 'irc.localhost')),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % 'irc.localhost')),
)
return result
def connection_tls_begin_sequence(irc_host, jid, fixed_irc_server):
jid = jid.format_map(common_replacements)
if fixed_irc_server:
xpath = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[text()='%s']"
xpath_re = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[re:test(text(), '%s')]"
else:
xpath = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[text()='%s']"
xpath_re = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[re:test(text(), '%s')]"
irc_host = 'irc.localhost'
return (
partial(expect_stanza,
(xpath % ('Connecting to %s:7778 (encrypted)' % irc_host),
"/message/hints:no-copy",
"/message/carbon:private",
)
),
partial(expect_stanza,
xpath % 'Connected to IRC server (encrypted).'),
        # These five messages can be received in any order
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % irc_host)),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % irc_host)),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % irc_host)),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % irc_host)),
partial(expect_stanza,
xpath_re % (r'^%s: (\*\*\* Checking Ident|\*\*\* Looking up your hostname\.\.\.|\*\*\* Found your hostname: .*|ACK multi-prefix|\*\*\* Got Ident response)$' % irc_host)),
)

def connection_end_sequence(irc_host, jid, fixed_irc_server=False):
    jid = jid.format_map(common_replacements)
    if fixed_irc_server:
        xpath = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[text()='%s']"
        xpath_re = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[re:test(text(), '%s')]"
    else:
        xpath = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[text()='%s']"
        xpath_re = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[re:test(text(), '%s')]"
    irc_host = 'irc.localhost'
    return (
        partial(expect_stanza,
                xpath_re % (r'^%s: Your host is .*$' % irc_host)),
        partial(expect_stanza,
                xpath_re % (r'^%s: This server was created .*$' % irc_host)),
        partial(expect_stanza,
                xpath_re % (r'^%s: There are \d+ users and \d+ invisible on \d+ servers$' % irc_host)),
        partial(expect_stanza,
                xpath_re % (r'^%s: \d+ unknown connection\(s\)$' % irc_host), optional=True),
        partial(expect_stanza,
                xpath_re % (r'^%s: \d+ channels formed$' % irc_host), optional=True),
        partial(expect_stanza,
                xpath_re % (r'^%s: I have \d+ clients and \d+ servers$' % irc_host)),
        partial(expect_stanza,
                xpath_re % (r'^%s: \d+ \d+ Current local users \d+, max \d+$' % irc_host)),
        partial(expect_stanza,
                xpath_re % (r'^%s: \d+ \d+ Current global users \d+, max \d+$' % irc_host)),
        partial(expect_stanza,
                xpath_re % (r'^%s: Highest connection count: \d+ \(\d+ clients\) \(\d+ connections received\)$' % irc_host)),
        partial(expect_stanza,
                xpath % "- This is charybdis MOTD you might replace it, but if not your friends will\n- laugh at you.\n"),
        partial(expect_stanza,
                xpath_re % r'^User mode for \w+ is \[\+Z?i\]$'),
    )

def connection_middle_sequence(irc_host, jid, fixed_irc_server=False):
    if fixed_irc_server:
        xpath_re = "/message[@to='" + jid + "'][@from='biboumi.localhost']/body[re:test(text(), '%s')]"
    else:
        xpath_re = "/message[@to='" + jid + "'][@from='" + irc_host + "@biboumi.localhost']/body[re:test(text(), '%s')]"
    irc_host = 'irc.localhost'
    return (
        partial(expect_stanza, xpath_re % (r'^%s: \*\*\* You are exempt from flood limits$' % irc_host)),
    )

def connection_sequence(irc_host, jid, expected_irc_presence=False, fixed_irc_server=False):
    return connection_begin_sequence(irc_host, jid, expected_irc_presence, fixed_irc_server=fixed_irc_server) + \
           connection_middle_sequence(irc_host, jid, fixed_irc_server=fixed_irc_server) + \
           connection_end_sequence(irc_host, jid, fixed_irc_server=fixed_irc_server)


def connection_tls_sequence(irc_host, jid, fixed_irc_server=False):
    return connection_tls_begin_sequence(irc_host, jid, fixed_irc_server) + \
           connection_middle_sequence(irc_host, jid, fixed_irc_server) + \
           connection_end_sequence(irc_host, jid, fixed_irc_server)

def extract_attribute(xpath, name, stanza):
    matched = match(stanza, xpath)
    return matched[0].get(name)


def chan_name_from_jid(jid):
    return jid[1:jid.find('%')]


def extract_text(xpath, stanza):
    matched = match(stanza, xpath)
    return matched[0].text


def save_value(name, func, stanza, xmpp):
    xmpp.saved_values[name] = func(stanza)

if __name__ == '__main__':

    atexit.register(asyncio.get_event_loop().close)

    # Start the test component, accepting connections on the configured
    # port.
    scenarios = (
        Scenario("basic_handshake_success",
                 [
                     handshake_sequence()
                 ]),
        Scenario("irc_server_connection",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                 ]),
        Scenario("irc_server_connection_failure",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%doesnotexist@{biboumi_host}/{nick_one}' />"),
                     partial(expect_stanza,
                             "/message/body[text()='Connecting to doesnotexist:6697 (encrypted)']"),
                     partial(expect_stanza,
                             "/message/body[re:test(text(), 'Connection failed: (Domain name not found|Name or service not known)')]"),
                     partial(expect_stanza,
                             ("/presence[@from='#foo%doesnotexist@{biboumi_host}/{nick_one}']/muc:x",
                              "/presence/error[@type='cancel']/stanza:item-not-found",
                              "/presence/error[@type='cancel']/stanza:text[re:test(text(), '(Domain name not found|Name or service not known)')]")),
                 ]),
        Scenario("simple_channel_join",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                 ]),
        Scenario("quit",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     # Send a raw QUIT message
                     partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='{irc_server_one}' type='chat'><body>QUIT bye bye</body></message>"),
                     partial(expect_stanza, ("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@type='unavailable']/muc_user:x/muc_user:status[@code='110']",
                                             "/presence[@from='#foo%{irc_server_one}/{nick_one}'][@type='unavailable']/muc_user:x/muc_user:status[@code='110']",)),
                 ]),
        Scenario("multiple_channels_join",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#bar%{irc_server_one}/{nick_one}' />"),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#baz%{irc_server_one}/{nick_one}'> <x xmlns='http://jabber.org/protocol/muc'><password>SECRET</password></x></presence>"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #bar [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#bar%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#bar%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #baz [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#baz%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#baz%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                 ]),
        Scenario("not_connected_error",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence type='unavailable' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                 ]),
        Scenario("channel_join_with_two_users",
                 [
                     handshake_sequence(),
                     # First user joins
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     # Second user joins
                     partial(send_stanza,
                             "<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
                     connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
                     partial(expect_unordered, [
                         ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@role='participant'][@jid='~bobby@localhost']",),
                         ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
                         ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",
                          "/presence/muc_user:x/muc_user:status[@code='110']",),
                         ("/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",),
                     ]),
                 ]),
        Scenario("channel_force_join",
                 [
                     handshake_sequence(),
                     # First user joins
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><x xmlns='http://jabber.org/protocol/muc'/></presence>"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     # Second user joins
                     partial(send_stanza,
                             "<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}'><x xmlns='http://jabber.org/protocol/muc'/></presence>"),
                     connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
                     partial(expect_unordered, [
                         ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@role='participant'][@jid='~bobby@localhost']",),
                         ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
                         ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",
                          "/presence/muc_user:x/muc_user:status[@code='110']",),
                         ("/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",),
                     ]),
                     # Here we simulate a desynchronization of a client: the client thinks it’s
                     # disconnected from the room, but biboumi still thinks it’s in the room. The
                     # client thus sends a join presence, and biboumi should send everything
                     # (user list, history, etc.) in response.
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_three}'><x xmlns='http://jabber.org/protocol/muc'/></presence>"),
                     partial(expect_unordered, [
                         ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@role='participant'][@jid='~bobby@localhost']",),
                         ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                          "/presence/muc_user:x/muc_user:status[@code='110']",),
                         ("/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",),
                     ]),
                     # Also, since the join used a different nickname, everyone sees a nick change
                     partial(expect_unordered, [
                         ("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_two}/{resource_one}'][@type='unavailable']/muc_user:x/muc_user:item[@nick='Bernard']",
                          "/presence/muc_user:x/muc_user:status[@code='303']"),
                         ("/presence[@from='#foo%{irc_server_one}/{nick_three}'][@to='{jid_two}/{resource_one}']",),
                         ("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='unavailable']/muc_user:x/muc_user:item[@nick='Bernard']",
                          "/presence/muc_user:x/muc_user:status[@code='303']",
                          "/presence/muc_user:x/muc_user:status[@code='110']"),
                         ("/presence[@from='#foo%{irc_server_one}/{nick_three}'][@to='{jid_one}/{resource_one}']",
                          "/presence/muc_user:x/muc_user:status[@code='110']"),
                     ]),
                 ]),
        Scenario("channel_join_with_password",
                 [
                     handshake_sequence(),
                     # First user joins
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     # Set a password in the room, by using /mode +k
                     partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>/mode +k SECRET</body></message>"),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='Mode #foo [+k SECRET] by {nick_one}']"),
                     # Second user tries to join, without a password
                     partial(send_stanza,
                             "<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}'/>"),
                     connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
                     partial(expect_stanza, "/message/body[text()='{irc_host_one}: #foo: Cannot join channel (+k) - bad key']"),
                     partial(expect_stanza,
                             "/presence[@type='error'][@from='#foo%{irc_server_one}/{nick_two}']/error[@type='auth']/stanza:not-authorized",
                             ),
                     # Second user joins, with a password
                     partial(send_stanza,
                             "<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}'> <x xmlns='http://jabber.org/protocol/muc'><password>SECRET</password></x></presence>"),
                     # connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
                     partial(expect_unordered, [
                         ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@role='participant'][@jid='~bobby@localhost']",),
                         ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
                         ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",
                          "/presence/muc_user:x/muc_user:status[@code='110']",),
                         ("/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",),
                     ]),
                 ]),
        Scenario("channel_custom_topic",
                 [
                     handshake_sequence(),
                     # First user joins
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     # First user sets the topic
                     partial(send_stanza,
                             "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><subject>TOPIC TEST</subject></message>"),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[text()='TOPIC TEST']"),
                     # Second user joins
                     partial(send_stanza,
                             "<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
                     connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
                     # Our presence, sent to the other user
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",)),
                     # The other user's presence
                     partial(expect_stanza,
                             "/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']"),
                     # Our own presence
                     partial(expect_stanza,
                             ("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/subject[text()='TOPIC TEST']"),
                 ]),
        Scenario("multiline_topic",
                 [
                     handshake_sequence(),
                     # User joins
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza,
                             "/message/body"),
                     partial(expect_stanza, "/presence"),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
                     # User tries to set a multiline topic
                     partial(send_stanza,
                             "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><subject>FIRST LINE\nSECOND LINE.</subject></message>"),
                     partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[text()='FIRST LINE SECOND LINE.']"),
                 ]),
        Scenario("channel_basic_join_on_fixed_irc_server",
                 [
                     handshake_sequence(),
                     partial(send_stanza,
                             "<presence from='{jid_one}/{resource_one}' to='#zgeg@{biboumi_host}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}', fixed_irc_server=True),
                     partial(expect_stanza,
                             "/message/body[text()='Mode #zgeg [+nt] by {irc_host_one}']"),
                     partial(expect_stanza,
                             ("/presence[@to='{jid_one}/{resource_one}'][@from='#zgeg@{biboumi_host}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
                              "/presence/muc_user:x/muc_user:status[@code='110']")
                             ),
                     partial(expect_stanza, "/message[@from='#zgeg@{biboumi_host}'][@type='groupchat']/subject[not(text())]"),
                 ], conf='fixed_server'
                 ),
        Scenario("list_adhoc",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_one}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[3]")),
                 ]),
        Scenario("list_admin_adhoc",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[5]")),
                 ]),
        Scenario("list_adhoc_fixed_server",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_one}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[5]")),
                 ], conf='fixed_server'),
        Scenario("list_admin_adhoc_fixed_server",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[5]")),
                 ], conf='fixed_server'),
        Scenario("list_adhoc_irc",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_one}/{resource_one}' to='{irc_host_one}@{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[2]")),
                 ]),
        Scenario("list_adhoc_irc_fixed_server",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_one}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[4]")),
                 ], conf='fixed_server'),
        Scenario("list_admin_adhoc_irc_fixed_server",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/disco_items:query[@node='http://jabber.org/protocol/commands']",
                                             "/iq/disco_items:query/disco_items:item[6]")),
                 ], conf='fixed_server'),
        Scenario("list_muc_user_adhoc",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='get' id='idwhatever' from='{jid_admin}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><query xmlns='http://jabber.org/protocol/disco#items' node='http://jabber.org/protocol/commands' /></iq>"),
                     partial(expect_stanza, "/iq[@type='error']/error[@type='cancel']/stanza:feature-not-implemented"),
                 ]
                 ),
        Scenario("execute_hello_adhoc_command",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='set' id='hello-command1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='hello' action='execute' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='hello'][@sessionid][@status='executing']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure your name.']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Please provide your name.']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single']/dataform:required",
                                             "/iq/commands:command/commands:actions/commands:next",
                                             ),
                             after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='hello']", "sessionid"))
                             ),
                     partial(send_stanza, "<iq type='set' id='hello-command2' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='hello' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'><field var='name'><value>COUCOU</value></field></x></command></iq>"),
                     partial(expect_stanza, "/iq[@type='result']/commands:command[@node='hello'][@status='completed']/commands:note[@type='info'][text()='Hello COUCOU!']")
                 ]),
        Scenario("execute_incomplete_hello_adhoc_command",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='set' id='hello-command1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='hello' action='execute' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='hello'][@sessionid][@status='executing']",
                                             "/iq/commands:command/commands:actions/commands:next",
                                             ),
                             after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='hello']", "sessionid"))
                             ),
                     partial(send_stanza, "<iq type='set' id='hello-command2' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='hello' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'></x></command></iq>"),
                     partial(expect_stanza, "/iq[@type='error']")
                 ]),
        Scenario("execute_ping_adhoc_command",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='set' id='ping-command1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='ping' action='execute' /></iq>"),
                     partial(expect_stanza, "/iq[@type='result']/commands:command[@node='ping'][@status='completed']/commands:note[@type='info'][text()='Pong']")
                 ]),
        Scenario("execute_reload_adhoc_command",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='set' id='ping-command1' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='reload' action='execute' /></iq>"),
                     partial(expect_stanza, "/iq[@type='result']/commands:command[@node='reload'][@status='completed']/commands:note[@type='info'][text()='Configuration reloaded.']")
                 ]),
        Scenario("execute_forbidden_adhoc_command",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<iq type='set' id='command1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-user' action='execute' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='error'][@id='command1']/commands:command[@node='disconnect-user']",
                                             "/iq/commands:command/commands:error[@type='cancel']/stanza:forbidden")),
                 ]),
        Scenario("execute_disconnect_user_adhoc_command",
                 [
                     handshake_sequence(),
                     partial(send_stanza, "<presence from='{jid_admin}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_admin}/{resource_one}'),
                     partial(expect_stanza, "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza, "/presence"),
                     partial(expect_stanza, "/message"),
                     partial(send_stanza, "<iq type='set' id='command1' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-user' action='execute' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='disconnect-user'][@sessionid][@status='executing']",
                                             "/iq/commands:command/commands:actions/commands:next",
                                             ),
                             after = partial(save_value, "sessionid", partial(extract_attribute, "/iq/commands:command[@node='disconnect-user']", "sessionid"))
                             ),
                     partial(send_stanza, "<iq type='set' id='command2' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-user' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'><field var='jids'><value>{jid_admin}</value></field><field var='quit-message'><value>Disconnected by e2e</value></field></x></command></iq>"),
                     partial(expect_stanza, "/iq[@type='result']/commands:command[@node='disconnect-user'][@status='completed']/commands:note[@type='info'][text()='1 user has been disconnected.']"),
                     # Note: charybdis ignores our QUIT message, so we can't test it
                     partial(expect_stanza, "/presence[@type='unavailable'][@to='{jid_admin}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']"),
                 ]),
        Scenario("execute_admin_disconnect_from_server_adhoc_command",
                 [
                     handshake_sequence(),
                     # Admin connects to first server
                     partial(send_stanza, "<presence from='{jid_admin}/{resource_one}' to='#bar%{irc_server_one}/{nick_one}' />"),
                     connection_sequence("irc.localhost", '{jid_admin}/{resource_one}'),
                     partial(expect_stanza, "/message/body[text()='Mode #bar [+nt] by {irc_host_one}']"),
                     partial(expect_stanza, "/presence"),
                     partial(expect_stanza, "/message"),
                     # Non-admin connects to first server
                     partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
                     connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza, "/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
                     partial(expect_stanza, "/presence"),
                     partial(expect_stanza, "/message"),
                     # Non-admin connects to second server
                     partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#bon%{irc_server_two}/{nick_three}' />"),
                     connection_sequence("localhost", '{jid_one}/{resource_one}'),
                     partial(expect_stanza, "/message/body[text()='Mode #bon [+nt] by {irc_host_one}']"),
                     partial(expect_stanza, "/presence"),
                     partial(expect_stanza, "/message"),
                     # Execute as admin
                     partial(send_stanza, "<iq type='set' id='command1' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-from-irc-server' action='execute' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='disconnect-from-irc-server'][@sessionid][@status='executing']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='jid'][@type='list-single']/dataform:option[@label='{jid_one}']/dataform:value[text()='{jid_one}']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='jid'][@type='list-single']/dataform:option[@label='{jid_admin}']/dataform:value[text()='{jid_admin}']",
                                             "/iq/commands:command/commands:actions/commands:next",
                                             ),
                             after = partial(save_value, "sessionid", partial(extract_attribute, "/iq/commands:command[@node='disconnect-from-irc-server']", "sessionid"))
                             ),
                     partial(send_stanza, "<iq type='set' id='command2' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-from-irc-server' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'><field var='jid'><value>{jid_one}</value></field><field var='quit-message'><value>e2e test one</value></field></x></command></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='disconnect-from-irc-server'][@sessionid][@status='executing']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='quit-message'][@type='text-single']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='irc-servers'][@type='list-multi']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='irc-servers']/dataform:option[@label='localhost']/dataform:value[text()='localhost']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='irc-servers']/dataform:option[@label='irc.localhost']/dataform:value[text()='irc.localhost']",
                                             "/iq/commands:command/commands:actions/commands:next",
                                             ),
                             after = partial(save_value, "sessionid", partial(extract_attribute, "/iq/commands:command[@node='disconnect-from-irc-server']", "sessionid"))
                             ),
                     partial(send_stanza, "<iq type='set' id='command2' from='{jid_admin}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-from-irc-server' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'><field var='irc-servers'><value>localhost</value></field><field var='quit-message'><value>Disconnected by e2e</value></field></x></command></iq>"),
                     partial(expect_unordered, [
                         ("/presence[@type='unavailable'][@to='{jid_one}/{resource_one}'][@from='#bon%{irc_server_two}/{nick_three}']",),
                         ("/iq[@type='result']/commands:command[@node='disconnect-from-irc-server'][@status='completed']/commands:note[@type='info'][text()='{jid_one} was disconnected from 1 IRC server.']",),
                     ]),
                     # Execute as non-admin (this skips the first step)
                     partial(send_stanza, "<iq type='set' id='command1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-from-irc-server' action='execute' /></iq>"),
                     partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='disconnect-from-irc-server'][@sessionid][@status='executing']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='quit-message'][@type='text-single']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='irc-servers'][@type='list-multi']",
                                             "/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='irc-servers']/dataform:option[@label='irc.localhost']/dataform:value[text()='irc.localhost']",
                                             "/iq/commands:command/commands:actions/commands:next",
                                             ),
                             after = partial(save_value, "sessionid", partial(extract_attribute, "/iq/commands:command[@node='disconnect-from-irc-server']", "sessionid"))
                             ),
                     partial(send_stanza, "<iq type='set' id='command2' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='disconnect-from-irc-server' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'><field var='irc-servers'><value>irc.localhost</value></field><field var='quit-message'><value>Disconnected by e2e</value></field></x></command></iq>"),
                     partial(expect_unordered, [
                         ("/presence[@type='unavailable'][@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']",),
                         ("/iq[@type='result']/commands:command[@node='disconnect-from-irc-server'][@status='completed']/commands:note[@type='info'][text()='{jid_one}/{resource_one} was disconnected from 1 IRC server.']",),
                     ]),
                 ]),
Scenario("multisessionnick",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[not(text())]"),
# The other resource joins the same room, with the same nick
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
# We receive our own join
partial(expect_unordered,
[("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
("/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_two}']/subject[not(text())]",)]
),
# A different user joins the same room
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
partial(expect_unordered, [
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']",),
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_two}']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']",
"/presence/muc_user:x/muc_user:status[@code='110']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']",),
]
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# That second user sends a private message to the first one
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' type='chat'><body>RELLO</body></message>"),
# Message is received with a server-wide JID, by the two resources behind nick_one
partial(expect_stanza, ("/message[@from='{lower_nick_two}%{irc_server_one}'][@to='{jid_one}/{resource_one}'][@type='chat']/body[text()='RELLO']",
"/message/hints:no-copy",
"/message/carbon:private",
"!/message/muc_user:x")),
partial(expect_stanza, "/message[@from='{lower_nick_two}%{irc_server_one}'][@to='{jid_one}/{resource_two}'][@type='chat']/body[text()='RELLO']"),
# First occupant (with the two resources) tries to take an already-used nick; this fails
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
partial(expect_unordered, [
("/message[@to='{jid_one}/{resource_one}'][@type='chat']/body[text()='irc.localhost: Bobby: Nickname is already in use.']",),
("/message[@to='{jid_one}/{resource_two}'][@type='chat']/body[text()='irc.localhost: Bobby: Nickname is already in use.']",),
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}'][@type='error']",),
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_two}'][@type='error']",),
]),
# First occupant (with the two resources) changes her/his nick, this time to an available one
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_three}' />"),
partial(expect_unordered, [
("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_two}/{resource_one}'][@type='unavailable']/muc_user:x/muc_user:item[@nick='Bernard']",
"/presence/muc_user:x/muc_user:status[@code='303']"),
("/presence[@from='#foo%{irc_server_one}/{nick_three}'][@to='{jid_two}/{resource_one}']",),
("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='unavailable']/muc_user:x/muc_user:item[@nick='Bernard']",
"/presence/muc_user:x/muc_user:status[@code='303']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
("/presence[@from='#foo%{irc_server_one}/{nick_three}'][@to='{jid_one}/{resource_one}']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_two}'][@type='unavailable']/muc_user:x/muc_user:item[@nick='Bernard']",
"/presence/muc_user:x/muc_user:status[@code='303']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
("/presence[@from='#foo%{irc_server_one}/{nick_three}'][@to='{jid_one}/{resource_two}']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
]),
# One resource leaves the channel (the other one stays).
partial(send_stanza, "<presence type='unavailable' from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
# The leave is forwarded only to that resource
partial(expect_stanza,
("/presence[@type='unavailable']/muc_user:x/muc_user:status[@code='110']",
"/presence/status[text()='Biboumi note: 1 resources are still in this channel.']",
)
),
# The second user sends two new private messages to the first user
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_three}' type='chat'><body>first</body></message>"),
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_three}' type='chat'><body>second</body></message>"),
# The first user receives the two messages, on the connected resource, once each
partial(expect_unordered, [
("/message[@from='{lower_nick_two}%{irc_server_one}'][@to='{jid_one}/{resource_one}'][@type='chat']/body[text()='first']",),
("/message[@from='{lower_nick_two}%{irc_server_one}'][@to='{jid_one}/{resource_one}'][@type='chat']/body[text()='second']",),
]),
]),
Scenario("persistent_channel",
[
# Join the channel with user 1
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[not(text())]"),
# Make it persistent for user 1
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='conf1' to='#foo%{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/muc#owner'/></iq>"),
partial(expect_stanza, "/iq[@type='result']/muc_owner:query/dataform:x/dataform:field[@var='persistent'][@type='boolean']/dataform:value[text()='false']"),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='conf2' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#owner'><x type='submit' xmlns='jabber:x:data'><field var='persistent' xmlns='jabber:x:data'><value>true</value></field></x></query></iq>"),
partial(expect_stanza, "/iq[@type='result']"),
# Check that the value is now effectively true
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='conf1' to='#foo%{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/muc#owner'/></iq>"),
partial(expect_stanza, "/iq[@type='result']/muc_owner:query/dataform:x/dataform:field[@var='persistent'][@type='boolean']/dataform:value[text()='true']"),
# A second user joins the same channel
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
partial(expect_unordered, [
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']",
"/presence/muc_user:x/muc_user:status[@code='110']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']",),
]
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# First user leaves the room (but biboumi will stay in the channel)
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' type='unavailable' />"),
# Only user 1 receives the unavailable presence
partial(expect_stanza,
("/presence[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='unavailable']/muc_user:x/muc_user:status[@code='110']",
"/presence/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']")),
# Second user sends a channel message
partial(send_stanza, "<message type='groupchat' from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}'><body>coucou</body></message>"),
# Message should only be received by user 2, since user 1 has no resource in the room
partial(expect_stanza, "/message[@type='groupchat'][@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']"),
# Second user leaves the channel
partial(send_stanza, "<presence type='unavailable' from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
partial(expect_stanza, "/presence[@type='unavailable'][@from='#foo%{irc_server_one}/{nick_two}']"),
]),
Scenario("channel_join_with_different_nick",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[not(text())]"),
# The same resource joins a different channel with a different nick
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#bar%{irc_server_one}/{nick_two}' />"),
# We must receive a join presence in response; the requested nick (nick_two) must be ignored
partial(expect_stanza,
"/message/body[text()='Mode #bar [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#bar%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#bar%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[not(text())]"),
]),
Scenario("notices",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='{irc_server_one}' type='chat'><body>NOTICE {nick_one} :[#foo] Hello in a notice.</body></message>"),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='[notice] [#foo] Hello in a notice.']"),
]),
Scenario("multiline_message",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send a multi-line channel message
partial(send_stanza, "<message id='the-message-id' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>un\ndeux\ntrois</body></message>"),
# Receive multiple messages, in order
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@id='the-message-id'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='un']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='deux']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='trois']"),
# Send a simple message, with no id
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>hello</body></message>"),
# Expect a non-empty id as a result (should be a uuid)
partial(expect_stanza,
"!/message[@id='']/body[text()='hello']"),
# Second user joins
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
# Our presence, sent to the other user
partial(expect_unordered, [
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
("/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",)
]),
# Send a multi-line channel message
partial(send_stanza, "<message id='the-message-id' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>un\ndeux\ntrois</body></message>"),
# Receive multiple messages, for each user
partial(expect_unordered, [
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@id='the-message-id'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='un']",),
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='deux']",),
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='trois']",),
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_two}/{resource_one}'][@type='groupchat']/body[text()='un']",),
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_two}/{resource_one}'][@type='groupchat']/body[text()='deux']",),
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@id][@to='{jid_two}/{resource_one}'][@type='groupchat']/body[text()='trois']",),
])
]),
Scenario("channel_messages",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Second user joins
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
# Our presence, sent to the other user
partial(expect_unordered, [
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='none'][@jid='~bobby@localhost'][@role='participant']",
"/presence/muc_user:x/muc_user:status[@code='110']"),
("/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",)
]),
# Send a channel message
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou</body></message>"),
# Receive the message, forwarded to the two users
partial(expect_unordered, [
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']",
"/message/stable_id:stanza-id[@by='#foo%{irc_server_one}'][@id]"),
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_two}/{resource_one}'][@type='groupchat']/body[text()='coucou']",
"/message/stable_id:stanza-id[@by='#foo%{irc_server_one}'][@id]")
]),
# Send a private message, to an in-room JID
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' type='chat'><body>coucou in private</body></message>"),
# Message is received with a server-wide JID
partial(expect_stanza, "/message[@from='{lower_nick_one}%{irc_server_one}'][@to='{jid_two}/{resource_one}'][@type='chat']/body[text()='coucou in private']"),
# Respond to the message, to the server-wide JID
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='{lower_nick_one}%{irc_server_one}' type='chat'><body>yes</body></message>"),
# The response is received from the in-room JID
partial(expect_stanza, ("/message[@from='#foo%{irc_server_one}/{nick_two}'][@to='{jid_one}/{resource_one}'][@type='chat']/body[text()='yes']",
"/message/muc_user:x")),
## Do the exact same thing, from a different channel,
# to check that the response comes from the right JID
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#dummy%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza,
"/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message[@from='#dummy%{irc_server_one}'][@type='groupchat']/subject"),
# Send a private message, to an in-room JID
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#dummy%{irc_server_one}/{nick_two}' type='chat'><body>re in private</body></message>"),
# Message is received with a server-wide JID
partial(expect_stanza, "/message[@from='{lower_nick_one}%{irc_server_one}'][@to='{jid_two}/{resource_one}'][@type='chat']/body[text()='re in private']"),
# Respond to the message, to the server-wide JID
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='{lower_nick_one}%{irc_server_one}' type='chat'><body>re</body></message>"),
# The response is received from the in-room JID
partial(expect_stanza, "/message[@from='#dummy%{irc_server_one}/{nick_two}'][@to='{jid_one}/{resource_one}'][@type='chat']/body[text()='re']"),
# Now we leave the room, to check if the subsequent private messages are still received properly
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#dummy%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(expect_stanza,
"/presence[@type='unavailable']/muc_user:x/muc_user:status[@code='110']"),
# The private messages from this nick should now come (again) from the server-wide JID
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='{lower_nick_one}%{irc_server_one}' type='chat'><body>hihihoho</body></message>"),
partial(expect_stanza,
"/message[@from='{lower_nick_two}%{irc_server_one}'][@to='{jid_one}/{resource_one}']"),
]
),
Scenario("encoded_channel_join",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#biboumi\\40louiz.org\\3a80%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #biboumi@louiz.org:80 [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#biboumi\\40louiz.org\\3a80%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#biboumi\\40louiz.org\\3a80%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
]),
Scenario("self_ping_with_error",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send a ping to ourself
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_ping' to='#foo%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# We receive our own ping request
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to='{jid_one}/{resource_one}'][@id='gnip_tsrif']"),
# Respond to the request with an error
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' id='gnip_tsrif' to='{lower_nick_one}%{irc_server_one}' type='error'><error type='cancel'><feature-not-implemented xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error></iq>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='first_ping']"),
# Send a ping to ourself
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_ping' to='#foo%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# We receive our own ping request
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to='{jid_one}/{resource_one}'][@id='gnip_tsrif']"),
# Respond to the request with an error
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' id='gnip_tsrif' to='{lower_nick_one}%{irc_server_one}' type='error'><error type='cancel'><service-unavailable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error></iq>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='first_ping']"),
]),
Scenario("self_ping_not_in_muc",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send a ping to ourself, in a MUC we are not in
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_ping' to='#nil%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# Immediately receive an error
partial(expect_stanza,
"/iq[@from='#nil%{irc_server_one}/{nick_one}'][@type='error'][@to='{jid_one}/{resource_one}'][@id='first_ping']/error/stanza:not-allowed"),
# Send a ping to ourself, in a MUC we are in, but from a resource that is not in it
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_two}' id='first_ping' to='#foo%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# Immediately receive an error
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='error'][@to='{jid_one}/{resource_two}'][@id='first_ping']/error/stanza:not-allowed"),
]),
Scenario("self_ping_on_real_channel",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send a ping to ourself
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_ping' to='#foo%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# We receive our own ping request
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to='{jid_one}/{resource_one}'][@id='gnip_tsrif']"),
# Respond to the request
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='gnip_tsrif' from='{jid_one}/{resource_one}'/>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='first_ping']"),
# Now join the same room, from the same bare JID, behind the same nick
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_two}']/subject[not(text())]"),
# And re-send a self ping
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='second_ping' to='#foo%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# We receive our own ping request. Note that we don't know the 'to' value: it could be either of our two resources.
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to][@id='gnip_dnoces']",
after = partial(save_value, "to", partial(extract_attribute, "/iq", "to"))),
# Respond to the request, using the extracted 'to' value as our 'from'
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='gnip_dnoces' from='{to}'/>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='second_ping']"),
## And re-do exactly the same thing, just change the resource initiating the self ping
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_two}' id='third_ping' to='#foo%{irc_server_one}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to][@id='gnip_driht']",
after = partial(save_value, "to", partial(extract_attribute, "/iq", "to"))),
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='gnip_driht' from='{to}'/>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_two}'][@id='third_ping']"),
]),
Scenario("self_ping_fixed_server", [
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}', fixed_irc_server=True),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo@{biboumi_host}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}'][@type='groupchat']/subject[not(text())]"),
# Send a ping to ourself
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_ping' to='#foo@{biboumi_host}/{nick_one}'><ping xmlns='urn:xmpp:ping' /></iq>"),
# We receive our own ping request
partial(expect_stanza,
"/iq[@from='{lower_nick_one}@{biboumi_host}'][@type='get'][@to='{jid_one}/{resource_one}'][@id='gnip_tsrif']"),
# Respond to the request
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}@{biboumi_host}' id='gnip_tsrif' from='{jid_one}/{resource_one}'/>"),
partial(expect_stanza,
"/iq[@from='#foo@{biboumi_host}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='first_ping']"),
], conf="fixed_server"),
Scenario("simple_kick",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message[@type='groupchat']/subject"),
# Second user joins
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
partial(expect_unordered, [
("/presence/muc_user:x/muc_user:item[@affiliation='none'][@role='participant']",),
("/presence/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence/muc_user:x/muc_user:status[@code='110']",),
("/message/subject",),
]),
# demonstrate bug https://lab.louiz.org/louiz/biboumi/issues/3291
# First user joins another channel
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#bar%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message[@type='groupchat']/subject"),
# Second user joins
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#bar%{irc_server_one}/{nick_two}' />"),
partial(expect_unordered, [
("/presence/muc_user:x/muc_user:item[@affiliation='none'][@role='participant']",),
("/presence/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence/muc_user:x/muc_user:status[@code='110']",),
("/message/subject",),
]),
# Moderator kicks participant
partial(send_stanza,
"<iq id='kick1' to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item nick='{nick_two}' role='none'><reason>reported</reason></item></query></iq>"),
partial(expect_unordered, [
("/presence[@type='unavailable'][@to='{jid_two}/{resource_one}']/muc_user:x/muc_user:item[@role='none']/muc_user:actor[@nick='{nick_one}']",
"/presence/muc_user:x/muc_user:item/muc_user:reason[text()='reported']",
"/presence/muc_user:x/muc_user:status[@code='307']",
"/presence/muc_user:x/muc_user:status[@code='110']"
),
("/presence[@type='unavailable'][@to='{jid_one}/{resource_one}']/muc_user:x/muc_user:item[@role='none']/muc_user:actor[@nick='{nick_one}']",
"/presence/muc_user:x/muc_user:item/muc_user:reason[text()='reported']",
"/presence/muc_user:x/muc_user:status[@code='307']",
),
("/iq[@id='kick1'][@type='result']",),
]),
# Bug 3291, continued. We must not receive any presence from #foo here
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='{irc_server_one}' type='chat'><body>QUIT bye bye</body></message>"),
partial(expect_unordered,
[("/presence[@from='#bar%{irc_server_one}/{nick_two}'][@to='{jid_one}/{resource_one}']",),
("/presence[@from='#bar%{irc_server_one}/{nick_two}'][@to='{jid_two}/{resource_one}']",),
("/message",),
("/message",)])
]),
Scenario("mode_change",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message[@type='groupchat']/subject"),
# Second user joins
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
partial(expect_unordered, [
("/presence/muc_user:x/muc_user:item[@affiliation='none'][@role='participant']",),
("/presence/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence/muc_user:x/muc_user:status[@code='110']",),
("/message/subject",),
]),
# Change a user mode with a message starting with /mode
partial(send_stanza,
"<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>/mode +v {nick_two}</body></message>"),
partial(expect_unordered, [
("/message[@to='{jid_one}/{resource_one}']/body[text()='Mode #foo [+v {nick_two}] by {nick_one}']",),
("/message[@to='{jid_two}/{resource_one}']/body[text()='Mode #foo [+v {nick_two}] by {nick_one}']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='member'][@role='participant']",),
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='member'][@role='participant']",)
]),
# using an iq
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' id='id1' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item affiliation='admin' nick='{nick_two}'/></query></iq>"),
partial(expect_unordered, [
("/message[@to='{jid_one}/{resource_one}']/body[text()='Mode #foo [+o {nick_two}] by {nick_one}']",),
("/message[@to='{jid_two}/{resource_one}']/body[text()='Mode #foo [+o {nick_two}] by {nick_one}']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/iq[@id='id1'][@type='result'][@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}']",),
]),
# remove the mode
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' id='id1' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item affiliation='member' nick='{nick_two}' role='participant'/></query></iq>"),
partial(expect_unordered, [
("/message[@to='{jid_one}/{resource_one}']/body[text()='Mode #foo [+v-o {nick_two} {nick_two}] by {nick_one}']",),
("/message[@to='{jid_two}/{resource_one}']/body[text()='Mode #foo [+v-o {nick_two} {nick_two}] by {nick_one}']",),
("/presence[@to='{jid_two}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='member'][@role='participant']",),
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='member'][@role='participant']",),
("/iq[@id='id1'][@type='result'][@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}']",),
]),
# using an iq, and a non-existent nick
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' id='id1' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item affiliation='admin' nick='blectre'/></query></iq>"),
partial(expect_stanza, "/iq[@type='error']"),
# using an iq, without the rights to do it
partial(send_stanza,
"<iq from='{jid_two}/{resource_one}' id='id1' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item affiliation='admin' nick='{nick_one}'/></query></iq>"),
partial(expect_unordered, [
("/iq[@type='error']",),
("/message[@type='chat'][@to='{jid_two}/{resource_one}']",),
]),
# using an iq, with an unknown mode
partial(send_stanza,
"<iq from='{jid_two}/{resource_one}' id='id1' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item affiliation='owner' nick='{nick_one}'/></query></iq>"),
partial(expect_unordered, [
("/iq[@type='error']",),
("/message[@type='chat'][@to='{jid_two}/{resource_one}']",),
]),
]),
Scenario("multisession_kick",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message[@type='groupchat']/subject"),
# Second user joins, from two resources
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#foo%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
partial(expect_unordered, [
("/presence/muc_user:x/muc_user:item[@affiliation='none'][@role='participant']",),
("/presence/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",),
("/presence/muc_user:x/muc_user:status[@code='110']",),
("/message/subject",),
]),
partial(send_stanza,
"<presence from='{jid_two}/{resource_two}' to='#foo%{irc_server_one}/{nick_two}' />"),
partial(expect_stanza,
"/presence[@to='{jid_two}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_two}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_two}']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_two}/{resource_two}']/subject[not(text())]"),
# Moderator kicks participant
partial(send_stanza,
"<iq id='kick1' to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#admin'><item nick='{nick_two}' role='none'><reason>reported</reason></item></query></iq>"),
partial(expect_unordered, [
("/presence[@type='unavailable'][@to='{jid_two}/{resource_one}']/muc_user:x/muc_user:item[@role='none']/muc_user:actor[@nick='{nick_one}']",
"/presence/muc_user:x/muc_user:item/muc_user:reason[text()='reported']",
"/presence/muc_user:x/muc_user:status[@code='307']",
"/presence/muc_user:x/muc_user:status[@code='110']"
),
("/presence[@type='unavailable'][@to='{jid_two}/{resource_two}']/muc_user:x/muc_user:item[@role='none']/muc_user:actor[@nick='{nick_one}']",
"/presence/muc_user:x/muc_user:item/muc_user:reason[text()='reported']",
"/presence/muc_user:x/muc_user:status[@code='307']",
"/presence/muc_user:x/muc_user:status[@code='110']"
),
("/presence[@type='unavailable']/muc_user:x/muc_user:item[@role='none']/muc_user:actor[@nick='{nick_one}']",
"/presence/muc_user:x/muc_user:item/muc_user:reason[text()='reported']",
"/presence/muc_user:x/muc_user:status[@code='307']",
),
("/iq[@id='kick1'][@type='result']",),
]),
]),
Scenario("self_version",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send a version request to ourself
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_version' to='#foo%{irc_server_one}/{nick_one}'><query xmlns='jabber:iq:version' /></iq>"),
# We receive our own request.
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to='{jid_one}/{resource_one}']",
after = partial(save_value, "id", partial(extract_attribute, "/iq", 'id'))),
# Respond to the request
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='{id}' from='{jid_one}/{resource_one}'><query xmlns='jabber:iq:version'><name>e2e test</name><version>1.0</version><os>Fedora</os></query></iq>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='first_version']/version:query/version:name[text()='e2e test (through the biboumi gateway) 1.0 Fedora']"),
# Now join the same room, from the same bare JID, with the same nick
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_two}']/subject[not(text())]"),
# And re-send a self version request
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_two}' id='second_version' to='#foo%{irc_server_one}/{nick_one}'><query xmlns='jabber:iq:version' /></iq>"),
# We receive our own request. Note that we don't know the to value, it could be one of our two resources.
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to]",
after = (partial(save_value, "to", partial(extract_attribute, "/iq", "to")),
partial(save_value, "id", partial(extract_attribute, "/iq", "id")))),
# Respond to the request, using the extracted 'to' value as our 'from'
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='{id}' from='{to}'><query xmlns='jabber:iq:version'><name>e2e test</name><version>1.0</version><os>Fedora</os></query></iq>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_two}'][@id='second_version']"),
# And do exactly the same thing, but initiated by the other resource
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='second_version' to='#foo%{irc_server_one}/{nick_one}'><query xmlns='jabber:iq:version' /></iq>"),
# We receive our own request. Note that we don't know the to value, it could be one of our two resources.
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to]",
after = (partial(save_value, "to", partial(extract_attribute, "/iq", "to")),
partial(save_value, "id", partial(extract_attribute, "/iq", "id")))),
# Respond to the request, using the extracted 'to' value as our 'from'
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='{id}' from='{to}'><query xmlns='jabber:iq:version'><name>e2e test</name><version>1.0</version><os>Fedora</os></query></iq>"),
partial(expect_stanza,
"/iq[@from='#foo%{irc_server_one}/{nick_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='second_version']"),
]),
Scenario("version_on_global_nick",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza,
"<iq type='get' from='{jid_one}/{resource_one}' id='first_version' to='{lower_nick_one}%{irc_server_one}'><query xmlns='jabber:iq:version' /></iq>"),
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='get'][@to='{jid_one}/{resource_one}']",
after = partial(save_value, "id", partial(extract_attribute, "/iq", 'id'))),
partial(send_stanza,
"<iq type='result' to='{lower_nick_one}%{irc_server_one}' id='{id}' from='{jid_one}/{resource_one}'><query xmlns='jabber:iq:version'><name>e2e test</name><version>1.0</version><os>Fedora</os></query></iq>"),
partial(expect_stanza,
"/iq[@from='{lower_nick_one}%{irc_server_one}'][@type='result'][@to='{jid_one}/{resource_one}'][@id='first_version']/version:query/version:name[text()='e2e test (through the biboumi gateway) 1.0 Fedora']"),
]),
Scenario("self_invite",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza,
"<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><x xmlns='http://jabber.org/protocol/muc#user'><invite to='{nick_one}'/></x></message>"),
partial(expect_stanza,
"/message/body[text()='{nick_one} is already on channel #foo']")
]),
Scenario("client_error",
[
handshake_sequence(),
# First resource
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Second resource, same channel
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_two}']/subject[not(text())]"),
# Now the first resource has an error
partial(send_stanza,
"<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' type='error'><error type='cancel'><recipient-unavailable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error></message>"),
# Receive a leave, sent only to the leaving resource
partial(expect_stanza,
("/presence[@type='unavailable'][@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}']/muc_user:x/muc_user:status[@code='110']",
"/presence/status[text()='Biboumi note: 1 resources are still in this channel.']")
),
]),
Scenario("simple_mam",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send two channel messages
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou</body></message>"),
partial(expect_stanza,
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']",
"/message/stable_id:stanza-id[@by='#foo%{irc_server_one}'][@id]",)
),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 2</body></message>"),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou 2']"),
# Retrieve the complete archive
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id1'><query xmlns='urn:xmpp:mam:2' queryid='qid1' /></iq>"),
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='coucou']")
),
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='coucou 2']")
),
partial(expect_stanza,
("/iq[@type='result'][@id='id1'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin/rsm:set/rsm:last",
"/iq/mam:fin/rsm:set/rsm:first",
"/iq/mam:fin[@complete='true']")),
# Retrieve an empty archive by specifying an early “end” date
partial(send_stanza, """<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id2'>
<query xmlns='urn:xmpp:mam:2' queryid='qid2'>
<x xmlns='jabber:x:data' type='submit'>
<field var='FORM_TYPE' type='hidden'> <value>urn:xmpp:mam:2</value></field>
<field var='end'><value>2000-06-07T00:00:00Z</value></field>
</x>
</query></iq>"""),
partial(expect_stanza,
("/iq[@type='result'][@id='id2'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin[@complete='true']/rsm:set",)),
# Retrieve an empty archive by specifying a late “start” date
# (note that this test will break in ~1000 years)
partial(send_stanza, """<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id3'>
<query xmlns='urn:xmpp:mam:2' queryid='qid3'>
<x xmlns='jabber:x:data' type='submit'>
<field var='FORM_TYPE' type='hidden'> <value>urn:xmpp:mam:2</value></field>
<field var='start'><value>2222-06-07T00:00:00Z</value></field>
</x>
</query></iq>"""),
partial(expect_stanza,
("/iq[@type='result'][@id='id3'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin[@complete='true']/rsm:set")),
# Retrieve a limited archive
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id4'><query xmlns='urn:xmpp:mam:2' queryid='qid4'><set xmlns='http://jabber.org/protocol/rsm'><max>1</max></set></query></iq>"),
partial(expect_stanza,
("/message/mam:result[@queryid='qid4']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid4']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='coucou']")
),
partial(expect_stanza,
("/iq[@type='result'][@id='id4'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin[@complete='true']/rsm:set")),
]),
Scenario("mam_with_timestamps",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send two channel messages
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou</body></message>"),
partial(expect_stanza,
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']",
"/message/stable_id:stanza-id[@by='#foo%{irc_server_one}'][@id]",)
),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 2</body></message>"),
# Record the current time (plus a 1-second delta)
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou 2']",
after = partial(save_current_timestamp_plus_delta, "first_timestamp", datetime.timedelta(seconds=1))),
# Wait two seconds before sending two new messages
partial(sleep_for, 2),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 3</body></message>"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 4</body></message>"),
partial(expect_stanza, "/message[@type='groupchat']/body[text()='coucou 3']"),
partial(expect_stanza, "/message[@type='groupchat']/body[text()='coucou 4']",
after = partial(save_current_timestamp_plus_delta, "second_timestamp", datetime.timedelta(seconds=1))),
# Retrieve the archive, after our saved datetime
partial(send_stanza, """<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id8'>
<query xmlns='urn:xmpp:mam:2' queryid='qid16'>
<x type='submit' xmlns='jabber:x:data'>
<field var='FORM_TYPE' xmlns='jabber:x:data'><value xmlns='jabber:x:data'>urn:xmpp:mam:2</value></field>
<field var='start' xmlns='jabber:x:data'><value xmlns='jabber:x:data'>{first_timestamp}</value></field>
<field var='end' xmlns='jabber:x:data'><value xmlns='jabber:x:data'>{second_timestamp}</value></field>
</x>
</query>
</iq>"""),
partial(expect_stanza,
("/message/mam:result[@queryid='qid16']/forward:forwarded/delay:delay",
"/message/mam:result/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='coucou 3']")
),
partial(expect_stanza,
("/message/mam:result[@queryid='qid16']/forward:forwarded/delay:delay",
"/message/mam:result/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='coucou 4']")
),
partial(expect_stanza,
("/iq[@type='result'][@id='id8'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin[@complete='true']/rsm:set")),
]),
Scenario("join_history_limits",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send two channel messages
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou</body></message>"),
partial(expect_stanza,
("/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']",
"/message/stable_id:stanza-id[@by='#foo%{irc_server_one}'][@id]",)
),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 2</body></message>"),
# Record the current time (plus a 1-second delta)
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou 2']",
after = partial(save_current_timestamp_plus_delta, "first_timestamp", datetime.timedelta(seconds=1))),
# Wait two seconds before sending two new messages
partial(sleep_for, 2),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 3</body></message>"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou 4</body></message>"),
partial(expect_stanza, "/message[@type='groupchat']/body[text()='coucou 3']"),
partial(expect_stanza, "/message[@type='groupchat']/body[text()='coucou 4']",
after = partial(save_current_timestamp_plus_delta, "second_timestamp", datetime.timedelta(seconds=1))),
# join some other channel, to stay connected to the server even after leaving #foo
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#DUMMY%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message/subject"),
# Leave #foo
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
# Rejoin #foo, with a maxchars='0' history limit (no history)
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><x xmlns='http://jabber.org/protocol/muc'><history maxchars='0'/></x></presence>"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza, "/message/subject"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
# Rejoin #foo, with a maxstanzas='3' history limit
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><x xmlns='http://jabber.org/protocol/muc'><history maxstanzas='3'/></x></presence>"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 2']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 3']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 4']"),
partial(expect_stanza, "/message/subject"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
# Rejoin #foo, with a history limit based on the saved timestamp
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><x xmlns='http://jabber.org/protocol/muc'><history since='{first_timestamp}'/></x></presence>"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 3']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 4']"),
partial(expect_stanza, "/message/subject"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
# Rejoin #foo, with a seconds='1' history limit
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><x xmlns='http://jabber.org/protocol/muc'><history seconds='1'/></x></presence>"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 3']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 4']"),
partial(expect_stanza, "/message/subject"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
# Rejoin #foo, with a seconds='5' history limit
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}'><x xmlns='http://jabber.org/protocol/muc'><history seconds='5'/></x></presence>"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence/muc_user:x/muc_user:status[@code='110']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 2']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 3']"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/body[text()='coucou 4']"),
partial(expect_stanza, "/message/subject"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
]),
Scenario("mam_on_fixed_server",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}', fixed_irc_server=True),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo@{biboumi_host}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}' type='groupchat'><body>coucou</body></message>"),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}' type='groupchat'><body>coucou 2</body></message>"),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou 2']"),
# Retrieve the complete archive
partial(send_stanza, "<iq to='#foo@{biboumi_host}' from='{jid_one}/{resource_one}' type='set' id='id1'><query xmlns='urn:xmpp:mam:2' queryid='qid1' /></iq>"),
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo@{biboumi_host}/{nick_one}'][@type='groupchat']/client:body[text()='coucou']")
),
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo@{biboumi_host}/{nick_one}'][@type='groupchat']/client:body[text()='coucou 2']")
),
], conf="fixed_server"),
Scenario("default_mam_limit",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]",
after = partial(save_value, "counter", lambda x: 0)),
] + [
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>{counter}</body></message>"),
partial(expect_stanza,
"/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='{counter}']",
after = partial(save_value, "counter", lambda stanza: str(1 + int(extract_text("/message/body", stanza))))
),
] * 150 + [
# Retrieve the archive, without any restriction
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id1'><query xmlns='urn:xmpp:mam:2' queryid='qid1' /></iq>"),
# Since we should only receive the last 100 messages from the archive,
# it should start with message "1"
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='1']")
),
] + [
# followed by 98 more messages
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body")
),
] * 98 + [
# and finally the message "100"
partial(expect_stanza,
("/message/mam:result[@queryid='qid1']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid1']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='100']"),
after = partial(save_value, "last_uuid", partial(extract_attribute, "/message/mam:result", "id"))
),
# And it should not be marked as complete
partial(expect_stanza,
("/iq[@type='result'][@id='id1'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin/rsm:set/rsm:last[text()='{last_uuid}']",
"!/iq//mam:fin[@complete='true']",
"/iq//mam:fin")),
# Retrieve the next page, using the RSM “after” element
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id2'><query xmlns='urn:xmpp:mam:2' queryid='qid2' ><set xmlns='http://jabber.org/protocol/rsm'><after>{last_uuid}</after></set></query></iq>"),
partial(expect_stanza,
("/message/mam:result[@queryid='qid2']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid2']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='101']")
),
] + 47 * [
partial(expect_stanza,
("/message/mam:result[@queryid='qid2']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid2']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body")
),
] + [
partial(expect_stanza,
("/message/mam:result[@queryid='qid2']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid2']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='149']"),
after = partial(save_value, "last_uuid", partial(extract_attribute, "/message/mam:result", "id"))
),
partial(expect_stanza,
("/iq[@type='result'][@id='id2'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin/rsm:set/rsm:last[text()='{last_uuid}']",
"/iq//mam:fin[@complete='true']",
"/iq//mam:fin")),
# Send a request with a non-existing ID set as the “after” value.
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id3'><query xmlns='urn:xmpp:mam:2' queryid='qid3' ><set xmlns='http://jabber.org/protocol/rsm'><after>DUMMY_ID</after></set></query></iq>"),
partial(expect_stanza, "/iq[@id='id3'][@type='error']/error[@type='cancel']/stanza:item-not-found"),
# Request the last page just BEFORE the last message in the archive
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id3'><query xmlns='urn:xmpp:mam:2' queryid='qid3' ><set xmlns='http://jabber.org/protocol/rsm'><before></before></set></query></iq>"),
partial(expect_stanza,
("/message/mam:result[@queryid='qid3']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid3']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='50']")
),
] + 98 * [
partial(expect_stanza,
("/message/mam:result[@queryid='qid3']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid3']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body")
),
] + [
partial(expect_stanza,
("/message/mam:result[@queryid='qid3']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid3']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='149']"),
after = partial(save_value, "last_uuid", partial(extract_attribute, "/message/mam:result", "id"))
),
partial(expect_stanza,
("/iq[@type='result'][@id='id3'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin/rsm:set/rsm:last[text()='{last_uuid}']",
"!/iq//mam:fin[@complete='true']",
"/iq//mam:fin")),
# Do the same thing, but with a limit value.
partial(send_stanza, "<iq to='#foo%{irc_server_one}' from='{jid_one}/{resource_one}' type='set' id='id4'><query xmlns='urn:xmpp:mam:2' queryid='qid4' ><set xmlns='http://jabber.org/protocol/rsm'><before>{last_uuid}</before><max>2</max></set></query></iq>"),
partial(expect_stanza,
("/message/mam:result[@queryid='qid4']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid4']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='147']")
),
partial(expect_stanza,
("/message/mam:result[@queryid='qid4']/forward:forwarded/delay:delay",
"/message/mam:result[@queryid='qid4']/forward:forwarded/client:message[@from='#foo%{irc_server_one}/{nick_one}'][@type='groupchat']/client:body[text()='148']"),
after = partial(save_value, "last_uuid", partial(extract_attribute, "/message/mam:result", "id"))
),
partial(expect_stanza,
("/iq[@type='result'][@id='id4'][@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}']",
"/iq/mam:fin/rsm:set/rsm:last[text()='{last_uuid}']",
"/iq/mam:fin[@complete='true']",
"/iq/mam:fin")),
]),
Scenario("channel_history_on_fixed_server",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}', fixed_irc_server=True),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo@{biboumi_host}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}'][@type='groupchat']/subject[not(text())]"),
# Send one channel message
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}' type='groupchat'><body>coucou</body></message>"),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']"),
# Second resource (same account) joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo@{biboumi_host}/{nick_one}' />"),
# No connection_sequence is expected here: the second resource belongs to
# the same account, which is already connected to the IRC server.
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo@{biboumi_host}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
# Receive the history message
partial(expect_stanza, ("/message[@from='#foo@{biboumi_host}/{nick_one}']/body[text()='coucou']",
"/message/delay:delay[@from='#foo@{biboumi_host}']")),
partial(expect_stanza, "/message[@from='#foo@{biboumi_host}'][@type='groupchat']/subject[not(text())]"),
], conf="fixed_server"),
Scenario("channel_history",
[
handshake_sequence(),
# First user joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Send one channel message
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' type='groupchat'><body>coucou</body></message>"),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}/{nick_one}'][@to='{jid_one}/{resource_one}'][@type='groupchat']/body[text()='coucou']"),
# Second resource (same account) joins
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@jid='~nick@localhost'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
# Receive the history message
partial(expect_stanza, ("/message[@from='#foo%{irc_server_one}/{nick_one}']/body[text()='coucou']",
"/message/delay:delay[@from='#foo%{irc_server_one}']")),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
]),
Scenario("simple_channel_list",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#bar%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
"/message/body[text()='Mode #bar [+nt] by {irc_host_one}']"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message[@from='#bar%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'/></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#foo%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#bar%{irc_server_one}']"
))
]),
Scenario("channel_list_escaping",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#true\\2ffalse%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #true/false [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#true\\2ffalse%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#true\\2ffalse%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
]),
Scenario("channel_list_with_rsm",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#bar%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
"/message/body[text()='Mode #bar [+nt] by {irc_host_one}']"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message[@from='#bar%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#coucou%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
"/message/body[text()='Mode #coucou [+nt] by {irc_host_one}']"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message[@from='#coucou%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Ask for 0 items
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><max>0</max></set></query></iq>"),
# Get 0 items
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
)),
# Ask for 2 (of 3) items. We don’t have the count,
# because biboumi doesn’t have the complete list when
# it sends us the 2 items
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><max>2</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#bar%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#coucou%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#bar%{irc_server_one}'][@index='0']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#coucou%{irc_server_one}']"
)),
# Ask for 12 (of 3) items. We get the whole list, and thus we have the count included.
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><max>12</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#bar%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#coucou%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#foo%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#bar%{irc_server_one}'][@index='0']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#foo%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:count[text()='3']"
)),
# Ask for 1 item, AFTER the first item (so,
# the second). Since we don’t invalidate the cache
# with this request, we should have the count
# included.
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><after>#bar%{irc_server_one}</after><max>1</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#coucou%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#coucou%{irc_server_one}'][@index='1']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#coucou%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:count[text()='3']"
)),
# Ask for 1 item, AFTER the second item (so,
# the third).
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><after>#coucou%{irc_server_one}</after><max>1</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#foo%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#foo%{irc_server_one}'][@index='2']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#foo%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:count[text()='3']"
)),
# Ask for 1 item, AFTER the third item (so,
# the fourth). Since it doesn't exist, we get 0 items
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><after>#foo%{irc_server_one}</after><max>1</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/rsm:set/rsm:count[text()='3']"
)),
]),
Scenario("default_channel_list_limit",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message",
after = partial(save_value, "counter", lambda x: 0)),
] + [
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#{counter}%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence",
after = partial(save_value, "counter", lambda stanza: str(1 + int(chan_name_from_jid(extract_attribute("/presence", "from", stanza)))))),
partial(expect_stanza, "/message")
] * 110 + [
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id1' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'/></iq>"),
# charybdis sends the list in alphabetical order, so #9 sorts before #90-#99,
# and #foo is last: with the default limit, those trailing channels are excluded
partial(expect_stanza, ("/iq/disco_items:query/disco_items:item[@jid='#0%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#1%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#109%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#9%{irc_server_one}']",
"!/iq/disco_items:query/disco_items:item[@jid='#foo%{irc_server_one}']",
"!/iq/disco_items:query/disco_items:item[@jid='#99%{irc_server_one}']",
"!/iq/disco_items:query/disco_items:item[@jid='#90%{irc_server_one}']")),
]),
Scenario("complete_channel_list_with_pages_of_3",
[
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#aaa%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#bbb%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#ccc%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#ddd%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#eee%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#fff%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#ggg%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#hhh%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#iii%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#jjj%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><max>3</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#aaa%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#bbb%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#ccc%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#aaa%{irc_server_one}'][@index='0']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#ccc%{irc_server_one}']"
)),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><after>#ccc%{irc_server_one}</after><max>3</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#ddd%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#eee%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#fff%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#ddd%{irc_server_one}'][@index='3']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#fff%{irc_server_one}']"
)),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><after>#fff%{irc_server_one}</after><max>3</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#ggg%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#hhh%{irc_server_one}']",
"/iq/disco_items:query/disco_items:item[@jid='#iii%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#ggg%{irc_server_one}'][@index='6']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#iii%{irc_server_one}']"
)),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='id' to='{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/disco#items'><set xmlns='http://jabber.org/protocol/rsm'><after>#iii%{irc_server_one}</after><max>3</max></set></query></iq>"),
partial(expect_stanza, (
"/iq[@type='result']/disco_items:query",
"/iq/disco_items:query/disco_items:item[@jid='#jjj%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:first[text()='#jjj%{irc_server_one}'][@index='9']",
"/iq/disco_items:query/rsm:set/rsm:last[text()='#jjj%{irc_server_one}']",
"/iq/disco_items:query/rsm:set/rsm:count[text()='10']"
)),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#aaa%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#bbb%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#ccc%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#ddd%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#eee%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#fff%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#ggg%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#hhh%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#iii%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#jjj%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unavailable']")
]),
Scenario("muc_traffic_info",
[
handshake_sequence(),
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' id='1' type='get'><query xmlns='http://jabber.org/protocol/disco#info' node='http://jabber.org/protocol/muc#traffic'/></iq>"),
partial(expect_stanza, "/iq[@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}'][@type='result']/disco_info:query[@node='http://jabber.org/protocol/muc#traffic']"),
]),
Scenario("muc_disco_info",
[
handshake_sequence(),
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}' id='1' type='get'><query xmlns='http://jabber.org/protocol/disco#info'/></iq>"),
partial(expect_stanza,
("/iq[@from='#foo%{irc_server_one}'][@to='{jid_one}/{resource_one}'][@type='result']/disco_info:query",
"/iq[@type='result']/disco_info:query/disco_info:identity[@category='conference'][@type='irc'][@name='#foo on {irc_host_one}']",
"/iq/disco_info:query/disco_info:feature[@var='jabber:iq:version']",
"/iq/disco_info:query/disco_info:feature[@var='http://jabber.org/protocol/commands']",
"/iq/disco_info:query/disco_info:feature[@var='urn:xmpp:ping']",
"/iq/disco_info:query/disco_info:feature[@var='urn:xmpp:mam:2']",
)),
]),
Scenario("fixed_muc_disco_info",
[
handshake_sequence(),
partial(send_stanza,
"<iq from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}' id='1' type='get'><query xmlns='http://jabber.org/protocol/disco#info'/></iq>"),
partial(expect_stanza,
("/iq[@from='#foo@{biboumi_host}'][@to='{jid_one}/{resource_one}'][@type='result']/disco_info:query",
"/iq[@type='result']/disco_info:query/disco_info:identity[@category='conference'][@type='irc'][@name='#foo on {irc_host_one}']",
"/iq/disco_info:query/disco_info:feature[@var='jabber:iq:version']",
"/iq/disco_info:query/disco_info:feature[@var='http://jabber.org/protocol/commands']",
"/iq/disco_info:query/disco_info:feature[@var='urn:xmpp:ping']",
"/iq/disco_info:query/disco_info:feature[@var='urn:xmpp:mam:2']",
)),
], conf='fixed_server'),
Scenario("raw_message",
[
handshake_sequence(),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='{irc_server_one}' type='chat'><body>WHOIS {nick_one}</body></message>"),
partial(expect_stanza, "/message[@from='{irc_server_one}'][@type='chat']/body[text()='irc.localhost: {nick_one} ~{nick_one} localhost * {nick_one}']"),
]),
Scenario("raw_message_fixed_irc_server",
[
handshake_sequence(),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}', fixed_irc_server=True),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='{biboumi_host}' type='chat'><body>WHOIS {nick_one}</body></message>"),
partial(expect_stanza, "/message[@from='{biboumi_host}'][@type='chat']/body[text()='irc.localhost: {nick_one} ~{nick_one} localhost * {nick_one}']"),
], conf='fixed_server'),
Scenario("self_disco_info",
[
handshake_sequence(),
partial(send_stanza, "<iq type='get' id='get1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><query xmlns='http://jabber.org/protocol/disco#info'/></iq>"),
partial(expect_stanza,
("/iq[@type='result']/disco_info:query/disco_info:identity[@category='conference'][@type='irc'][@name='Biboumi XMPP-IRC gateway']",
"/iq/disco_info:query/disco_info:feature[@var='jabber:iq:version']",
"/iq/disco_info:query/disco_info:feature[@var='http://jabber.org/protocol/commands']",
"/iq/disco_info:query/disco_info:feature[@var='urn:xmpp:ping']",
"/iq/disco_info:query/disco_info:feature[@var='urn:xmpp:mam:2']",
)),
]),
Scenario("invite_other",
[
handshake_sequence(),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<presence from='{jid_two}/{resource_two}' to='#bar%{irc_server_one}@{biboumi_host}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_two}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><x xmlns='http://jabber.org/protocol/muc#user'><invite to='{nick_two}'/></x></message>"),
partial(expect_stanza, "/message/body[text()='{nick_two} has been invited to #foo']"),
partial(expect_stanza, "/message[@to='{jid_two}/{resource_two}'][@from='#foo%{irc_server_one}']/muc_user:x/muc_user:invite[@from='#foo%{irc_server_one}/{nick_one}']"),
partial(send_stanza, "<message from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><x xmlns='http://jabber.org/protocol/muc#user'><invite to='bertrand@example.com'/></x></message>"),
partial(expect_stanza, "/message[@to='bertrand@example.com'][@from='#foo%{irc_server_one}']/muc_user:x/muc_user:invite[@from='{jid_one}/{resource_one}']"),
]),
Scenario("global_configure",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure some global default settings.']",
"/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Edit the form, to configure your global settings for the component.']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='max_history_length']/dataform:value[text()='20']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='record_history']/dataform:value[text()='true']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='persistent']/dataform:value[text()='false']",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='configure' sessionid='{sessionid}' action='next'><x xmlns='jabber:x:data' type='submit'><field var='record_history'><value>0</value></field><field var='max_history_length'><value>42</value></field></x></command></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='completed']/commands:note[@type='info'][text()='Configuration successfully applied.']"),
partial(send_stanza, "<iq type='set' id='id3' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure some global default settings.']",
"/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Edit the form, to configure your global settings for the component.']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='max_history_length']/dataform:value[text()='42']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='record_history']/dataform:value[text()='false']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='persistent']/dataform:value[text()='false']",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id4' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' action='cancel' node='configure' sessionid='{sessionid}' /></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='canceled']"),
]),
Scenario("global_configure_persistent_by_default",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure some global default settings.']",
"/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Edit the form, to configure your global settings for the component.']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='max_history_length']/dataform:value[text()='20']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='record_history']/dataform:value[text()='true']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='persistent']/dataform:value[text()='true']",
"/iq/commands:command/commands:actions/commands:next",
),
),
], conf='persistent_by_default'),
Scenario("irc_server_configure",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure the IRC server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Edit the form, to configure the settings of the IRC server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='ports']/dataform:value[text()='6667']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='tls_ports']/dataform:value[text()='6670']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='tls_ports']/dataform:value[text()='6697']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='verify_cert']/dataform:value[text()='true']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='fingerprint']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-private'][@var='pass']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='after_connect_commands']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='nick']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='username']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='realname']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='{irc_server_one}'>"
"<command xmlns='http://jabber.org/protocol/commands' node='configure' sessionid='{sessionid}' action='next'>"
"<x xmlns='jabber:x:data' type='submit'>"
"<field var='ports' />"
"<field var='tls_ports'><value>6697</value><value>6698</value></field>"
"<field var='verify_cert'><value>1</value></field>"
"<field var='fingerprint'><value>12:12:12</value></field>"
"<field var='pass'><value>coucou</value></field>"
"<field var='after_connect_commands'><value>first command</value><value>second command</value></field>"
"<field var='nick'><value>my_nickname</value></field>"
"<field var='username'><value>username</value></field>"
"<field var='realname'><value>realname</value></field>"
"<field var='encoding_out'><value>UTF-8</value></field>"
"<field var='encoding_in'><value>latin-1</value></field>"
"</x></command></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='completed']/commands:note[@type='info'][text()='Configuration successfully applied.']"),
partial(send_stanza, "<iq type='set' id='id3' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure the IRC server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Edit the form, to configure the settings of the IRC server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='tls_ports']/dataform:value[text()='6697']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='tls_ports']/dataform:value[text()='6698']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='boolean'][@var='verify_cert']/dataform:value[text()='true']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='fingerprint']/dataform:value[text()='12:12:12']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-private'][@var='pass']/dataform:value[text()='coucou']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='nick']/dataform:value[text()='my_nickname']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='after_connect_commands']/dataform:value[text()='first command']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-multi'][@var='after_connect_commands']/dataform:value[text()='second command']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='username']/dataform:value[text()='username']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='realname']/dataform:value[text()='realname']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']/dataform:value[text()='latin-1']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']/dataform:value[text()='UTF-8']",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id4' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' action='cancel' node='configure' sessionid='{sessionid}' /></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='canceled']"),
# Do the same thing, but this time try to empty some values
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, "/iq[@type='result']",
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='{irc_server_one}'>"
"<command xmlns='http://jabber.org/protocol/commands' node='configure' sessionid='{sessionid}' action='next'>"
"<x xmlns='jabber:x:data' type='submit'>"
"<field var='pass'><value></value></field>"
"<field var='after_connect_commands'></field>"
"<field var='username'><value></value></field>"
"<field var='realname'><value></value></field>"
"<field var='encoding_out'><value></value></field>"
"<field var='encoding_in'><value></value></field>"
"</x></command></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='completed']/commands:note[@type='info'][text()='Configuration successfully applied.']"),
partial(send_stanza, "<iq type='set' id='id3' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure the IRC server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:instructions[text()='Edit the form, to configure the settings of the IRC server irc.localhost']",
"!/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='pass']/dataform:value",
"!/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='after_connect_commands']/dataform:value",
"!/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='username']/dataform:value",
"!/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='realname']/dataform:value",
"!/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='encoding_in']/dataform:value",
"!/iq/commands:command/dataform:x[@type='form']/dataform:field[@var='encoding_out']/dataform:value",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id4' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' action='cancel' node='configure' sessionid='{sessionid}' /></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='canceled']"),
]),
Scenario("irc_channel_configure",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='list-single'][@var='record_history']/dataform:value[text()='unset']",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'>"
"<command xmlns='http://jabber.org/protocol/commands' node='configure' sessionid='{sessionid}' action='next'>"
"<x xmlns='jabber:x:data' type='submit'>"
"<field var='ports' />"
"<field var='encoding_out'><value>UTF-8</value></field>"
"<field var='encoding_in'><value>latin-1</value></field>"
"<field var='record_history'><value>true</value></field>"
"</x></command></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='completed']/commands:note[@type='info'][text()='Configuration successfully applied.']"),
partial(send_stanza, "<iq type='set' id='id3' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure the IRC channel #foo on server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']/dataform:value[text()='latin-1']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']/dataform:value[text()='UTF-8']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='list-single'][@var='record_history']/dataform:value[text()='true']",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id4' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' action='cancel' node='configure' sessionid='{sessionid}' /></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='canceled']"),
]),
Scenario("irc_channel_configure_xep0045",
[
handshake_sequence(),
partial(send_stanza, "<iq type='get' id='id1' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><query xmlns='http://jabber.org/protocol/muc#owner'/></iq>"),
partial(expect_stanza, ("/iq[@type='result']/muc_owner:query",
"/iq/muc_owner:query/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']",
"/iq/muc_owner:query/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']",
),
),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'>"
"<query xmlns='http://jabber.org/protocol/muc#owner'>"
"<x xmlns='jabber:x:data' type='submit'>"
"<field var='ports' />"
"<field var='encoding_out'><value>UTF-8</value></field>"
"<field var='encoding_in'><value>latin-1</value></field>"
"</x></query></iq>"),
partial(expect_stanza, "/iq[@type='result']"),
partial(send_stanza, "<iq type='set' id='id3' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}'><query xmlns='http://jabber.org/protocol/muc#owner'> <x xmlns='jabber:x:data' type='cancel'/></query></iq>"),
partial(expect_stanza, "/iq[@type='result']"),
]),
Scenario("irc_channel_configure_fixed",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}'>"
"<command xmlns='http://jabber.org/protocol/commands' node='configure' sessionid='{sessionid}' action='next'>"
"<x xmlns='jabber:x:data' type='submit'>"
"<field var='ports' />"
"<field var='encoding_out'><value>UTF-8</value></field>"
"<field var='encoding_in'><value>latin-1</value></field>"
"</x></command></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='completed']/commands:note[@type='info'][text()='Configuration successfully applied.']"),
partial(send_stanza, "<iq type='set' id='id3' from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, ("/iq[@type='result']/commands:command[@node='configure'][@sessionid][@status='executing']",
"/iq/commands:command/dataform:x[@type='form']/dataform:title[text()='Configure the IRC channel #foo on server irc.localhost']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_in']/dataform:value[text()='latin-1']",
"/iq/commands:command/dataform:x[@type='form']/dataform:field[@type='text-single'][@var='encoding_out']/dataform:value[text()='UTF-8']",
"/iq/commands:command/commands:actions/commands:next",
),
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))
),
partial(send_stanza, "<iq type='set' id='id4' from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' action='cancel' node='configure' sessionid='{sessionid}' /></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='canceled']"),
], conf='fixed_server'),
Scenario("irc_tls_connection",
[
handshake_sequence(),
# First, use an ad-hoc command to configure how we connect to the IRC server:
# configure only one TLS port, and disable the cert verification.
partial(send_stanza, "<iq type='set' id='id1' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='configure' action='execute' /></iq>"),
partial(expect_stanza, "/iq[@type='result']",
after = partial(save_value, "sessionid", partial(extract_attribute, "/iq[@type='result']/commands:command[@node='configure']", "sessionid"))),
partial(send_stanza, "<iq type='set' id='id2' from='{jid_one}/{resource_one}' to='{irc_server_one}'>"
"<command xmlns='http://jabber.org/protocol/commands' node='configure' sessionid='{sessionid}' action='next'>"
"<x xmlns='jabber:x:data' type='submit'>"
"<field var='ports' />"
"<field var='tls_ports'><value>7778</value></field>"
"<field var='verify_cert'><value>0</value></field>"
"<field var='nick'><value>my_special_nickname</value></field>"
"</x></command></iq>"),
partial(expect_stanza, "/iq[@type='result']/commands:command[@node='configure'][@status='completed']/commands:note[@type='info'][text()='Configuration successfully applied.']"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_tls_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/my_special_nickname']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
]),
Scenario("get_irc_connection_info",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='command1' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='get-irc-connection-info' action='execute' /></iq>"),
partial(expect_stanza, "/iq/commands:command/commands:note[text()='You are not connected to the IRC server irc.localhost']"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<iq type='set' id='command2' from='{jid_one}/{resource_one}' to='{irc_server_one}'><command xmlns='http://jabber.org/protocol/commands' node='get-irc-connection-info' action='execute' /></iq>"),
partial(expect_stanza, r"/iq/commands:command/commands:note[re:test(text(), 'Connected to IRC server irc.localhost on port 6667 since \d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d \(\d+ seconds ago\)\.\n#foo from 1 resource: {resource_one}.*')]"),
]),
Scenario("get_irc_connection_info_fixed",
[
handshake_sequence(),
partial(send_stanza, "<iq type='set' id='command1' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='get-irc-connection-info' action='execute' /></iq>"),
partial(expect_stanza, "/iq/commands:command/commands:note[text()='You are not connected to the IRC server irc.localhost']"),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo@{biboumi_host}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}', fixed_irc_server=True),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<iq type='set' id='command2' from='{jid_one}/{resource_one}' to='{biboumi_host}'><command xmlns='http://jabber.org/protocol/commands' node='get-irc-connection-info' action='execute' /></iq>"),
partial(expect_stanza, r"/iq/commands:command/commands:note[re:test(text(), 'Connected to IRC server irc.localhost on port 6667 since \d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d \(\d+ seconds ago\)\.\n#foo from 1 resource: {resource_one}.*')]"),
], conf='fixed_server'),
Scenario("irc_server_presence_subscription",
[
handshake_sequence(),
partial(send_stanza, "<presence type='subscribe' from='{jid_one}/{resource_one}' to='{irc_server_one}' id='sub1' />"),
partial(expect_stanza, "/presence[@to='{jid_one}'][@from='{irc_server_one}'][@type='subscribed']")
]),
Scenario("fixed_irc_server_presence_subscription",
[
handshake_sequence(),
partial(send_stanza, "<presence type='subscribe' from='{jid_one}/{resource_one}' to='{biboumi_host}' id='sub1' />"),
partial(expect_stanza, "/presence[@to='{jid_one}'][@from='{biboumi_host}'][@type='subscribed']")
], conf='fixed_server'),
Scenario("leave_unjoined_chan",
[
handshake_sequence(),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza, "/message"),
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/message"),
partial(send_stanza, "<presence from='{jid_two}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_begin_sequence("irc.localhost", '{jid_two}/{resource_two}'),
partial(expect_stanza, "/message[@to='{jid_two}/{resource_two}'][@type='chat']/body[text()='irc.localhost: {nick_one}: Nickname is already in use.']"),
partial(expect_stanza, "/presence[@type='error']/error[@type='cancel'][@code='409']/stanza:conflict"),
partial(send_stanza, "<presence from='{jid_two}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' type='unavailable' />")
]),
Scenario("basic_subscribe_unsubscribe",
[
handshake_sequence(),
# Mutual subscription exchange
partial(send_stanza, "<presence from='{jid_one}' to='{biboumi_host}' type='subscribe' id='subid1' />"),
partial(expect_stanza, "/presence[@type='subscribed'][@id='subid1']"),
# Get the current presence of the biboumi gateway
partial(expect_stanza, "/presence"),
partial(expect_stanza, "/presence[@type='subscribe']"),
partial(send_stanza, "<presence from='{jid_one}' to='{biboumi_host}' type='subscribed' />"),
# Unsubscribe
partial(send_stanza, "<presence from='{jid_one}' to='{biboumi_host}' type='unsubscribe' id='unsubid1' />"),
partial(expect_stanza, "/presence[@type='unavailable']"),
partial(expect_stanza, "/presence[@type='unsubscribed']"),
partial(expect_stanza, "/presence[@type='unsubscribe']"),
partial(send_stanza, "<presence from='{jid_one}' to='{biboumi_host}' type='unavailable' />"),
partial(send_stanza, "<presence from='{jid_one}' to='{biboumi_host}' type='unsubscribed' />"),
]),
Scenario("resource_is_removed_from_server_when_last_chan_is_left",
[
# Join the channel
handshake_sequence(),
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
connection_sequence("irc.localhost", '{jid_one}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_one}']/subject[not(text())]"),
# Make it persistent
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='conf1' to='#foo%{irc_server_one}' type='get'><query xmlns='http://jabber.org/protocol/muc#owner'/></iq>"),
partial(expect_stanza, "/iq[@type='result']/muc_owner:query/dataform:x/dataform:field[@var='persistent'][@type='boolean']/dataform:value[text()='false']"),
partial(send_stanza, "<iq from='{jid_one}/{resource_one}' id='conf2' to='#foo%{irc_server_one}' type='set'><query xmlns='http://jabber.org/protocol/muc#owner'><x type='submit' xmlns='jabber:x:data'><field var='persistent' xmlns='jabber:x:data'><value>true</value></field></x></query></iq>"),
partial(expect_stanza, "/iq[@type='result']"),
partial(send_stanza, "<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' type='unavailable' />"),
partial(expect_stanza, "/presence[@type='unavailable'][@from='#foo%{irc_server_one}/{nick_one}']"),
# Join the same channel, with the same JID, but a different resource
partial(send_stanza,
"<presence from='{jid_one}/{resource_two}' to='#foo%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_two}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat'][@to='{jid_one}/{resource_two}']/subject[not(text())]"),
# Join some other channel with someone else
partial(send_stanza,
"<presence from='{jid_two}/{resource_one}' to='#bar%{irc_server_one}/{nick_two}' />"),
connection_sequence("irc.localhost", '{jid_two}/{resource_one}'),
partial(expect_stanza,
"/message/body[text()='Mode #bar [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_two}/{resource_one}'][@from='#bar%{irc_server_one}/{nick_two}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#bar%{irc_server_one}'][@type='groupchat'][@to='{jid_two}/{resource_one}']/subject[not(text())]"),
# Send two messages from the second user to the first one
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='{lower_nick_one}%{irc_server_one}' type='chat'><body>kikoo</body></message>"),
partial(send_stanza, "<message from='{jid_two}/{resource_one}' to='{lower_nick_one}%{irc_server_one}' type='chat'><body>second kikoo</body></message>"),
# We must receive each message only once, with no duplicates
partial(expect_stanza, "/message/body[text()='kikoo']"),
partial(expect_stanza, "/message/body[text()='second kikoo']"),
]
),
Scenario("irc_server_presence_in_roster",
[
handshake_sequence(),
# Mutual subscription exchange
partial(send_stanza, "<presence from='{jid_one}' to='{irc_server_one}' type='subscribe' id='subid1' />"),
partial(expect_stanza, "/presence[@type='subscribed'][@id='subid1']"),
partial(expect_stanza, "/presence[@type='subscribe']"),
partial(send_stanza, "<presence from='{jid_one}' to='{irc_server_one}' type='subscribed' />"),
# Join a channel on that server
partial(send_stanza,
"<presence from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
# We must receive the IRC server presence, in the connection sequence
connection_sequence("irc.localhost", '{jid_one}/{resource_one}', expected_irc_presence=True),
partial(expect_stanza,
"/message/body[text()='Mode #foo [+nt] by {irc_host_one}']"),
partial(expect_stanza,
("/presence[@to='{jid_one}/{resource_one}'][@from='#foo%{irc_server_one}/{nick_one}']/muc_user:x/muc_user:item[@affiliation='admin'][@role='moderator']",
"/presence/muc_user:x/muc_user:status[@code='110']")
),
partial(expect_stanza, "/message[@from='#foo%{irc_server_one}'][@type='groupchat']/subject[not(text())]"),
# Leave the channel, and thus the IRC server
partial(send_stanza, "<presence type='unavailable' from='{jid_one}/{resource_one}' to='#foo%{irc_server_one}/{nick_one}' />"),
partial(expect_stanza, "/presence[@type='unavailable'][@from='#foo%{irc_server_one}/{nick_one}']"),
partial(expect_stanza, "/presence[@from='{irc_server_one}'][@to='{jid_one}'][@type='unavailable']"),
])
)
failures = 0
scenar_list = sys.argv[1:]
irc_output = open("irc_output.txt", "w")
irc = IrcServerRunner()
print("Starting irc server…")
asyncio.get_event_loop().run_until_complete(irc.start())
while True:
res = asyncio.get_event_loop().run_until_complete(irc.process.stderr.readline())
irc_output.write(res.decode())
if not res:
print("IRC server failed to start, see irc_output.txt for more details. Exiting…")
sys.exit(1)
if b"now running in foreground mode" in res:
break
print("irc server started.")
checks = len([s for s in scenarios if s.name in scenar_list]) if scenar_list else len(scenarios)
print("Running %s checks for biboumi." % checks)
for s in scenarios:
if scenar_list and s.name not in scenar_list:
continue
test = BiboumiTest(s)
if not test.run():
print("You can check the files slixmpp_%s_output.txt and biboumi_%s_output.txt to help you debug." %
(s.name, s.name))
failures += 1
print("Waiting for irc server to exit…")
irc.stop()
asyncio.get_event_loop().run_until_complete(irc.wait())
if failures:
print("%d test%s failed, please fix %s." % (failures, 's' if failures > 1 else '',
'them' if failures > 1 else 'it'))
sys.exit(1)
else:
print("All tests passed successfully")
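The scenario filtering above (run every scenario, or only the ones named on the command line) can be sketched standalone; the `Scenario` stand-in and names below are illustrative, not the harness's real types:

```python
from collections import namedtuple

# Stand-in for the harness's Scenario class; only the name matters here
Scenario = namedtuple("Scenario", ["name", "steps"])

scenarios = [Scenario("a", []), Scenario("b", []), Scenario("c", [])]
scenar_list = ["b", "c"]  # simulated sys.argv[1:]

# Count how many checks will run, as the harness does before its main loop
checks = len([s for s in scenarios if s.name in scenar_list]) if scenar_list else len(scenarios)
print(checks)  # prints 2
```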
| 74.817984 | 439 | 0.528584 | 25,635 | 241,288 | 4.768754 | 0.03269 | 0.04594 | 0.054284 | 0.057294 | 0.877658 | 0.865616 | 0.855628 | 0.843963 | 0.834335 | 0.821591 | 0 | 0.00521 | 0.295995 | 241,288 | 3,224 | 440 | 74.841191 | 0.714355 | 0.040284 | 0 | 0.637292 | 0 | 0.217095 | 0.535691 | 0.406026 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015129 | false | 0.005673 | 0.005295 | 0.001891 | 0.030257 | 0.005673 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6c5e66e5147f4dc5c404641b9c28624f6d721ff9 | 18,132 | py | Python | pairwiselr.py | philips-labs/pairwise-regression | 9b164601acb786d788c682a395b97b19338d34b3 | [
"MIT"
] | null | null | null | pairwiselr.py | philips-labs/pairwise-regression | 9b164601acb786d788c682a395b97b19338d34b3 | [
"MIT"
] | null | null | null | pairwiselr.py | philips-labs/pairwise-regression | 9b164601acb786d788c682a395b97b19338d34b3 | [
"MIT"
] | null | null | null | from sklearn.linear_model import Ridge as ridge
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
class KeyCovariatePairwiseLR(object):
'''
Remove key covariate from features
Do not train initial LR model
'''
def __init__(self, alpha1=0, alpha_blend=.1, cov_steps=10,\
func_smooth_z='sigmoid', coeff_smooth_z=10):
self.alpha_blend_ = alpha_blend
self.cov_steps_ = cov_steps
self.coeff_smooth_z = coeff_smooth_z
if func_smooth_z == 'sigmoid':
self.func_smooth_z = self.sigmoid_1d
elif func_smooth_z == 'gaussian':
self.func_smooth_z = self.gaussian_1d
else:
print('Linear smoothing function for z')
self.func_smooth_z = self.linear
def fit(self, df, col_z, col_y, cov_range_z = [], include_z_in_x = False):
"""
Given pandas [df] and the column names of key-covariate [col_z] and label [col_y]
Train model
"""
col_xz = set(df.columns) - set([col_y])
col_x = sorted(col_xz - set([col_z]))
col_xz = sorted(col_xz)
self.col_xz_ = col_xz
self.col_z_ = col_z
if include_z_in_x:
self.col_x_ = self.col_xz_
else:
self.col_x_ = col_x
# Get range of all features
self.cov_range_ = dict()
if len(cov_range_z)==0: # if the dynamic range of key covariate is not defined
self.cov_range_[col_z] = self.covariate_linspace(df[col_z], n_steps=self.cov_steps_)
else:
self.cov_range_[col_z] = cov_range_z
for ft in col_x:
self.cov_range_[ft] = self.covariate_linspace(df[ft], n_steps=self.cov_steps_)
self.cov_range_[col_z+'_plot'] = self.covariate_linspace(df[col_z], n_steps=self.cov_steps_)
# Calculate mean of all features
ft_median = dict()
for ft in self.col_xz_:
ft_median[ft] = np.median(df[ft], axis=0)
self.ft_median_ = ft_median
# Get matrix of all possible blends between key-cov step function and feature-specific regression
x_basis_matrix = []
for row in df[self.col_xz_].iterrows():
x_vec_test = row[1][self.col_x_].values # contains all features values for one sample
z_data = row[1][self.col_z_]
x_basis_row = self.calc_blended_basis(x_vec_test, z_data)
x_basis_matrix.append(x_basis_row)
# Train blended (key-cov pairwise LR) model
blended_model = ridge(alpha=self.alpha_blend_)
blended_model.fit(x_basis_matrix, df[col_y])
self.blended_model_ = blended_model
def calc_blended_basis(self, x_data_vec, z_data):
'''
Given one sample of: [x_data_vec] containing multiple features, and [z_data] of one key-cov value
Calculate all f(z, z_t)*x_i products, combining each key-cov basis function with each feature value
'''
x_basis = []
z_unique = self.cov_range_[self.col_z_]
n_i = len(x_data_vec)
n_j = len(z_unique)
for i in range(n_i):
x_data = x_data_vec[i] # one feature
for j in range(n_j):
z_t = z_unique[j]
x_basis.append(self.func_smooth_z(z_data, z_t, self.coeff_smooth_z)*x_data) # f(z, z_t)*x_data
return x_basis # [n_i, 1]
def calc_pairwise_reg(self, x_data, z_data, v_j):
'''
[only used in plotting]
Calculate the value of pairwise interaction between feature [x_data] and key-covariate [z_data]
Given learned [v_j] linear stacking weights on feature i corresponding to [x_data]
'''
x_out = 1
z_unique = self.cov_range_[self.col_z_]
n_j = len(v_j)
for j in range(n_j):
z_t = z_unique[j]
x_out+=(v_j[j]*self.func_smooth_z(z_data, z_t, self.coeff_smooth_z)*x_data)
return x_out
def predict(self, df):
"""
Given trained blended model and new test data, predict
"""
# Get matrix of all possible blends between key-cov step function and feature-specific regression
x_basis_matrix = []
for row in df[self.col_xz_].iterrows():
x_vec_test = row[1][self.col_x_].values
z_data = row[1][self.col_z_]
x_basis_row = self.calc_blended_basis(x_vec_test, z_data)
x_basis_matrix.append(x_basis_row)
return self.blended_model_.predict(x_basis_matrix)
def feature_importance(self, x_sample):
'''
Given one sample [x_sample], return the feature importance (FI) of each feature:
the change in its pairwise-regression contribution relative to the feature's training median
'''
# calculate
n_ft = len(self.col_xz_)
v = self.blended_model_.coef_
ft_names = self.col_xz_
step_size = len(self.cov_range_[self.col_z_]) # of z
z = x_sample[self.col_z_]
fi = np.nan*np.zeros(n_ft)
for i_ft in range(n_ft):
ft_name = ft_names[i_ft]
x = x_sample[ft_name]
v_j = v[i_ft*step_size:(i_ft+1)*step_size]
y_ft_x = self.calc_pairwise_reg(x, z, v_j)
if not ft_name==self.col_z_:
y_ft_median = self.calc_pairwise_reg(self.ft_median_[ft_name], z, v_j)
else:
y_ft_median = self.calc_pairwise_reg(self.ft_median_[ft_name], \
self.ft_median_[ft_name], v_j)
fi[i_ft] = y_ft_x - y_ft_median
return fi
def plot_pairwise_interactions(self, ft_plot=[], n_plot_cols=4, scaleup=1):
mpl.rcParams["font.size"] = 8
mpl.rcParams["axes.titlesize"] = 12
ft_names = self.col_x_
n_ft = len(ft_names)
if not ft_plot:
ft_plot = ft_names
i_plot = 0
n_plot_rows = int(n_ft/n_plot_cols) + 1
plt.figure(figsize=[3.54*scaleup, 2.2*n_plot_rows*scaleup/n_plot_cols],dpi=1200)
step_size = len(self.cov_range_[self.col_z_]) # of z
v = self.blended_model_.coef_
z_unique = self.cov_range_[self.col_z_ +'_plot']
for i_ft in range(n_ft):
ft_name = ft_names[i_ft]
if ft_name in ft_plot:
v_j = v[i_ft*step_size:(i_ft+1)*step_size]
if ft_name == self.col_z_:
x_unique = self.cov_range_[ft_name+'_plot']
else:
x_unique = self.cov_range_[ft_name]
heatmap = np.zeros([len(x_unique), len(z_unique)])
i = 0
j = 0
for x in x_unique:
j = 0
for z in z_unique:
heatmap[i,j] = self.calc_pairwise_reg(x, z, v_j)
j += 1
i += 1
if ft_name == self.col_z_:
plt.subplot(n_plot_rows, n_plot_cols, i_plot+1)
plt.plot(z_unique,heatmap.diagonal())
# plt.title(ft_name)
plt.ylabel('pH contribution')
plt.xlabel(self.col_z_)
# plt.xlim([6.8, 7.5])
else:
plt.subplot(n_plot_rows, n_plot_cols, i_plot+1)
plt.pcolor(z_unique,x_unique,heatmap, cmap='Greys')
print_feat_name = ft_name.replace(' (mean)', '').replace('d_', r'$\Delta$')
if 'prev_' in print_feat_name:
print_feat_name = print_feat_name.replace('prev_', '') + '[t-1]'
elif 'cur_' in ft_name:
print_feat_name = print_feat_name.replace('cur_', '') + '[t]'
plt.ylabel(print_feat_name)
plt.colorbar()
plt.xlabel('pH[t-1]')
i_plot+=1
else:
pass
plt.tight_layout()
@staticmethod
def step_function_1d(x, x_t):
'''
Implement step function and calculate output given input value [x] and threshold [x_t]
'''
if x<x_t:
return 0
else:
return 1
@staticmethod
def sigmoid_1d(x, x_t, coeff):
return 1./(1+np.exp(-coeff*(x-x_t)))
@staticmethod
def gaussian_1d(x, x_t, coeff):
return np.exp(-((x-x_t)/coeff)**2/2)/(np.sqrt(2*np.pi)*coeff)
@staticmethod
def linear(x, x_t, coeff):
return x
@staticmethod
def covariate_linspace(x_col, n_steps=10):
'''
Given a column of single feature values [x_col], return a linspace of [n_steps] values spanning the feature's range, padded by 5% of the range on each side
'''
xmin, xmax = np.min(x_col), np.max(x_col)
x_range = xmax - xmin
return np.linspace(xmin-.05*x_range, xmax+.05*x_range, n_steps)
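The smoothed step basis used by calc_blended_basis above can be sketched in isolation: for one feature value x and one key-covariate value z, each threshold z_t contributes sigmoid(coeff*(z - z_t)) * x. A minimal numpy sketch (the threshold grid and coefficient are illustrative, not values from the classes above):

```python
import numpy as np

def sigmoid_1d(x, x_t, coeff):
    # Smoothed step: ~0 well below the threshold x_t, ~1 well above it
    return 1.0 / (1.0 + np.exp(-coeff * (x - x_t)))

def blended_basis(x_data, z_data, z_thresholds, coeff=10):
    # One basis entry per threshold: f(z, z_t) * x, as in calc_blended_basis
    return [sigmoid_1d(z_data, z_t, coeff) * x_data for z_t in z_thresholds]

z_grid = np.linspace(0.0, 1.0, 5)  # illustrative threshold grid
basis = blended_basis(x_data=2.0, z_data=0.6, z_thresholds=z_grid)
# Entries shrink toward 0 for thresholds above z, toward x for thresholds below z
print(len(basis))  # prints 5
```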
class KeyCovariate2d(object):
'''
Remove key covariate from features
Do not train initial LR model
'''
def __init__(self, alpha1=0, alpha_blend=.1, cov_steps=10, func_smooth_x=None, \
func_smooth_z='sigmoid', coeff_smooth_x=None, coeff_smooth_z=10):
self.alpha_blend_ = alpha_blend
self.cov_steps_ = cov_steps
self.coeff_smooth_z = coeff_smooth_z
self.coeff_smooth_x = coeff_smooth_x
if func_smooth_z == 'sigmoid':
self.func_smooth_z = self.sigmoid_1d
elif func_smooth_z == 'gaussian':
self.func_smooth_z = self.gaussian_1d
else:
print('Linear smoothing function for z')
self.func_smooth_z = self.linear
if func_smooth_x == 'sigmoid':
self.func_smooth_x = self.sigmoid_1d
elif func_smooth_x == 'gaussian':
self.func_smooth_x = self.gaussian_1d
else:
print('Linear smoothing function for x')
self.func_smooth_x = self.linear
def fit(self, df, col_z, col_y, cov_range_z = [], include_z_in_x = False):
"""
Given pandas [df] and the column names of key-covariate [col_z] and label [col_y]
Train model
"""
col_xz = set(df.columns) - set([col_y])
col_x = sorted(col_xz - set([col_z]))
col_xz = sorted(col_xz)
self.col_xz_ = col_xz
self.col_z_ = col_z
if include_z_in_x:
self.col_x_ = self.col_xz_
else:
self.col_x_ = col_x
# Get range of all features
self.cov_range_ = dict()
if len(cov_range_z)==0: # if the dynamic range of key covariate is not defined
self.cov_range_[col_z] = self.covariate_linspace(df[col_z], n_steps=self.cov_steps_)
else:
self.cov_range_[col_z] = cov_range_z
for ft in col_x:
self.cov_range_[ft] = self.covariate_linspace(df[ft], n_steps=self.cov_steps_)
self.cov_range_[col_z+'_plot'] = self.covariate_linspace(df[col_z], n_steps=self.cov_steps_)
# Build the full basis tensor of 2d (x, z) blends over the training data,
# then flatten it to shape {N, L*M*M} for the ridge fit below
x_basis_all = self.calc_df_basis(df)
x_basis_all = np.reshape(x_basis_all, (len(x_basis_all), self.cov_steps_**2*len(self.col_xz_)))
blended_model = ridge(alpha=self.alpha_blend_)
blended_model.fit(x_basis_all, df[col_y])
self.blended_model_ = blended_model
def calc_blended_basis(self, x_data_vec, z_data):
'''
Given one sample of: [x_data_vec] containing multiple features, and [z_data] of one key-cov value
Calculate all f(z, z_t)*x_i products, combining each key-cov basis function with each feature value
'''
x_basis = []
z_unique = self.cov_range_[self.col_z_]
n_i = len(x_data_vec)
n_j = len(z_unique)
for i in range(n_i):
x_data = x_data_vec[i] # one feature
for j in range(n_j):
z_t = z_unique[j]
x_basis.append(self.func_smooth_z(z_data, z_t, self.coeff_smooth_z)*x_data) # f(z, z_t)*x_data
return x_basis # [n_i, 1]
def calc_df_basis(self, df):
'''
Calculate basis for entire [df] of dimensions {N, L+1}, N = samples, L = non-z features
Return full basis matrix of dimensions {N, L, M, M}, where M is number of step sizes for features
'''
x_basis_all = []
for row in df[self.col_xz_].iterrows():
x_basis_sample = []
x_samplerow = row[1][self.col_x_] # contains all features values for one sample
z_data = row[1][self.col_z_]
for ft in self.col_xz_:
x_data = x_samplerow[ft]
x_basis_1ft = self.calc_basis(x_data, z_data, ft)
x_basis_sample.append(x_basis_1ft)
x_basis_all.append(x_basis_sample)
return x_basis_all
def calc_basis(self, x_data, z_data, x_ft_name):
'''
Given a point feature value [x_data], its [x_ft_name], and a [z_data] key-cov value
Calculate the matrix of basis functions
'''
x_unique = self.cov_range_[x_ft_name]
z_unique = self.cov_range_[self.col_z_]
n_steps = self.cov_steps_
basis_mat = np.zeros((n_steps, n_steps))
i = 0 # ticker for ordinary feature x, rows
for x in x_unique:
j = 0 # ticker for z, columns
for z in z_unique:
basis_mat[i, j] = self.func_smooth_x(x_data, x, self.coeff_smooth_x)*self.func_smooth_z(z_data, z, self.coeff_smooth_z)
j+=1
i+=1
return basis_mat
def calc_pairwise_reg(self, x_data, z_data, v_j):
'''
[only used in plotting]
Calculate the value of pairwise interaction between feature [x_data] and key-covariate [z_data]
Given learned [v_j] linear stacking weights on feature i corresponding to [x_data]
'''
x_out = 1
z_unique = self.cov_range_[self.col_z_]
n_j = len(v_j)
for j in range(n_j):
z_t = z_unique[j]
x_out+=(v_j[j]*self.func_smooth_z(z_data, z_t, self.coeff_smooth_z)*x_data)
return x_out
def predict(self, df):
"""
Given trained blended model and new test data, predict
"""
# Get matrix of all possible blends between key-cov step function and feature-specific regression
x_basis_matrix = self.calc_df_basis(df)
x_basis_matrix = np.reshape(x_basis_matrix, (len(x_basis_matrix), self.cov_steps_**2*len(self.col_xz_)))
return self.blended_model_.predict(x_basis_matrix)
def plot_pairwise_interactions(self, ft_names=[], n_plot_cols=4):
mpl.rcParams["font.size"] = 8
mpl.rcParams["axes.titlesize"] = 12
if not ft_names:
ft_names = self.col_x_
n_ft = len(ft_names)
n_plot_rows = int(n_ft/n_plot_cols) + 1
plt.figure(figsize=[10, 2*n_plot_rows],dpi=300)
step_size = len(self.cov_range_[self.col_z_]) # of z
v = self.blended_model_.coef_
z_unique = self.cov_range_[self.col_z_ +'_plot']
for i_ft in range(n_ft):
ft_name = ft_names[i_ft]
v_j = v[i_ft*step_size:(i_ft+1)*step_size]
# print(f'{ft_name}, {v_j}')
if ft_name == self.col_z_:
x_unique = self.cov_range_[ft_name+'_plot']
else:
x_unique = self.cov_range_[ft_name]
heatmap = np.zeros([len(x_unique), len(z_unique)])
i = 0
j = 0
for x in x_unique:
j = 0
for z in z_unique:
heatmap[i,j] = self.calc_pairwise_reg(x, z, v_j)
j += 1
i += 1
if ft_name == self.col_z_:
plt.subplot(n_plot_rows, n_plot_cols, i_ft+1)
plt.plot(z_unique,heatmap.diagonal())
plt.title(ft_name)
plt.ylabel('pH contribution')
plt.xlabel(self.col_z_)
else:
plt.subplot(n_plot_rows, n_plot_cols, i_ft+1)
plt.pcolor(z_unique,x_unique,heatmap, cmap='Greys')
plt.ylabel(ft_name.replace(' (mean)', ''))
plt.colorbar()
plt.xlabel(self.col_z_)
plt.title(ft_name.replace(' (mean)', ''))
plt.tight_layout()
@staticmethod
def step_function_1d(x, x_t):
'''
Implement step function and calculate output given input value [x] and threshold [x_t]
'''
if x<x_t:
return 0
else:
return 1
@staticmethod
def sigmoid_1d(x, x_t, coeff):
return 1./(1+np.exp(-coeff*(x-x_t)))
@staticmethod
def gaussian_1d(x, x_t, coeff):
return np.exp(-((x-x_t)/coeff)**2/2)/(np.sqrt(2*np.pi)*coeff)
@staticmethod
def linear(x, x_t, coeff):
return x
@staticmethod
def covariate_linspace(x_col, n_steps=10):
'''
Given a column of single feature values [x_col], return a linspace of [n_steps] values spanning the feature's range, padded by 5% of the range on each side
'''
xmin, xmax = np.min(x_col), np.max(x_col)
x_range = xmax - xmin
return np.linspace(xmin-.05*x_range, xmax+.05*x_range, n_steps)
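covariate_linspace pads the observed feature range by 5% on each side before discretising it into thresholds; a quick standalone check of that behaviour (sample values are illustrative):

```python
import numpy as np

def covariate_linspace(x_col, n_steps=10):
    # Linspace over the feature range, padded by 5% of the range on each side,
    # mirroring the static method on the classes above
    xmin, xmax = np.min(x_col), np.max(x_col)
    x_range = xmax - xmin
    return np.linspace(xmin - .05 * x_range, xmax + .05 * x_range, n_steps)

grid = covariate_linspace(np.array([0.0, 10.0]), n_steps=10)
# Range is 10, so the grid runs from -0.5 to 10.5
print(grid[0], grid[-1])  # prints -0.5 10.5
```

The padding keeps the smoothed step thresholds slightly outside the training range, so samples at the extremes still fall on the transition of some basis function.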
| 38.993548 | 135 | 0.577101 | 2,687 | 18,132 | 3.56457 | 0.086342 | 0.037273 | 0.022552 | 0.022552 | 0.842765 | 0.817603 | 0.796408 | 0.783358 | 0.757778 | 0.737628 | 0 | 0.010494 | 0.322027 | 18,132 | 464 | 136 | 39.077586 | 0.768649 | 0.166722 | 0 | 0.77381 | 0 | 0 | 0.023023 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074405 | false | 0.002976 | 0.02381 | 0.017857 | 0.166667 | 0.029762 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6c7deb7a6de4df574acd6471bba0810208c1eb98 | 114,974 | py | Python | jupyter_cadquery/logo.py | Jojain/jupyter-cadquery | 65e494dd13f6177f87e82926b7896609fe290be0 | [
"Apache-2.0"
] | 158 | 2019-04-29T17:42:06.000Z | 2022-03-27T20:11:55.000Z | jupyter_cadquery/logo.py | Jojain/jupyter-cadquery | 65e494dd13f6177f87e82926b7896609fe290be0 | [
"Apache-2.0"
] | 72 | 2019-06-22T15:59:58.000Z | 2022-03-31T23:17:40.000Z | jupyter_cadquery/logo.py | Jojain/jupyter-cadquery | 65e494dd13f6177f87e82926b7896609fe290be0 | [
"Apache-2.0"
] | 24 | 2019-06-21T20:44:49.000Z | 2022-03-25T09:04:49.000Z | LOGO_DATA = 'gASV5hEBAAAAAAB9lCiMBGRhdGGUfZQojAdtYXBwaW5nlH2UKIwBMZR9lCiMBHBhdGiUSwCFlIwFc3RhdGWUXZQoSwFLAWV1jAEylH2UKGgHSwGFlGgJXZQoSwFLAWV1dYwGc2hhcGVzlH2UKIwFcGFydHOUXZQofZQojAJpZJRLAYwEdHlwZZRoD4wEbmFtZZSMB2p1cHl0ZXKUjAVzaGFwZZR9lCiMCHZlcnRpY2VzlIwVbnVtcHkuY29yZS5tdWx0aWFycmF5lIwMX3JlY29uc3RydWN0lJOUjAVudW1weZSMB25kYXJyYXmUk5RLAIWUQwFilIeUUpQoSwFNGAJLA4aUaB6MBWR0eXBllJOUjAJmNJSJiIeUUpQoSwOMATyUTk5OSv////9K/////0sAdJRiiUIgGQAAAACAv4usLLqw7lzBAACAP4usLLqw7lzBAACAPxfX7T8DGlzBAACAP+0pa0C8p1nBAACAPwEfr0Cgk1XBAACAP6YU50AJ70/BAACAP9CEDkFzzEjBAACAPzfEJUHmLUHBAACAP0niO0HHgjjBAACAP+C8UEFF4y7BAACAPzo2ZEHbaSTBAACAP/81dkF5MhnBAACAP/hUg0F9WQ3BAACAPxLDikGx+gDBAACAP/pokEHCZOzAAACAP6yQlUEV89XAAACAv6yQlUEV89XAAACAvxfX7T8DGlzBAACAv+0pa0C8p1nBAACAvwEfr0Cgk1XBAACAv6YU50AJ70/BAACAv9CEDkFzzEjBAACAvzfEJUHmLUHBAACAv0niO0HHgjjBAACAv+C8UEFF4y7BAACAvzo2ZEHbaSTBAACAv/81dkF5MhnBAACAv/hUg0F9WQ3BAACAvxLDikGx+gDBAACAv/pokEHCZOzAAACAv6yQlUEV89XAAACAP6yQlUEV89XAAACAPz8qkUHRWgDBAACAP3sGjEGm9BTBAACAP38fhkEI3SjBAACAP88Ef0FX4DvBAACAPw30cUFEI0zBAACAP4DYY0GOhFvBAACAP17BVEHQ8mnBAACAP67mRUHfdHbBAACAP+NANkEaCoHBAACAv+NANkEaCoHBAACAvz8qkUHRWgDBAACAv3sGjEGm9BTBAACAv38fhkEI3SjBAACAv88Ef0FX4DvBAACAvw30cUFEI0zBAACAv4DYY0GOhFvBAACAv17BVEHQ8mnBAACAv67mRUHfdHbBAACAv+NANkEaCoHBAACAP+NANkEaCoHBAACAP2TQIkEpTIfBAACAPyKbDkEu0IzBAACAP9AV80DSm5HBAACAP2ylx0BmoZXBAACAPxCen0DMkZjBAACAP2DFbUBi2ZrBAACAP0dAG0BfdZzBAACAPwDKmz+vWJ3BAACAP4usLLrppJ3BAACAv4usLLrppJ3BAACAv2TQIkEpTIfBAACAvyKbDkEu0IzBAACAv9AV80DSm5HBAACAv2ylx0BmoZXBAACAvxCen0DMkZjBAACAv2DFbUBi2ZrBAACAv0dAG0BfdZzBAACAvwDKmz+vWJ3BAACAv4usLLrppJ3BAACAP4usLLrppJ3BAACAP67quL+uOZ3BAACAPzcBOMBG+pvBAACAP7mIicD+5JnBAACAP+BftsAt/ZbBAACAPwrx3cCerpPBAACAPwRKAsHRu4/BAACAP9QOFcHiKIvBAACAP37xJcHLXIbBAACAP+pFNsEaCoHBAACAv+pFNsEaCoHBAACAv67quL+uOZ3BAACAvzcBOMBG+pvBAACAv7mIicD+5JnBAACAv+BftsAt/ZbBAACAvwrx3cCerpPBAACAvwRKAsHRu4/BAACAv9QOFcHiKIvBAACAv37xJcHLXIbBAACAv+pFNsEaCoHBAACAP+pFNsEaCoHBAACAPyLBSMElMHTBAACAPyMcWsGzD2XB
AACAP0ppasHtoVTBAACAPzB9ecEwDUPBAACAP+D4gsGPUTLBAACAP3CkiMFmzCDBAACAP8e6jcHOkA7BAACAP5vqkcH4FfrAAACAPyuTlcEV89XAAACAvyuTlcEV89XAAACAvyLBSMElMHTBAACAvyMcWsGzD2XBAACAv0ppasHtoVTBAACAvzB9ecEwDUPBAACAv+D4gsGPUTLBAACAv3CkiMFmzCDBAACAv8e6jcHOkA7BAACAv5vqkcH4FfrAAACAvyuTlcEV89XAAACAPyuTlcEV89XAAACAP9fyjsFxWfLAAACAPxuLh8GxpQbBAACAP7cOfsEe4BPBAACAPw4ja8FbbSDBAACAP7JgVsGzKSzBAACAP8PdP8E38DbBAACAP+W4J8FInEDBAACAP28YDsEhC0nBAACAPwNS5sAtHVDBAACAPw3rs8BEOVXBAACAP7QBgMAJFFnBAACAP2fDFcCDoFvBAACAPz/glr9tmlzBAACAP4usLLqw7lzBAACAv4usLLqw7lzBAACAv9fyjsFxWfLAAACAvxuLh8GxpQbBAACAv7cOfsEe4BPBAACAvw4ja8FbbSDBAACAv7JgVsGzKSzBAACAv8PdP8E38DbBAACAv+W4J8FInEDBAACAv28YDsEhC0nBAACAvwNS5sAtHVDBAACAvw3rs8BEOVXBAACAv7QBgMAJFFnBAACAv2fDFcCDoFvBAACAvz/glr9tmlzBAACAv4usLLqw7lzBAACAvxfX7T8DGlzBAACAv+0pa0C8p1nBAACAvwEfr0Cgk1XBAACAv6YU50AJ70/BAACAv9CEDkFzzEjBAACAvzfEJUHmLUHBAACAv0niO0HHgjjBAACAv+C8UEFF4y7BAACAvzo2ZEHbaSTBAACAv/81dkF5MhnBAACAv/hUg0F9WQ3BAACAvxLDikGx+gDBAACAv/pokEHCZOzAAACAv6yQlUEV89XAAACAvz8qkUHRWgDBAACAv3sGjEGm9BTBAACAv38fhkEI3SjBAACAv88Ef0FX4DvBAACAvw30cUFEI0zBAACAv4DYY0GOhFvBAACAv17BVEHQ8mnBAACAv67mRUHfdHbBAACAv+NANkEaCoHBAACAv2TQIkEpTIfBAACAvyKbDkEu0IzBAACAv9AV80DSm5HBAACAv2ylx0BmoZXBAACAvxCen0DMkZjBAACAv2DFbUBi2ZrBAACAv0dAG0BfdZzBAACAvwDKmz+vWJ3BAACAv4usLLrppJ3BAACAv67quL+uOZ3BAACAvzcBOMBG+pvBAACAv7mIicD+5JnBAACAv+BftsAt/ZbBAACAvwrx3cCerpPBAACAvwRKAsHRu4/BAACAv9QOFcHiKIvBAACAv37xJcHLXIbBAACAv+pFNsEaCoHBAACAvyLBSMElMHTBAACAvyMcWsGzD2XBAACAv0ppasHtoVTBAACAvzB9ecEwDUPBAACAv+D4gsGPUTLBAACAv3CkiMFmzCDBAACAv8e6jcHOkA7BAACAv5vqkcH4FfrAAACAvyuTlcEV89XAAACAv9fyjsFxWfLAAACAvxuLh8GxpQbBAACAv7cOfsEe4BPBAACAvw4ja8FbbSDBAACAv7JgVsGzKSzBAACAv8PdP8E38DbBAACAv+W4J8FInEDBAACAv28YDsEhC0nBAACAvwNS5sAtHVDBAACAvw3rs8BEOVXBAACAv7QBgMAJFFnBAACAv2fDFcCDoFvBAACAvz/glr9tmlzBAACAP4usLLqw7lzBAACAPxfX7T8DGlzBAACAP+0pa0C8p1nBAACAPwEfr0Cgk1XBAACAP6YU50AJ70/BAACAP9CEDkFzzEjBAACAPzfEJUHmLUHBAACAP0niO0HHgjjBAACAP+C8UEFF4y7BAACAPzo2ZEHbaSTBAACAP/81dkF5MhnBAACAP/hUg0F9WQ3BAACAPxLDikGx+gDBAACAP/pokEHCZOzAAACAP6yQlUEV89XA
AACAPz8qkUHRWgDBAACAP3sGjEGm9BTBAACAP38fhkEI3SjBAACAP88Ef0FX4DvBAACAPw30cUFEI0zBAACAP4DYY0GOhFvBAACAP17BVEHQ8mnBAACAP67mRUHfdHbBAACAP+NANkEaCoHBAACAP2TQIkEpTIfBAACAPyKbDkEu0IzBAACAP9AV80DSm5HBAACAP2ylx0BmoZXBAACAPxCen0DMkZjBAACAP2DFbUBi2ZrBAACAP0dAG0BfdZzBAACAPwDKmz+vWJ3BAACAP4usLLrppJ3BAACAP67quL+uOZ3BAACAPzcBOMBG+pvBAACAP7mIicD+5JnBAACAP+BftsAt/ZbBAACAPwrx3cCerpPBAACAPwRKAsHRu4/BAACAP9QOFcHiKIvBAACAP37xJcHLXIbBAACAP+pFNsEaCoHBAACAPyLBSMElMHTBAACAPyMcWsGzD2XBAACAP0ppasHtoVTBAACAPzB9ecEwDUPBAACAP+D4gsGPUTLBAACAP3CkiMFmzCDBAACAP8e6jcHOkA7BAACAP5vqkcH4FfrAAACAPyuTlcEV89XAAACAP9fyjsFxWfLAAACAPxuLh8GxpQbBAACAP7cOfsEe4BPBAACAPw4ja8FbbSDBAACAP7JgVsGzKSzBAACAP8PdP8E38DbBAACAP+W4J8FInEDBAACAP28YDsEhC0nBAACAPwNS5sAtHVDBAACAPw3rs8BEOVXBAACAP7QBgMAJFFnBAACAP2fDFcCDoFvBAACAPz/glr9tmlzBAACAv4usLLrj6FxBAACAP4usLLrj6FxBAACAP4sB7r82FFxBAACAP9g+a8DvoVlBAACAP1cpr8DSjVVBAACAP+Ie58A76U9BAACAP+SJDsGjxkhBAACAP0TJJcEVKEFBAACAP1DnO8H0fDhBAACAP+PBUMFu3S5BAACAPzo7ZMEBZCRBAACAP/46dsGbLBlBAACAP3ZXg8GcUw1BAACAP5DFisHL9ABBAACAP3hrkMHxWOxAAACAPyuTlcE859VAAACAvyuTlcE859VAAACAv4sB7r82FFxBAACAv9g+a8DvoVlBAACAv1cpr8DSjVVBAACAv+Ie58A76U9BAACAv+SJDsGjxkhBAACAv0TJJcEVKEFBAACAv1DnO8H0fDhBAACAv+PBUMFu3S5BAACAvzo7ZMEBZCRBAACAv/46dsGbLBlBAACAv3ZXg8GcUw1BAACAv5DFisHL9ABBAACAv3hrkMHxWOxAAACAvyuTlcE859VAAACAPyuTlcE859VAAACAP9AskcHFVABBAACAPx8JjMF87hRBAACAPzUihsHG1ihBAACAP1sKf8EC2jtBAACAP7L5ccHiHExBAACAPzveY8ElfltBAACAPyzHVMFl7GlBAACAPx/sRcHIbnZBAACAP+pFNsE0B4FBAACAv+pFNsE0B4FBAACAv9AskcHFVABBAACAvx8JjMF87hRBAACAvzUihsHG1ihBAACAv1sKf8EC2jtBAACAv7L5ccHiHExBAACAvzveY8ElfltBAACAvyzHVMFl7GlBAACAvx/sRcHIbnZBAACAv+pFNsE0B4FBAACAP+pFNsE0B4FBAACAP9rVIsEsSYdBAACAPwKhDsEdzYxBAACAPwEi88C9mJFBAACAP96xx8BRnpVBAACAP5mqn8C7jphBAACAP3nebcBY1ppBAACAP05ZG8BbcpxBAACAP334m7+1VZ1BAACAP4usLLrzoZ1BAACAv4usLLrzoZ1BAACAv9rVIsEsSYdBAACAvwKhDsEdzYxBAACAvwEi88C9mJFBAACAv96xx8BRnpVBAACAv5mqn8C7jphBAACAv3nebcBY1ppBAACAv05ZG8BbcpxBAACAv334m7+1VZ1BAACAv4usLLrzoZ1BAACAP4usLLrzoZ1BAACAP17BuD+3Np1BAACAP0TtN0BP95tBAACAP9F+iUAI4plBAACAP/BVtkA5+pZB
AACAPwrn3UCtq5NBAACAPwBFAkHhuI9BAACAP9MJFUHzJYtBAACAP3XsJUHgWYZBAACAP+NANkE0B4FBAACAv+NANkE0B4FBAACAv17BuD+3Np1BAACAv0TtN0BP95tBAACAv9F+iUAI4plBAACAv/BVtkA5+pZBAACAvwrn3UCtq5NBAACAvwBFAkHhuI9BAACAv9MJFUHzJYtBAACAv3XsJUHgWYZBAACAv+NANkE0B4FBAACAP+NANkE0B4FBAACAPxW8SEFSKnRBAACAPxMXWkHYCWVBAACAPzpkakEGnFRBAACAPyV4eUE5B0NBAACAP172gkGGSzJBAACAP/ShiEFHxiBBAACAP1K4jUGTig5BAACAPyDokUHPCfpAAACAP6yQlUE859VAAACAv6yQlUE859VAAACAvxW8SEFSKnRBAACAvxMXWkHYCWVBAACAvzpkakEGnFRBAACAvyV4eUE5B0NBAACAv172gkGGSzJBAACAv/ShiEFHxiBBAACAv1K4jUGTig5BAACAvyDokUHPCfpAAACAv6yQlUE859VAAACAP6yQlUE859VAAACAP0f0jkH3PfJAAACAP5yPh0E8kgZBAACAP9YZfkGayhNBAACAP3gta0HhVyBBAACAPwRoVkHDFSxBAACAPz/gP0G63jZBAACAP6i1J0GdjUBBAACAP5EPDkE0/0hBAACAPz835kCaE1BBAACAP+nLs0BMMVVBAACAP93Ff0AuDVlBAACAP0KSFUBRmltBAACAP0KXlj+ElFxBAACAP4usLLrj6FxBAACAv4usLLrj6FxBAACAv0f0jkH3PfJAAACAv5yPh0E8kgZBAACAv9YZfkGayhNBAACAv3gta0HhVyBBAACAvwRoVkHDFSxBAACAvz/gP0G63jZBAACAv6i1J0GdjUBBAACAv5EPDkE0/0hBAACAvz835kCaE1BBAACAv+nLs0BMMVVBAACAv93Ff0AuDVlBAACAv0KSFUBRmltBAACAv0KXlj+ElFxBAACAv4usLLrj6FxBAACAv4sB7r82FFxBAACAv9g+a8DvoVlBAACAv1cpr8DSjVVBAACAv+Ie58A76U9BAACAv+SJDsGjxkhBAACAv0TJJcEVKEFBAACAv1DnO8H0fDhBAACAv+PBUMFu3S5BAACAvzo7ZMEBZCRBAACAv/46dsGbLBlBAACAv3ZXg8GcUw1BAACAv5DFisHL9ABBAACAv3hrkMHxWOxAAACAvyuTlcE859VAAACAv9AskcHFVABBAACAvx8JjMF87hRBAACAvzUihsHG1ihBAACAv1sKf8EC2jtBAACAv7L5ccHiHExBAACAvzveY8ElfltBAACAvyzHVMFl7GlBAACAvx/sRcHIbnZBAACAv+pFNsE0B4FBAACAv9rVIsEsSYdBAACAvwKhDsEdzYxBAACAvwEi88C9mJFBAACAv96xx8BRnpVBAACAv5mqn8C7jphBAACAv3nebcBY1ppBAACAv05ZG8BbcpxBAACAv334m7+1VZ1BAACAv4usLLrzoZ1BAACAv17BuD+3Np1BAACAv0TtN0BP95tBAACAv9F+iUAI4plBAACAv/BVtkA5+pZBAACAvwrn3UCtq5NBAACAvwBFAkHhuI9BAACAv9MJFUHzJYtBAACAv3XsJUHgWYZBAACAv+NANkE0B4FBAACAvxW8SEFSKnRBAACAvxMXWkHYCWVBAACAvzpkakEGnFRBAACAvyV4eUE5B0NBAACAv172gkGGSzJBAACAv/ShiEFHxiBBAACAv1K4jUGTig5BAACAvyDokUHPCfpAAACAv6yQlUE859VAAACAv0f0jkH3PfJAAACAv5yPh0E8kgZBAACAv9YZfkGayhNBAACAv3gta0HhVyBBAACAvwRoVkHDFSxBAACAvz/gP0G63jZBAACAv6i1J0GdjUBBAACAv5EPDkE0/0hBAACAvz835kCaE1BBAACAv+nLs0BMMVVB
AACAv93Ff0AuDVlBAACAv0KSFUBRmltBAACAv0KXlj+ElFxBAACAP4usLLrj6FxBAACAP4sB7r82FFxBAACAP9g+a8DvoVlBAACAP1cpr8DSjVVBAACAP+Ie58A76U9BAACAP+SJDsGjxkhBAACAP0TJJcEVKEFBAACAP1DnO8H0fDhBAACAP+PBUMFu3S5BAACAPzo7ZMEBZCRBAACAP/46dsGbLBlBAACAP3ZXg8GcUw1BAACAP5DFisHL9ABBAACAP3hrkMHxWOxAAACAPyuTlcE859VAAACAP9AskcHFVABBAACAPx8JjMF87hRBAACAPzUihsHG1ihBAACAP1sKf8EC2jtBAACAP7L5ccHiHExBAACAPzveY8ElfltBAACAPyzHVMFl7GlBAACAPx/sRcHIbnZBAACAP+pFNsE0B4FBAACAP9rVIsEsSYdBAACAPwKhDsEdzYxBAACAPwEi88C9mJFBAACAP96xx8BRnpVBAACAP5mqn8C7jphBAACAP3nebcBY1ppBAACAP05ZG8BbcpxBAACAP334m7+1VZ1BAACAP4usLLrzoZ1BAACAP17BuD+3Np1BAACAP0TtN0BP95tBAACAP9F+iUAI4plBAACAP/BVtkA5+pZBAACAPwrn3UCtq5NBAACAPwBFAkHhuI9BAACAP9MJFUHzJYtBAACAP3XsJUHgWYZBAACAP+NANkE0B4FBAACAPxW8SEFSKnRBAACAPxMXWkHYCWVBAACAPzpkakEGnFRBAACAPyV4eUE5B0NBAACAP172gkGGSzJBAACAP/ShiEFHxiBBAACAP1K4jUGTig5BAACAPyDokUHPCfpAAACAP6yQlUE859VAAACAP0f0jkH3PfJAAACAP5yPh0E8kgZBAACAP9YZfkGayhNBAACAP3gta0HhVyBBAACAPwRoVkHDFSxBAACAPz/gP0G63jZBAACAP6i1J0GdjUBBAACAP5EPDkE0/0hBAACAPz835kCaE1BBAACAP+nLs0BMMVVBAACAP93Ff0AuDVlBAACAP0KSFUBRmltBAACAP0KXlj+ElFxBlHSUYowJdHJpYW5nbGVzlGgdaCBLAIWUaCKHlFKUKEsBTegFhZRoJ4wCdTSUiYiHlFKUKEsDaCtOTk5K/////0r/////SwB0lGKJQqAXAAARAAAAAAAAAAEAAAARAAAAAQAAAAIAAAASAAAAAgAAAAMAAAASAAAAEQAAAAIAAAATAAAAAwAAAAQAAAATAAAABAAAAAUAAAATAAAAEgAAAAMAAAAUAAAAEwAAAAUAAAAVAAAABQAAAAYAAAAVAAAAFAAAAAUAAAAWAAAABgAAAAcAAAAWAAAAFQAAAAYAAAAXAAAABwAAAAgAAAAXAAAAFgAAAAcAAAAYAAAACAAAAAkAAAAYAAAAFwAAAAgAAAAZAAAACQAAAAoAAAAZAAAAGAAAAAkAAAAaAAAACgAAAAsAAAAaAAAAGQAAAAoAAAAbAAAACwAAAAwAAAAbAAAAGgAAAAsAAAAcAAAADAAAAA0AAAAcAAAAGwAAAAwAAAAdAAAADQAAAA4AAAAdAAAAHAAAAA0AAAAQAAAADgAAAA8AAAAQAAAAHQAAAA4AAAAqAAAAHgAAAB8AAAAqAAAAHwAAACAAAAArAAAAIAAAACEAAAArAAAAKgAAACAAAAAsAAAAIQAAACIAAAAsAAAAKwAAACEAAAAtAAAAIgAAACMAAAAtAAAALAAAACIAAAAuAAAAIwAAACQAAAAuAAAALQAAACMAAAAvAAAAJAAAACUAAAAvAAAALgAAACQAAAAwAAAAJQAAACYAAAAwAAAALwAAACUAAAAxAAAAJgAAACcAAAAxAAAAJwAAACgAAAAxAAAAMAAAACYAAAApAAAAMQAAACgAAAA+AAAAMgAAADMAAAA+AAAAMwAAADQAAAA/AAAANAAAADUAAAA/AAAAPgAAADQAAABAAAAANQAAADYAAABAAAAA
PwAAADUAAABBAAAANgAAADcAAABBAAAAQAAAADYAAABCAAAANwAAADgAAABCAAAAQQAAADcAAABDAAAAOAAAADkAAABDAAAAQgAAADgAAABEAAAAOQAAADoAAABEAAAAOgAAADsAAABEAAAAQwAAADkAAABFAAAARAAAADsAAAA9AAAAOwAAADwAAAA9AAAARQAAADsAAABSAAAARgAAAEcAAABSAAAARwAAAEgAAABTAAAASAAAAEkAAABTAAAAUgAAAEgAAABUAAAASQAAAEoAAABUAAAAUwAAAEkAAABVAAAASgAAAEsAAABVAAAAVAAAAEoAAABWAAAASwAAAEwAAABWAAAATAAAAE0AAABWAAAAVQAAAEsAAABXAAAAVgAAAE0AAABYAAAATQAAAE4AAABYAAAAVwAAAE0AAABZAAAATgAAAE8AAABZAAAATwAAAFAAAABZAAAAWAAAAE4AAABRAAAAWQAAAFAAAABmAAAAWgAAAFsAAABmAAAAWwAAAFwAAABnAAAAXAAAAF0AAABnAAAAXQAAAF4AAABnAAAAZgAAAFwAAABoAAAAZwAAAF4AAABpAAAAXgAAAF8AAABpAAAAXwAAAGAAAABpAAAAaAAAAF4AAABqAAAAaQAAAGAAAABrAAAAYAAAAGEAAABrAAAAagAAAGAAAABsAAAAYQAAAGIAAABsAAAAawAAAGEAAABtAAAAYgAAAGMAAABtAAAAYwAAAGQAAABtAAAAbAAAAGIAAABlAAAAbQAAAGQAAABuAAAAbwAAAHAAAAB/AAAAcAAAAHEAAAB/AAAAbgAAAHAAAACAAAAAcQAAAHIAAACAAAAAfwAAAHEAAACBAAAAcgAAAHMAAACBAAAAgAAAAHIAAACCAAAAgQAAAHMAAACDAAAAcwAAAHQAAACDAAAAggAAAHMAAACEAAAAdAAAAHUAAACEAAAAgwAAAHQAAACFAAAAdQAAAHYAAACFAAAAdgAAAHcAAACFAAAAhAAAAHUAAACGAAAAhQAAAHcAAACHAAAAdwAAAHgAAACHAAAAhgAAAHcAAACIAAAAeAAAAHkAAACIAAAAhwAAAHgAAACJAAAAeQAAAHoAAACJAAAAiAAAAHkAAACKAAAAegAAAHsAAACKAAAAewAAAHwAAACKAAAAiQAAAHoAAACLAAAAigAAAHwAAAB+AAAAfAAAAH0AAAB+AAAAiwAAAHwAAACYAAAAmwAAAJwAAACYAAAAmQAAAJsAAACaAAAAmwAAAJkAAACXAAAAnAAAAJ0AAACXAAAAmAAAAJwAAACWAAAAnQAAAJ4AAACWAAAAlwAAAJ0AAACVAAAAngAAAJ8AAACVAAAAlgAAAJ4AAACUAAAAnwAAAKAAAACUAAAAlQAAAJ8AAACTAAAAoAAAAKEAAACTAAAAlAAAAKAAAACSAAAAoQAAAKIAAACSAAAAogAAAKMAAACSAAAAkwAAAKEAAACRAAAAowAAAKQAAACRAAAAkgAAAKMAAACQAAAAkQAAAKQAAACQAAAApAAAAKUAAACQAAAApQAAAKYAAACPAAAApgAAAKcAAACPAAAAkAAAAKYAAACOAAAAjwAAAKcAAACOAAAApwAAAKgAAACOAAAAqAAAAKkAAACNAAAAjgAAAKkAAACNAAAAqQAAAKoAAACNAAAAqgAAAKsAAACMAAAAjQAAAKsAAACMAAAAqwAAAKwAAACMAAAArAAAAK0AAADLAAAArQAAAK4AAADLAAAAjAAAAK0AAADKAAAArgAAAK8AAADKAAAAywAAAK4AAADJAAAArwAAALAAAADJAAAAygAAAK8AAADIAAAAsAAAALEAAADIAAAAsQAAALIAAADIAAAAyQAAALAAAADHAAAAyAAAALIAAADHAAAAsgAAALMAAADGAAAAswAAALQAAADGAAAAtAAAALUAAADGAAAAxwAAALMAAADFAAAAtQAAALYAAADFAAAAxgAAALUAAAC3AAAA
xQAAALYAAADEAAAAxQAAALcAAAC4AAAAxAAAALcAAADDAAAAxAAAALgAAAC5AAAAwwAAALgAAADCAAAAwwAAALkAAAC6AAAAwgAAALkAAADBAAAAwgAAALoAAAC7AAAAwQAAALoAAADAAAAAwQAAALsAAAC8AAAAwAAAALsAAAC/AAAAwAAAALwAAAC9AAAAvwAAALwAAAC+AAAAvwAAAL0AAADYAAAA3AAAANsAAADYAAAA2wAAANkAAADaAAAA2QAAANsAAADXAAAA3QAAANwAAADXAAAA3AAAANgAAADWAAAA3gAAAN0AAADWAAAA3QAAANcAAADVAAAA3wAAAN4AAADVAAAA3gAAANYAAADUAAAA4AAAAN8AAADUAAAA3wAAANUAAADTAAAA4QAAAOAAAADTAAAA4AAAANQAAADSAAAA4gAAAOEAAADSAAAA4wAAAOIAAADSAAAA4QAAANMAAADRAAAA5AAAAOMAAADRAAAA4wAAANIAAADQAAAA5AAAANEAAADQAAAA5QAAAOQAAADQAAAA5gAAAOUAAADPAAAA5wAAAOYAAADPAAAA5gAAANAAAADOAAAA5wAAAM8AAADOAAAA6AAAAOcAAADOAAAA6QAAAOgAAADNAAAA6QAAAM4AAADNAAAA6gAAAOkAAADNAAAA6wAAAOoAAADMAAAA6wAAAM0AAADMAAAA7AAAAOsAAADMAAAA7QAAAOwAAAALAQAA7gAAAO0AAAALAQAA7QAAAMwAAAAKAQAA7wAAAO4AAAAKAQAA7gAAAAsBAAAJAQAA8AAAAO8AAAAJAQAA7wAAAAoBAAAIAQAA8QAAAPAAAAAIAQAA8gAAAPEAAAAIAQAA8AAAAAkBAAAHAQAA8gAAAAgBAAAHAQAA8wAAAPIAAAAGAQAA9AAAAPMAAAAGAQAA9QAAAPQAAAAGAQAA8wAAAAcBAAAFAQAA9gAAAPUAAAAFAQAA9QAAAAYBAAD3AAAA9gAAAAUBAAAEAQAA9wAAAAUBAAD4AAAA9wAAAAQBAAADAQAA+AAAAAQBAAD5AAAA+AAAAAMBAAACAQAA+QAAAAMBAAD6AAAA+QAAAAIBAAABAQAA+gAAAAIBAAD7AAAA+gAAAAEBAAAAAQAA+wAAAAEBAAD8AAAA+wAAAAABAAD/AAAA/AAAAAABAAD9AAAA/AAAAP8AAAD+AAAA/QAAAP8AAAAdAQAADAEAAA0BAAAdAQAADQEAAA4BAAAeAQAADgEAAA8BAAAeAQAAHQEAAA4BAAAfAQAADwEAABABAAAfAQAAHgEAAA8BAAAgAQAAEAEAABEBAAAgAQAAHwEAABABAAAhAQAAEQEAABIBAAAhAQAAIAEAABEBAAAiAQAAEgEAABMBAAAiAQAAIQEAABIBAAAjAQAAEwEAABQBAAAjAQAAIgEAABMBAAAkAQAAFAEAABUBAAAkAQAAIwEAABQBAAAlAQAAFQEAABYBAAAlAQAAJAEAABUBAAAmAQAAFgEAABcBAAAmAQAAJQEAABYBAAAnAQAAFwEAABgBAAAnAQAAGAEAABkBAAAnAQAAJgEAABcBAAAoAQAAJwEAABkBAAApAQAAGQEAABoBAAApAQAAKAEAABkBAAAcAQAAGgEAABsBAAAcAQAAKQEAABoBAAA2AQAAKgEAACsBAAA2AQAAKwEAACwBAAA3AQAALAEAAC0BAAA3AQAANgEAACwBAAA4AQAALQEAAC4BAAA4AQAANwEAAC0BAAA5AQAALgEAAC8BAAA5AQAAOAEAAC4BAAA6AQAALwEAADABAAA6AQAAOQEAAC8BAAA7AQAAMAEAADEBAAA7AQAAOgEAADABAAA8AQAAMQEAADIBAAA8AQAAOwEAADEBAAA9AQAAMgEAADMBAAA9AQAAMwEAADQBAAA9AQAAPAEAADIBAAA1AQAAPQEAADQBAABKAQAAPgEAAD8BAABKAQAAPwEAAEABAABLAQAAQAEAAEEBAABLAQAA
SgEAAEABAABMAQAAQQEAAEIBAABMAQAASwEAAEEBAABNAQAAQgEAAEMBAABNAQAAQwEAAEQBAABNAQAATAEAAEIBAABOAQAATQEAAEQBAABPAQAARAEAAEUBAABPAQAATgEAAEQBAABQAQAARQEAAEYBAABQAQAATwEAAEUBAABRAQAARgEAAEcBAABRAQAARwEAAEgBAABRAQAAUAEAAEYBAABJAQAAUQEAAEgBAABeAQAAUgEAAFMBAABeAQAAUwEAAFQBAABfAQAAVAEAAFUBAABfAQAAXgEAAFQBAABgAQAAVQEAAFYBAABgAQAAXwEAAFUBAABhAQAAVgEAAFcBAABhAQAAVwEAAFgBAABhAQAAYAEAAFYBAABiAQAAYQEAAFgBAABjAQAAWAEAAFkBAABjAQAAYgEAAFgBAABkAQAAWQEAAFoBAABkAQAAYwEAAFkBAABlAQAAWgEAAFsBAABlAQAAZAEAAFoBAABdAQAAWwEAAFwBAABdAQAAZQEAAFsBAAByAQAAZgEAAGcBAAByAQAAZwEAAGgBAABzAQAAaAEAAGkBAABzAQAAcgEAAGgBAAB0AQAAaQEAAGoBAAB0AQAAcwEAAGkBAAB1AQAAagEAAGsBAAB1AQAAdAEAAGoBAAB2AQAAawEAAGwBAAB2AQAAdQEAAGsBAAB3AQAAbAEAAG0BAAB3AQAAdgEAAGwBAAB4AQAAbQEAAG4BAAB4AQAAdwEAAG0BAAB5AQAAbgEAAG8BAAB5AQAAeAEAAG4BAABxAQAAbwEAAHABAABxAQAAeQEAAG8BAACLAQAAegEAAHsBAACLAQAAewEAAHwBAACMAQAAfAEAAH0BAACMAQAAiwEAAHwBAACNAQAAfQEAAH4BAACNAQAAfgEAAH8BAACNAQAAjAEAAH0BAACOAQAAjQEAAH8BAACPAQAAfwEAAIABAACPAQAAjgEAAH8BAACQAQAAgAEAAIEBAACQAQAAjwEAAIABAACRAQAAgQEAAIIBAACRAQAAggEAAIMBAACRAQAAkAEAAIEBAACSAQAAkQEAAIMBAACTAQAAgwEAAIQBAACTAQAAhAEAAIUBAACTAQAAkgEAAIMBAACUAQAAkwEAAIUBAACVAQAAhQEAAIYBAACVAQAAlAEAAIUBAACWAQAAhgEAAIcBAACWAQAAlQEAAIYBAACXAQAAhwEAAIgBAACXAQAAlgEAAIcBAACKAQAAiAEAAIkBAACKAQAAlwEAAIgBAADLAQAAyQEAAMoBAADIAQAAyQEAAMsBAADMAQAAyAEAAMsBAADHAQAAyAEAAMwBAADNAQAAxwEAAMwBAADGAQAAxwEAAM0BAADOAQAAxgEAAM0BAADFAQAAxgEAAM4BAADPAQAAxQEAAM4BAADEAQAAxQEAAM8BAADQAQAAxAEAAM8BAADDAQAAxAEAANABAADRAQAAwwEAANABAADCAQAAwwEAANEBAADBAQAA0QEAANIBAADBAQAAwgEAANEBAADAAQAAwQEAANIBAAC/AQAA0gEAANMBAAC/AQAAwAEAANIBAAC+AQAAvwEAANMBAAC+AQAA0wEAANQBAAC9AQAAvgEAANQBAAC8AQAAvQEAANQBAAC8AQAA1AEAANUBAAC7AQAAvAEAANUBAAC7AQAA1QEAANYBAAC6AQAAuwEAANYBAAC6AQAA1gEAANcBAAC5AQAAugEAANcBAAC5AQAA1wEAAJgBAAC4AQAAuQEAAJgBAAC3AQAAmAEAAJkBAAC3AQAAuAEAAJgBAAC2AQAAtwEAAJkBAAC1AQAAmQEAAJoBAAC1AQAAtgEAAJkBAAC0AQAAtQEAAJoBAACzAQAAmgEAAJsBAACzAQAAtAEAAJoBAACyAQAAmwEAAJwBAACyAQAAswEAAJsBAACnAQAApAEAAKUBAACnAQAApQEAAKYBAACxAQAAsgEAAJwBAACoAQAAowEAAKQBAACoAQAApAEAAKcBAACwAQAA
nAEAAJ0BAACwAQAAsQEAAJwBAACpAQAAogEAAKMBAACpAQAAowEAAKgBAACvAQAAnQEAAJ4BAACvAQAAsAEAAJ0BAACqAQAAoQEAAKIBAACqAQAAogEAAKkBAACuAQAArwEAAJ4BAACrAQAAoAEAAKEBAACrAQAAoQEAAKoBAACtAQAArgEAAJ4BAACtAQAAngEAAJ8BAACsAQAAnwEAAKABAACsAQAAoAEAAKsBAACsAQAArQEAAJ8BAAALAgAACgIAAAkCAAAIAgAACwIAAAkCAAAMAgAACwIAAAgCAAAHAgAADAIAAAgCAAANAgAADAIAAAcCAAAGAgAADQIAAAcCAAAOAgAADQIAAAYCAAAFAgAADgIAAAYCAAAPAgAADgIAAAUCAAAEAgAADwIAAAUCAAAQAgAADwIAAAQCAAADAgAAEAIAAAQCAAARAgAAEAIAAAMCAAACAgAAEQIAAAMCAAABAgAAEgIAABECAAABAgAAEQIAAAICAAAAAgAAEgIAAAECAAD/AQAAEwIAABICAAD/AQAAEgIAAAACAAD+AQAAEwIAAP8BAAD+AQAAFAIAABMCAAD9AQAAFAIAAP4BAAD8AQAAFAIAAP0BAAD8AQAAFQIAABQCAAD7AQAAFQIAAPwBAAD7AQAAFgIAABUCAAD6AQAAFgIAAPsBAAD6AQAAFwIAABYCAAD5AQAAFwIAAPoBAAD5AQAA2AEAABcCAAD4AQAA2AEAAPkBAAD3AQAA2QEAANgBAAD3AQAA2AEAAPgBAAD2AQAA2QEAAPcBAAD1AQAA2gEAANkBAAD1AQAA2QEAAPYBAAD0AQAA2gEAAPUBAADzAQAA2wEAANoBAADzAQAA2gEAAPQBAADyAQAA3AEAANsBAADyAQAA2wEAAPMBAADnAQAA5QEAAOQBAADnAQAA5gEAAOUBAADxAQAA3AEAAPIBAADoAQAA5AEAAOMBAADoAQAA5wEAAOQBAADwAQAA3QEAANwBAADwAQAA3AEAAPEBAADpAQAA4wEAAOIBAADpAQAA6AEAAOMBAADvAQAA3gEAAN0BAADvAQAA3QEAAPABAADqAQAA4gEAAOEBAADqAQAA6QEAAOIBAADuAQAA3gEAAO8BAADrAQAA4QEAAOABAADrAQAA6gEAAOEBAADtAQAA3gEAAO4BAADtAQAA3wEAAN4BAADsAQAA4AEAAN8BAADsAQAA6wEAAOABAADsAQAA3wEAAO0BAACUdJRijAdub3JtYWxzlGgdaCBLAIWUaCKHlFKUKEsBTRgCSwOGlGgqiUIgGQAAAAAApSCYfwoAAIA/AAAApSCYfwoAAIA/kfcGpZlfZb0pmX8/KIYNpc5B5b0XZH4/L8oTpa0LLb5zUXw/Ka4ZpVMXaL59Vnk/8Ssfpbbrkb4IYnU/Sb4jpXgirb5W63A/Vu0npUasyL6JhGs/u6wrpROI5L5sFWU/KewupYZWAL/MgV0/uJYxpZODDr/cqVQ/X5IzpZC5HL8ua0o/zb80pS3dKr9toj4/EgM1pW2kNr+3YTM/En00pTtXQr/ooiY/En00pTtXQr/ooiY/kfcGpZlfZb0pmX8/KIYNpc5B5b0XZH4/L8oTpa0LLb5zUXw/Ka4ZpVMXaL59Vnk/8Ssfpbbrkb4IYnU/Sb4jpXgirb5W63A/Vu0npUasyL6JhGs/u6wrpROI5L5sFWU/KewupYZWAL/MgV0/uJYxpZODDr/cqVQ/X5IzpZC5HL8ua0o/zb80pS3dKr9toj4/EgM1pW2kNr+3YTM/ynMkJREbcD8EmbG+ynMkJREbcD8EmbG++n8pJR8LaT+r6dO+iKQtJajKYD/P/PS+FOUwJf9QVz8qeQq/bDczJWazTD9zuxm/uX80JR83Qj9TyCa/cgI1JWrkNj95IDO/qb40JUrHKj8Jtj6/lM0zJZbaHj+SwEi/GzAyJeBLEj9WFFK/GzAyJeBLEj9WFFK/+n8pJR8LaT+r6dO+iKQtJajK
YD/P/PS+FOUwJf9QVz8qeQq/bDczJWazTD9zuxm/uX80JR83Qj9TyCa/cgI1JWrkNj95IDO/qb40JUrHKj8Jtj6/lM0zJZbaHj+SwEi//S8yJR9LEj/cFFK//S8yJR9LEj/cFFK/zGgvJWmxAj8wIFy/3LkrJfvw5D47+2S/Hx8nJVAgwz4Xrmy/MqIhJVNDoD66InO/BPYbJUMjgD5o2ne/Ep4VJa7jPj44g3u//qAOJXlK+T2sGH6/RJQHJTouej2khX+/AAAAJafl3ykAAIC/AAAAJafl3ykAAIC/zGgvJWmxAj8wIFy/3LkrJfvw5D47+2S/Hx8nJVAgwz4Xrmy/MqIhJVNDoD66InO/BPYbJUMjgD5o2ne/Ep4VJa7jPj44g3u//qAOJXlK+T2sGH6/RJQHJTouej2khX+/AAAAJRDn36kAAIC/AAAAJRDn36kAAIC/7cjsJHNXlL3cU3+/g2jYJAqqE74GU32/HsrCJBHDXL7i+nm/tCGsJHpekr7xUHW/TPKWJKkfsr4hAnC/NRqBJJUi0b5/q2m/EmJVJPJE776CU2K/1N0qJL8wBb+pn1q/WiL/I81LEr9jFFK/WiL/I81LEr9jFFK/7cjsJHNXlL3cU3+/g2jYJAqqE74GU32/HsrCJBHDXL7i+nm/tCGsJHpekr7xUHW/TPKWJKkfsr4hAnC/NRqBJJUi0b5/q2m/EmJVJPJE776CU2K/1N0qJL8wBb+pn1q/4yP/I5NLEr+LFFK/4yP/I5NLEr+LFFK/AjiXI5wgIb+c7ka/v6a7ImsPL7/XyTq/Y7fookUlPL/PmS2/XQajoyw/SL+UfR+/4VwApCs+Ur+7DxK/ILEupFRYW7/E/wO/7kxcpEODY7+Yueq+f/CCpCg8ar9Tl86+uU+XpGIbcL9Rl7G+uU+XpGIbcL9Rl7G+AjiXI5wgIb+c7ka/v6a7ImsPL7/XyTq/Y7fookUlPL/PmS2/XQajoyw/SL+UfR+/4VwApCs+Ur+7DxK/ILEupFRYW7/E/wO/7kxcpEODY7+Yueq+f/CCpCg8ar9Tl86+O55gI0WDQj9+byY/O55gI0WDQj9+byY/vEWmoTa3Mz9NUDY/Us94o6/MJD+Z5kM/TXPqozNTFT8G8E8/R/copMDHBT9jQ1o/54dZpI2M7D46CmM/rouDpMe/zT6Ra2o/meyYpEU7rz48inA/EgGtpP8FkT4RhHU/xOG/pBhAZj7JcXk/xenPpJ7xMD4tJnw/lhbfpOUy+D3zHH4/pnLtpClkjz0rX38/cOL2pLdWDz3c138/AAAApYN3fgoAAIA/AAAApYN3fgoAAIA/vEWmoTa3Mz9NUDY/Us94o6/MJD+Z5kM/TXPqozNTFT8G8E8/R/copMDHBT9jQ1o/54dZpI2M7D46CmM/rouDpMe/zT6Ra2o/meyYpEU7rz48inA/EgGtpP8FkT4RhHU/xOG/pBhAZj7JcXk/xenPpJ7xMD4tJnw/lhbfpOUy+D3zHH4/pnLtpClkjz0rX38/cOL2pLdWDz3c138/AACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAA
ACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAAAAJfGYf4oAAIC/AAAAJfGYf4oAAIC/lvcGJURgZT0pmX+/MYYNJWpC5T0VZH6/OsoTJRkMLT5uUXy/Na4ZJdYXaD51Vnm//ysfJQHskT79YXW/Vr4jJckirT5H63C/Yu0nJZqsyD53hGu/xqwrJWiI5D5XFWW/MuwuJbBWAD+0gV2/vpYxJbuDDj/BqVS/Y5IzJba5HD8Ra0q/z780JU/dKj9Poj6/EgM1JYqkNj+ZYTO/EH00JVJXQj/Noia/EH00JVJXQj/Noia/lvcGJURgZT0pmX+/MYYNJWpC5T0VZH6/OsoTJRkMLT5uUXy/Na4ZJdYXaD51Vnm//ysfJQHskT79YXW/Vr4jJckirT5H63C/Yu0nJZqsyD53hGu/xqwrJWiI5D5XFWW/MuwuJbBW
AD+0gV2/vpYxJbuDDj/BqVS/Y5IzJba5HD8Ra0q/z780JU/dKj9Poj6/EgM1JYqkNj+ZYTO/eHMkpXEbcL8Bl7E+eHMkpXEbcL8Bl7E+vn8ppYMLab/z59M+XaQtpQ7LYL9Z+/Q++eQwpWNRV7+OeAo/XDczpcazTL/yuhk/sn80pX03Qr/nxyY/cQI1pcXkNr8cIDM/rr40paTHKr+4tT4/lc0zpZ7aHr+LwEg/DzAypZNLEr+LFFI/DzAypZNLEr+LFFI/vn8ppYMLab/z59M+XaQtpQ7LYL9Z+/Q++eQwpWNRV7+OeAo/XDczpcazTL/yuhk/sn80pX03Qr/nxyY/cQI1pcXkNr8cIDM/rr40paTHKr+4tT4/lc0zpZ7aHr+LwEg/IDAypQRMEr88FFI/IDAypQRMEr88FFI/8GgvpRiyAr/IH1w/BLorpTby5L7s+mQ/SB8npWQhw77erWw/X6IhpV9EoL6OInM/OfYbpVwkgL5D2nc/UJ4VpQ/mPr4cg3s/RqEOpaVP+b2YGH4/bJQHpZQzer2ehX8/AAAApT5mewoAAIA/AAAApT5mewoAAIA/8GgvpRiyAr/IH1w/BLorpTby5L7s+mQ/SB8npWQhw77erWw/X6IhpV9EoL6OInM/OfYbpVwkgL5D2nc/UJ4VpQ/mPr4cg3s/RqEOpaVP+b2YGH4/bJQHpZQzer2ehX8/AAAApapkewoAAIA/AAAApapkewoAAIA/E8nspFtWlD3eU38/rmjYpHapEz4LU30/S8rCpH7CXD7r+nk/2SGspEBekj75UHU/avKWpH0fsj4pAnA/UhqBpG0i0T6Iq2k/ZGJVpLxE7z6QU2I/o94qpH8wBT/Rn1o/fSX/o1ZLEj+2FFI/fSX/o1ZLEj+2FFI/E8nspFtWlD3eU38/rmjYpHapEz4LU30/S8rCpH7CXD7r+nk/2SGspEBekj75UHU/avKWpH0fsj4pAnA/UhqBpG0i0T6Iq2k/ZGJVpLxE7z6QU2I/o94qpH8wBT/Rn1o/1yH/o+BLEj9WFFI/1yH/o+BLEj9WFFI/9DWXo+QgIT9h7kY/+J67oqsPLz+byTo/W77oInolPD+VmS0/yQejI1Q/SD9ifR8/ZF0AJEY+Uj+UDxI/ZLEuJGFYWz+v/wM/6UxcJEKDYz+bueo+BfCCJAM8aj/8l84+xk6XJCAbcD+0mLE+xk6XJCAbcD+0mLE+9DWXo+QgIT9h7kY/+J67oqsPLz+byTo/W77oInolPD+VmS0/yQejI1Q/SD9ifR8/ZF0AJEY+Uj+UDxI/ZLEuJGFYWz+v/wM/6UxcJEKDYz+bueo+BfCCJAM8aj/8l84+7yJho+uKQr+NZia/7yJho+uKQr+NZia/39elIRS4M79zTza/W+t4I8fKJL8z6EO/hYrqI9JPFb9z8k+/0v8oJBrFBb8DRVq/9ohZJNuL7L5pCmO/pIeDJGTFzb5Wamq/ROSYJHxHr74CiHC/hPWsJNYXkb5vgXW/l9S/JOdqZr5Rb3m/sdzPJCAeMb45JHy/NAvfJBGE+L22G36/gmrtJAKhj72jXn+/z932JFGeD72013+/AAAAJbN2fooAAIC/AAAAJbN2fooAAIC/39elIRS4M79zTza/W+t4I8fKJL8z6EO/hYrqI9JPFb9z8k+/0v8oJBrFBb8DRVq/9ohZJNuL7L5pCmO/pIeDJGTFzb5Wamq/ROSYJHxHr74CiHC/hPWsJNYXkb5vgXW/l9S/JOdqZr5Rb3m/sdzPJCAeMb45JHy/NAvfJBGE+L22G36/gmrtJAKhj72jXn+/z932JFGeD72013+/AACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAA
ACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAvwAAACUAAAClAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAlAACAPwAAAKUAAAAllHSUYowFZWRnZXOUaB1oIEsAhZRoIoeUUpQoSwFNDAFLAksDh5RoKolCIBkAAAAAgL+LrCy6sO5cwQAAgD+LrCy6sO5cwQAAgL+skJVBFfPVwAAAgD+skJVB
FfPVwAAAgL+LrCy6sO5cwQAAgL8X1+0/AxpcwQAAgL8X1+0/AxpcwQAAgL/tKWtAvKdZwQAAgL/tKWtAvKdZwQAAgL8BH69AoJNVwQAAgL8BH69AoJNVwQAAgL+mFOdACe9PwQAAgL+mFOdACe9PwQAAgL/QhA5Bc8xIwQAAgL/QhA5Bc8xIwQAAgL83xCVB5i1BwQAAgL83xCVB5i1BwQAAgL9J4jtBx4I4wQAAgL9J4jtBx4I4wQAAgL/gvFBBReMuwQAAgL/gvFBBReMuwQAAgL86NmRB22kkwQAAgL86NmRB22kkwQAAgL//NXZBeTIZwQAAgL//NXZBeTIZwQAAgL/4VINBfVkNwQAAgL/4VINBfVkNwQAAgL8Sw4pBsfoAwQAAgL8Sw4pBsfoAwQAAgL/6aJBBwmTswAAAgL/6aJBBwmTswAAAgL+skJVBFfPVwAAAgD+LrCy6sO5cwQAAgD8X1+0/AxpcwQAAgD8X1+0/AxpcwQAAgD/tKWtAvKdZwQAAgD/tKWtAvKdZwQAAgD8BH69AoJNVwQAAgD8BH69AoJNVwQAAgD+mFOdACe9PwQAAgD+mFOdACe9PwQAAgD/QhA5Bc8xIwQAAgD/QhA5Bc8xIwQAAgD83xCVB5i1BwQAAgD83xCVB5i1BwQAAgD9J4jtBx4I4wQAAgD9J4jtBx4I4wQAAgD/gvFBBReMuwQAAgD/gvFBBReMuwQAAgD86NmRB22kkwQAAgD86NmRB22kkwQAAgD//NXZBeTIZwQAAgD//NXZBeTIZwQAAgD/4VINBfVkNwQAAgD/4VINBfVkNwQAAgD8Sw4pBsfoAwQAAgD8Sw4pBsfoAwQAAgD/6aJBBwmTswAAAgD/6aJBBwmTswAAAgD+skJVBFfPVwAAAgL/jQDZBGgqBwQAAgD/jQDZBGgqBwQAAgL+skJVBFfPVwAAAgL8/KpFB0VoAwQAAgL8/KpFB0VoAwQAAgL97BoxBpvQUwQAAgL97BoxBpvQUwQAAgL9/H4ZBCN0owQAAgL9/H4ZBCN0owQAAgL/PBH9BV+A7wQAAgL/PBH9BV+A7wQAAgL8N9HFBRCNMwQAAgL8N9HFBRCNMwQAAgL+A2GNBjoRbwQAAgL+A2GNBjoRbwQAAgL9ewVRB0PJpwQAAgL9ewVRB0PJpwQAAgL+u5kVB33R2wQAAgL+u5kVB33R2wQAAgL/jQDZBGgqBwQAAgD+skJVBFfPVwAAAgD8/KpFB0VoAwQAAgD8/KpFB0VoAwQAAgD97BoxBpvQUwQAAgD97BoxBpvQUwQAAgD9/H4ZBCN0owQAAgD9/H4ZBCN0owQAAgD/PBH9BV+A7wQAAgD/PBH9BV+A7wQAAgD8N9HFBRCNMwQAAgD8N9HFBRCNMwQAAgD+A2GNBjoRbwQAAgD+A2GNBjoRbwQAAgD9ewVRB0PJpwQAAgD9ewVRB0PJpwQAAgD+u5kVB33R2wQAAgD+u5kVB33R2wQAAgD/jQDZBGgqBwQAAgL+LrCy66aSdwQAAgD+LrCy66aSdwQAAgL/jQDZBGgqBwQAAgL9k0CJBKUyHwQAAgL9k0CJBKUyHwQAAgL8imw5BLtCMwQAAgL8imw5BLtCMwQAAgL/QFfNA0puRwQAAgL/QFfNA0puRwQAAgL9spcdAZqGVwQAAgL9spcdAZqGVwQAAgL8Qnp9AzJGYwQAAgL8Qnp9AzJGYwQAAgL9gxW1AYtmawQAAgL9gxW1AYtmawQAAgL9HQBtAX3WcwQAAgL9HQBtAX3WcwQAAgL8Ayps/r1idwQAAgL8Ayps/r1idwQAAgL+LrCy66aSdwQAAgD/jQDZBGgqBwQAAgD9k0CJBKUyHwQAAgD9k0CJBKUyHwQAAgD8imw5BLtCMwQAAgD8imw5BLtCMwQAAgD/QFfNA0puRwQAAgD/QFfNA0puRwQAAgD9spcdAZqGVwQAAgD9spcdAZqGVwQAAgD8Qnp9AzJGYwQAAgD8Qnp9A
zJGYwQAAgD9gxW1AYtmawQAAgD9gxW1AYtmawQAAgD9HQBtAX3WcwQAAgD9HQBtAX3WcwQAAgD8Ayps/r1idwQAAgD8Ayps/r1idwQAAgD+LrCy66aSdwQAAgL/qRTbBGgqBwQAAgD/qRTbBGgqBwQAAgL+LrCy66aSdwQAAgL+u6ri/rjmdwQAAgL+u6ri/rjmdwQAAgL83ATjARvqbwQAAgL83ATjARvqbwQAAgL+5iInA/uSZwQAAgL+5iInA/uSZwQAAgL/gX7bALf2WwQAAgL/gX7bALf2WwQAAgL8K8d3Anq6TwQAAgL8K8d3Anq6TwQAAgL8ESgLB0buPwQAAgL8ESgLB0buPwQAAgL/UDhXB4iiLwQAAgL/UDhXB4iiLwQAAgL9+8SXBy1yGwQAAgL9+8SXBy1yGwQAAgL/qRTbBGgqBwQAAgD+LrCy66aSdwQAAgD+u6ri/rjmdwQAAgD+u6ri/rjmdwQAAgD83ATjARvqbwQAAgD83ATjARvqbwQAAgD+5iInA/uSZwQAAgD+5iInA/uSZwQAAgD/gX7bALf2WwQAAgD/gX7bALf2WwQAAgD8K8d3Anq6TwQAAgD8K8d3Anq6TwQAAgD8ESgLB0buPwQAAgD8ESgLB0buPwQAAgD/UDhXB4iiLwQAAgD/UDhXB4iiLwQAAgD9+8SXBy1yGwQAAgD9+8SXBy1yGwQAAgD/qRTbBGgqBwQAAgL8rk5XBFfPVwAAAgD8rk5XBFfPVwAAAgL/qRTbBGgqBwQAAgL8iwUjBJTB0wQAAgL8iwUjBJTB0wQAAgL8jHFrBsw9lwQAAgL8jHFrBsw9lwQAAgL9KaWrB7aFUwQAAgL9KaWrB7aFUwQAAgL8wfXnBMA1DwQAAgL8wfXnBMA1DwQAAgL/g+ILBj1EywQAAgL/g+ILBj1EywQAAgL9wpIjBZswgwQAAgL9wpIjBZswgwQAAgL/Huo3BzpAOwQAAgL/Huo3BzpAOwQAAgL+b6pHB+BX6wAAAgL+b6pHB+BX6wAAAgL8rk5XBFfPVwAAAgD/qRTbBGgqBwQAAgD8iwUjBJTB0wQAAgD8iwUjBJTB0wQAAgD8jHFrBsw9lwQAAgD8jHFrBsw9lwQAAgD9KaWrB7aFUwQAAgD9KaWrB7aFUwQAAgD8wfXnBMA1DwQAAgD8wfXnBMA1DwQAAgD/g+ILBj1EywQAAgD/g+ILBj1EywQAAgD9wpIjBZswgwQAAgD9wpIjBZswgwQAAgD/Huo3BzpAOwQAAgD/Huo3BzpAOwQAAgD+b6pHB+BX6wAAAgD+b6pHB+BX6wAAAgD8rk5XBFfPVwAAAgL8rk5XBFfPVwAAAgL/X8o7BcVnywAAAgL/X8o7BcVnywAAAgL8bi4fBsaUGwQAAgL8bi4fBsaUGwQAAgL+3Dn7BHuATwQAAgL+3Dn7BHuATwQAAgL8OI2vBW20gwQAAgL8OI2vBW20gwQAAgL+yYFbBsykswQAAgL+yYFbBsykswQAAgL/D3T/BN/A2wQAAgL/D3T/BN/A2wQAAgL/luCfBSJxAwQAAgL/luCfBSJxAwQAAgL9vGA7BIQtJwQAAgL9vGA7BIQtJwQAAgL8DUubALR1QwQAAgL8DUubALR1QwQAAgL8N67PARDlVwQAAgL8N67PARDlVwQAAgL+0AYDACRRZwQAAgL+0AYDACRRZwQAAgL9nwxXAg6BbwQAAgL9nwxXAg6BbwQAAgL8/4Ja/bZpcwQAAgL8/4Ja/bZpcwQAAgL+LrCy6sO5cwQAAgD8rk5XBFfPVwAAAgD/X8o7BcVnywAAAgD/X8o7BcVnywAAAgD8bi4fBsaUGwQAAgD8bi4fBsaUGwQAAgD+3Dn7BHuATwQAAgD+3Dn7BHuATwQAAgD8OI2vBW20gwQAAgD8OI2vBW20gwQAAgD+yYFbBsykswQAAgD+yYFbBsykswQAAgD/D3T/BN/A2wQAAgD/D3T/BN/A2wQAAgD/luCfB
SJxAwQAAgD/luCfBSJxAwQAAgD9vGA7BIQtJwQAAgD9vGA7BIQtJwQAAgD8DUubALR1QwQAAgD8DUubALR1QwQAAgD8N67PARDlVwQAAgD8N67PARDlVwQAAgD+0AYDACRRZwQAAgD+0AYDACRRZwQAAgD9nwxXAg6BbwQAAgD9nwxXAg6BbwQAAgD8/4Ja/bZpcwQAAgD8/4Ja/bZpcwQAAgD+LrCy6sO5cwQAAgL+LrCy64+hcQQAAgD+LrCy64+hcQQAAgL8rk5XBPOfVQAAAgD8rk5XBPOfVQAAAgL+LrCy64+hcQQAAgL+LAe6/NhRcQQAAgL+LAe6/NhRcQQAAgL/YPmvA76FZQQAAgL/YPmvA76FZQQAAgL9XKa/A0o1VQQAAgL9XKa/A0o1VQQAAgL/iHufAO+lPQQAAgL/iHufAO+lPQQAAgL/kiQ7Bo8ZIQQAAgL/kiQ7Bo8ZIQQAAgL9EySXBFShBQQAAgL9EySXBFShBQQAAgL9Q5zvB9Hw4QQAAgL9Q5zvB9Hw4QQAAgL/jwVDBbt0uQQAAgL/jwVDBbt0uQQAAgL86O2TBAWQkQQAAgL86O2TBAWQkQQAAgL/+OnbBmywZQQAAgL/+OnbBmywZQQAAgL92V4PBnFMNQQAAgL92V4PBnFMNQQAAgL+QxYrBy/QAQQAAgL+QxYrBy/QAQQAAgL94a5DB8VjsQAAAgL94a5DB8VjsQAAAgL8rk5XBPOfVQAAAgD+LrCy64+hcQQAAgD+LAe6/NhRcQQAAgD+LAe6/NhRcQQAAgD/YPmvA76FZQQAAgD/YPmvA76FZQQAAgD9XKa/A0o1VQQAAgD9XKa/A0o1VQQAAgD/iHufAO+lPQQAAgD/iHufAO+lPQQAAgD/kiQ7Bo8ZIQQAAgD/kiQ7Bo8ZIQQAAgD9EySXBFShBQQAAgD9EySXBFShBQQAAgD9Q5zvB9Hw4QQAAgD9Q5zvB9Hw4QQAAgD/jwVDBbt0uQQAAgD/jwVDBbt0uQQAAgD86O2TBAWQkQQAAgD86O2TBAWQkQQAAgD/+OnbBmywZQQAAgD/+OnbBmywZQQAAgD92V4PBnFMNQQAAgD92V4PBnFMNQQAAgD+QxYrBy/QAQQAAgD+QxYrBy/QAQQAAgD94a5DB8VjsQAAAgD94a5DB8VjsQAAAgD8rk5XBPOfVQAAAgL/qRTbBNAeBQQAAgD/qRTbBNAeBQQAAgL8rk5XBPOfVQAAAgL/QLJHBxVQAQQAAgL/QLJHBxVQAQQAAgL8fCYzBfO4UQQAAgL8fCYzBfO4UQQAAgL81IobBxtYoQQAAgL81IobBxtYoQQAAgL9bCn/BAto7QQAAgL9bCn/BAto7QQAAgL+y+XHB4hxMQQAAgL+y+XHB4hxMQQAAgL873mPBJX5bQQAAgL873mPBJX5bQQAAgL8sx1TBZexpQQAAgL8sx1TBZexpQQAAgL8f7EXByG52QQAAgL8f7EXByG52QQAAgL/qRTbBNAeBQQAAgD8rk5XBPOfVQAAAgD/QLJHBxVQAQQAAgD/QLJHBxVQAQQAAgD8fCYzBfO4UQQAAgD8fCYzBfO4UQQAAgD81IobBxtYoQQAAgD81IobBxtYoQQAAgD9bCn/BAto7QQAAgD9bCn/BAto7QQAAgD+y+XHB4hxMQQAAgD+y+XHB4hxMQQAAgD873mPBJX5bQQAAgD873mPBJX5bQQAAgD8sx1TBZexpQQAAgD8sx1TBZexpQQAAgD8f7EXByG52QQAAgD8f7EXByG52QQAAgD/qRTbBNAeBQQAAgL+LrCy686GdQQAAgD+LrCy686GdQQAAgL/qRTbBNAeBQQAAgL/a1SLBLEmHQQAAgL/a1SLBLEmHQQAAgL8CoQ7BHc2MQQAAgL8CoQ7BHc2MQQAAgL8BIvPAvZiRQQAAgL8BIvPAvZiRQQAAgL/escfAUZ6VQQAAgL/escfAUZ6VQQAAgL+Zqp/Au46YQQAAgL+Zqp/A
u46YQQAAgL953m3AWNaaQQAAgL953m3AWNaaQQAAgL9OWRvAW3KcQQAAgL9OWRvAW3KcQQAAgL99+Ju/tVWdQQAAgL99+Ju/tVWdQQAAgL+LrCy686GdQQAAgD/qRTbBNAeBQQAAgD/a1SLBLEmHQQAAgD/a1SLBLEmHQQAAgD8CoQ7BHc2MQQAAgD8CoQ7BHc2MQQAAgD8BIvPAvZiRQQAAgD8BIvPAvZiRQQAAgD/escfAUZ6VQQAAgD/escfAUZ6VQQAAgD+Zqp/Au46YQQAAgD+Zqp/Au46YQQAAgD953m3AWNaaQQAAgD953m3AWNaaQQAAgD9OWRvAW3KcQQAAgD9OWRvAW3KcQQAAgD99+Ju/tVWdQQAAgD99+Ju/tVWdQQAAgD+LrCy686GdQQAAgL/jQDZBNAeBQQAAgD/jQDZBNAeBQQAAgL+LrCy686GdQQAAgL9ewbg/tzadQQAAgL9ewbg/tzadQQAAgL9E7TdAT/ebQQAAgL9E7TdAT/ebQQAAgL/RfolACOKZQQAAgL/RfolACOKZQQAAgL/wVbZAOfqWQQAAgL/wVbZAOfqWQQAAgL8K591ArauTQQAAgL8K591ArauTQQAAgL8ARQJB4biPQQAAgL8ARQJB4biPQQAAgL/TCRVB8yWLQQAAgL/TCRVB8yWLQQAAgL917CVB4FmGQQAAgL917CVB4FmGQQAAgL/jQDZBNAeBQQAAgD+LrCy686GdQQAAgD9ewbg/tzadQQAAgD9ewbg/tzadQQAAgD9E7TdAT/ebQQAAgD9E7TdAT/ebQQAAgD/RfolACOKZQQAAgD/RfolACOKZQQAAgD/wVbZAOfqWQQAAgD/wVbZAOfqWQQAAgD8K591ArauTQQAAgD8K591ArauTQQAAgD8ARQJB4biPQQAAgD8ARQJB4biPQQAAgD/TCRVB8yWLQQAAgD/TCRVB8yWLQQAAgD917CVB4FmGQQAAgD917CVB4FmGQQAAgD/jQDZBNAeBQQAAgL+skJVBPOfVQAAAgD+skJVBPOfVQAAAgL/jQDZBNAeBQQAAgL8VvEhBUip0QQAAgL8VvEhBUip0QQAAgL8TF1pB2AllQQAAgL8TF1pB2AllQQAAgL86ZGpBBpxUQQAAgL86ZGpBBpxUQQAAgL8leHlBOQdDQQAAgL8leHlBOQdDQQAAgL9e9oJBhksyQQAAgL9e9oJBhksyQQAAgL/0oYhBR8YgQQAAgL/0oYhBR8YgQQAAgL9SuI1Bk4oOQQAAgL9SuI1Bk4oOQQAAgL8g6JFBzwn6QAAAgL8g6JFBzwn6QAAAgL+skJVBPOfVQAAAgD/jQDZBNAeBQQAAgD8VvEhBUip0QQAAgD8VvEhBUip0QQAAgD8TF1pB2AllQQAAgD8TF1pB2AllQQAAgD86ZGpBBpxUQQAAgD86ZGpBBpxUQQAAgD8leHlBOQdDQQAAgD8leHlBOQdDQQAAgD9e9oJBhksyQQAAgD9e9oJBhksyQQAAgD/0oYhBR8YgQQAAgD/0oYhBR8YgQQAAgD9SuI1Bk4oOQQAAgD9SuI1Bk4oOQQAAgD8g6JFBzwn6QAAAgD8g6JFBzwn6QAAAgD+skJVBPOfVQAAAgL+skJVBPOfVQAAAgL9H9I5B9z3yQAAAgL9H9I5B9z3yQAAAgL+cj4dBPJIGQQAAgL+cj4dBPJIGQQAAgL/WGX5BmsoTQQAAgL/WGX5BmsoTQQAAgL94LWtB4VcgQQAAgL94LWtB4VcgQQAAgL8EaFZBwxUsQQAAgL8EaFZBwxUsQQAAgL8/4D9But42QQAAgL8/4D9But42QQAAgL+otSdBnY1AQQAAgL+otSdBnY1AQQAAgL+RDw5BNP9IQQAAgL+RDw5BNP9IQQAAgL8/N+ZAmhNQQQAAgL8/N+ZAmhNQQQAAgL/py7NATDFVQQAAgL/py7NATDFVQQAAgL/dxX9ALg1ZQQAAgL/dxX9ALg1ZQQAAgL9CkhVA
UZpbQQAAgL9CkhVAUZpbQQAAgL9Cl5Y/hJRcQQAAgL9Cl5Y/hJRcQQAAgL+LrCy64+hcQQAAgD+skJVBPOfVQAAAgD9H9I5B9z3yQAAAgD9H9I5B9z3yQAAAgD+cj4dBPJIGQQAAgD+cj4dBPJIGQQAAgD/WGX5BmsoTQQAAgD/WGX5BmsoTQQAAgD94LWtB4VcgQQAAgD94LWtB4VcgQQAAgD8EaFZBwxUsQQAAgD8EaFZBwxUsQQAAgD8/4D9But42QQAAgD8/4D9But42QQAAgD+otSdBnY1AQQAAgD+otSdBnY1AQQAAgD+RDw5BNP9IQQAAgD+RDw5BNP9IQQAAgD8/N+ZAmhNQQQAAgD8/N+ZAmhNQQQAAgD/py7NATDFVQQAAgD/py7NATDFVQQAAgD/dxX9ALg1ZQQAAgD/dxX9ALg1ZQQAAgD9CkhVAUZpbQQAAgD9CkhVAUZpbQQAAgD9Cl5Y/hJRcQQAAgD9Cl5Y/hJRcQQAAgD+LrCy64+hcQZR0lGJdlIaUdYwFY29sb3KUjAcjMjk4MGI5lIwCYmKUfZQojAR4bWlulEe/8GuXb3JB/IwEeG1heJRHP/Brl29yQf+MBHltaW6UR8AyuR7L4VRgjAR5bWF4lEdAMrjPA1Sh84wEem1pbpRHwDO7Vp7gYQOMBHptYXiUR0AzuvfaulgkdYwDaW5klGgIdX2UKGgUSwJoFWgPaBaMCGNhZHF1ZXJ5lGgYfZQoaBpoHWggSwCFlGgih5RSlChLAU3iBEsDhpRoKolCmDoAAAAAgL+r/OLAuyRawAAAgD+r/OLAuyRawAAAgD825NnAtVZZwAAAgD+0LNLA+zxXwAAAgD8InMvAqTtUwAAAgD+FZ8XA/ytQwAAAgD8tj7/A/A1LwAAAgD9Dm7rA1ItFwAAAgD/5ZbbAdds/wAAAgD8ASbTAgZg8wAAAgD+rPLLAuyQ5wAAAgL+rPLLAuyQ5wAAAgL825NnAtVZZwAAAgL+0LNLA+zxXwAAAgL8InMvAqTtUwAAAgL+FZ8XA/ytQwAAAgL8tj7/A/A1LwAAAgL9Dm7rA1ItFwAAAgL/5ZbbAdds/wAAAgL8ASbTAgZg8wAAAgL+rPLLAuyQ5wAAAgD+rPLLAuyQ5wAAAgD8SZq7A+sMxwAAAgD/d+qrAduEpwAAAgD8M+6fAL30hwAAAgD+fZqXAJJcYwAAAgD+WPaPAVS8PwAAAgD/wf6HAxEUFwAAAgD+vLaDA3rT1vwAAgD/RRp/ArtrfvwAAgD9N7J7A44XRvwAAgD+rvJ7Ad8nCvwAAgL+rvJ7Ad8nCvwAAgL8SZq7A+sMxwAAAgL/d+qrAduEpwAAAgL8M+6fAL30hwAAAgL+fZqXAJJcYwAAAgL+WPaPAVS8PwAAAgL/wf6HAxEUFwAAAgL+vLaDA3rT1vwAAgL/RRp/ArtrfvwAAgL9N7J7A44XRvwAAgL+rvJ7Ad8nCvwAAgD+rvJ7Ad8nCvwAAgD+rcuS/d8nCvwAAgL+rcuS/d8nCvwAAgL+rcuS/d8nCvwAAgD+rcuS/d8nCvwAAgD+xZee/o1f5vwAAgD8xPe+/KCsXwAAAgD8t+fu/vuIwwAAAgD/SzAbAldJJwAAAgD9A1RHA7othwAAAgD9WOR/Au4R4wAAAgL9WOR/Au4R4wAAAgL+xZee/o1f5vwAAgL8xPe+/KCsXwAAAgL8t+fu/vuIwwAAAgL/SzAbAldJJwAAAgL9A1RHA7othwAAAgL9WOR/Au4R4wAAAgD9WOR/Au4R4wAAAgD/15C7AWUeHwAAAgD9fYUDASXGRwAAAgD+UrlPAMMCawAAAgD+UzGjADDSjwAAAgD+/q3/AK8iqwAAAgD+rLIzAXoKxwAAAgL+rLIzAXoKxwAAAgL/15C7AWUeHwAAAgL9fYUDASXGRwAAAgL+UrlPAMMCawAAAgL+UzGjADDSjwAAAgL+/q3/AK8iqwAAAgL+rLIzAXoKxwAAAgD+rLIzA
XoKxwAAAgD89OpvA3xS4wAAAgD/n36rAI0K9wAAAgD+oHbvAKgrBwAAAgD+A88vA9GzDwAAAgD9DbNbAAzHEwAAAgD+rHOHAXnLEwAAAgL+rHOHAXnLEwAAAgL89OpvA3xS4wAAAgL/n36rAI0K9wAAAgL+oHbvAKgrBwAAAgL+A88vA9GzDwAAAgL9DbNbAAzHEwAAAgL+rHOHAXnLEwAAAgD+rHOHAXnLEwAAAgD+I9/TAiLDDwAAAgD9q6APBBWvBwAAAgD/QcQzB/NS9wAAAgD8avBPBA2y5wAAAgD8dpxrBPOSzwAAAgD/YMiHBpj2twAAAgD9NXyfBQnilwAAAgD96LC3BD5ScwAAAgD85SDHB8CuVwAAAgD9VLjXBXiKNwAAAgL9VLjXBXiKNwAAAgL+I9/TAiLDDwAAAgL9q6APBBWvBwAAAgL/QcQzB/NS9wAAAgL8avBPBA2y5wAAAgL8dpxrBPOSzwAAAgL/YMiHBpj2twAAAgL9NXyfBQnilwAAAgL96LC3BD5ScwAAAgL85SDHB8CuVwAAAgL9VLjXBXiKNwAAAgD9VLjXBXiKNwAAAgD+BSjrBLcKAwAAAgD8a0j7BDtFmwAAAgD8jxULB1SpKwAAAgD+aI0bBsJErwAAAgD+A7UjBoAULwAAAgD/VIkvBRg3RvwAAgD+Yw0zBdimIvwAAgD/Kz03BNX/tvgAAgD+yLk7BwyNlvQAAgD9VTk7BJNq6PgAAgL9VTk7BJNq6PgAAgL+BSjrBLcKAwAAAgL8a0j7BDtFmwAAAgL8jxULB1SpKwAAAgL+aI0bBsJErwAAAgL+A7UjBoAULwAAAgL/VIkvBRg3RvwAAgL+Yw0zBdimIvwAAgL/Kz03BNX/tvgAAgL+yLk7BwyNlvQAAgL9VTk7BJNq6PgAAgD9VTk7BJNq6PgAAgD9VTk7BEm0WPwAAgL9VTk7BEm0WPwAAgL9VTk7BEm0WPwAAgD9VTk7BEm0WPwAAgD9q7U3BoFKlPwAAgD+pykzBH9X6PwAAgD+rAEvBlw4kQAAAgD9olEjB1+hHQAAAgD821UXB8nRmQAAAgD+xjELB86+BQAAAgD/Xuj7B2lSPQAAAgD+pXzrBLimcQAAAgD/r8DfBoWaiQAAAgD9VXjXBom2oQAAAgL9VXjXBom2oQAAAgL9q7U3BoFKlPwAAgL+pykzBH9X6PwAAgL+rAEvBlw4kQAAAgL9olEjB1+hHQAAAgL821UXB8nRmQAAAgL+xjELB86+BQAAAgL/Xuj7B2lSPQAAAgL+pXzrBLimcQAAAgL/r8DfBoWaiQAAAgL9VXjXBom2oQAAAgD9VXjXBom2oQAAAgD/Wwy/BsdizQAAAgD92uinBA/a9QAAAgD83QiPBlsXGQAAAgD8YWxzBbEfOQAAAgD8aBRXBhXvUQAAAgD88QA3B32HZQAAAgD9+DAXBfPrcQAAAgD/B0/jAW0XfQAAAgD8chO3AkAvgQAAAgD+r3OHAok3gQAAAgL+r3OHAok3gQAAAgL/Wwy/BsdizQAAAgL92uinBA/a9QAAAgL83QiPBlsXGQAAAgL8YWxzBbEfOQAAAgL8aBRXBhXvUQAAAgL88QA3B32HZQAAAgL9+DAXBfPrcQAAAgL/B0/jAW0XfQAAAgL8chO3AkAvgQAAAgL+r3OHAok3gQAAAgD+r3OHAok3gQAAAgD9uQs7AzIvfQAAAgD9jtrvASkbdQAAAgD+LOKrAG33ZQAAAgD+AYZvAYcnUQAAAgD99wY7AjIHPQAAAgD/A0oLAXDvJQAAAgD+TKm/A0vbBQAAAgD84+17A7sO7QAAAgD9WmU/Aov20QAAAgL9WmU/Aov20QAAAgL9uQs7AzIvfQAAAgL9jtrvASkbdQAAAgL+LOKrAG33ZQAAAgL+AYZvAYcnUQAAAgL99wY7AjIHPQAAAgL/A0oLAXDvJQAAAgL+TKm/A0vbBQAAAgL84+17A7sO7QAAAgL9WmU/A
ov20QAAAgD9WmU/Aov20QAAAgD/vrzzAeDurQAAAgD+i6SvAo8ugQAAAgD9vRh3AJa6VQAAAgD9VxhDA/OKJQAAAgD9UaQbAUtR6QAAAgD/bXvy/WIdgQAAAgD9AMfC/Ct9EQAAAgD/YSei/Z9snQAAAgD9tmuW/f9kVQAAAgD+rcuS/RVsDQAAAgL+rcuS/RVsDQAAAgL/vrzzAeDurQAAAgL+i6SvAo8ugQAAAgL9vRh3AJa6VQAAAgL9VxhDA/OKJQAAAgL9UaQbAUtR6QAAAgL/bXvy/WIdgQAAAgL9AMfC/Ct9EQAAAgL/YSei/Z9snQAAAgL9tmuW/f9kVQAAAgL+rcuS/RVsDQAAAgD+rcuS/RVsDQAAAgD+rvJ7ARVsDQAAAgL+rvJ7ARVsDQAAAgL+rvJ7ARVsDQAAAgD+rvJ7ARVsDQAAAgD8aW5/AWIgVQAAAgD90xKDAGasmQAAAgD8ot6LAeTM1QAAAgD+FBKXAXY9BQAAAgD8D16fAhjxNQAAAgD+jLqvA8jpYQAAAgD9Gg67AOj1hQAAAgD+rPLLARbtpQAAAgL+rPLLARbtpQAAAgL8aW5/AWIgVQAAAgL90xKDAGasmQAAAgL8ot6LAeTM1QAAAgL+FBKXAXY9BQAAAgL8D16fAhjxNQAAAgL+jLqvA8jpYQAAAgL9Gg67AOj1hQAAAgL+rPLLARbtpQAAAgD+rPLLARbtpQAAAgD9iNLbAxmJxQAAAgD+VbLrAtDZ4QAAAgD9C5b7ADTd+QAAAgD9rnsPA6bGBQAAAgD8QmMjAgd6DQAAAgD8v0s3ATqGFQAAAgD/KTNPAUvqGQAAAgD/gB9nAi+mHQAAAgD9ESd7AnGSIQAAAgD+rvOPAoo2IQAAAgL+rvOPAoo2IQAAAgL9iNLbAxmJxQAAAgL+VbLrAtDZ4QAAAgL9C5b7ADTd+QAAAgL9rnsPA6bGBQAAAgL8QmMjAgd6DQAAAgL8v0s3ATqGFQAAAgL/KTNPAUvqGQAAAgL/gB9nAi+mHQAAAgL9ESd7AnGSIQAAAgL+rvOPAoo2IQAAAgD+rvOPAoo2IQAAAgD9hnuzAnyaIQAAAgD8WJvTAwhmHQAAAgD/vjPrAGZmFQAAAgD9fTADBRJGDQAAAgD/43QLB2EmBQAAAgD9IDgXBTL59QAAAgD+8IAfBiTd4QAAAgD9SFQnBZf9xQAAAgD8K7ArB4hVrQAAAgD/mpAzB/3pjQAAAgD90kg3B29deQAAAgD9Vdg7BRftZQAAAgL9Vdg7BRftZQAAAgL9hnuzAnyaIQAAAgL8WJvTAwhmHQAAAgL/vjPrAGZmFQAAAgL9fTADBRJGDQAAAgL/43QLB2EmBQAAAgL9IDgXBTL59QAAAgL+8IAfBiTd4QAAAgL9SFQnBZf9xQAAAgL8K7ArB4hVrQAAAgL/mpAzB/3pjQAAAgL90kg3B29deQAAAgL9Vdg7BRftZQAAAgD9Vdg7BRftZQAAAgD8q9w/B3pVQQAAAgD90VxHBNElGQAAAgD8zlxLBSBU7QAAAgD9othPBGvouQAAAgD8TtRTBqfchQAAAgD8D1hXBVjcPQAAAgD/q1RbBF2fwPwAAgD87uRfB1bSsPwAAgD8PBRjB47l9PwAAgD9VHhjBEm0cPwAAgL9VHhjBEm0cPwAAgL8q9w/B3pVQQAAAgL90VxHBNElGQAAAgL8zlxLBSBU7QAAAgL9othPBGvouQAAAgL8TtRTBqfchQAAAgL8D1hXBVjcPQAAAgL/q1RbBF2fwPwAAgL87uRfB1bSsPwAAgL8PBRjB47l9PwAAgL9VHhjBEm0cPwAAgD9VHhjBEm0cPwAAgD9VHhjBJNqEPgAAgL9VHhjBJNqEPgAAgL9VHhjBJNqEPgAAgD9VHhjBJNqEPgAAgD9qvRfBMpDzvgAAgD+pmhbBf9SPvwAAgD/09hTBBwHQvwAAgD9iMxPBeY79vwAAgD9jfBHBQh4PwAAAgD93FhDB
lr8ZwAAAgD9Vjg7Bu2QjwAAAgL9Vjg7Bu2QjwAAAgL9qvRfBMpDzvgAAgL+pmhbBf9SPvwAAgL/09hTBBwHQvwAAgL9iMxPBeY79vwAAgL9jfBHBQh4PwAAAgL93FhDBlr8ZwAAAgL9Vjg7Bu2QjwAAAgD9Vjg7Bu2QjwAAAgD8l2QzB7VoswAAAgD/E/grBN4Q0wAAAgD8y/wjBmeA7wAAAgD9u2gbBE3BCwAAAgD96kATBpTJIwAAAgD9UIQLBTyhNwAAAgD/6Gf/AElFRwAAAgD/rpvnA7KxUwAAAgD956fPA3ztXwAAAgD+l4e3A6v1YwAAAgD/piejAB9tZwAAAgD+r/OLAuyRawAAAgL+r/OLAuyRawAAAgL8l2QzB7VoswAAAgL/E/grBN4Q0wAAAgL8y/wjBmeA7wAAAgL9u2gbBE3BCwAAAgL96kATBpTJIwAAAgL9UIQLBTyhNwAAAgL/6Gf/AElFRwAAAgL/rpvnA7KxUwAAAgL956fPA3ztXwAAAgL+l4e3A6v1YwAAAgL/piejAB9tZwAAAgL9Vjg7Bu2QjwAAAgL8l2QzB7VoswAAAgL/E/grBN4Q0wAAAgL8y/wjBmeA7wAAAgL9u2gbBE3BCwAAAgL96kATBpTJIwAAAgL9UIQLBTyhNwAAAgL/6Gf/AElFRwAAAgL/rpvnA7KxUwAAAgL956fPA3ztXwAAAgL+l4e3A6v1YwAAAgL/piejAB9tZwAAAgL+r/OLAuyRawAAAgL9VHhjBJNqEPgAAgL9qvRfBMpDzvgAAgL+pmhbBf9SPvwAAgL/09hTBBwHQvwAAgL9iMxPBeY79vwAAgL9jfBHBQh4PwAAAgL93FhDBlr8ZwAAAgL9VHhjBEm0cPwAAgL9Vdg7BRftZQAAAgL8q9w/B3pVQQAAAgL90VxHBNElGQAAAgL8zlxLBSBU7QAAAgL9othPBGvouQAAAgL8TtRTBqfchQAAAgL8D1hXBVjcPQAAAgL/q1RbBF2fwPwAAgL87uRfB1bSsPwAAgL8PBRjB47l9PwAAgL+rvOPAoo2IQAAAgL9hnuzAnyaIQAAAgL8WJvTAwhmHQAAAgL/vjPrAGZmFQAAAgL9fTADBRJGDQAAAgL/43QLB2EmBQAAAgL9IDgXBTL59QAAAgL+8IAfBiTd4QAAAgL9SFQnBZf9xQAAAgL8K7ArB4hVrQAAAgL/mpAzB/3pjQAAAgL90kg3B29deQAAAgL+rPLLARbtpQAAAgL9iNLbAxmJxQAAAgL+VbLrAtDZ4QAAAgL9C5b7ADTd+QAAAgL9rnsPA6bGBQAAAgL8QmMjAgd6DQAAAgL8v0s3ATqGFQAAAgL/KTNPAUvqGQAAAgL/gB9nAi+mHQAAAgL9ESd7AnGSIQAAAgL+rvJ7ARVsDQAAAgL8aW5/AWIgVQAAAgL90xKDAGasmQAAAgL8ot6LAeTM1QAAAgL+FBKXAXY9BQAAAgL8D16fAhjxNQAAAgL+jLqvA8jpYQAAAgL9Gg67AOj1hQAAAgL+rcuS/RVsDQAAAgL9WmU/Aov20QAAAgL/vrzzAeDurQAAAgL+i6SvAo8ugQAAAgL9vRh3AJa6VQAAAgL9VxhDA/OKJQAAAgL9UaQbAUtR6QAAAgL/bXvy/WIdgQAAAgL9AMfC/Ct9EQAAAgL/YSei/Z9snQAAAgL9tmuW/f9kVQAAAgL+r3OHAok3gQAAAgL9uQs7AzIvfQAAAgL9jtrvASkbdQAAAgL+LOKrAG33ZQAAAgL+AYZvAYcnUQAAAgL99wY7AjIHPQAAAgL/A0oLAXDvJQAAAgL+TKm/A0vbBQAAAgL84+17A7sO7QAAAgL9VXjXBom2oQAAAgL/Wwy/BsdizQAAAgL92uinBA/a9QAAAgL83QiPBlsXGQAAAgL8YWxzBbEfOQAAAgL8aBRXBhXvUQAAAgL88QA3B32HZQAAAgL9+DAXBfPrcQAAAgL/B0/jAW0XfQAAAgL8chO3A
kAvgQAAAgL9VTk7BEm0WPwAAgL9q7U3BoFKlPwAAgL+pykzBH9X6PwAAgL+rAEvBlw4kQAAAgL9olEjB1+hHQAAAgL821UXB8nRmQAAAgL+xjELB86+BQAAAgL/Xuj7B2lSPQAAAgL+pXzrBLimcQAAAgL/r8DfBoWaiQAAAgL9VTk7BJNq6PgAAgL9VLjXBXiKNwAAAgL+BSjrBLcKAwAAAgL8a0j7BDtFmwAAAgL8jxULB1SpKwAAAgL+aI0bBsJErwAAAgL+A7UjBoAULwAAAgL/VIkvBRg3RvwAAgL+Yw0zBdimIvwAAgL/Kz03BNX/tvgAAgL+yLk7BwyNlvQAAgL+rHOHAXnLEwAAAgL+I9/TAiLDDwAAAgL9q6APBBWvBwAAAgL/QcQzB/NS9wAAAgL8avBPBA2y5wAAAgL8dpxrBPOSzwAAAgL/YMiHBpj2twAAAgL9NXyfBQnilwAAAgL96LC3BD5ScwAAAgL85SDHB8CuVwAAAgL+rLIzAXoKxwAAAgL89OpvA3xS4wAAAgL/n36rAI0K9wAAAgL+oHbvAKgrBwAAAgL+A88vA9GzDwAAAgL9DbNbAAzHEwAAAgL9WOR/Au4R4wAAAgL/15C7AWUeHwAAAgL9fYUDASXGRwAAAgL+UrlPAMMCawAAAgL+UzGjADDSjwAAAgL+/q3/AK8iqwAAAgL+rcuS/d8nCvwAAgL+xZee/o1f5vwAAgL8xPe+/KCsXwAAAgL8t+fu/vuIwwAAAgL/SzAbAldJJwAAAgL9A1RHA7othwAAAgL+rvJ7Ad8nCvwAAgL+rPLLAuyQ5wAAAgL8SZq7A+sMxwAAAgL/d+qrAduEpwAAAgL8M+6fAL30hwAAAgL+fZqXAJJcYwAAAgL+WPaPAVS8PwAAAgL/wf6HAxEUFwAAAgL+vLaDA3rT1vwAAgL/RRp/ArtrfvwAAgL9N7J7A44XRvwAAgL825NnAtVZZwAAAgL+0LNLA+zxXwAAAgL8InMvAqTtUwAAAgL+FZ8XA/ytQwAAAgL8tj7/A/A1LwAAAgL9Dm7rA1ItFwAAAgL/5ZbbAdds/wAAAgL8ASbTAgZg8wAAAgD9Vjg7Bu2QjwAAAgD8l2QzB7VoswAAAgD/E/grBN4Q0wAAAgD8y/wjBmeA7wAAAgD9u2gbBE3BCwAAAgD96kATBpTJIwAAAgD9UIQLBTyhNwAAAgD/6Gf/AElFRwAAAgD/rpvnA7KxUwAAAgD956fPA3ztXwAAAgD+l4e3A6v1YwAAAgD/piejAB9tZwAAAgD+r/OLAuyRawAAAgD9VHhjBJNqEPgAAgD9qvRfBMpDzvgAAgD+pmhbBf9SPvwAAgD/09hTBBwHQvwAAgD9iMxPBeY79vwAAgD9jfBHBQh4PwAAAgD93FhDBlr8ZwAAAgD9VHhjBEm0cPwAAgD9Vdg7BRftZQAAAgD8q9w/B3pVQQAAAgD90VxHBNElGQAAAgD8zlxLBSBU7QAAAgD9othPBGvouQAAAgD8TtRTBqfchQAAAgD8D1hXBVjcPQAAAgD/q1RbBF2fwPwAAgD87uRfB1bSsPwAAgD8PBRjB47l9PwAAgD+rvOPAoo2IQAAAgD9hnuzAnyaIQAAAgD8WJvTAwhmHQAAAgD/vjPrAGZmFQAAAgD9fTADBRJGDQAAAgD/43QLB2EmBQAAAgD9IDgXBTL59QAAAgD+8IAfBiTd4QAAAgD9SFQnBZf9xQAAAgD8K7ArB4hVrQAAAgD/mpAzB/3pjQAAAgD90kg3B29deQAAAgD+rPLLARbtpQAAAgD9iNLbAxmJxQAAAgD+VbLrAtDZ4QAAAgD9C5b7ADTd+QAAAgD9rnsPA6bGBQAAAgD8QmMjAgd6DQAAAgD8v0s3ATqGFQAAAgD/KTNPAUvqGQAAAgD/gB9nAi+mHQAAAgD9ESd7AnGSIQAAAgD+rvJ7ARVsDQAAAgD8aW5/AWIgVQAAAgD90xKDAGasmQAAAgD8ot6LA
eTM1QAAAgD+FBKXAXY9BQAAAgD8D16fAhjxNQAAAgD+jLqvA8jpYQAAAgD9Gg67AOj1hQAAAgD+rcuS/RVsDQAAAgD9WmU/Aov20QAAAgD/vrzzAeDurQAAAgD+i6SvAo8ugQAAAgD9vRh3AJa6VQAAAgD9VxhDA/OKJQAAAgD9UaQbAUtR6QAAAgD/bXvy/WIdgQAAAgD9AMfC/Ct9EQAAAgD/YSei/Z9snQAAAgD9tmuW/f9kVQAAAgD+r3OHAok3gQAAAgD9uQs7AzIvfQAAAgD9jtrvASkbdQAAAgD+LOKrAG33ZQAAAgD+AYZvAYcnUQAAAgD99wY7AjIHPQAAAgD/A0oLAXDvJQAAAgD+TKm/A0vbBQAAAgD84+17A7sO7QAAAgD9VXjXBom2oQAAAgD/Wwy/BsdizQAAAgD92uinBA/a9QAAAgD83QiPBlsXGQAAAgD8YWxzBbEfOQAAAgD8aBRXBhXvUQAAAgD88QA3B32HZQAAAgD9+DAXBfPrcQAAAgD/B0/jAW0XfQAAAgD8chO3AkAvgQAAAgD9VTk7BEm0WPwAAgD9q7U3BoFKlPwAAgD+pykzBH9X6PwAAgD+rAEvBlw4kQAAAgD9olEjB1+hHQAAAgD821UXB8nRmQAAAgD+xjELB86+BQAAAgD/Xuj7B2lSPQAAAgD+pXzrBLimcQAAAgD/r8DfBoWaiQAAAgD9VTk7BJNq6PgAAgD9VLjXBXiKNwAAAgD+BSjrBLcKAwAAAgD8a0j7BDtFmwAAAgD8jxULB1SpKwAAAgD+aI0bBsJErwAAAgD+A7UjBoAULwAAAgD/VIkvBRg3RvwAAgD+Yw0zBdimIvwAAgD/Kz03BNX/tvgAAgD+yLk7BwyNlvQAAgD+rHOHAXnLEwAAAgD+I9/TAiLDDwAAAgD9q6APBBWvBwAAAgD/QcQzB/NS9wAAAgD8avBPBA2y5wAAAgD8dpxrBPOSzwAAAgD/YMiHBpj2twAAAgD9NXyfBQnilwAAAgD96LC3BD5ScwAAAgD85SDHB8CuVwAAAgD+rLIzAXoKxwAAAgD89OpvA3xS4wAAAgD/n36rAI0K9wAAAgD+oHbvAKgrBwAAAgD+A88vA9GzDwAAAgD9DbNbAAzHEwAAAgD9WOR/Au4R4wAAAgD/15C7AWUeHwAAAgD9fYUDASXGRwAAAgD+UrlPAMMCawAAAgD+UzGjADDSjwAAAgD+/q3/AK8iqwAAAgD+rcuS/d8nCvwAAgD+xZee/o1f5vwAAgD8xPe+/KCsXwAAAgD8t+fu/vuIwwAAAgD/SzAbAldJJwAAAgD9A1RHA7othwAAAgD+rvJ7Ad8nCvwAAgD+rPLLAuyQ5wAAAgD8SZq7A+sMxwAAAgD/d+qrAduEpwAAAgD8M+6fAL30hwAAAgD+fZqXAJJcYwAAAgD+WPaPAVS8PwAAAgD/wf6HAxEUFwAAAgD+vLaDA3rT1vwAAgD/RRp/ArtrfvwAAgD9N7J7A44XRvwAAgD825NnAtVZZwAAAgD+0LNLA+zxXwAAAgD8InMvAqTtUwAAAgD+FZ8XA/ytQwAAAgD8tj7/A/A1LwAAAgD9Dm7rA1ItFwAAAgD/5ZbbAdds/wAAAgD8ASbTAgZg8wAAAgL+tysG+Em0NPwAAgD+tysG+Em0NPwAAgD9LrbW+WOSpPwAAgD8kVZG+Iq4DQAAAgD+2NzG+TkktQAAAgD+oN8W84WtTQAAAgD97liQ+FV52QAAAgD9TXbg+uumJQAAAgD+WOhc/vLKXQAAAgD/Vl0M/MXegQAAAgD+pmnM/os2oQAAAgL+pmnM/os2oQAAAgL9LrbW+WOSpPwAAgL8kVZG+Iq4DQAAAgL+2NzG+TkktQAAAgL+oN8W84WtTQAAAgL97liQ+FV52QAAAgL9TXbg+uumJQAAAgL+WOhc/vLKXQAAAgL/Vl0M/MXegQAAAgL+pmnM/os2oQAAAgD+pmnM/os2oQAAAgD/vB6I/
Xay0QAAAgD+8Bck//iK+QAAAgD/vXu4/dKrFQAAAgD+o7QpAU0LMQAAAgD9vvR9AmurRQAAAgD/NnjVASaPWQAAAgD/CkUxAYGzaQAAAgD9OlmRA30XdQAAAgD9xrH1Axi/fQAAAgD/544hAKwbgQAAAgD9VQ5NAok3gQAAAgL9VQ5NAok3gQAAAgL/vB6I/Xay0QAAAgL+8Bck//iK+QAAAgL/vXu4/dKrFQAAAgL+o7QpAU0LMQAAAgL9vvR9AmurRQAAAgL/NnjVASaPWQAAAgL/CkUxAYGzaQAAAgL9OlmRA30XdQAAAgL9xrH1Axi/fQAAAgL/544hAKwbgQAAAgL9VQ5NAok3gQAAAgD9VQ5NAok3gQAAAgD8n8KJAzIvfQAAAgD/AWrBA4pHdQAAAgD+P3rtAFL7aQAAAgD+W2sZA7OvWQAAAgD/VTtFAahvSQAAAgD9LO9tAjUzMQAAAgD/6n+RAVn/FQAAAgD/gfO1AxbO9QAAAgD+sf/RAz2m2QAAAgD9VI/tAom2uQAAAgL9VI/tAom2uQAAAgL8n8KJAzIvfQAAAgL/AWrBA4pHdQAAAgL+P3rtAFL7aQAAAgL+W2sZA7OvWQAAAgL/VTtFAahvSQAAAgL9LO9tAjUzMQAAAgL/6n+RAVn/FQAAAgL/gfO1AxbO9QAAAgL+sf/RAz2m2QAAAgL9VI/tAom2uQAAAgD9VI/tAom2uQAAAgD+rIQFBos3YQAAAgL+rIQFBos3YQAAAgL+rIQFBos3YQAAAgD+rIQFBos3YQAAAgD+rwTBBos3YQAAAgL+rwTBBos3YQAAAgL+rwTBBos3YQAAAgD+rwTBBos3YQAAAgD+rwTBBL3kswQAAgL+rwTBBL3kswQAAgL+rwTBBL3kswQAAgD+rwTBBL3kswQAAgD9Vw/RAL3kswQAAgL9Vw/RAL3kswQAAgL9Vw/RAL3kswQAAgD9Vw/RAL3kswQAAgD9Vw/RAXhKawAAAgL9Vw/RAXhKawAAAgL9Vw/RAXhKawAAAgD9Vw/RAXhKawAAAgD/PvetANlejwAAAgD8HFuNAsrSqwAAAgD+w3NpAEouwwAAAgD8NOtJAi6G1wAAAgD8gLslAHfi5wAAAgD/puL9AyI69wAAAgD9o2rVAjWXAwAAAgD+bkqtAa3zCwAAAgD+F4aBAY9PDwAAAgD88yJlAn0rEwAAAgD9Vg5JAXnLEwAAAgL9Vg5JAXnLEwAAAgL/PvetANlejwAAAgL8HFuNAsrSqwAAAgL+w3NpAEouwwAAAgL8NOtJAi6G1wAAAgL8gLslAHfi5wAAAgL/puL9AyI69wAAAgL9o2rVAjWXAwAAAgL+bkqtAa3zCwAAAgL+F4aBAY9PDwAAAgL88yJlAn0rEwAAAgL9Vg5JAXnLEwAAAgD9Vg5JAXnLEwAAAgD/bHIJAiLDDwAAAgD8qDWhAnrbBwAAAgD8ayU9A0OK+wAAAgD+PjThAqBC7wAAAgD+MWiJAJUC2wAAAgD8PMA1ASXGwwAAAgD8zHPI/EqSpwAAAgD9V6cs/gdihwAAAgD+Ex6c/lg6ZwAAAgD97e5E/0NWSwAAAgD+pGng/XjKMwAAAgL+pGng/XjKMwAAAgL/bHIJAiLDDwAAAgL8qDWhAnrbBwAAAgL8ayU9A0OK+wAAAgL+PjThAqBC7wAAAgL+MWiJAJUC2wAAAgL8PMA1ASXGwwAAAgL8zHPI/EqSpwAAAgL9V6cs/gdihwAAAgL+Ex6c/lg6ZwAAAgL97e5E/0NWSwAAAgL+pGng/XjKMwAAAgD+pGng/XjKMwAAAgD8t+S8/NmJ+wAAAgD+IleA+tCZiwAAAgD++PWQ+N7JDwAAAgD/2ciQ9vQQjwAAAgD/lb+C9SB4AwAAAgD9RwGe+rv21vwAAgD9PXqa+5k0ovwAAgD+W77q+9F2IvQAAgD+tysG+Em0NPwAAgL+tysG+Em0NPwAAgL8t+S8/NmJ+wAAAgL+IleA+
tCZiwAAAgL++PWQ+N7JDwAAAgL/2ciQ9vQQjwAAAgL/lb+C9SB4AwAAAgL9RwGe+rv21vwAAgL9PXqa+5k0ovwAAgL+W77q+9F2IvQAAgL+qhkBAJNqcPgAAgD+qhkBAJNqcPgAAgD9XCkJAubW4vgAAgD9clUZAvq92vwAAgD+aWE1AyGO6vwAAgD8oqVRAHXjovwAAgD9PplxAzA8HwAAAgD9sumJAjKESwAAAgD+qZmlAu2QdwAAAgL+qZmlAu2QdwAAAgL9XCkJAubW4vgAAgL9clUZAvq92vwAAgL+aWE1AyGO6vwAAgL8oqVRAHXjovwAAgL9PplxAzA8HwAAAgL9sumJAjKESwAAAgL+qZmlAu2QdwAAAgD+qZmlAu2QdwAAAgD9mxnBABIQnwAAAgD/Hp3hAT7cwwAAAgD9nhYBAnf44wAAAgD+994RA7llAwAAAgD/nqolAQ8lGwAAAgD/jno5AmkxMwAAAgD+y05NA9ONQwAAAgD9USZlAUo9UwAAAgD/K/55Ask5XwAAAgD8S96RAFSJZwAAAgD9wi6lAEuRZwAAAgD9VQ65AuyRawAAAgL9VQ65AuyRawAAAgL9mxnBABIQnwAAAgL/Hp3hAT7cwwAAAgL9nhYBAnf44wAAAgL+994RA7llAwAAAgL/nqolAQ8lGwAAAgL/jno5AmkxMwAAAgL+y05NA9ONQwAAAgL9USZlAUo9UwAAAgL/K/55Ask5XwAAAgL8S96RAFSJZwAAAgL9wi6lAEuRZwAAAgL9VQ65AuyRawAAAgD9VQ65AuyRawAAAgD9+DLlAYSZZwAAAgD+DIcJAhY5WwAAAgD9pzMlAvthSwAAAgD8GStBAlGROwAAAgD9uzNVASXtJwAAAgD9F/tpAArdDwAAAgD+L399AwRc9wAAAgD8/cORAhJ01wAAAgD9hsOhATUgtwAAAgD/yn+xAGhgkwAAAgD/xPvBA7QwawAAAgD+olPJA4o0SwAAAgD9Vw/RAu6QKwAAAgL9Vw/RAu6QKwAAAgL9+DLlAYSZZwAAAgL+DIcJAhY5WwAAAgL9pzMlAvthSwAAAgL8GStBAlGROwAAAgL9uzNVASXtJwAAAgL9F/tpAArdDwAAAgL+L399AwRc9wAAAgL8/cORAhJ01wAAAgL9hsOhATUgtwAAAgL/yn+xAGhgkwAAAgL/xPvBA7QwawAAAgL+olPJA4o0SwAAAgL9Vw/RAu6QKwAAAgD9Vw/RAu6QKwAAAgD9Vw/RARZtEQAAAgL9Vw/RARZtEQAAAgL9Vw/RARZtEQAAAgD9Vw/RARZtEQAAAgD+NpfFAYWFPQAAAgD/FN+5AQlZZQAAAgD/9eepA6HliQAAAgD80bOZAUsxqQAAAgD9sDuJAgU1yQAAAgD+jYN1AdP14QAAAgD/aYthALNx+QAAAgD8RFdNA1PSBQAAAgD9Id81A9RKEQAAAgD9/icdAd8iFQAAAgD+1S8FAXBWHQAAAgD/rvbpAo/mHQAAAgD/r/LRAo2iIQAAAgD9VA69Aoo2IQAAAgL9VA69Aoo2IQAAAgL+NpfFAYWFPQAAAgL/FN+5AQlZZQAAAgL/9eepA6HliQAAAgL80bOZAUsxqQAAAgL9sDuJAgU1yQAAAgL+jYN1AdP14QAAAgL/aYthALNx+QAAAgL8RFdNA1PSBQAAAgL9Id81A9RKEQAAAgL9/icdAd8iFQAAAgL+1S8FAXBWHQAAAgL/rvbpAo/mHQAAAgL/r/LRAo2iIQAAAgL9VA69Aoo2IQAAAgD9VA69Aoo2IQAAAgD94kKhAz1GIQAAAgD8TYKJAVJ6HQAAAgD8ocpxAM3OGQAAAgD+2xpZAa9CEQAAAgD++XZFA+7WCQAAAgD8+N4xA5SOAQAAAgD84U4dAUDR6QAAAgD+rsYJAiDFzQAAAgD8tpXxAcT9rQAAAgD/4a3RADV5iQAAAgD+2+G5AWpdbQAAAgD+qxmlARVtUQAAAgL+qxmlA
RVtUQAAAgL94kKhAz1GIQAAAgL8TYKJAVJ6HQAAAgL8ocpxAM3OGQAAAgL+2xpZAa9CEQAAAgL++XZFA+7WCQAAAgL8+N4xA5SOAQAAAgL84U4dAUDR6QAAAgL+rsYJAiDFzQAAAgL8tpXxAcT9rQAAAgL/4a3RADV5iQAAAgL+2+G5AWpdbQAAAgL+qxmlARVtUQAAAgD+qxmlARVtUQAAAgD8DoWFA6exGQAAAgD+UYFpAoPI3QAAAgD9eBVRAaWwnQAAAgD9hj05ARFoVQAAAgD+c/klAMbwBQAAAgD/TTEVAZ1jKPwAAgD/p60FAu4GDPwAAgD/630BAUiEtPwAAgD+qhkBAJNqcPgAAgL+qhkBAJNqcPgAAgL8DoWFA6exGQAAAgL+UYFpAoPI3QAAAgL9eBVRAaWwnQAAAgL9hj05ARFoVQAAAgL+c/klAMbwBQAAAgL/TTEVAZ1jKPwAAgL/p60FAu4GDPwAAgL/630BAUiEtPwAAgL+pGng/XjKMwAAAgL8t+S8/NmJ+wAAAgL+IleA+tCZiwAAAgL++PWQ+N7JDwAAAgL/2ciQ9vQQjwAAAgL/lb+C9SB4AwAAAgL9RwGe+rv21vwAAgL9PXqa+5k0ovwAAgL+W77q+9F2IvQAAgL+tysG+Em0NPwAAgL9Vg5JAXnLEwAAAgL/bHIJAiLDDwAAAgL8qDWhAnrbBwAAAgL8ayU9A0OK+wAAAgL+PjThAqBC7wAAAgL+MWiJAJUC2wAAAgL8PMA1ASXGwwAAAgL8zHPI/EqSpwAAAgL9V6cs/gdihwAAAgL+Ex6c/lg6ZwAAAgL97e5E/0NWSwAAAgL9Vw/RAXhKawAAAgL/PvetANlejwAAAgL8HFuNAsrSqwAAAgL+w3NpAEouwwAAAgL8NOtJAi6G1wAAAgL8gLslAHfi5wAAAgL/puL9AyI69wAAAgL9o2rVAjWXAwAAAgL+bkqtAa3zCwAAAgL+F4aBAY9PDwAAAgL88yJlAn0rEwAAAgL9Vw/RAL3kswQAAgL+rwTBBL3kswQAAgL+rwTBBos3YQAAAgL+rIQFBos3YQAAAgL9VI/tAom2uQAAAgL9VQ5NAok3gQAAAgL8n8KJAzIvfQAAAgL/AWrBA4pHdQAAAgL+P3rtAFL7aQAAAgL+W2sZA7OvWQAAAgL/VTtFAahvSQAAAgL9LO9tAjUzMQAAAgL/6n+RAVn/FQAAAgL/gfO1AxbO9QAAAgL+sf/RAz2m2QAAAgL+pmnM/os2oQAAAgL/vB6I/Xay0QAAAgL+8Bck//iK+QAAAgL/vXu4/dKrFQAAAgL+o7QpAU0LMQAAAgL9vvR9AmurRQAAAgL/NnjVASaPWQAAAgL/CkUxAYGzaQAAAgL9OlmRA30XdQAAAgL9xrH1Axi/fQAAAgL/544hAKwbgQAAAgL9LrbW+WOSpPwAAgL8kVZG+Iq4DQAAAgL+2NzG+TkktQAAAgL+oN8W84WtTQAAAgL97liQ+FV52QAAAgL9TXbg+uumJQAAAgL+WOhc/vLKXQAAAgL/Vl0M/MXegQAAAgL+qxmlARVtUQAAAgL8DoWFA6exGQAAAgL+UYFpAoPI3QAAAgL9eBVRAaWwnQAAAgL9hj05ARFoVQAAAgL+c/klAMbwBQAAAgL/TTEVAZ1jKPwAAgL/p60FAu4GDPwAAgL/630BAUiEtPwAAgL+qhkBAJNqcPgAAgL9VA69Aoo2IQAAAgL94kKhAz1GIQAAAgL8TYKJAVJ6HQAAAgL8ocpxAM3OGQAAAgL+2xpZAa9CEQAAAgL++XZFA+7WCQAAAgL8+N4xA5SOAQAAAgL84U4dAUDR6QAAAgL+rsYJAiDFzQAAAgL8tpXxAcT9rQAAAgL/4a3RADV5iQAAAgL+2+G5AWpdbQAAAgL9Vw/RARZtEQAAAgL+NpfFAYWFPQAAAgL/FN+5AQlZZQAAAgL/9eepA6HliQAAAgL80bOZAUsxqQAAAgL9sDuJA
gU1yQAAAgL+jYN1AdP14QAAAgL/aYthALNx+QAAAgL8RFdNA1PSBQAAAgL9Id81A9RKEQAAAgL9/icdAd8iFQAAAgL+1S8FAXBWHQAAAgL/rvbpAo/mHQAAAgL/r/LRAo2iIQAAAgL9Vw/RAu6QKwAAAgL9VQ65AuyRawAAAgL9+DLlAYSZZwAAAgL+DIcJAhY5WwAAAgL9pzMlAvthSwAAAgL8GStBAlGROwAAAgL9uzNVASXtJwAAAgL9F/tpAArdDwAAAgL+L399AwRc9wAAAgL8/cORAhJ01wAAAgL9hsOhATUgtwAAAgL/yn+xAGhgkwAAAgL/xPvBA7QwawAAAgL+olPJA4o0SwAAAgL+qZmlAu2QdwAAAgL9mxnBABIQnwAAAgL/Hp3hAT7cwwAAAgL9nhYBAnf44wAAAgL+994RA7llAwAAAgL/nqolAQ8lGwAAAgL/jno5AmkxMwAAAgL+y05NA9ONQwAAAgL9USZlAUo9UwAAAgL/K/55Ask5XwAAAgL8S96RAFSJZwAAAgL9wi6lAEuRZwAAAgL9XCkJAubW4vgAAgL9clUZAvq92vwAAgL+aWE1AyGO6vwAAgL8oqVRAHXjovwAAgL9PplxAzA8HwAAAgL9sumJAjKESwAAAgD+pGng/XjKMwAAAgD8t+S8/NmJ+wAAAgD+IleA+tCZiwAAAgD++PWQ+N7JDwAAAgD/2ciQ9vQQjwAAAgD/lb+C9SB4AwAAAgD9RwGe+rv21vwAAgD9PXqa+5k0ovwAAgD+W77q+9F2IvQAAgD+tysG+Em0NPwAAgD9Vg5JAXnLEwAAAgD/bHIJAiLDDwAAAgD8qDWhAnrbBwAAAgD8ayU9A0OK+wAAAgD+PjThAqBC7wAAAgD+MWiJAJUC2wAAAgD8PMA1ASXGwwAAAgD8zHPI/EqSpwAAAgD9V6cs/gdihwAAAgD+Ex6c/lg6ZwAAAgD97e5E/0NWSwAAAgD9Vw/RAXhKawAAAgD/PvetANlejwAAAgD8HFuNAsrSqwAAAgD+w3NpAEouwwAAAgD8NOtJAi6G1wAAAgD8gLslAHfi5wAAAgD/puL9AyI69wAAAgD9o2rVAjWXAwAAAgD+bkqtAa3zCwAAAgD+F4aBAY9PDwAAAgD88yJlAn0rEwAAAgD9Vw/RAL3kswQAAgD+rwTBBL3kswQAAgD+rwTBBos3YQAAAgD+rIQFBos3YQAAAgD9VI/tAom2uQAAAgD9VQ5NAok3gQAAAgD8n8KJAzIvfQAAAgD/AWrBA4pHdQAAAgD+P3rtAFL7aQAAAgD+W2sZA7OvWQAAAgD/VTtFAahvSQAAAgD9LO9tAjUzMQAAAgD/6n+RAVn/FQAAAgD/gfO1AxbO9QAAAgD+sf/RAz2m2QAAAgD+pmnM/os2oQAAAgD/vB6I/Xay0QAAAgD+8Bck//iK+QAAAgD/vXu4/dKrFQAAAgD+o7QpAU0LMQAAAgD9vvR9AmurRQAAAgD/NnjVASaPWQAAAgD/CkUxAYGzaQAAAgD9OlmRA30XdQAAAgD9xrH1Axi/fQAAAgD/544hAKwbgQAAAgD9LrbW+WOSpPwAAgD8kVZG+Iq4DQAAAgD+2NzG+TkktQAAAgD+oN8W84WtTQAAAgD97liQ+FV52QAAAgD9TXbg+uumJQAAAgD+WOhc/vLKXQAAAgD/Vl0M/MXegQAAAgD+qxmlARVtUQAAAgD8DoWFA6exGQAAAgD+UYFpAoPI3QAAAgD9eBVRAaWwnQAAAgD9hj05ARFoVQAAAgD+c/klAMbwBQAAAgD/TTEVAZ1jKPwAAgD/p60FAu4GDPwAAgD/630BAUiEtPwAAgD+qhkBAJNqcPgAAgD9VA69Aoo2IQAAAgD94kKhAz1GIQAAAgD8TYKJAVJ6HQAAAgD8ocpxAM3OGQAAAgD+2xpZAa9CEQAAAgD++XZFA+7WCQAAAgD8+N4xA5SOAQAAAgD84U4dAUDR6QAAAgD+rsYJA
iDFzQAAAgD8tpXxAcT9rQAAAgD/4a3RADV5iQAAAgD+2+G5AWpdbQAAAgD9Vw/RARZtEQAAAgD+NpfFAYWFPQAAAgD/FN+5AQlZZQAAAgD/9eepA6HliQAAAgD80bOZAUsxqQAAAgD9sDuJAgU1yQAAAgD+jYN1AdP14QAAAgD/aYthALNx+QAAAgD8RFdNA1PSBQAAAgD9Id81A9RKEQAAAgD9/icdAd8iFQAAAgD+1S8FAXBWHQAAAgD/rvbpAo/mHQAAAgD/r/LRAo2iIQAAAgD9Vw/RAu6QKwAAAgD9VQ65AuyRawAAAgD9+DLlAYSZZwAAAgD+DIcJAhY5WwAAAgD9pzMlAvthSwAAAgD8GStBAlGROwAAAgD9uzNVASXtJwAAAgD9F/tpAArdDwAAAgD+L399AwRc9wAAAgD8/cORAhJ01wAAAgD9hsOhATUgtwAAAgD/yn+xAGhgkwAAAgD/xPvBA7QwawAAAgD+olPJA4o0SwAAAgD+qZmlAu2QdwAAAgD9mxnBABIQnwAAAgD/Hp3hAT7cwwAAAgD9nhYBAnf44wAAAgD+994RA7llAwAAAgD/nqolAQ8lGwAAAgD/jno5AmkxMwAAAgD+y05NA9ONQwAAAgD9USZlAUo9UwAAAgD/K/55Ask5XwAAAgD8S96RAFSJZwAAAgD9wi6lAEuRZwAAAgD9XCkJAubW4vgAAgD9clUZAvq92vwAAgD+aWE1AyGO6vwAAgD8oqVRAHXjovwAAgD9PplxAzA8HwAAAgD9sumJAjKESwJR0lGJoL2gdaCBLAIWUaCKHlFKUKEsBTbANhZRoNolCwDYAAAwAAAAAAAAAAQAAAAwAAAABAAAAAgAAAA0AAAACAAAAAwAAAA0AAAAMAAAAAgAAAA4AAAADAAAABAAAAA4AAAANAAAAAwAAAA8AAAAEAAAABQAAAA8AAAAOAAAABAAAABAAAAAFAAAABgAAABAAAAAPAAAABQAAABEAAAAGAAAABwAAABEAAAAQAAAABgAAABIAAAAHAAAACAAAABIAAAARAAAABwAAABMAAAAIAAAACQAAABMAAAASAAAACAAAAAsAAAAJAAAACgAAAAsAAAATAAAACQAAACEAAAAUAAAAFQAAACEAAAAVAAAAFgAAACIAAAAWAAAAFwAAACIAAAAhAAAAFgAAACMAAAAXAAAAGAAAACMAAAAiAAAAFwAAACQAAAAYAAAAGQAAACQAAAAjAAAAGAAAACUAAAAZAAAAGgAAACUAAAAkAAAAGQAAACYAAAAaAAAAGwAAACYAAAAlAAAAGgAAACcAAAAbAAAAHAAAACcAAAAmAAAAGwAAACgAAAAcAAAAHQAAACgAAAAnAAAAHAAAACkAAAAdAAAAHgAAACkAAAAoAAAAHQAAACAAAAAeAAAAHwAAACAAAAApAAAAHgAAAC0AAAAqAAAAKwAAAC0AAAArAAAALAAAADcAAAAuAAAALwAAADcAAAAvAAAAMAAAADgAAAAwAAAAMQAAADgAAAA3AAAAMAAAADkAAAAxAAAAMgAAADkAAAA4AAAAMQAAADoAAAAyAAAAMwAAADoAAAA5AAAAMgAAADsAAAAzAAAANAAAADsAAAA6AAAAMwAAADYAAAA0AAAANQAAADYAAAA7AAAANAAAAEUAAAA8AAAAPQAAAEUAAAA9AAAAPgAAAEYAAAA+AAAAPwAAAEYAAABFAAAAPgAAAEcAAAA/AAAAQAAAAEcAAABGAAAAPwAAAEgAAABAAAAAQQAAAEgAAABHAAAAQAAAAEkAAABBAAAAQgAAAEkAAABIAAAAQQAAAEQAAABCAAAAQwAAAEQAAABJAAAAQgAAAFMAAABKAAAASwAAAFMAAABLAAAATAAAAFQAAABMAAAATQAAAFQAAABTAAAATAAAAFUAAABNAAAATgAAAFUAAABUAAAATQAAAFYAAABOAAAATwAAAFYAAABVAAAATgAAAFcAAABPAAAA
UAAAAFcAAABWAAAATwAAAFIAAABQAAAAUQAAAFIAAABXAAAAUAAAAGUAAABYAAAAWQAAAGUAAABZAAAAWgAAAGYAAABaAAAAWwAAAGYAAABlAAAAWgAAAGcAAABbAAAAXAAAAGcAAABmAAAAWwAAAGgAAABcAAAAXQAAAGgAAABnAAAAXAAAAGkAAABdAAAAXgAAAGkAAABeAAAAXwAAAGkAAABoAAAAXQAAAGoAAABpAAAAXwAAAGsAAABfAAAAYAAAAGsAAABqAAAAXwAAAGwAAABgAAAAYQAAAGwAAABrAAAAYAAAAG0AAABhAAAAYgAAAG0AAABsAAAAYQAAAGQAAABiAAAAYwAAAGQAAABtAAAAYgAAAHsAAABuAAAAbwAAAHsAAABvAAAAcAAAAHwAAABwAAAAcQAAAHwAAAB7AAAAcAAAAH0AAABxAAAAcgAAAH0AAAB8AAAAcQAAAH4AAAByAAAAcwAAAH4AAAB9AAAAcgAAAH8AAABzAAAAdAAAAH8AAAB0AAAAdQAAAH8AAAB+AAAAcwAAAIAAAAB/AAAAdQAAAIEAAAB1AAAAdgAAAIEAAACAAAAAdQAAAIIAAAB2AAAAdwAAAIIAAACBAAAAdgAAAIMAAAB3AAAAeAAAAIMAAACCAAAAdwAAAHoAAAB4AAAAeQAAAHoAAACDAAAAeAAAAIcAAACEAAAAhQAAAIcAAACFAAAAhgAAAJUAAACIAAAAiQAAAJUAAACJAAAAigAAAJYAAACKAAAAiwAAAJYAAACVAAAAigAAAJcAAACLAAAAjAAAAJcAAACWAAAAiwAAAJgAAACMAAAAjQAAAJgAAACXAAAAjAAAAJkAAACNAAAAjgAAAJkAAACOAAAAjwAAAJkAAACYAAAAjQAAAJoAAACZAAAAjwAAAJsAAACPAAAAkAAAAJsAAACaAAAAjwAAAJwAAACQAAAAkQAAAJwAAACbAAAAkAAAAJ0AAACRAAAAkgAAAJ0AAACcAAAAkQAAAJQAAACSAAAAkwAAAJQAAACdAAAAkgAAAKsAAACeAAAAnwAAAKsAAACfAAAAoAAAAKwAAACgAAAAoQAAAKwAAACrAAAAoAAAAK0AAAChAAAAogAAAK0AAACsAAAAoQAAAK4AAACiAAAAowAAAK4AAACtAAAAogAAAK8AAACjAAAApAAAAK8AAACkAAAApQAAAK8AAACuAAAAowAAALAAAACvAAAApQAAALEAAAClAAAApgAAALEAAACwAAAApQAAALIAAACmAAAApwAAALIAAACxAAAApgAAALMAAACnAAAAqAAAALMAAACyAAAApwAAAKoAAACoAAAAqQAAAKoAAACzAAAAqAAAAMAAAAC0AAAAtQAAAMAAAAC1AAAAtgAAAMEAAAC2AAAAtwAAAMEAAADAAAAAtgAAAMIAAAC3AAAAuAAAAMIAAADBAAAAtwAAAMMAAAC4AAAAuQAAAMMAAADCAAAAuAAAAMQAAAC5AAAAugAAAMQAAAC6AAAAuwAAAMQAAADDAAAAuQAAAMUAAADEAAAAuwAAAMYAAAC7AAAAvAAAAMYAAADFAAAAuwAAAMcAAAC8AAAAvQAAAMcAAADGAAAAvAAAAL8AAAC9AAAAvgAAAL8AAADHAAAAvQAAANUAAADIAAAAyQAAANUAAADJAAAAygAAANYAAADKAAAAywAAANYAAADVAAAAygAAANcAAADLAAAAzAAAANcAAADWAAAAywAAANgAAADMAAAAzQAAANgAAADXAAAAzAAAANkAAADNAAAAzgAAANkAAADYAAAAzQAAANoAAADOAAAAzwAAANoAAADPAAAA0AAAANoAAADZAAAAzgAAANsAAADaAAAA0AAAANwAAADQAAAA0QAAANwAAADbAAAA0AAAAN0AAADRAAAA0gAAAN0AAADcAAAA0QAAANQAAADSAAAA0wAAANQAAADdAAAA0gAAAOEAAADeAAAA3wAAAOEAAADfAAAA
4AAAAO0AAADiAAAA4wAAAO0AAADjAAAA5AAAAO4AAADkAAAA5QAAAO4AAADtAAAA5AAAAO8AAADlAAAA5gAAAO8AAADuAAAA5QAAAPAAAADmAAAA5wAAAPAAAADvAAAA5gAAAPEAAADnAAAA6AAAAPEAAADwAAAA5wAAAPIAAADoAAAA6QAAAPIAAADxAAAA6AAAAPMAAADpAAAA6gAAAPMAAADyAAAA6QAAAOwAAADqAAAA6wAAAOwAAADzAAAA6gAAAAEBAAD0AAAA9QAAAAEBAAD1AAAA9gAAAAIBAAD2AAAA9wAAAAIBAAABAQAA9gAAAAMBAAD3AAAA+AAAAAMBAAACAQAA9wAAAAQBAAD4AAAA+QAAAAQBAAADAQAA+AAAAAUBAAD5AAAA+gAAAAUBAAAEAQAA+QAAAAYBAAD6AAAA+wAAAAYBAAAFAQAA+gAAAAcBAAD7AAAA/AAAAAcBAAAGAQAA+wAAAAgBAAD8AAAA/QAAAAgBAAAHAQAA/AAAAAkBAAD9AAAA/gAAAAkBAAAIAQAA/QAAAAABAAD+AAAA/wAAAAABAAAJAQAA/gAAABkBAAAKAQAACwEAABkBAAALAQAADAEAABoBAAAMAQAADQEAABoBAAAZAQAADAEAABsBAAANAQAADgEAABsBAAAaAQAADQEAABwBAAAOAQAADwEAABwBAAAbAQAADgEAAB0BAAAPAQAAEAEAAB0BAAAcAQAADwEAAB4BAAAQAQAAEQEAAB4BAAAdAQAAEAEAAB8BAAARAQAAEgEAAB8BAAAeAQAAEQEAACABAAASAQAAEwEAACABAAAfAQAAEgEAACEBAAATAQAAFAEAACEBAAAgAQAAEwEAACIBAAAUAQAAFQEAACIBAAAhAQAAFAEAACMBAAAVAQAAFgEAACMBAAAiAQAAFQEAABgBAAAWAQAAFwEAABgBAAAjAQAAFgEAADEBAAAkAQAAJQEAADEBAAAlAQAAJgEAADIBAAAmAQAAJwEAADIBAAAxAQAAJgEAADMBAAAnAQAAKAEAADMBAAAyAQAAJwEAADQBAAAoAQAAKQEAADQBAAAzAQAAKAEAADUBAAApAQAAKgEAADUBAAA0AQAAKQEAADYBAAAqAQAAKwEAADYBAAA1AQAAKgEAADcBAAArAQAALAEAADcBAAA2AQAAKwEAADgBAAAsAQAALQEAADgBAAA3AQAALAEAADkBAAAtAQAALgEAADkBAAA4AQAALQEAADABAAAuAQAALwEAADABAAA5AQAALgEAAD0BAAA6AQAAOwEAAD0BAAA7AQAAPAEAAEgBAAA+AQAAPwEAAEgBAAA/AQAAQAEAAEkBAABAAQAAQQEAAEkBAABIAQAAQAEAAEoBAABBAQAAQgEAAEoBAABJAQAAQQEAAEsBAABCAQAAQwEAAEsBAABKAQAAQgEAAEwBAABDAQAARAEAAEwBAABLAQAAQwEAAE0BAABEAQAARQEAAE0BAABMAQAARAEAAEcBAABFAQAARgEAAEcBAABNAQAARQEAAF0BAABOAQAATwEAAF0BAABPAQAAUAEAAF4BAABQAQAAUQEAAF4BAABdAQAAUAEAAF8BAABRAQAAUgEAAF8BAABeAQAAUQEAAGABAABSAQAAUwEAAGABAABfAQAAUgEAAGEBAABTAQAAVAEAAGEBAABgAQAAUwEAAGIBAABUAQAAVQEAAGIBAABhAQAAVAEAAGMBAABVAQAAVgEAAGMBAABiAQAAVQEAAGQBAABWAQAAVwEAAGQBAABjAQAAVgEAAGUBAABXAQAAWAEAAGUBAABkAQAAVwEAAGYBAABYAQAAWQEAAGYBAABlAQAAWAEAAGcBAABZAQAAWgEAAGcBAABmAQAAWQEAAFwBAABaAQAAWwEAAFwBAABnAQAAWgEAAGoBAADfAQAA4AEAAGsBAADeAQAA3wEAAGsBAADfAQAAagEAAGkBAADgAQAA4QEAAGkBAABqAQAA
4AEAAGwBAADdAQAA3gEAAGwBAADeAQAAawEAAGgBAABpAQAA4QEAAGgBAADhAQAAzgEAAG0BAADcAQAA3QEAAG0BAADdAQAAbAEAAHsBAADOAQAAzwEAAHsBAABoAQAAzgEAAG4BAADbAQAA3AEAAG4BAADcAQAAbQEAAHoBAADPAQAA0AEAAHoBAAB7AQAAzwEAAG8BAADbAQAAbgEAAHkBAADQAQAA0QEAAHkBAAB6AQAA0AEAAHABAADaAQAA2wEAAHABAADbAQAAbwEAAHEBAADZAQAA2gEAAHEBAADaAQAAcAEAAHgBAADRAQAA0gEAAHgBAADSAQAA0wEAAHgBAAB5AQAA0QEAAHIBAADZAQAAcQEAAHMBAADYAQAA2QEAAHMBAADZAQAAcgEAAHcBAADTAQAA1AEAAHcBAADUAQAA1QEAAHcBAAB4AQAA0wEAAHQBAADnAQAA2AEAAHQBAADYAQAAcwEAAP8BAADmAQAA5wEAAP8BAADnAQAAdAEAAHYBAADVAQAA1gEAAHYBAADWAQAA1wEAAHYBAAB3AQAA1QEAAAACAADlAQAA5gEAAAACAADmAQAA/wEAAAECAADkAQAA5QEAAAECAADlAQAAAAIAAAICAADjAQAA5AEAAAICAADkAQAAAQIAAHUBAADXAQAAzQEAAHUBAAB2AQAA1wEAAAMCAADjAQAAAgIAAAQCAADiAQAA4wEAAAQCAADjAQAAAwIAAHwBAADNAQAAwwEAAHwBAAB1AQAAzQEAAAUCAADtAQAA4gEAAAUCAADiAQAABAIAAAYCAADtAQAABQIAAIYBAAB8AQAAwwEAAIYBAADDAQAAxAEAAPUBAADsAQAA7QEAAPUBAADtAQAABgIAAPYBAADsAQAA9QEAAOsBAADsAQAA9gEAAIUBAACGAQAAxAEAAIUBAADEAQAAxQEAAPcBAADrAQAA9gEAAOoBAADrAQAA9wEAAOoBAAD3AQAA+AEAAIQBAACFAQAAxQEAAIQBAADFAQAAxgEAAIMBAADGAQAAxwEAAIMBAACEAQAAxgEAAOkBAADqAQAA+AEAAOkBAAD4AQAA+QEAAIIBAACDAQAAxwEAAIIBAADHAQAAyAEAAIEBAACCAQAAyAEAAIEBAADIAQAAyQEAAOgBAADpAQAA+QEAAOgBAAD5AQAA+gEAAIABAACBAQAAyQEAAIABAADJAQAAygEAAH8BAACAAQAAygEAAH8BAADKAQAAywEAAPMBAADoAQAA+gEAAH4BAAB/AQAAywEAAH4BAADLAQAAzAEAAH0BAAB+AQAAzAEAAH0BAADMAQAAuQEAAJIBAAB9AQAAuQEAALoBAACSAQAAuQEAAPIBAAD6AQAA+wEAAPIBAADzAQAA+gEAAJEBAACSAQAAugEAAJABAACRAQAAugEAAPEBAADyAQAA+wEAAPEBAAD7AQAA/AEAALsBAACPAQAAkAEAALsBAACQAQAAugEAAPABAADxAQAA/AEAAPABAAD8AQAA/QEAALwBAACOAQAAjwEAALwBAACPAQAAuwEAAO8BAADwAQAA/QEAAO8BAAD9AQAA/gEAAO8BAAD+AQAA9AEAAL0BAACNAQAAjgEAAL0BAACOAQAAvAEAAO4BAADvAQAA9AEAAL4BAACLAQAAjAEAAL4BAACMAQAAjQEAAL4BAACNAQAAvQEAAL8BAACKAQAAiwEAAL8BAACLAQAAvgEAAMABAACJAQAAigEAAMABAACKAQAAvwEAAMEBAACIAQAAiQEAAMEBAACJAQAAwAEAAMIBAACIAQAAwQEAAMIBAACHAQAAiAEAALABAACHAQAAwgEAALABAACcAQAAhwEAALEBAACcAQAAsAEAALEBAACaAQAAmwEAALEBAACbAQAAnAEAAK8BAACdAQAAngEAAK8BAAClAQAAnQEAAK4BAACvAQAAngEAALIBAACaAQAAsQEAALIBAACYAQAAmQEAALIBAACZAQAA
mgEAAK0BAACuAQAAngEAAK0BAACeAQAAnwEAALMBAACYAQAAsgEAALMBAACXAQAAmAEAAKwBAACtAQAAnwEAAKwBAACfAQAAoAEAALQBAACWAQAAlwEAALQBAACXAQAAswEAAKsBAACsAQAAoAEAAKsBAACgAQAAoQEAALUBAACVAQAAlgEAALUBAACWAQAAtAEAAKoBAACrAQAAoQEAALYBAACVAQAAtQEAALYBAACUAQAAlQEAAKkBAACqAQAAoQEAAKkBAAChAQAAogEAALcBAACTAQAAlAEAALcBAACUAQAAtgEAAKgBAACpAQAAogEAAKgBAACiAQAAowEAALgBAACTAQAAtwEAAKcBAACjAQAApAEAAKcBAACoAQAAowEAAKYBAACkAQAAkwEAAKYBAACnAQAApAEAAKYBAACTAQAAuAEAAAkCAAB/AgAAfgIAAAoCAAB+AgAAfQIAAAoCAAAJAgAAfgIAAAgCAACAAgAAfwIAAAgCAAB/AgAACQIAAAsCAAB9AgAAfAIAAAsCAAAKAgAAfQIAAAcCAACAAgAACAIAAAcCAABtAgAAgAIAAAwCAAB8AgAAewIAAAwCAAALAgAAfAIAABoCAABuAgAAbQIAABoCAABtAgAABwIAAA0CAAB7AgAAegIAAA0CAAAMAgAAewIAABkCAABvAgAAbgIAABkCAABuAgAAGgIAAA4CAAANAgAAegIAABgCAABwAgAAbwIAABgCAABvAgAAGQIAAA8CAAB6AgAAeQIAAA8CAAAOAgAAegIAABACAAB5AgAAeAIAABACAAAPAgAAeQIAABcCAABxAgAAcAIAABcCAAByAgAAcQIAABcCAABwAgAAGAIAABECAAAQAgAAeAIAABICAAB4AgAAdwIAABICAAARAgAAeAIAABYCAABzAgAAcgIAABYCAAB0AgAAcwIAABYCAAByAgAAFwIAABMCAAB3AgAAhgIAABMCAAASAgAAdwIAAJ4CAACGAgAAhQIAAJ4CAAATAgAAhgIAABUCAAB1AgAAdAIAABUCAAB2AgAAdQIAABUCAAB0AgAAFgIAAJ8CAACFAgAAhAIAAJ8CAACeAgAAhQIAAKACAACEAgAAgwIAAKACAACfAgAAhAIAAKECAACDAgAAggIAAKECAACgAgAAgwIAABQCAABsAgAAdgIAABQCAAB2AgAAFQIAAKICAAChAgAAggIAAKMCAACCAgAAgQIAAKMCAACiAgAAggIAABsCAABiAgAAbAIAABsCAABsAgAAFAIAAKQCAACBAgAAjAIAAKQCAACjAgAAgQIAAKUCAACkAgAAjAIAACUCAABiAgAAGwIAACUCAABjAgAAYgIAAJQCAACMAgAAiwIAAJQCAAClAgAAjAIAAJUCAACUAgAAiwIAAIoCAACVAgAAiwIAACQCAABjAgAAJQIAACQCAABkAgAAYwIAAJYCAACVAgAAigIAAIkCAACWAgAAigIAAIkCAACXAgAAlgIAACMCAABkAgAAJAIAACMCAABlAgAAZAIAACICAABmAgAAZQIAACICAABlAgAAIwIAAIgCAACXAgAAiQIAAIgCAACYAgAAlwIAACECAABmAgAAIgIAACECAABnAgAAZgIAACACAABnAgAAIQIAACACAABoAgAAZwIAAIcCAACYAgAAiAIAAIcCAACZAgAAmAIAAB8CAABoAgAAIAIAAB8CAABpAgAAaAIAAB4CAABpAgAAHwIAAB4CAABqAgAAaQIAAJICAACZAgAAhwIAAB0CAABqAgAAHgIAAB0CAABrAgAAagIAABwCAABrAgAAHQIAABwCAABYAgAAawIAADECAABYAgAAHAIAAFkCAABYAgAAMQIAAJECAACaAgAAmQIAAJECAACZAgAAkgIAADACAABZAgAAMQIAAC8CAABZAgAAMAIAAJACAACaAgAAkQIAAJACAACbAgAAmgIAAFoCAAAvAgAALgIAAFoCAABZAgAA
LwIAAI8CAACbAgAAkAIAAI8CAACcAgAAmwIAAFsCAAAuAgAALQIAAFsCAABaAgAALgIAAI4CAACcAgAAjwIAAI4CAACdAgAAnAIAAI4CAACTAgAAnQIAAFwCAAAtAgAALAIAAFwCAABbAgAALQIAAI0CAACTAgAAjgIAAF0CAAArAgAAKgIAAF0CAAAsAgAAKwIAAF0CAABcAgAALAIAAF4CAAAqAgAAKQIAAF4CAABdAgAAKgIAAF8CAAApAgAAKAIAAF8CAABeAgAAKQIAAGACAAAoAgAAJwIAAGACAABfAgAAKAIAAGECAABgAgAAJwIAAGECAAAnAgAAJgIAAE8CAABhAgAAJgIAAE8CAAAmAgAAOwIAAFACAABPAgAAOwIAAFACAAA6AgAAOQIAAFACAAA7AgAAOgIAAE4CAAA9AgAAPAIAAE4CAAA8AgAARAIAAE0CAAA9AgAATgIAAFECAABQAgAAOQIAAFECAAA4AgAANwIAAFECAAA5AgAAOAIAAEwCAAA9AgAATQIAAEwCAAA+AgAAPQIAAFICAABRAgAANwIAAFICAAA3AgAANgIAAEsCAAA+AgAATAIAAEsCAAA/AgAAPgIAAFMCAAA2AgAANQIAAFMCAABSAgAANgIAAEoCAAA/AgAASwIAAEoCAABAAgAAPwIAAFQCAAA1AgAANAIAAFQCAABTAgAANQIAAEkCAABAAgAASgIAAFUCAABUAgAANAIAAFUCAAA0AgAAMwIAAEgCAABAAgAASQIAAEgCAABBAgAAQAIAAFYCAAAzAgAAMgIAAFYCAABVAgAAMwIAAEcCAABBAgAASAIAAEcCAABCAgAAQQIAAFcCAABWAgAAMgIAAEYCAABDAgAAQgIAAEYCAABCAgAARwIAAEUCAAAyAgAAQwIAAEUCAABDAgAARgIAAEUCAABXAgAAMgIAALICAACmAgAApwIAALICAACnAgAAqAIAALMCAACoAgAAqQIAALMCAACyAgAAqAIAALQCAACpAgAAqgIAALQCAACzAgAAqQIAALUCAACqAgAAqwIAALUCAAC0AgAAqgIAALYCAACrAgAArAIAALYCAAC1AgAAqwIAALcCAACsAgAArQIAALcCAAC2AgAArAIAALgCAACtAgAArgIAALgCAAC3AgAArQIAALkCAACuAgAArwIAALkCAAC4AgAArgIAALECAACvAgAAsAIAALECAAC5AgAArwIAAMgCAAC6AgAAuwIAAMgCAAC7AgAAvAIAAMkCAAC8AgAAvQIAAMkCAADIAgAAvAIAAMoCAAC9AgAAvgIAAMoCAADJAgAAvQIAAMsCAAC+AgAAvwIAAMsCAADKAgAAvgIAAMwCAAC/AgAAwAIAAMwCAADLAgAAvwIAAM0CAADAAgAAwQIAAM0CAADMAgAAwAIAAM4CAADBAgAAwgIAAM4CAADCAgAAwwIAAM4CAADNAgAAwQIAAM8CAADOAgAAwwIAANACAADDAgAAxAIAANACAADPAgAAwwIAANECAADEAgAAxQIAANECAADQAgAAxAIAAMcCAADFAgAAxgIAAMcCAADRAgAAxQIAAN8CAADSAgAA0wIAAN8CAADTAgAA1AIAAOACAADUAgAA1QIAAOACAADfAgAA1AIAAOECAADVAgAA1gIAAOECAADgAgAA1QIAAOICAADWAgAA1wIAAOICAADhAgAA1gIAAOMCAADXAgAA2AIAAOMCAADiAgAA1wIAAOQCAADYAgAA2QIAAOQCAADjAgAA2AIAAOUCAADZAgAA2gIAAOUCAADaAgAA2wIAAOUCAADkAgAA2QIAAOYCAADlAgAA2wIAAOcCAADbAgAA3AIAAOcCAADmAgAA2wIAAN4CAADcAgAA3QIAAN4CAADnAgAA3AIAAOsCAADoAgAA6QIAAOsCAADpAgAA6gIAAO8CAADsAgAA7QIAAO8CAADtAgAA7gIAAPMCAADwAgAA8QIAAPMCAADxAgAA
8gIAAPcCAAD0AgAA9QIAAPcCAAD1AgAA9gIAAPsCAAD4AgAA+QIAAPsCAAD5AgAA+gIAAAoDAAD8AgAA/QIAAAoDAAD9AgAA/gIAAAsDAAD+AgAA/wIAAAsDAAAKAwAA/gIAAAwDAAD/AgAAAAMAAAwDAAALAwAA/wIAAA0DAAAAAwAAAQMAAA0DAAAMAwAAAAMAAA4DAAABAwAAAgMAAA4DAAANAwAAAQMAAA8DAAACAwAAAwMAAA8DAAAOAwAAAgMAABADAAADAwAABAMAABADAAAPAwAAAwMAABEDAAAEAwAABQMAABEDAAAFAwAABgMAABEDAAAQAwAABAMAABIDAAARAwAABgMAABMDAAAGAwAABwMAABMDAAASAwAABgMAAAkDAAAHAwAACAMAAAkDAAATAwAABwMAACIDAAAUAwAAFQMAACIDAAAVAwAAFgMAACMDAAAWAwAAFwMAACMDAAAiAwAAFgMAACQDAAAXAwAAGAMAACQDAAAYAwAAGQMAACQDAAAjAwAAFwMAACUDAAAkAwAAGQMAACYDAAAZAwAAGgMAACYDAAAlAwAAGQMAACcDAAAaAwAAGwMAACcDAAAmAwAAGgMAACgDAAAbAwAAHAMAACgDAAAcAwAAHQMAACgDAAAnAwAAGwMAACkDAAAoAwAAHQMAACoDAAAdAwAAHgMAACoDAAApAwAAHQMAACsDAAAeAwAAHwMAACsDAAAqAwAAHgMAACEDAAAfAwAAIAMAACEDAAArAwAAHwMAADgDAAAsAwAALQMAADgDAAAtAwAALgMAADkDAAAuAwAALwMAADkDAAA4AwAALgMAADoDAAAvAwAAMAMAADoDAAA5AwAALwMAADsDAAAwAwAAMQMAADsDAAA6AwAAMAMAADwDAAAxAwAAMgMAADwDAAAyAwAAMwMAADwDAAA7AwAAMQMAAD0DAAA8AwAAMwMAAD4DAAAzAwAANAMAAD4DAAA9AwAAMwMAAD8DAAA0AwAANQMAAD8DAAA+AwAANAMAADcDAAA1AwAANgMAADcDAAA/AwAANQMAAEoDAABAAwAAQQMAAEoDAABBAwAAQgMAAEsDAABCAwAAQwMAAEsDAABKAwAAQgMAAEwDAABDAwAARAMAAEwDAABLAwAAQwMAAE0DAABEAwAARQMAAE0DAABMAwAARAMAAE4DAABFAwAARgMAAE4DAABNAwAARQMAAE8DAABGAwAARwMAAE8DAABOAwAARgMAAEkDAABHAwAASAMAAEkDAABPAwAARwMAAF8DAABQAwAAUQMAAF8DAABRAwAAUgMAAGADAABSAwAAUwMAAGADAABfAwAAUgMAAGEDAABTAwAAVAMAAGEDAABgAwAAUwMAAGIDAABUAwAAVQMAAGIDAABhAwAAVAMAAGMDAABVAwAAVgMAAGMDAABiAwAAVQMAAGQDAABWAwAAVwMAAGQDAABjAwAAVgMAAGUDAABXAwAAWAMAAGUDAABkAwAAVwMAAGYDAABYAwAAWQMAAGYDAABlAwAAWAMAAGcDAABZAwAAWgMAAGcDAABmAwAAWQMAAGgDAABaAwAAWwMAAGgDAABnAwAAWgMAAGkDAABbAwAAXAMAAGkDAABoAwAAWwMAAF4DAABcAwAAXQMAAF4DAABpAwAAXAMAAHoDAABqAwAAawMAAHoDAABrAwAAbAMAAHsDAABsAwAAbQMAAHsDAAB6AwAAbAMAAHwDAABtAwAAbgMAAHwDAAB7AwAAbQMAAH0DAABuAwAAbwMAAH0DAAB8AwAAbgMAAH4DAABvAwAAcAMAAH4DAAB9AwAAbwMAAH8DAABwAwAAcQMAAH8DAAB+AwAAcAMAAIADAABxAwAAcgMAAIADAAB/AwAAcQMAAIEDAAByAwAAcwMAAIEDAACAAwAAcgMAAIIDAABzAwAAdAMAAIIDAACBAwAAcwMAAIMDAAB0AwAAdQMAAIMDAACCAwAAdAMAAIQDAAB1AwAA
dgMAAIQDAACDAwAAdQMAAIUDAAB2AwAAdwMAAIUDAACEAwAAdgMAAHkDAAB3AwAAeAMAAHkDAACFAwAAdwMAAIkDAACGAwAAhwMAAIkDAACHAwAAiAMAAJsDAACKAwAAiwMAAJsDAACLAwAAjAMAAJwDAACMAwAAjQMAAJwDAACbAwAAjAMAAJ0DAACNAwAAjgMAAJ0DAACcAwAAjQMAAJ4DAACOAwAAjwMAAJ4DAACdAwAAjgMAAJ8DAACPAwAAkAMAAJ8DAACeAwAAjwMAAKADAACQAwAAkQMAAKADAACfAwAAkAMAAKEDAACRAwAAkgMAAKEDAACgAwAAkQMAAKIDAACSAwAAkwMAAKIDAAChAwAAkgMAAKMDAACTAwAAlAMAAKMDAACiAwAAkwMAAKQDAACUAwAAlQMAAKQDAACjAwAAlAMAAKUDAACVAwAAlgMAAKUDAACkAwAAlQMAAKYDAACWAwAAlwMAAKYDAAClAwAAlgMAAKcDAACXAwAAmAMAAKcDAACmAwAAlwMAAJoDAACYAwAAmQMAAJoDAACnAwAAmAMAALcDAACoAwAAqQMAALcDAACpAwAAqgMAALgDAACqAwAAqwMAALgDAAC3AwAAqgMAALkDAACrAwAArAMAALkDAAC4AwAAqwMAALoDAACsAwAArQMAALoDAAC5AwAArAMAALsDAACtAwAArgMAALsDAAC6AwAArQMAALwDAACuAwAArwMAALwDAAC7AwAArgMAAL0DAACvAwAAsAMAAL0DAAC8AwAArwMAAL4DAACwAwAAsQMAAL4DAAC9AwAAsAMAAL8DAACxAwAAsgMAAL8DAAC+AwAAsQMAAMADAACyAwAAswMAAMADAAC/AwAAsgMAAMEDAACzAwAAtAMAAMEDAADAAwAAswMAALYDAAC0AwAAtQMAALYDAADBAwAAtAMAAM4DAADCAwAAwwMAAM4DAADDAwAAxAMAAM8DAADEAwAAxQMAAM8DAADOAwAAxAMAANADAADFAwAAxgMAANADAADPAwAAxQMAANEDAADGAwAAxwMAANEDAADQAwAAxgMAANIDAADHAwAAyAMAANIDAADRAwAAxwMAANMDAADIAwAAyQMAANMDAADSAwAAyAMAANQDAADJAwAAygMAANQDAADTAwAAyQMAANUDAADKAwAAywMAANUDAADLAwAAzAMAANUDAADUAwAAygMAAM0DAADVAwAAzAMAAEwEAADnAwAA6AMAAEwEAADoAwAA6QMAAE0EAADmAwAA5wMAAE0EAADnAwAATAQAAEsEAADpAwAA6gMAAEsEAABMBAAA6QMAAE4EAADkAwAA5QMAAE4EAADlAwAA5gMAAE4EAADmAwAATQQAAEoEAADqAwAA1gMAAEoEAABLBAAA6gMAAE8EAADjAwAA5AMAAE8EAADkAwAATgQAAFsEAADWAwAA1wMAAFsEAABKBAAA1gMAAFAEAADiAwAA4wMAAFAEAADjAwAATwQAAFoEAABbBAAA1wMAAFoEAADXAwAA2AMAAFEEAADhAwAA4gMAAFEEAADiAwAAUAQAAFIEAADgAwAA4QMAAFIEAADhAwAAUQQAAFkEAADYAwAA2QMAAFkEAADZAwAA2gMAAFkEAABaBAAA2AMAAFMEAADgAwAAUgQAAFMEAAD0AwAA9QMAAFMEAAD1AwAA4AMAAFgEAADaAwAA2wMAAFgEAABZBAAA2gMAAFQEAADzAwAA9AMAAFQEAAD0AwAAUwQAAFUEAADyAwAA8wMAAFUEAADzAwAAVAQAAD0EAADxAwAA8gMAAD0EAADyAwAAVQQAAFcEAADbAwAA3AMAAFcEAABYBAAA2wMAAD4EAADxAwAAPQQAAD4EAADvAwAA8AMAAD4EAADwAwAA8QMAAFYEAADcAwAA3QMAAFYEAADdAwAA3gMAAFYEAABXBAAA3AMAAD8EAADtAwAA7gMAAD8EAADuAwAA7wMAAD8EAADvAwAA
PgQAAEAEAADsAwAA7QMAAEAEAADtAwAAPwQAAEEEAADsAwAAQAQAAEEEAADrAwAA7AMAACEEAADeAwAA3wMAACEEAABWBAAA3gMAAEIEAADrAwAAQQQAACAEAAAhBAAA3wMAACAEAADfAwAAEAQAAEMEAADrAwAAQgQAAEQEAADrAwAAQwQAAB8EAAAgBAAAEAQAAEUEAADrAwAARAQAAEYEAADrAwAARQQAAB4EAAAfBAAAEAQAAB4EAAAQBAAAEQQAAEcEAADrAwAARgQAAEgEAADrAwAARwQAAB0EAAARBAAAEgQAAB0EAAAeBAAAEQQAAEkEAADrAwAASAQAADwEAAD3AwAA6wMAADwEAADrAwAASQQAABwEAAASBAAAEwQAABwEAAAdBAAAEgQAABsEAAATBAAAFAQAABsEAAAcBAAAEwQAABoEAAAUBAAAFQQAABoEAAAbBAAAFAQAABkEAAAVBAAAFgQAABkEAAAWBAAAFwQAABkEAAAaBAAAFQQAABgEAAAZBAAAFwQAABgEAAAXBAAABQQAAC0EAAAYBAAABQQAAC0EAAAFBAAABgQAACwEAAAtBAAABgQAAAcEAAAsBAAABgQAACsEAAAsBAAABwQAAAgEAAArBAAABwQAAAkEAAArBAAACAQAAAkEAAAqBAAAKwQAAAoEAAAqBAAACQQAAAoEAAApBAAAKgQAAAsEAAApBAAACgQAAAsEAAAoBAAAKQQAAAwEAAAoBAAACwQAAA0EAAAoBAAADAQAAA0EAAAnBAAAKAQAAA4EAAAnBAAADQQAAA4EAAAmBAAAJwQAAA8EAAAmBAAADgQAAA8EAAAlBAAAJgQAAPsDAAAlBAAADwQAAPsDAAAkBAAAJQQAAPwDAAAjBAAAJAQAAPwDAAAkBAAA+wMAAP0DAAAjBAAA/AMAAP0DAAAiBAAAIwQAAP4DAAA7BAAAIgQAAP4DAAAiBAAA/QMAAP8DAAA7BAAA/gMAAP8DAAA6BAAAOwQAAAAEAAA6BAAA/wMAAAEEAAA6BAAAAAQAAAEEAAA5BAAAOgQAAPoDAAAuBAAALwQAAPoDAAAvBAAAMAQAAPoDAAAwBAAAMQQAAPoDAAAxBAAAMgQAAPoDAAAyBAAAMwQAAPoDAAAzBAAANAQAAPoDAAA0BAAANQQAAPoDAAA1BAAANgQAAAIEAAA5BAAAAQQAAAIEAAA4BAAAOQQAAAQEAAD6AwAANgQAAAQEAAA2BAAANwQAAAMEAAA3BAAAOAQAAAMEAAA4BAAAAgQAAAMEAAAEBAAANwQAAPgDAAA8BAAALgQAAPgDAAD6AwAA+QMAAPgDAAAuBAAA+gMAAPgDAAD3AwAAPAQAAOsDAAD3AwAA9gMAANIEAABuBAAAbQQAANIEAABvBAAAbgQAANMEAABtBAAAbAQAANMEAADSBAAAbQQAANEEAABwBAAAbwQAANEEAABvBAAA0gQAANQEAABrBAAAagQAANQEAABsBAAAawQAANQEAADTBAAAbAQAANAEAABcBAAAcAQAANAEAABwBAAA0QQAANUEAABqBAAAaQQAANUEAADUBAAAagQAAOEEAABdBAAAXAQAAOEEAABcBAAA0AQAANYEAABpBAAAaAQAANYEAADVBAAAaQQAAOAEAABdBAAA4QQAAOAEAABeBAAAXQQAANcEAABoBAAAZwQAANcEAADWBAAAaAQAANgEAABnBAAAZgQAANgEAADXBAAAZwQAAN8EAABfBAAAXgQAAN8EAABgBAAAXwQAAN8EAABeBAAA4AQAANkEAADYBAAAZgQAANkEAAB7BAAAegQAANkEAABmBAAAewQAAN4EAABhBAAAYAQAAN4EAABgBAAA3wQAANoEAAB6BAAAeQQAANoEAADZBAAAegQAANsEAAB5BAAAeAQAANsEAADaBAAAeQQAAMMEAAB4BAAAdwQAAMMEAADbBAAAeAQAAN0EAABiBAAAYQQAAN0EAABhBAAA
3gQAAMQEAADDBAAAdwQAAMQEAAB2BAAAdQQAAMQEAAB3BAAAdgQAANwEAABjBAAAYgQAANwEAABkBAAAYwQAANwEAABiBAAA3QQAAMUEAAB0BAAAcwQAAMUEAAB1BAAAdAQAAMUEAADEBAAAdQQAAMYEAABzBAAAcgQAAMYEAADFBAAAcwQAAMcEAADGBAAAcgQAAMcEAAByBAAAcQQAAKcEAABlBAAAZAQAAKcEAABkBAAA3AQAAMgEAADHBAAAcQQAAKYEAABlBAAApwQAAKYEAACWBAAAZQQAAMkEAADIBAAAcQQAAMoEAADJBAAAcQQAAKUEAACWBAAApgQAAMsEAADKBAAAcQQAAMwEAADLBAAAcQQAAKQEAACWBAAApQQAAKQEAACXBAAAlgQAAM0EAADMBAAAcQQAAM4EAADNBAAAcQQAAKMEAACYBAAAlwQAAKMEAACXBAAApAQAAM8EAADOBAAAcQQAAMIEAABxBAAAfQQAAMIEAADPBAAAcQQAAKIEAACZBAAAmAQAAKIEAACYBAAAowQAAKEEAACaBAAAmQQAAKEEAACZBAAAogQAAKAEAACbBAAAmgQAAKAEAACaBAAAoQQAAJ8EAACcBAAAmwQAAJ8EAACdBAAAnAQAAJ8EAACbBAAAoAQAAJ4EAACdBAAAnwQAAJ4EAACLBAAAnQQAALMEAACLBAAAngQAALMEAACMBAAAiwQAALIEAACMBAAAswQAAI0EAACMBAAAsgQAALEEAACNBAAAsgQAAI4EAACNBAAAsQQAAI8EAACOBAAAsQQAAI8EAACxBAAAsAQAAJAEAACPBAAAsAQAAJAEAACwBAAArwQAAJEEAACQBAAArwQAAJEEAACvBAAArgQAAJIEAACRBAAArgQAAJMEAACSBAAArgQAAJMEAACuBAAArQQAAJQEAACTBAAArQQAAJQEAACtBAAArAQAAJUEAACUBAAArAQAAJUEAACsBAAAqwQAAIEEAACVBAAAqwQAAIEEAACrBAAAqgQAAIIEAACqBAAAqQQAAIIEAACBBAAAqgQAAIMEAACCBAAAqQQAAIMEAACpBAAAqAQAAIQEAACoBAAAwQQAAIQEAACDBAAAqAQAAIUEAACEBAAAwQQAAIUEAADBBAAAwAQAAIYEAACFBAAAwAQAAIcEAACGBAAAwAQAAIcEAADABAAAvwQAAIAEAAC1BAAAtAQAAIAEAAC2BAAAtQQAAIAEAAC3BAAAtgQAAIAEAAC4BAAAtwQAAIAEAAC5BAAAuAQAAIAEAAC6BAAAuQQAAIAEAAC7BAAAugQAAIAEAAC8BAAAuwQAAIgEAACHBAAAvwQAAIgEAAC/BAAAvgQAAIoEAAC8BAAAgAQAAIoEAAC9BAAAvAQAAIkEAAC+BAAAvQQAAIkEAACIBAAAvgQAAIkEAAC9BAAAigQAAH4EAAC0BAAAwgQAAH4EAAB/BAAAgAQAAH4EAACABAAAtAQAAH4EAADCBAAAfQQAAHEEAAB8BAAAfQQAAJR0lGJoOmgdaCBLAIWUaCKHlFKUKEsBTeIESwOGlGgqiUKYOgAAAAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAMkSur3y8H4/AAAAAEjwOL5Ryns/AAAAAPWGiL4Vu3Y/AAAAABY5tr7cPG8/AAAAAMlU5L41ImU/AAAAALV+Br/V0lk/AAAAAOr8F7/K/00/AAAAAIm7IL9PQEc/AAAAAOkeKb/DLkA/AAAAAOkeKb/DLkA/AAAAAMkSur3y8H4/AAAAAEjwOL5Ryns/AAAAAPWGiL4Vu3Y/AAAAABY5tr7cPG8/AAAAAMlU5L41ImU/AAAAALV+Br/V0lk/AAAAAOr8F7/K/00/AAAAAIm7IL9PQEc/AAAAAOkeKb/DLkA/AAAAAOkeKb/DLkA/AAAAAB2DOb+BaTA/AAAAAEcDSb8khh4/AAAAAC8ZV7/Mzwo/AAAAAPNRY79meOs+AAAAAONe
bb/Cvb8+AAAAAAofdb9oq5M+AAAAAN+eer970VA+AAAAAGYPfr+apPs9AAAAAAlMf79mq5c9AAAAAGvnf7+rWuA8AAAAAGvnf7+rWuA8AAAAAB2DOb+BaTA/AAAAAEcDSb8khh4/AAAAAC8ZV7/Mzwo/AAAAAPNRY79meOs+AAAAAONebb/Cvb8+AAAAAAofdb9oq5M+AAAAAN+eer970VA+AAAAAGYPfr+apPs9AAAAAAlMf79mq5c9AAAAAAAAAIAAAIA/AAAAAAAAAIAAAIA/AAAAAAAAAIAAAIA/AAAAAAAAAIAAAIA/AAAAAEv9fz928BS8AAAAAEv9fz928BS8AAAAAJa4fj/sdcy9AAAAAL8rez8E9UW+AAAAAHA5dT+v+5K+AAAAAD/1bD+oxcG+AAAAAC7YYj9OTO2+AAAAAIYmVz8duwq/AAAAAIYmVz8duwq/AAAAAJa4fj/sdcy9AAAAAL8rez8E9UW+AAAAAHA5dT+v+5K+AAAAAD/1bD+oxcG+AAAAAC7YYj9OTO2+AAAAAIYmVz8duwq/AAAAAIYmVz8duwq/AAAAAPaZST9Wxh2/AAAAADYZOj8jyy+/AAAAAKr+KD8fS0C/AAAAALjDFj9w5U6/AAAAACb9Az/nWVu/AAAAAOtH4j48pGW/AAAAAOtH4j48pGW/AAAAAPaZST9Wxh2/AAAAADYZOj8jyy+/AAAAAKr+KD8fS0C/AAAAALjDFj9w5U6/AAAAACb9Az/nWVu/AAAAAOtH4j48pGW/AAAAAOtH4j48pGW/AAAAAFYAtz7UFm+/AAAAAPZ4ij6pdXa/AAAAALyxOz6vqXu/AAAAAKzUyD0jxH6/AAAAAHptRT3Us3+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAFYAtz7UFm+/AAAAAPZ4ij6pdXa/AAAAALyxOz6vqXu/AAAAAKzUyD0jxH6/AAAAAHptRT3Us3+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAADnGn71COH+/AAAAAF/LJr7FlHy/AAAAADBTf74Z6ne/AAAAABwPqb5NpHG/AAAAAPoF077APmm/AAAAAMeB/L6ls16/AAAAAHIzEr9XJVK/AAAAAKPWJL843kO/AAAAADa1Mb91RTi/AAAAAJd7Pb/qIyy/AAAAAJd7Pb/qIyy/AAAAADnGn71COH+/AAAAAF/LJr7FlHy/AAAAADBTf74Z6ne/AAAAABwPqb5NpHG/AAAAAPoF077APmm/AAAAAMeB/L6ls16/AAAAAHIzEr9XJVK/AAAAAKPWJL843kO/AAAAADa1Mb91RTi/AAAAAJd7Pb/qIyy/AAAAAJd7Pb/qIyy/AAAAAHrbTL8Hhhm/AAAAAJtHWr/ewAW/AAAAAH6DZb+izOK+AAAAAJt/br8ZD7q+AAAAAD5Udb9USJK+AAAAAJU2er9ofli+AAAAAPhrfb/N9hC+AAAAAMU+f78tJ529AAAAAITSf788lBi9AAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAHrbTL8Hhhm/AAAAAJtHWr/ewAW/AAAAAH6DZb+izOK+AAAAAJt/br8ZD7q+AAAAAD5Udb9USJK+AAAAAJU2er9ofli+AAAAAPhrfb/N9hC+AAAAAMU+f78tJ529AAAAAITSf788lBi9AAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAJ9kf79I8Yw9AAAAAEFUfb81iBM+AAAAAACmeb+er2I+AAAAANsXdL+/VJo+AAAAACE6bb9jc8A+AAAAAP53ZL8e++Y+AAAAAHTNWb9rhwY/AAAAABtWTb/b4Rg/AAAAAPRXRr/s2SE/AAAAAJj7Pr99eSo/AAAAAJj7Pr99eSo/AAAAAJ9kf79I8Yw9AAAAAEFUfb81iBM+AAAAAACm
eb+er2I+AAAAANsXdL+/VJo+AAAAACE6bb9jc8A+AAAAAP53ZL8e++Y+AAAAAHTNWb9rhwY/AAAAABtWTb/b4Rg/AAAAAPRXRr/s2SE/AAAAAJj7Pr99eSo/AAAAAJj7Pr99eSo/AAAAAAPULb+A7zs/AAAAABF3Gr/3JUw/AAAAAN1VBb8IiVo/AAAAALga3r6dqGY/AAAAAMigsL7USHA/AAAAAACbg74CZnc/AAAAAFNjML5oLHw/AAAAAL/7vL1i6H4/AAAAAFHwN73jvX8/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAAPULb+A7zs/AAAAABF3Gr/3JUw/AAAAAN1VBb8IiVo/AAAAALga3r6dqGY/AAAAAMigsL7USHA/AAAAAACbg74CZnc/AAAAAFNjML5oLHw/AAAAAL/7vL1i6H4/AAAAAFHwN73jvX8/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAgLsUoj1xMn8/AAAAgM27KT5vdXw/AAAAgClzhD4pSXc/AAAAgGkpsT6tL3A/AAAAgCP72T7Uo2c/AAAAgFoqAT+KBl0/AAAAgICVFD++d1A/AAAAgAaOIj90xEU/AAAAgIqeLz9NQzo/AAAAgIqeLz9NQzo/AAAAgLsUoj1xMn8/AAAAgM27KT5vdXw/AAAAgClzhD4pSXc/AAAAgGkpsT6tL3A/AAAAgCP72T7Uo2c/AAAAgFoqAT+KBl0/AAAAgICVFD++d1A/AAAAgAaOIj90xEU/AAAAgIqeLz9NQzo/AAAAgIqeLz9NQzo/AAAAgKrYPz+NgCk/AAAAgBbkTj+UxRY/AAAAgDtDXD9RdgI/AAAAgDCWZz8YNdo+AAAAgKOncD9+ma4+AAAAgKFwdz8IS4M+AAAAgO8SfD8mpjI+AAAAgCrNfj+U8sU9AAAAgH+lfz+gMFc9AAAAgHL8fz9MqCo8AAAAgHL8fz9MqCo8AAAAgKrYPz+NgCk/AAAAgBbkTj+UxRY/AAAAgDtDXD9RdgI/AAAAgDCWZz8YNdo+AAAAgKOncD9+ma4+AAAAgKFwdz8IS4M+AAAAgO8SfD8mpjI+AAAAgCrNfj+U8sU9AAAAgH+lfz+gMFc9AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAG/tf78L/sK8AAAAAG/tf78L/sK8AAAAAOdSfr+u+em9AAAAAF0ler/hu1m+AAAAABm1c78swJy+AAAAADV8a79X08i+AAAAALrkYL8HnfS+AAAAAAEOVL8Uaw+/AAAAADRMR7/JrCC/AAAAABphOb8+jTC/AAAAABphOb8+jTC/AAAAAOdSfr+u+em9AAAAAF0ler/hu1m+AAAAABm1c78swJy+AAAAADV8a79X08i+AAAAALrkYL8HnfS+AAAAAAEOVL8Uaw+/AAAAADRMR7/JrCC/AAAAABphOb8+jTC/AAAAABphOb8+jTC/AAAAALiZKb9rwj+/AAAAADIGGL/x+E2/AAAAAMv0BL8lxFq/AAAAAIml4b4qzGW/AAAAADxBuL4u2W6/AAAAAF/Fjr6A2HW/AAAAAMpITL6i2nq/AAAAAGSS/L22C36/AAAAAOXJdL3cin+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAALiZKb9rwj+/AAAAADIGGL/x+E2/AAAAAMv0BL8lxFq/AAAAAIml4b4qzGW/AAAAADxBuL4u2W6/AAAAAF/Fjr6A2HW/AAAAAMpITL6i2nq/AAAAAGSS/L22C36/AAAAAOXJdL3cin+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAADyVvj2e436/AAAAAI5vPT7OlHu/AAAAAGvNiz6YRXa/AAAAAPVzuj7pa26/AAAAAC2w5D5rC2W/AAAAAFP5BD9kwVq/AAAAAJPRFj9X206/AAAAABOBJz/fl0G/AAAAAOi7
Nj/MSTO/AAAAAEZQRD++TiS/AAAAAH9ESz9Pnxu/AAAAAAaoUT/u5hK/AAAAAAaoUT/u5hK/AAAAADyVvj2e436/AAAAAI5vPT7OlHu/AAAAAGvNiz6YRXa/AAAAAPVzuj7pa26/AAAAAC2w5D5rC2W/AAAAAFP5BD9kwVq/AAAAAJPRFj9X206/AAAAABOBJz/fl0G/AAAAAOi7Nj/MSTO/AAAAAEZQRD++TiS/AAAAAH9ESz9Pnxu/AAAAAAaoUT/u5hK/AAAAAAaoUT/u5hK/AAAAABf9XD+EOgG/AAAAABsJZj+irOC+AAAAANQfbT/d9MC+AAAAACuWcj+ojqO+AAAAAFy6dj8ujIi+AAAAALylej+JTVC+AAAAAMlvfT//ixC+AAAAAP9afz+ePJG9AAAAAGbbfz+R4Ai9AAAAgAAAgD8AAAAAAAAAgAAAgD8AAAAAAAAAABf9XD+EOgG/AAAAABsJZj+irOC+AAAAANQfbT/d9MC+AAAAACuWcj+ojqO+AAAAAFy6dj8ujIi+AAAAALylej+JTVC+AAAAAMlvfT//ixC+AAAAAP9afz+ePJG9AAAAAGbbfz+R4Ai9AAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAgAAAgD8AAAAAAAAAgAAAgD8AAAAAAAAAgBpnfz+C0Is9AAAAgM/qfD+xbx4+AAAAgDzQdz/YcYA+AAAAgLhOcD+1gLA+AAAAgFvmZj+FGd0+AAAAgFGjXT+bHAA/AAAAgLzbUT8FnRI/AAAAgLzbUT8FnRI/AAAAgBpnfz+C0Is9AAAAgM/qfD+xbx4+AAAAgDzQdz/YcYA+AAAAgLhOcD+1gLA+AAAAgFvmZj+FGd0+AAAAgFGjXT+bHAA/AAAAgLzbUT8FnRI/AAAAgLzbUT8FnRI/AAAAgJnPRD8ltiM/AAAAgGmzNT/VVTQ/AAAAgBewJD+j/kM/AAAAgN4VEj/oOVI/AAAAgHOs/D6Lp14/AAAAgK3t0z42Cmk/AAAAgIkBqz6PTHE/AAAAgDbhgj6kfnc/AAAAgA6tOD5nzXs/AAAAgMzH3z2Yd34/AAAAgBEwWD2npH8/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAgJnPRD8ltiM/AAAAgGmzNT/VVTQ/AAAAgBewJD+j/kM/AAAAgN4VEj/oOVI/AAAAgHOs/D6Lp14/AAAAgK3t0z42Cmk/AAAAgIkBqz6PTHE/AAAAgDbhgj6kfnc/AAAAgA6tOD5nzXs/AAAAgMzH3z2Yd34/AAAAgBEwWD2npH8/AACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAA
AIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAA
AAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAA
AAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAMx+f79DiYA9AAAAAAK+fb/2sgc+AAAAAOCPer+18FE+AAAAAAqndb9bGJA+AAAAAG7Ebr+yrLg+AAAAAOKOZr96hd4+AAAAAHZKXL8bagI/AAAAAPstVL/COw8/AAAAACEkS7+MyRs/AAAAACEkS7+MyRs/AAAAAMx+f79DiYA9AAAAAAK+fb/2sgc+AAAAAOCPer+18FE+AAAAAAqndb9bGJA+AAAAAG7Ebr+yrLg+AAAAAOKOZr96hd4+AAAAAHZKXL8bagI/AAAAAPstVL/COw8/AAAAACEkS7+MyRs/AAAAACEkS7+MyRs/AAAAAF24Or8PIi8/AAAAAG9OKb/mBEA/AAAAAB6kF78xQU4/AAAAABxhBL+iHVs/AAAAAKvh376OOmY/AAAAADCstb6lV28/AAAAAHQ2i774WnY/AAAAAOgGQ75pUHs/AAAAANqr5b2YYn4/AAAAANyUX71Lnn8/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAF24Or8PIi8/AAAAAG9OKb/mBEA/AAAAAB6kF78xQU4/AAAAABxhBL+iHVs/AAAAAKvh376OOmY/AAAAADCstb6lV28/AAAAAHQ2i774WnY/AAAAAOgGQ75pUHs/AAAAANqr5b2YYn4/AAAAANyUX71Lnn8/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAgKEZyj0fwH4/AAAAgKu8Rj7kIXs/AAAAgJoWkT6dgXU/AAAAgA9Mvz7RdW0/AAAAgJir7D4kAmM/AAAAgGrvCz+BXlY/AAAAgBjeHz8n8kc/AAAAgKm2MT8PRDg/AAAAgH/hPj+0lio/AAAAgMlmSj89vxw/AAAAgMlmSj89vxw/AAAAgKEZyj0fwH4/AAAAgKu8Rj7kIXs/AAAAgJoWkT6dgXU/AAAAgA9Mvz7RdW0/AAAAgJir7D4kAmM/AAAAgGrvCz+BXlY/AAAAgBjeHz8n8kc/AAAAgKm2MT8PRDg/AAAAgH/hPj+0lio/AAAAAMp0fL8myyk+AAAAAMp0fL8myyk+AAAAAMp0fL8myyk+AAAAAMp0fL8myyk+AAAAAAAAAIAAAIA/AAAAAAAAAIAAAIA/AAAAAAAAAIAAAIA/AAAAAAAAAIAAAIA/AAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAPO1Pz/Qpym/AAAAAPO1Pz/Qpym/AAAAAC6DLj/jTDu/AAAAAFnJHD/0Xkq/AAAAANM3Cz/p1Va/AAAAAKLa8D7U52G/AAAAAB6byT6EUWu/AAAAAJSSoT4763K/AAAAAFZIcz6Qq3i/AAAAAOc7JT4tpXy/AAAAAPmhtD2YAH+/AAAAALPfMD3fwn+/AAAAAAAA
AAAAAIC/AAAAAAAAAAAAAIC/AAAAAC6DLj/jTDu/AAAAAFnJHD/0Xkq/AAAAANM3Cz/p1Va/AAAAAKLa8D7U52G/AAAAAB6byT6EUWu/AAAAAJSSoT4763K/AAAAAFZIcz6Qq3i/AAAAAOc7JT4tpXy/AAAAAPmhtD2YAH+/AAAAALPfMD3fwn+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAAEHlwL2p3H6/AAAAABtHPb61lnu/AAAAAJEFir7XhXa/AAAAAPbutb72Sm+/AAAAADBB4b7G5GW/AAAAAE1rBb/ze1q/AAAAALLTGL+lYE2/AAAAAKF0Kr/u/z6/AAAAAKQQOr811C+/AAAAAEINQ7+yzSW/AAAAAMkpS78rwhu/AAAAAMkpS78rwhu/AAAAAEHlwL2p3H6/AAAAABtHPb61lnu/AAAAAJEFir7XhXa/AAAAAPbutb72Sm+/AAAAADBB4b7G5GW/AAAAAE1rBb/ze1q/AAAAALLTGL+lYE2/AAAAAKF0Kr/u/z6/AAAAAKQQOr811C+/AAAAAEINQ7+yzSW/AAAAAMkpS78rwhu/AAAAAMkpS78rwhu/AAAAAKn+WL/+0we/AAAAAClfZL9BXee+AAAAAC1dbb83xr++AAAAAFkudL8txpm+AAAAAJwceb+Q8Wu+AAAAALF4fL8mbim+AAAAADrnfr9cX729AAAAAOC/f7+tJjW9AAAAAAAAgL9dOKGkAAAAAAAAgL9dOKGkAAAAAKn+WL/+0we/AAAAAClfZL9BXee+AAAAAC1dbb83xr++AAAAAFkudL8txpm+AAAAAJwceb+Q8Wu+AAAAALF4fL8mbim+AAAAADrnfr9cX729AAAAAOC/f7+tJjW9AAAAgAAAgD8AAAAAAAAAgAAAgD8AAAAAAAAAgMFKfz9kNZg9AAAAgASNfD/Qhic+AAAAgL45dz8X5oQ+AAAAgOYAcD9JJrI+AAAAgK2HZj9Xo94+AAAAgL1PXj/34P0+AAAAgNdlVD/e6A4/AAAAgNdlVD/e6A4/AAAAgMFKfz9kNZg9AAAAgASNfD/Qhic+AAAAgL45dz8X5oQ+AAAAgOYAcD9JJrI+AAAAgK2HZj9Xo94+AAAAgL1PXj/34P0+AAAAgNdlVD/e6A4/AAAAgNdlVD/e6A4/AAAAgHvuSD9/oB4/AAAAgPRlOz9DaC4/AAAAgFbCKz8P1D0/AAAAgA0dGj/0aUw/AAAAgEm4Bj8+r1k/AAAAgIT64z6tOGU/AAAAgE3kuD6ruW4/AAAAgFNTjT7uDXY/AAAAgJ3CRD7LOns/AAAAgCn34z28aH4/AAAAgFU3Xj18n38/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAgHvuSD9/oB4/AAAAgPRlOz9DaC4/AAAAgFbCKz8P1D0/AAAAgA0dGj/0aUw/AAAAgEm4Bj8+r1k/AAAAgIT64z6tOGU/AAAAgE3kuD6ruW4/AAAAgFNTjT7uDXY/AAAAgJ3CRD7LOns/AAAAgCn34z28aH4/AAAAgFU3Xj18n38/AAAAAAAAAAAAAIA/AAAAAAAAAAAAAIA/AAAAAOJkwr0b2H4/AAAAABVqQr4BWHs/AAAAAC0/kL5YoXU/AAAAAN1YvL6NDG4/AAAAANKG5L68FWU/AAAAAJYVBr+bE1o/AAAAABMgGb+5J00/AAAAAH3oKr9JmD4/AAAAALoLO78DyS4/AAAAAEpISb97Lh4/AAAAALGBVb9uPw0/AAAAAJrVXL/zfQE/AAAAAJM7Y7+6zus+AAAAAJM7Y7+6zus+AAAAAOJkwr0b2H4/AAAAABVqQr4BWHs/AAAAAC0/kL5YoXU/AAAAAN1YvL6NDG4/AAAAANKG5L68FWU/AAAAAJYVBr+bE1o/AAAAABMgGb+5J00/AAAAAH3oKr9JmD4/AAAAALoLO78DyS4/AAAAAEpISb97Lh4/AAAAALGBVb9uPw0/AAAAAJrV
XL/zfQE/AAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAAAAgL8AAAAAAAAAAO0/Yr/7ju++AAAAAO0/Yr/7ju++AAAAABB1WL/brgi/AAAAAG+0TL8Ruhm/AAAAAMD1Pr8JgCq/AAAAAClRL78kjDq/AAAAAOECHr+Gakm/AAAAAHtoC79atla/AAAAAArv7755JmK/AAAAAA5fyL74lGu/AAAAACUWob7j/3K/AAAAAH7Odb7Wg3i/AAAAAK/pLL7nUny/AAAAAHC00L3Rqn6/AAAAAASlSb2JsH+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAABB1WL/brgi/AAAAAG+0TL8Ruhm/AAAAAMD1Pr8JgCq/AAAAAClRL78kjDq/AAAAAOECHr+Gakm/AAAAAHtoC79atla/AAAAAArv7755JmK/AAAAAA5fyL74lGu/AAAAACUWob7j/3K/AAAAAH7Odb7Wg3i/AAAAAK/pLL7nUny/AAAAAHC00L3Rqn6/AAAAAASlSb2JsH+/AAAAAAAAAAAAAIC/AAAAAAAAAAAAAIC/AAAAALESlz10TX+/AAAAAGQhHD7AAX2/AAAAAHukcD6m1Hi/AAAAACW3oz5Xj3K/AAAAAHM3zz7JGGq/AAAAAOut+T5Rf1+/AAAAAHT9ED+v+1K/AAAAAHaUIz+a60S/AAAAAFBEND/KxDW/AAAAAJreQj+FBCa/AAAAANzVSz+44Bq/AAAAAFjKUz/tzg+/AAAAAFjKUz/tzg+/AAAAALESlz10TX+/AAAAAGQhHD7AAX2/AAAAAHukcD6m1Hi/AAAAACW3oz5Xj3K/AAAAAHM3zz7JGGq/AAAAAOut+T5Rf1+/AAAAAHT9ED+v+1K/AAAAAHaUIz+a60S/AAAAAFBEND/KxDW/AAAAAJreQj+FBCa/AAAAANzVSz+44Bq/AAAAAFjKUz/tzg+/AAAAAFjKUz/tzg+/AAAAAC8QYT/z/PO+AAAAAMACaz9UCcu+AAAAAK5Bcj+egKW+AAAAACNldz+NoYO+AAAAADXvej/lskq+AAAAAEbbfT/eOwS+AAAAAPl6fz+PbIK9AAAAAA7ifz/qnPe8AAAAAAAAgD8QPnikAAAAAAAAgD8QPnikAAAAAC8QYT/z/PO+AAAAAMACaz9UCcu+AAAAAK5Bcj+egKW+AAAAACNldz+NoYO+AAAAADXvej/lskq+AAAAAEbbfT/eOwS+AAAAAPl6fz+PbIK9AAAAAA7ifz/qnPe8AACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAA
AIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAvwAAAIAAAACAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAA
AAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAAACAPwAAAAAAAAAAlHSUYpXQPgAAAAAAAGhBaB1oIEsAhZRoIoeUUpQoSwFNcQJLAksDh5RoKolCmDoAAAAAgL+r/OLAuyRawAAAgD+r/OLAuyRawAAAgL+rPLLAuyQ5wAAAgD+rPLLAuyQ5wAAAgL+r/OLAuyRawAAAgL825NnAtVZZwAAAgL825NnAtVZZwAAAgL+0LNLA+zxXwAAAgL+0LNLA+zxXwAAAgL8InMvAqTtUwAAAgL8InMvAqTtUwAAAgL+FZ8XA/ytQwAAAgL+FZ8XA/ytQwAAAgL8tj7/A/A1LwAAAgL8tj7/A/A1LwAAAgL9Dm7rA1ItFwAAAgL9Dm7rA1ItFwAAAgL/5ZbbAdds/wAAAgL/5ZbbAdds/wAAAgL8ASbTAgZg8wAAAgL8ASbTAgZg8wAAAgL+rPLLAuyQ5wAAAgD+r
/OLAuyRawAAAgD825NnAtVZZwAAAgD825NnAtVZZwAAAgD+0LNLA+zxXwAAAgD+0LNLA+zxXwAAAgD8InMvAqTtUwAAAgD8InMvAqTtUwAAAgD+FZ8XA/ytQwAAAgD+FZ8XA/ytQwAAAgD8tj7/A/A1LwAAAgD8tj7/A/A1LwAAAgD9Dm7rA1ItFwAAAgD9Dm7rA1ItFwAAAgD/5ZbbAdds/wAAAgD/5ZbbAdds/wAAAgD8ASbTAgZg8wAAAgD8ASbTAgZg8wAAAgD+rPLLAuyQ5wAAAgL+rvJ7Ad8nCvwAAgD+rvJ7Ad8nCvwAAgL+rPLLAuyQ5wAAAgL8SZq7A+sMxwAAAgL8SZq7A+sMxwAAAgL/d+qrAduEpwAAAgL/d+qrAduEpwAAAgL8M+6fAL30hwAAAgL8M+6fAL30hwAAAgL+fZqXAJJcYwAAAgL+fZqXAJJcYwAAAgL+WPaPAVS8PwAAAgL+WPaPAVS8PwAAAgL/wf6HAxEUFwAAAgL/wf6HAxEUFwAAAgL+vLaDA3rT1vwAAgL+vLaDA3rT1vwAAgL/RRp/ArtrfvwAAgL/RRp/ArtrfvwAAgL9N7J7A44XRvwAAgL9N7J7A44XRvwAAgL+rvJ7Ad8nCvwAAgD+rPLLAuyQ5wAAAgD8SZq7A+sMxwAAAgD8SZq7A+sMxwAAAgD/d+qrAduEpwAAAgD/d+qrAduEpwAAAgD8M+6fAL30hwAAAgD8M+6fAL30hwAAAgD+fZqXAJJcYwAAAgD+fZqXAJJcYwAAAgD+WPaPAVS8PwAAAgD+WPaPAVS8PwAAAgD/wf6HAxEUFwAAAgD/wf6HAxEUFwAAAgD+vLaDA3rT1vwAAgD+vLaDA3rT1vwAAgD/RRp/ArtrfvwAAgD/RRp/ArtrfvwAAgD9N7J7A44XRvwAAgD9N7J7A44XRvwAAgD+rvJ7Ad8nCvwAAgL+rcuS/d8nCvwAAgD+rcuS/d8nCvwAAgL+rvJ7Ad8nCvwAAgL+rcuS/d8nCvwAAgD+rvJ7Ad8nCvwAAgD+rcuS/d8nCvwAAgL9WOR/Au4R4wAAAgD9WOR/Au4R4wAAAgL+rcuS/d8nCvwAAgL+xZee/o1f5vwAAgL+xZee/o1f5vwAAgL8xPe+/KCsXwAAAgL8xPe+/KCsXwAAAgL8t+fu/vuIwwAAAgL8t+fu/vuIwwAAAgL/SzAbAldJJwAAAgL/SzAbAldJJwAAAgL9A1RHA7othwAAAgL9A1RHA7othwAAAgL9WOR/Au4R4wAAAgD+rcuS/d8nCvwAAgD+xZee/o1f5vwAAgD+xZee/o1f5vwAAgD8xPe+/KCsXwAAAgD8xPe+/KCsXwAAAgD8t+fu/vuIwwAAAgD8t+fu/vuIwwAAAgD/SzAbAldJJwAAAgD/SzAbAldJJwAAAgD9A1RHA7othwAAAgD9A1RHA7othwAAAgD9WOR/Au4R4wAAAgL+rLIzAXoKxwAAAgD+rLIzAXoKxwAAAgL9WOR/Au4R4wAAAgL/15C7AWUeHwAAAgL/15C7AWUeHwAAAgL9fYUDASXGRwAAAgL9fYUDASXGRwAAAgL+UrlPAMMCawAAAgL+UrlPAMMCawAAAgL+UzGjADDSjwAAAgL+UzGjADDSjwAAAgL+/q3/AK8iqwAAAgL+/q3/AK8iqwAAAgL+rLIzAXoKxwAAAgD9WOR/Au4R4wAAAgD/15C7AWUeHwAAAgD/15C7AWUeHwAAAgD9fYUDASXGRwAAAgD9fYUDASXGRwAAAgD+UrlPAMMCawAAAgD+UrlPAMMCawAAAgD+UzGjADDSjwAAAgD+UzGjADDSjwAAAgD+/q3/AK8iqwAAAgD+/q3/AK8iqwAAAgD+rLIzAXoKxwAAAgL+rHOHAXnLEwAAAgD+rHOHAXnLEwAAAgL+rLIzAXoKxwAAAgL89OpvA3xS4wAAAgL89OpvA3xS4wAAAgL/n36rAI0K9wAAAgL/n36rAI0K9wAAAgL+o
HbvAKgrBwAAAgL+oHbvAKgrBwAAAgL+A88vA9GzDwAAAgL+A88vA9GzDwAAAgL9DbNbAAzHEwAAAgL9DbNbAAzHEwAAAgL+rHOHAXnLEwAAAgD+rLIzAXoKxwAAAgD89OpvA3xS4wAAAgD89OpvA3xS4wAAAgD/n36rAI0K9wAAAgD/n36rAI0K9wAAAgD+oHbvAKgrBwAAAgD+oHbvAKgrBwAAAgD+A88vA9GzDwAAAgD+A88vA9GzDwAAAgD9DbNbAAzHEwAAAgD9DbNbAAzHEwAAAgD+rHOHAXnLEwAAAgL9VLjXBXiKNwAAAgD9VLjXBXiKNwAAAgL+rHOHAXnLEwAAAgL+I9/TAiLDDwAAAgL+I9/TAiLDDwAAAgL9q6APBBWvBwAAAgL9q6APBBWvBwAAAgL/QcQzB/NS9wAAAgL/QcQzB/NS9wAAAgL8avBPBA2y5wAAAgL8avBPBA2y5wAAAgL8dpxrBPOSzwAAAgL8dpxrBPOSzwAAAgL/YMiHBpj2twAAAgL/YMiHBpj2twAAAgL9NXyfBQnilwAAAgL9NXyfBQnilwAAAgL96LC3BD5ScwAAAgL96LC3BD5ScwAAAgL85SDHB8CuVwAAAgL85SDHB8CuVwAAAgL9VLjXBXiKNwAAAgD+rHOHAXnLEwAAAgD+I9/TAiLDDwAAAgD+I9/TAiLDDwAAAgD9q6APBBWvBwAAAgD9q6APBBWvBwAAAgD/QcQzB/NS9wAAAgD/QcQzB/NS9wAAAgD8avBPBA2y5wAAAgD8avBPBA2y5wAAAgD8dpxrBPOSzwAAAgD8dpxrBPOSzwAAAgD/YMiHBpj2twAAAgD/YMiHBpj2twAAAgD9NXyfBQnilwAAAgD9NXyfBQnilwAAAgD96LC3BD5ScwAAAgD96LC3BD5ScwAAAgD85SDHB8CuVwAAAgD85SDHB8CuVwAAAgD9VLjXBXiKNwAAAgL9VTk7BJNq6PgAAgD9VTk7BJNq6PgAAgL9VLjXBXiKNwAAAgL+BSjrBLcKAwAAAgL+BSjrBLcKAwAAAgL8a0j7BDtFmwAAAgL8a0j7BDtFmwAAAgL8jxULB1SpKwAAAgL8jxULB1SpKwAAAgL+aI0bBsJErwAAAgL+aI0bBsJErwAAAgL+A7UjBoAULwAAAgL+A7UjBoAULwAAAgL/VIkvBRg3RvwAAgL/VIkvBRg3RvwAAgL+Yw0zBdimIvwAAgL+Yw0zBdimIvwAAgL/Kz03BNX/tvgAAgL/Kz03BNX/tvgAAgL+yLk7BwyNlvQAAgL+yLk7BwyNlvQAAgL9VTk7BJNq6PgAAgD9VLjXBXiKNwAAAgD+BSjrBLcKAwAAAgD+BSjrBLcKAwAAAgD8a0j7BDtFmwAAAgD8a0j7BDtFmwAAAgD8jxULB1SpKwAAAgD8jxULB1SpKwAAAgD+aI0bBsJErwAAAgD+aI0bBsJErwAAAgD+A7UjBoAULwAAAgD+A7UjBoAULwAAAgD/VIkvBRg3RvwAAgD/VIkvBRg3RvwAAgD+Yw0zBdimIvwAAgD+Yw0zBdimIvwAAgD/Kz03BNX/tvgAAgD/Kz03BNX/tvgAAgD+yLk7BwyNlvQAAgD+yLk7BwyNlvQAAgD9VTk7BJNq6PgAAgL9VTk7BEm0WPwAAgD9VTk7BEm0WPwAAgL9VTk7BJNq6PgAAgL9VTk7BEm0WPwAAgD9VTk7BJNq6PgAAgD9VTk7BEm0WPwAAgL9VXjXBom2oQAAAgD9VXjXBom2oQAAAgL9VTk7BEm0WPwAAgL9q7U3BoFKlPwAAgL9q7U3BoFKlPwAAgL+pykzBH9X6PwAAgL+pykzBH9X6PwAAgL+rAEvBlw4kQAAAgL+rAEvBlw4kQAAAgL9olEjB1+hHQAAAgL9olEjB1+hHQAAAgL821UXB8nRmQAAAgL821UXB8nRmQAAAgL+xjELB86+BQAAAgL+xjELB86+BQAAAgL/Xuj7B2lSPQAAAgL/X
uj7B2lSPQAAAgL+pXzrBLimcQAAAgL+pXzrBLimcQAAAgL/r8DfBoWaiQAAAgL/r8DfBoWaiQAAAgL9VXjXBom2oQAAAgD9VTk7BEm0WPwAAgD9q7U3BoFKlPwAAgD9q7U3BoFKlPwAAgD+pykzBH9X6PwAAgD+pykzBH9X6PwAAgD+rAEvBlw4kQAAAgD+rAEvBlw4kQAAAgD9olEjB1+hHQAAAgD9olEjB1+hHQAAAgD821UXB8nRmQAAAgD821UXB8nRmQAAAgD+xjELB86+BQAAAgD+xjELB86+BQAAAgD/Xuj7B2lSPQAAAgD/Xuj7B2lSPQAAAgD+pXzrBLimcQAAAgD+pXzrBLimcQAAAgD/r8DfBoWaiQAAAgD/r8DfBoWaiQAAAgD9VXjXBom2oQAAAgL+r3OHAok3gQAAAgD+r3OHAok3gQAAAgL9VXjXBom2oQAAAgL/Wwy/BsdizQAAAgL/Wwy/BsdizQAAAgL92uinBA/a9QAAAgL92uinBA/a9QAAAgL83QiPBlsXGQAAAgL83QiPBlsXGQAAAgL8YWxzBbEfOQAAAgL8YWxzBbEfOQAAAgL8aBRXBhXvUQAAAgL8aBRXBhXvUQAAAgL88QA3B32HZQAAAgL88QA3B32HZQAAAgL9+DAXBfPrcQAAAgL9+DAXBfPrcQAAAgL/B0/jAW0XfQAAAgL/B0/jAW0XfQAAAgL8chO3AkAvgQAAAgL8chO3AkAvgQAAAgL+r3OHAok3gQAAAgD9VXjXBom2oQAAAgD/Wwy/BsdizQAAAgD/Wwy/BsdizQAAAgD92uinBA/a9QAAAgD92uinBA/a9QAAAgD83QiPBlsXGQAAAgD83QiPBlsXGQAAAgD8YWxzBbEfOQAAAgD8YWxzBbEfOQAAAgD8aBRXBhXvUQAAAgD8aBRXBhXvUQAAAgD88QA3B32HZQAAAgD88QA3B32HZQAAAgD9+DAXBfPrcQAAAgD9+DAXBfPrcQAAAgD/B0/jAW0XfQAAAgD/B0/jAW0XfQAAAgD8chO3AkAvgQAAAgD8chO3AkAvgQAAAgD+r3OHAok3gQAAAgL9WmU/Aov20QAAAgD9WmU/Aov20QAAAgL+r3OHAok3gQAAAgL9uQs7AzIvfQAAAgL9uQs7AzIvfQAAAgL9jtrvASkbdQAAAgL9jtrvASkbdQAAAgL+LOKrAG33ZQAAAgL+LOKrAG33ZQAAAgL+AYZvAYcnUQAAAgL+AYZvAYcnUQAAAgL99wY7AjIHPQAAAgL99wY7AjIHPQAAAgL/A0oLAXDvJQAAAgL/A0oLAXDvJQAAAgL+TKm/A0vbBQAAAgL+TKm/A0vbBQAAAgL84+17A7sO7QAAAgL84+17A7sO7QAAAgL9WmU/Aov20QAAAgD+r3OHAok3gQAAAgD9uQs7AzIvfQAAAgD9uQs7AzIvfQAAAgD9jtrvASkbdQAAAgD9jtrvASkbdQAAAgD+LOKrAG33ZQAAAgD+LOKrAG33ZQAAAgD+AYZvAYcnUQAAAgD+AYZvAYcnUQAAAgD99wY7AjIHPQAAAgD99wY7AjIHPQAAAgD/A0oLAXDvJQAAAgD/A0oLAXDvJQAAAgD+TKm/A0vbBQAAAgD+TKm/A0vbBQAAAgD84+17A7sO7QAAAgD84+17A7sO7QAAAgD9WmU/Aov20QAAAgL+rcuS/RVsDQAAAgD+rcuS/RVsDQAAAgL9WmU/Aov20QAAAgL/vrzzAeDurQAAAgL/vrzzAeDurQAAAgL+i6SvAo8ugQAAAgL+i6SvAo8ugQAAAgL9vRh3AJa6VQAAAgL9vRh3AJa6VQAAAgL9VxhDA/OKJQAAAgL9VxhDA/OKJQAAAgL9UaQbAUtR6QAAAgL9UaQbAUtR6QAAAgL/bXvy/WIdgQAAAgL/bXvy/WIdgQAAAgL9AMfC/Ct9EQAAAgL9AMfC/Ct9EQAAAgL/YSei/Z9snQAAAgL/YSei/Z9snQAAAgL9t
muW/f9kVQAAAgL9tmuW/f9kVQAAAgL+rcuS/RVsDQAAAgD9WmU/Aov20QAAAgD/vrzzAeDurQAAAgD/vrzzAeDurQAAAgD+i6SvAo8ugQAAAgD+i6SvAo8ugQAAAgD9vRh3AJa6VQAAAgD9vRh3AJa6VQAAAgD9VxhDA/OKJQAAAgD9VxhDA/OKJQAAAgD9UaQbAUtR6QAAAgD9UaQbAUtR6QAAAgD/bXvy/WIdgQAAAgD/bXvy/WIdgQAAAgD9AMfC/Ct9EQAAAgD9AMfC/Ct9EQAAAgD/YSei/Z9snQAAAgD/YSei/Z9snQAAAgD9tmuW/f9kVQAAAgD9tmuW/f9kVQAAAgD+rcuS/RVsDQAAAgL+rvJ7ARVsDQAAAgD+rvJ7ARVsDQAAAgL+rcuS/RVsDQAAAgL+rvJ7ARVsDQAAAgD+rcuS/RVsDQAAAgD+rvJ7ARVsDQAAAgL+rPLLARbtpQAAAgD+rPLLARbtpQAAAgL+rvJ7ARVsDQAAAgL8aW5/AWIgVQAAAgL8aW5/AWIgVQAAAgL90xKDAGasmQAAAgL90xKDAGasmQAAAgL8ot6LAeTM1QAAAgL8ot6LAeTM1QAAAgL+FBKXAXY9BQAAAgL+FBKXAXY9BQAAAgL8D16fAhjxNQAAAgL8D16fAhjxNQAAAgL+jLqvA8jpYQAAAgL+jLqvA8jpYQAAAgL9Gg67AOj1hQAAAgL9Gg67AOj1hQAAAgL+rPLLARbtpQAAAgD+rvJ7ARVsDQAAAgD8aW5/AWIgVQAAAgD8aW5/AWIgVQAAAgD90xKDAGasmQAAAgD90xKDAGasmQAAAgD8ot6LAeTM1QAAAgD8ot6LAeTM1QAAAgD+FBKXAXY9BQAAAgD+FBKXAXY9BQAAAgD8D16fAhjxNQAAAgD8D16fAhjxNQAAAgD+jLqvA8jpYQAAAgD+jLqvA8jpYQAAAgD9Gg67AOj1hQAAAgD9Gg67AOj1hQAAAgD+rPLLARbtpQAAAgL+rvOPAoo2IQAAAgD+rvOPAoo2IQAAAgL+rPLLARbtpQAAAgL9iNLbAxmJxQAAAgL9iNLbAxmJxQAAAgL+VbLrAtDZ4QAAAgL+VbLrAtDZ4QAAAgL9C5b7ADTd+QAAAgL9C5b7ADTd+QAAAgL9rnsPA6bGBQAAAgL9rnsPA6bGBQAAAgL8QmMjAgd6DQAAAgL8QmMjAgd6DQAAAgL8v0s3ATqGFQAAAgL8v0s3ATqGFQAAAgL/KTNPAUvqGQAAAgL/KTNPAUvqGQAAAgL/gB9nAi+mHQAAAgL/gB9nAi+mHQAAAgL9ESd7AnGSIQAAAgL9ESd7AnGSIQAAAgL+rvOPAoo2IQAAAgD+rPLLARbtpQAAAgD9iNLbAxmJxQAAAgD9iNLbAxmJxQAAAgD+VbLrAtDZ4QAAAgD+VbLrAtDZ4QAAAgD9C5b7ADTd+QAAAgD9C5b7ADTd+QAAAgD9rnsPA6bGBQAAAgD9rnsPA6bGBQAAAgD8QmMjAgd6DQAAAgD8QmMjAgd6DQAAAgD8v0s3ATqGFQAAAgD8v0s3ATqGFQAAAgD/KTNPAUvqGQAAAgD/KTNPAUvqGQAAAgD/gB9nAi+mHQAAAgD/gB9nAi+mHQAAAgD9ESd7AnGSIQAAAgD9ESd7AnGSIQAAAgD+rvOPAoo2IQAAAgL9Vdg7BRftZQAAAgD9Vdg7BRftZQAAAgL+rvOPAoo2IQAAAgL9hnuzAnyaIQAAAgL9hnuzAnyaIQAAAgL8WJvTAwhmHQAAAgL8WJvTAwhmHQAAAgL/vjPrAGZmFQAAAgL/vjPrAGZmFQAAAgL9fTADBRJGDQAAAgL9fTADBRJGDQAAAgL/43QLB2EmBQAAAgL/43QLB2EmBQAAAgL9IDgXBTL59QAAAgL9IDgXBTL59QAAAgL+8IAfBiTd4QAAAgL+8IAfBiTd4QAAAgL9SFQnBZf9xQAAAgL9SFQnBZf9xQAAAgL8K7ArB4hVrQAAAgL8K
7ArB4hVrQAAAgL/mpAzB/3pjQAAAgL/mpAzB/3pjQAAAgL90kg3B29deQAAAgL90kg3B29deQAAAgL9Vdg7BRftZQAAAgD+rvOPAoo2IQAAAgD9hnuzAnyaIQAAAgD9hnuzAnyaIQAAAgD8WJvTAwhmHQAAAgD8WJvTAwhmHQAAAgD/vjPrAGZmFQAAAgD/vjPrAGZmFQAAAgD9fTADBRJGDQAAAgD9fTADBRJGDQAAAgD/43QLB2EmBQAAAgD/43QLB2EmBQAAAgD9IDgXBTL59QAAAgD9IDgXBTL59QAAAgD+8IAfBiTd4QAAAgD+8IAfBiTd4QAAAgD9SFQnBZf9xQAAAgD9SFQnBZf9xQAAAgD8K7ArB4hVrQAAAgD8K7ArB4hVrQAAAgD/mpAzB/3pjQAAAgD/mpAzB/3pjQAAAgD90kg3B29deQAAAgD90kg3B29deQAAAgD9Vdg7BRftZQAAAgL9VHhjBEm0cPwAAgD9VHhjBEm0cPwAAgL9Vdg7BRftZQAAAgL8q9w/B3pVQQAAAgL8q9w/B3pVQQAAAgL90VxHBNElGQAAAgL90VxHBNElGQAAAgL8zlxLBSBU7QAAAgL8zlxLBSBU7QAAAgL9othPBGvouQAAAgL9othPBGvouQAAAgL8TtRTBqfchQAAAgL8TtRTBqfchQAAAgL8D1hXBVjcPQAAAgL8D1hXBVjcPQAAAgL/q1RbBF2fwPwAAgL/q1RbBF2fwPwAAgL87uRfB1bSsPwAAgL87uRfB1bSsPwAAgL8PBRjB47l9PwAAgL8PBRjB47l9PwAAgL9VHhjBEm0cPwAAgD9Vdg7BRftZQAAAgD8q9w/B3pVQQAAAgD8q9w/B3pVQQAAAgD90VxHBNElGQAAAgD90VxHBNElGQAAAgD8zlxLBSBU7QAAAgD8zlxLBSBU7QAAAgD9othPBGvouQAAAgD9othPBGvouQAAAgD8TtRTBqfchQAAAgD8TtRTBqfchQAAAgD8D1hXBVjcPQAAAgD8D1hXBVjcPQAAAgD/q1RbBF2fwPwAAgD/q1RbBF2fwPwAAgD87uRfB1bSsPwAAgD87uRfB1bSsPwAAgD8PBRjB47l9PwAAgD8PBRjB47l9PwAAgD9VHhjBEm0cPwAAgL9VHhjBJNqEPgAAgD9VHhjBJNqEPgAAgL9VHhjBEm0cPwAAgL9VHhjBJNqEPgAAgD9VHhjBEm0cPwAAgD9VHhjBJNqEPgAAgL9Vjg7Bu2QjwAAAgD9Vjg7Bu2QjwAAAgL9VHhjBJNqEPgAAgL9qvRfBMpDzvgAAgL9qvRfBMpDzvgAAgL+pmhbBf9SPvwAAgL+pmhbBf9SPvwAAgL/09hTBBwHQvwAAgL/09hTBBwHQvwAAgL9iMxPBeY79vwAAgL9iMxPBeY79vwAAgL9jfBHBQh4PwAAAgL9jfBHBQh4PwAAAgL93FhDBlr8ZwAAAgL93FhDBlr8ZwAAAgL9Vjg7Bu2QjwAAAgD9VHhjBJNqEPgAAgD9qvRfBMpDzvgAAgD9qvRfBMpDzvgAAgD+pmhbBf9SPvwAAgD+pmhbBf9SPvwAAgD/09hTBBwHQvwAAgD/09hTBBwHQvwAAgD9iMxPBeY79vwAAgD9iMxPBeY79vwAAgD9jfBHBQh4PwAAAgD9jfBHBQh4PwAAAgD93FhDBlr8ZwAAAgD93FhDBlr8ZwAAAgD9Vjg7Bu2QjwAAAgL9Vjg7Bu2QjwAAAgL8l2QzB7VoswAAAgL8l2QzB7VoswAAAgL/E/grBN4Q0wAAAgL/E/grBN4Q0wAAAgL8y/wjBmeA7wAAAgL8y/wjBmeA7wAAAgL9u2gbBE3BCwAAAgL9u2gbBE3BCwAAAgL96kATBpTJIwAAAgL96kATBpTJIwAAAgL9UIQLBTyhNwAAAgL9UIQLBTyhNwAAAgL/6Gf/AElFRwAAAgL/6Gf/AElFRwAAAgL/rpvnA7KxUwAAAgL/rpvnA7KxUwAAAgL95
6fPA3ztXwAAAgL956fPA3ztXwAAAgL+l4e3A6v1YwAAAgL+l4e3A6v1YwAAAgL/piejAB9tZwAAAgL/piejAB9tZwAAAgL+r/OLAuyRawAAAgD9Vjg7Bu2QjwAAAgD8l2QzB7VoswAAAgD8l2QzB7VoswAAAgD/E/grBN4Q0wAAAgD/E/grBN4Q0wAAAgD8y/wjBmeA7wAAAgD8y/wjBmeA7wAAAgD9u2gbBE3BCwAAAgD9u2gbBE3BCwAAAgD96kATBpTJIwAAAgD96kATBpTJIwAAAgD9UIQLBTyhNwAAAgD9UIQLBTyhNwAAAgD/6Gf/AElFRwAAAgD/6Gf/AElFRwAAAgD/rpvnA7KxUwAAAgD/rpvnA7KxUwAAAgD956fPA3ztXwAAAgD956fPA3ztXwAAAgD+l4e3A6v1YwAAAgD+l4e3A6v1YwAAAgD/piejAB9tZwAAAgD/piejAB9tZwAAAgD+r/OLAuyRawAAAgL+tysG+Em0NPwAAgD+tysG+Em0NPwAAgL+pmnM/os2oQAAAgD+pmnM/os2oQAAAgL+tysG+Em0NPwAAgL9LrbW+WOSpPwAAgL9LrbW+WOSpPwAAgL8kVZG+Iq4DQAAAgL8kVZG+Iq4DQAAAgL+2NzG+TkktQAAAgL+2NzG+TkktQAAAgL+oN8W84WtTQAAAgL+oN8W84WtTQAAAgL97liQ+FV52QAAAgL97liQ+FV52QAAAgL9TXbg+uumJQAAAgL9TXbg+uumJQAAAgL+WOhc/vLKXQAAAgL+WOhc/vLKXQAAAgL/Vl0M/MXegQAAAgL/Vl0M/MXegQAAAgL+pmnM/os2oQAAAgD+tysG+Em0NPwAAgD9LrbW+WOSpPwAAgD9LrbW+WOSpPwAAgD8kVZG+Iq4DQAAAgD8kVZG+Iq4DQAAAgD+2NzG+TkktQAAAgD+2NzG+TkktQAAAgD+oN8W84WtTQAAAgD+oN8W84WtTQAAAgD97liQ+FV52QAAAgD97liQ+FV52QAAAgD9TXbg+uumJQAAAgD9TXbg+uumJQAAAgD+WOhc/vLKXQAAAgD+WOhc/vLKXQAAAgD/Vl0M/MXegQAAAgD/Vl0M/MXegQAAAgD+pmnM/os2oQAAAgL9VQ5NAok3gQAAAgD9VQ5NAok3gQAAAgL+pmnM/os2oQAAAgL/vB6I/Xay0QAAAgL/vB6I/Xay0QAAAgL+8Bck//iK+QAAAgL+8Bck//iK+QAAAgL/vXu4/dKrFQAAAgL/vXu4/dKrFQAAAgL+o7QpAU0LMQAAAgL+o7QpAU0LMQAAAgL9vvR9AmurRQAAAgL9vvR9AmurRQAAAgL/NnjVASaPWQAAAgL/NnjVASaPWQAAAgL/CkUxAYGzaQAAAgL/CkUxAYGzaQAAAgL9OlmRA30XdQAAAgL9OlmRA30XdQAAAgL9xrH1Axi/fQAAAgL9xrH1Axi/fQAAAgL/544hAKwbgQAAAgL/544hAKwbgQAAAgL9VQ5NAok3gQAAAgD+pmnM/os2oQAAAgD/vB6I/Xay0QAAAgD/vB6I/Xay0QAAAgD+8Bck//iK+QAAAgD+8Bck//iK+QAAAgD/vXu4/dKrFQAAAgD/vXu4/dKrFQAAAgD+o7QpAU0LMQAAAgD+o7QpAU0LMQAAAgD9vvR9AmurRQAAAgD9vvR9AmurRQAAAgD/NnjVASaPWQAAAgD/NnjVASaPWQAAAgD/CkUxAYGzaQAAAgD/CkUxAYGzaQAAAgD9OlmRA30XdQAAAgD9OlmRA30XdQAAAgD9xrH1Axi/fQAAAgD9xrH1Axi/fQAAAgD/544hAKwbgQAAAgD/544hAKwbgQAAAgD9VQ5NAok3gQAAAgL9VI/tAom2uQAAAgD9VI/tAom2uQAAAgL9VQ5NAok3gQAAAgL8n8KJAzIvfQAAAgL8n8KJAzIvfQAAAgL/AWrBA4pHdQAAAgL/AWrBA4pHdQAAAgL+P3rtAFL7aQAAAgL+P
3rtAFL7aQAAAgL+W2sZA7OvWQAAAgL+W2sZA7OvWQAAAgL/VTtFAahvSQAAAgL/VTtFAahvSQAAAgL9LO9tAjUzMQAAAgL9LO9tAjUzMQAAAgL/6n+RAVn/FQAAAgL/6n+RAVn/FQAAAgL/gfO1AxbO9QAAAgL/gfO1AxbO9QAAAgL+sf/RAz2m2QAAAgL+sf/RAz2m2QAAAgL9VI/tAom2uQAAAgD9VQ5NAok3gQAAAgD8n8KJAzIvfQAAAgD8n8KJAzIvfQAAAgD/AWrBA4pHdQAAAgD/AWrBA4pHdQAAAgD+P3rtAFL7aQAAAgD+P3rtAFL7aQAAAgD+W2sZA7OvWQAAAgD+W2sZA7OvWQAAAgD/VTtFAahvSQAAAgD/VTtFAahvSQAAAgD9LO9tAjUzMQAAAgD9LO9tAjUzMQAAAgD/6n+RAVn/FQAAAgD/6n+RAVn/FQAAAgD/gfO1AxbO9QAAAgD/gfO1AxbO9QAAAgD+sf/RAz2m2QAAAgD+sf/RAz2m2QAAAgD9VI/tAom2uQAAAgL+rIQFBos3YQAAAgD+rIQFBos3YQAAAgL9VI/tAom2uQAAAgL+rIQFBos3YQAAAgD9VI/tAom2uQAAAgD+rIQFBos3YQAAAgL+rwTBBos3YQAAAgD+rwTBBos3YQAAAgL+rIQFBos3YQAAAgL+rwTBBos3YQAAAgD+rIQFBos3YQAAAgD+rwTBBos3YQAAAgL+rwTBBL3kswQAAgD+rwTBBL3kswQAAgL+rwTBBos3YQAAAgL+rwTBBL3kswQAAgD+rwTBBos3YQAAAgD+rwTBBL3kswQAAgL9Vw/RAL3kswQAAgD9Vw/RAL3kswQAAgL+rwTBBL3kswQAAgL9Vw/RAL3kswQAAgD+rwTBBL3kswQAAgD9Vw/RAL3kswQAAgL9Vw/RAXhKawAAAgD9Vw/RAXhKawAAAgL9Vw/RAL3kswQAAgL9Vw/RAXhKawAAAgD9Vw/RAL3kswQAAgD9Vw/RAXhKawAAAgL9Vg5JAXnLEwAAAgD9Vg5JAXnLEwAAAgL9Vw/RAXhKawAAAgL/PvetANlejwAAAgL/PvetANlejwAAAgL8HFuNAsrSqwAAAgL8HFuNAsrSqwAAAgL+w3NpAEouwwAAAgL+w3NpAEouwwAAAgL8NOtJAi6G1wAAAgL8NOtJAi6G1wAAAgL8gLslAHfi5wAAAgL8gLslAHfi5wAAAgL/puL9AyI69wAAAgL/puL9AyI69wAAAgL9o2rVAjWXAwAAAgL9o2rVAjWXAwAAAgL+bkqtAa3zCwAAAgL+bkqtAa3zCwAAAgL+F4aBAY9PDwAAAgL+F4aBAY9PDwAAAgL88yJlAn0rEwAAAgL88yJlAn0rEwAAAgL9Vg5JAXnLEwAAAgD9Vw/RAXhKawAAAgD/PvetANlejwAAAgD/PvetANlejwAAAgD8HFuNAsrSqwAAAgD8HFuNAsrSqwAAAgD+w3NpAEouwwAAAgD+w3NpAEouwwAAAgD8NOtJAi6G1wAAAgD8NOtJAi6G1wAAAgD8gLslAHfi5wAAAgD8gLslAHfi5wAAAgD/puL9AyI69wAAAgD/puL9AyI69wAAAgD9o2rVAjWXAwAAAgD9o2rVAjWXAwAAAgD+bkqtAa3zCwAAAgD+bkqtAa3zCwAAAgD+F4aBAY9PDwAAAgD+F4aBAY9PDwAAAgD88yJlAn0rEwAAAgD88yJlAn0rEwAAAgD9Vg5JAXnLEwAAAgL+pGng/XjKMwAAAgD+pGng/XjKMwAAAgL9Vg5JAXnLEwAAAgL/bHIJAiLDDwAAAgL/bHIJAiLDDwAAAgL8qDWhAnrbBwAAAgL8qDWhAnrbBwAAAgL8ayU9A0OK+wAAAgL8ayU9A0OK+wAAAgL+PjThAqBC7wAAAgL+PjThAqBC7wAAAgL+MWiJAJUC2wAAAgL+MWiJAJUC2wAAAgL8PMA1ASXGwwAAAgL8PMA1ASXGwwAAAgL8z
# Copyright 2021 san kim
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from ke_t5 import pipe as seq_pipe
from .task_meta import NIKL_META, KLUE_META
from . import preprocessors, metrics, postprocessors
from .utils import get_vocabulary
VOCABULARY = get_vocabulary()
GENERATIVE_OUTPUT_FEATURES = {
"inputs": seq_pipe.Feature(
tokenizer=VOCABULARY, add_eos=False, required=False),
"targets": seq_pipe.Feature(
tokenizer=VOCABULARY, add_eos=True)
}
DEFAULT_OUTPUT_FEATURES = {
"inputs": seq_pipe.Feature(
tokenizer=VOCABULARY, add_eos=False, required=False)
}
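Throughout this file, `seq_pipe.preprocessors.rekey` is used to remap raw dataset columns onto the feature names the pipeline expects. A minimal, self-contained sketch of that behavior (an assumption based on the seqio-style convention; the real implementation may handle missing keys or drop unmapped ones differently) could look like:

```python
def rekey(example, key_map=None):
    """Build a new example whose keys come from key_map.

    key_map maps new feature names to existing column names, mirroring the
    ``key_map={"id": "guid", ...}`` calls used in the task registrations below.
    """
    if key_map is None:
        return example
    return {new_key: example[old_key] for new_key, old_key in key_map.items()}

# Hypothetical raw example in the shape of a KLUE ynat record.
sample = {"guid": "ynat-v1_train_00000", "title": "제목 예시", "label": 3}
remapped = rekey(sample, key_map={"id": "guid", "title": "title", "label": "label"})
```

Only keys listed in `key_map` survive, which is why every registration below ends with a second `rekey` that renames tokenized features to `input_ids` / `attention_mask` / `labels`.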
# ================================================
# ==================== KLUE ======================
# ================================================
# ============ KLUE topic classification: Generative ============
seq_pipe.TaskRegistry.add(
"klue_tc_gen",
seq_pipe.HFDataSource('klue', 'ynat'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"title": "title",
"label": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='klue_tc',
input_keys=['title'],
label_names=KLUE_META['tc_classes'],
with_feature_key=True,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('exact_match_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'tc':{
"early_stopping": True,
"max_length": 5,
"num_beams": 1,
"prefix": "klue_tc title: {}"
},
},
},
num_proc=4,
)
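For intuition, `base_preproc_for_classification` with `with_feature_key=True` presumably turns each record into a prefixed text-to-text pair: the benchmark name plus `key: value` segments as the input, and the class-name string as the generative target. A hedged sketch of that transformation (the actual formatting lives in `preprocessors.py` and may differ; the label list here is illustrative, not the real `KLUE_META['tc_classes']`):

```python
def to_text2text(example, benchmark_name, input_keys, label_names,
                 with_feature_key=True):
    """Format one classification example as an (inputs, targets) string pair."""
    parts = [benchmark_name]
    for key in input_keys:
        parts.append(f"{key}: {example[key]}" if with_feature_key
                     else str(example[key]))
    # For the generative variant, the target is the class name itself.
    return {"inputs": " ".join(parts),
            "targets": label_names[example["label"]]}

ex = {"title": "삼성전자, 2분기 실적 발표", "label": 2}
pair = to_text2text(ex, "klue_tc", ["title"], ["IT", "경제", "사회"])
```

This also explains the `"prefix": "klue_tc title: {}"` entry in `task_specific_params` above: at inference time the same template must be applied to raw titles.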
# ============ KLUE topic classification: Classifier ============
seq_pipe.TaskRegistry.add(
"klue_tc",
seq_pipe.HFDataSource('klue', 'ynat'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"title": "title",
"label": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='klue_tc',
input_keys=['title'],
label_names=None,
with_feature_key=True,
no_label_idx=7,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.accuracy_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'num_labels': len(KLUE_META['tc_classes']),
'id2label': {idx:key for idx, key in enumerate(KLUE_META['tc_classes'])},
'label2id': {key:idx for idx, key in enumerate(KLUE_META['tc_classes'])},
'problem_type': "single_label_classification",
},
num_proc=4,
)
# ============ KLUE NLI: Generative ============
seq_pipe.TaskRegistry.add(
"klue_nli_gen",
seq_pipe.HFDataSource('klue', 'nli'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"premise": "premise",
"hypothesis": "hypothesis",
"label": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='klue_nli',
input_keys=['premise', 'hypothesis'],
label_names=KLUE_META['nli_classes'],
with_feature_key=True,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('exact_match_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'nli':{
"early_stopping": True,
"max_length": 5,
"num_beams": 1,
"prefix": "klue_nli premise: {premise} hypothesis: {hypothesis}"
},
},
},
num_proc=4,
)
# ============ KLUE NLI: Classifier ============
seq_pipe.TaskRegistry.add(
"klue_nli",
seq_pipe.HFDataSource('klue', 'nli'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"premise": "premise",
"hypothesis": "hypothesis",
"label": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='klue_nli',
input_keys=['premise', 'hypothesis'],
label_names=None,
with_feature_key=True,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
metric_fns=[
metrics.accuracy_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('accuracy'),
output_features=DEFAULT_OUTPUT_FEATURES,
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': len(KLUE_META['nli_classes']),
'id2label': {idx:key for idx, key in enumerate(KLUE_META['nli_classes'])},
'label2id': {key:idx for idx, key in enumerate(KLUE_META['nli_classes'])},
'problem_type': "single_label_classification",
},
)
# ============ KLUE STS: Generative ============
seq_pipe.TaskRegistry.add(
"klue_sts_gen",
seq_pipe.HFDataSource('klue', 'sts'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"sentence1": "sentence1",
"sentence2": "sentence2",
"label": "labels",
}),
functools.partial(
preprocessors.base_preproc_for_regression,
benchmark_name='klue_sts',
input_keys=['sentence1', 'sentence2'],
with_feature_key=True,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_and_float,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.pearson_corrcoef_dict, metrics.spearman_corrcoef_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('spearman_corrcoef'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
)
# ============ KLUE STS: Regressor ============
seq_pipe.TaskRegistry.add(
"klue_sts_reg",
seq_pipe.HFDataSource('klue', 'sts'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"sentence1": "sentence1",
"sentence2": "sentence2",
"label": "labels",
}),
functools.partial(
preprocessors.base_preproc_for_regression,
benchmark_name='klue_sts',
input_keys=['sentence1', 'sentence2'],
is_string_tgt=False,
with_feature_key=True
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
logit_to_id=False,
metric_fns=[
metrics.pearson_corrcoef_dict, metrics.spearman_corrcoef_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('spearman_corrcoef'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': 1,
'problem_type': "regression",
},
)
# ============ KLUE STS: Classifier ============
seq_pipe.TaskRegistry.add(
"klue_sts",
seq_pipe.HFDataSource('klue', 'sts'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"sentence1": "sentence1",
"sentence2": "sentence2",
"label": "labels",
}),
functools.partial(
preprocessors.base_preproc_for_regression,
benchmark_name='klue_sts',
input_keys=['sentence1', 'sentence2'],
is_string_tgt=False,
is_classification=True,
with_feature_key=True
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
logit_to_id=False,
metric_fns=[
metrics.pearson_corrcoef_dict, metrics.spearman_corrcoef_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('spearman_corrcoef'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': 1,
'problem_type': "regression",
},
)
# ============ KLUE RE: Generative ============
seq_pipe.TaskRegistry.add(
"klue_re_gen",
seq_pipe.HFDataSource('klue', 're'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"sentence": "sentence",
"subject_entity": "subject_entity",
"object_entity": "object_entity",
"label": "label",
}),
functools.partial(
preprocessors.re_preproc_for_classification,
benchmark_name='klue_re',
label_names=KLUE_META['re_relations']
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('exact_match_str'),
additional_task_info={
'task_specific_params': {
're':{
"early_stopping": True,
"max_length": 10,
"num_beams": 1,
"prefix": "klue_re ",
},
},
},
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
)
# ============ KLUE RE: Classifier ============
seq_pipe.TaskRegistry.add(
"klue_re",
seq_pipe.HFDataSource('klue', 're'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"sentence": "sentence",
"subject_entity": "subject_entity",
"object_entity": "object_entity",
"label": "label",
}),
functools.partial(
preprocessors.re_preproc_for_classification,
benchmark_name='klue_re',
label_names=None
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
metric_fns=[
metrics.accuracy_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('accuracy'),
additional_task_info={
'num_labels': len(KLUE_META['re_relations']),
'id2label': {idx:key for idx, key in enumerate(KLUE_META['re_relations'])},
'label2id': {key:idx for idx, key in enumerate(KLUE_META['re_relations'])},
'problem_type': "single_label_classification",
},
)
# ============ KLUE RE: Classifier with sbj. obj. tk index ============
seq_pipe.TaskRegistry.add(
"klue_re_tk_idx",
seq_pipe.HFDataSource('klue', 're'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"sentence": "sentence",
"subject_entity": "subject_entity",
"object_entity": "object_entity",
"label": "label",
}),
functools.partial(
preprocessors.re_preproc_for_classification_with_idx,
benchmark_name='klue_re',
label_names=None
),
functools.partial(
preprocessors.tokenize_re_with_tk_idx,
output_features=DEFAULT_OUTPUT_FEATURES,
input_key='inputs'
),
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "label",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
model_input_columns=['input_ids', 'attention_mask', 'labels', 'entity_token_idx'],
num_proc=4,
metric_fns=[
metrics.f1_score_micro_sample_weight_dict, metrics.accuracy_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('f1_score_micro_sample_weight'),
additional_task_info={
'num_labels': len(KLUE_META['re_relations']),
'id2label': {idx:key for idx, key in enumerate(KLUE_META['re_relations'])},
'label2id': {key:idx for idx, key in enumerate(KLUE_META['re_relations'])},
'problem_type': "single_label_classification",
},
)
# ============ KLUE MRC: Generative ============
seq_pipe.TaskRegistry.add(
"klue_mrc_gen",
seq_pipe.HFDataSource('klue', 'mrc'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"context": "context",
"question": "question",
"answers": "answers",
"is_impossible": "is_impossible",
}),
functools.partial(
preprocessors.preprocess_quad,
benchmark_name='klue_mrc',
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict, metrics.f1_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('f1_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'mrc':{
"early_stopping": True,
"max_length": 30,
"num_beams": 4,
"prefix": "klue_mrc question: {question} context: {context}",
},
},
},
num_proc=4,
)
# ============ KLUE MRC: Generative - Context free ============
seq_pipe.TaskRegistry.add(
"klue_mrc_gen_context_free",
seq_pipe.HFDataSource('klue', 'mrc'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "guid",
"context": "context",
"question": "question",
"answers": "answers",
"is_impossible": "is_impossible",
}),
functools.partial(
preprocessors.preprocess_quad,
benchmark_name='klue_mrc',
include_context=False
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict, metrics.f1_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('f1_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'mrc':{
"early_stopping": True,
"max_length": 30,
"num_beams": 4,
"prefix": "klue_mrc trivia question: {question}",
},
},
},
num_proc=4,
)
# ============ KLUE NER: Classifier ============
seq_pipe.TaskRegistry.add(
"klue_ner",
seq_pipe.HFDataSource('klue', 'ner'),
preprocessors=[
preprocessors.klue_ne_example_fmt,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"inputs": "text",
"tag_info": "NE",
}),
functools.partial(
preprocessors.tokenize_and_preproc_iob24klue,
tags=KLUE_META['ner_tags'],
            iob2_tags=KLUE_META['ner_iob2_tags'],
tag_label='tag_info'
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
                "targets": KLUE_META['ner_iob2_tags'].index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length,
functools.partial(metrics.char_level_f1_score_klue_dict, iob2_tags=KLUE_META['ner_iob2_tags'])
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'num_labels': len(KLUE_META['ner_iob2_tags']),
'id2label': {idx:key for idx, key in enumerate(KLUE_META['ner_iob2_tags'])},
'label2id': {key:idx for idx, key in enumerate(KLUE_META['ner_iob2_tags'])},
},
num_proc=4,
)
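The NER registrations above align character-span annotations to per-token IOB2 labels. As a self-contained sketch of the IOB2 scheme itself (the real `tokenize_and_preproc_iob24klue` / `tokenize_and_preproc_iob2` operate on subword tokens and the KLUE/NIKL span formats, so treat this character-level version as an assumption):

```python
def char_iob2(text, spans, iob2_tags):
    """Return one IOB2 tag id per character of `text`.

    `spans` is a list of dicts with character offsets 'begin'/'end' and a
    'label'; characters outside any span get the 'O' tag.
    """
    tags = [iob2_tags.index("O")] * len(text)
    for span in spans:
        tags[span["begin"]] = iob2_tags.index("B-" + span["label"])  # span start
        for i in range(span["begin"] + 1, span["end"]):
            tags[i] = iob2_tags.index("I-" + span["label"])          # span interior
    return tags

# Hypothetical example: "김철수" tagged as a person (PS) entity.
iob2 = ["O", "B-PS", "I-PS"]
labels = char_iob2("김철수 기자", [{"begin": 0, "end": 3, "label": "PS"}], iob2)
```

This is also why `trim_and_pad` pads `targets` with `index('O')` rather than the tokenizer's pad id: padding positions must carry a valid (outside) tag.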
# ================================================
# ==================== NIKL ======================
# ================================================
# ============ NIKL NER: Classifier ============
seq_pipe.TaskRegistry.add(
"nikl_ner",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'ne.v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"inputs": "form",
"tag_info": "NE",
}),
functools.partial(
preprocessors.tokenize_and_preproc_iob2,
tags=NIKL_META['ne_tags'],
iob2_tags=NIKL_META['ne_iob2_tags'],
tag_label='tag_info'
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
"targets": NIKL_META['ne_iob2_tags'].index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': len(NIKL_META['ne_iob2_tags']),
'id2label': {idx:key for idx, key in enumerate(NIKL_META['ne_iob2_tags'])},
'label2id': {key:idx for idx, key in enumerate(NIKL_META['ne_iob2_tags'])},
},
)
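The `trim_and_pad` preprocessor used by the token-classification tasks reduces, in essence, to fixed-length truncate-then-pad per key, with the pad id chosen per feature via `key_pad_id_map`. A minimal sketch of that core operation (the real `seq_pipe.preprocessors.trim_and_pad` presumably also produces the attention mask; that part is omitted here):

```python
def trim_and_pad(seq, max_len, pad_id):
    """Truncate `seq` to `max_len`, then right-pad with `pad_id`."""
    seq = seq[:max_len]
    return seq + [pad_id] * (max_len - len(seq))

padded = trim_and_pad([5, 8, 13], 5, 0)       # pads short sequences
trimmed = trim_and_pad([1, 2, 3, 4, 5, 6], 5, 0)  # truncates long ones
```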
# ============ NIKL NER: Classifier - split ============
seq_pipe.TaskRegistry.add(
"nikl_ner_split",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'ne.v1.0.split'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"inputs": "form",
"tag_info": "NE",
}),
functools.partial(
preprocessors.tokenize_and_preproc_iob2,
tags=NIKL_META['ne_tags'],
iob2_tags=NIKL_META['ne_iob2_tags'],
tag_label='tag_info'
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
"targets": NIKL_META['ne_iob2_tags'].index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': len(NIKL_META['ne_iob2_tags']),
'id2label': {idx:key for idx, key in enumerate(NIKL_META['ne_iob2_tags'])},
'label2id': {key:idx for idx, key in enumerate(NIKL_META['ne_iob2_tags'])},
},
)
# ============ NIKL NER2020: Classifier ============
seq_pipe.TaskRegistry.add(
"nikl_ner2020",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'ne.2020.v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"inputs": "form",
"tag_info": "ne",
}),
functools.partial(
preprocessors.tokenize_and_preproc_iob2,
tags=NIKL_META['ne2020_tags'],
iob2_tags=NIKL_META['ne2020_iob2_tags'],
tag_label='tag_info'
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
"targets": NIKL_META['ne2020_iob2_tags'].index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': len(NIKL_META['ne2020_iob2_tags']),
'id2label': {idx:key for idx, key in enumerate(NIKL_META['ne2020_iob2_tags'])},
'label2id': {key:idx for idx, key in enumerate(NIKL_META['ne2020_iob2_tags'])},
},
)
# ============ NIKL NER2020: Classifier - split ============
seq_pipe.TaskRegistry.add(
"nikl_ner2020_split",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'ne.2020.v1.0.split'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"inputs": "form",
"tag_info": "ne",
}),
functools.partial(
preprocessors.tokenize_and_preproc_iob2,
tags=NIKL_META['ne2020_tags'],
iob2_tags=NIKL_META['ne2020_iob2_tags'],
tag_label='tag_info'
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
"targets": NIKL_META['ne2020_iob2_tags'].index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': len(NIKL_META['ne2020_iob2_tags']),
'id2label': {idx:key for idx, key in enumerate(NIKL_META['ne2020_iob2_tags'])},
'label2id': {key:idx for idx, key in enumerate(NIKL_META['ne2020_iob2_tags'])},
},
)
# ============ NIKL summarization summary: Generative ============
seq_pipe.TaskRegistry.add(
"nikl_summarization_summary",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'summarization.v1.0.summary'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "document_id",
"inputs": "article",
"targets": "highlights",
}),
functools.partial(
preprocessors.base_preproc_for_conditional_generation,
prefix='summarize_summary:',
input_keys=['inputs'],
with_feature_key=False,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.bleu_dict, metrics.rouge_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('rougeLsum'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'summarization':{
"early_stopping": True,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize_summary: "
},
},
},
num_proc=4,
)
# ============ NIKL summarization summary: Generative - split ============
seq_pipe.TaskRegistry.add(
"nikl_summarization_summary_split",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'summarization.v1.0.summary.split'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "document_id",
"inputs": "article",
"targets": "highlights",
}),
functools.partial(
preprocessors.base_preproc_for_conditional_generation,
prefix='summarize_summary:',
input_keys=['inputs'],
with_feature_key=False,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.bleu_dict, metrics.rouge_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('rougeLsum'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'summarization':{
"early_stopping": True,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize_summary: "
},
},
},
num_proc=4,
)
# ============ NIKL summarization topic: Generative ============
seq_pipe.TaskRegistry.add(
"nikl_summarization_topic",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'summarization.v1.0.topic'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "document_id",
"inputs": "article",
"targets": "highlights",
}),
functools.partial(
preprocessors.base_preproc_for_conditional_generation,
prefix='summarize_topic:',
input_keys=['inputs'],
with_feature_key=False,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
metric_fns=[
metrics.bleu_dict, metrics.rouge_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('rougeLsum'),
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'summarization':{
"early_stopping": True,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize_topic: "
},
},
},
num_proc=4,
)
# ============ NIKL summarization topic: Generative - split ============
seq_pipe.TaskRegistry.add(
"nikl_summarization_topic_split",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'summarization.v1.0.topic.split'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "document_id",
"inputs": "article",
"targets": "highlights",
}),
functools.partial(
preprocessors.base_preproc_for_conditional_generation,
prefix='summarize_topic:',
input_keys=['inputs'],
with_feature_key=False,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
metric_fns=[
metrics.bleu_dict, metrics.rouge_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('rougeLsum'),
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'summarization':{
"early_stopping": True,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize_topic: "
},
},
},
num_proc=4,
)
# ============ NIKL CoLA: Generative ============
seq_pipe.TaskRegistry.add(
"nikl_cola_gen",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'cola.v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "idx",
"sentence": "sentence",
"label": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='nikl_cola',
input_keys=['sentence'],
label_names=NIKL_META['cola_classes'],
with_feature_key=True,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_and_string_to_label,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY,
label_info=NIKL_META['cola_classes'],
),
metric_fns=[
metrics.matthews_corrcoef_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('matthews_corrcoef'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'cola':{
"early_stopping": True,
"max_length": 5,
"num_beams": 1,
"prefix": "nikl_cola sentence: {}"
},
},
},
num_proc=4,
)
# ============ NIKL CoLA: Classifier ============
seq_pipe.TaskRegistry.add(
"nikl_cola",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'cola.v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "idx",
"sentence": "sentence",
"label": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='nikl_cola',
input_keys=['sentence'],
label_names=None,
with_feature_key=True,
no_label_idx=0,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.matthews_corrcoef_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('matthews_corrcoef'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'num_labels': len(NIKL_META['cola_classes']),
'id2label': {idx:key for idx, key in enumerate(NIKL_META['cola_classes'])},
'label2id': {key:idx for idx, key in enumerate(NIKL_META['cola_classes'])},
'problem_type': "single_label_classification",
},
num_proc=4,
)
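The classifier registrations in this file all derive `num_labels`, `id2label`, and `label2id` from a single class list, which keeps the two mappings mutual inverses. A self-contained illustration of the pattern — the class names below are stand-ins, not the actual contents of `NIKL_META['cola_classes']`:

```python
# Stand-in class list; the real task reads NIKL_META['cola_classes'].
classes = ["unacceptable", "acceptable"]

num_labels = len(classes)
id2label = {idx: key for idx, key in enumerate(classes)}
label2id = {key: idx for idx, key in enumerate(classes)}

# The two dicts are mutual inverses by construction.
assert all(label2id[id2label[i]] == i for i in range(num_labels))
print(id2label)  # {0: 'unacceptable', 1: 'acceptable'}
```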
from .preprocessors import _DEFAULT_SPAN_TAGS
# ============ NIKL Coref Resol: Span Cands Extraction ============
seq_pipe.TaskRegistry.add(
"nikl_cr_span_extraction",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'cr.2020.v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"inputs": "text",
"CR": "CR",
}),
functools.partial(
preprocessors.create_doc_span,
output_features=DEFAULT_OUTPUT_FEATURES,
),
functools.partial(
preprocessors.tokenize_and_preproc_cr_span_extraction,
output_features=DEFAULT_OUTPUT_FEATURES,
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
"targets": _DEFAULT_SPAN_TAGS.index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=4,
additional_task_info={
'num_labels': len(_DEFAULT_SPAN_TAGS),
'id2label': {idx:key for idx, key in enumerate(_DEFAULT_SPAN_TAGS)},
'label2id': {key:idx for idx, key in enumerate(_DEFAULT_SPAN_TAGS)},
},
)
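The span-extraction task above pads `targets` with `_DEFAULT_SPAN_TAGS.index('O')` rather than the tokenizer pad id, so padded label positions decode to the "outside" tag instead of a real span label. A minimal sketch of that behavior — the three-tag BIO-style list and the `trim_and_pad` helper below are assumptions for illustration, since the real definitions live in `.preprocessors` and `seq_pipe.preprocessors`:

```python
# Assumed stand-in for _DEFAULT_SPAN_TAGS; a BIO-style list is not guaranteed.
span_tags = ["O", "B", "I"]
o_tag_id = span_tags.index("O")


def trim_and_pad(ids, max_len, pad_id):
    """Trim a sequence to max_len, then right-pad it with pad_id (sketch)."""
    ids = ids[:max_len]
    return ids + [pad_id] * (max_len - len(ids))


targets = [1, 2, 2, 0]  # B I I O
print(trim_and_pad(targets, 6, o_tag_id))  # [1, 2, 2, 0, 0, 0]
```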
# ============ NIKL Coref Resol: Coref Span Prediction ============
seq_pipe.TaskRegistry.add(
"nikl_cr_coref_span_prediction",
seq_pipe.HFDataSource('KETI-AIR/nikl', 'cr.2020.v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"inputs": "text",
"CR": "CR",
}),
functools.partial(
preprocessors.create_doc_span,
output_features=DEFAULT_OUTPUT_FEATURES,
),
functools.partial(
preprocessors.create_cr_example,
output_features=DEFAULT_OUTPUT_FEATURES,
),
functools.partial(
seq_pipe.preprocessors.trim_and_pad,
key_pad_id_map={
"inputs": VOCABULARY.pad_token_id,
"targets": _DEFAULT_SPAN_TAGS.index('O'),
}
),
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.token_accuracy_dict_variable_length
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('token_accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
num_proc=8,
additional_task_info={
'num_labels': len(_DEFAULT_SPAN_TAGS),
'id2label': {idx:key for idx, key in enumerate(_DEFAULT_SPAN_TAGS)},
'label2id': {key:idx for idx, key in enumerate(_DEFAULT_SPAN_TAGS)},
},
)
# ================================================
# =================== KorQuAD ====================
# ================================================
# ============ KorQuAD v1.0: Generative ============
seq_pipe.TaskRegistry.add(
"korquad_gen",
seq_pipe.HFDataSource('KETI-AIR/korquad', 'v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"context": "context",
"question": "question",
"answers": "answers",
}),
functools.partial(
preprocessors.preprocess_quad,
benchmark_name='korquad',
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict, metrics.f1_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('f1_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'mrc':{
"early_stopping": True,
"max_length": 30,
"num_beams": 4,
"prefix": "korquad question: {question} context: {context}",
},
},
},
num_proc=4,
)
# ============ KorQuAD v1.0: Generative - Context free ============
seq_pipe.TaskRegistry.add(
"korquad_gen_context_free",
seq_pipe.HFDataSource('KETI-AIR/korquad', 'v1.0'),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"id": "id",
"context": "context",
"question": "question",
"answers": "answers",
}),
functools.partial(
preprocessors.preprocess_quad,
benchmark_name='korquad',
include_context=False
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict, metrics.f1_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('f1_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'mrc':{
"early_stopping": True,
"max_length": 30,
"num_beams": 4,
"prefix": "korquad trivia question: {question}",
},
},
},
num_proc=4,
)
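The `prefix` values stored in `task_specific_params` look like `str.format` templates with named fields. How the downstream consumer renders them is not shown in this file, so format()-style substitution in this sketch is an assumption:

```python
# Hedged sketch: rendering a task_specific_params prefix template.
prefix = "korquad question: {question} context: {context}"
rendered = prefix.format(question="Q1", context="C1")
print(rendered)  # korquad question: Q1 context: C1
```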
# ================================================
# ================== Kor 3i4k ====================
# ================================================
_KOR_3I4K_CLASSES = ["fragment", "statement", "question", "command", "rhetorical question", "rhetorical command", "intonation-dependent utterance"]
# ============ Kor 3i4k classification: Generative ============
seq_pipe.TaskRegistry.add(
"kor_3i4k_gen",
seq_pipe.HFDataSource('kor_3i4k', None),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"text": "text",
"targets": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='kor_3i4k',
input_keys=['text'],
label_names=_KOR_3I4K_CLASSES,
with_feature_key=True,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=GENERATIVE_OUTPUT_FEATURES,
train_postprocess_fn=functools.partial(
postprocessors.decode_for_generator,
decode_keys=['predictions', 'labels'],
tokenizer=VOCABULARY
),
metric_fns=[
metrics.exact_match_str_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('exact_match_str'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'task_specific_params': {
'kor_3i4k':{
"early_stopping": True,
"max_length": 5,
"num_beams": 1,
"prefix": "kor_3i4k text: {}"
},
},
},
num_proc=4,
)
# ============ Kor 3i4k: Classifier ============
seq_pipe.TaskRegistry.add(
"kor_3i4k",
seq_pipe.HFDataSource('kor_3i4k', None),
preprocessors=[
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"text": "text",
"targets": "label",
}),
functools.partial(
preprocessors.base_preproc_for_classification,
benchmark_name='kor_3i4k',
input_keys=['text'],
label_names=None,
with_feature_key=True,
no_label_idx=0,
),
seq_pipe.preprocessors.tokenize_output_features,
seq_pipe.preprocessors.append_eos_after_trim_output_features,
seq_pipe.preprocessors.trim_and_pad_output_features,
functools.partial(
seq_pipe.preprocessors.rekey,
key_map={
"input_ids": "inputs",
"attention_mask": "inputs_attention_mask",
"labels": "targets",
}),
],
output_features=DEFAULT_OUTPUT_FEATURES,
metric_fns=[
metrics.accuracy_dict
],
best_fn=seq_pipe.evaluation.GreaterIsTheBest('accuracy'),
model_input_columns=['input_ids', 'attention_mask', 'labels'],
additional_task_info={
'num_labels': len(_KOR_3I4K_CLASSES),
'id2label': {idx:key for idx, key in enumerate(_KOR_3I4K_CLASSES)},
'label2id': {key:idx for idx, key in enumerate(_KOR_3I4K_CLASSES)},
'problem_type': "single_label_classification",
},
num_proc=4,
)
if __name__ == "__main__":
seq_pipe.set_hf_data_dir_override("./data")
seq_pipe.set_hf_cache_dir_override("./cache_dir/huggingface_datasets")
task = seq_pipe.get_task('klue_ner')
dataset = task.get_dataset(
sequence_length={"inputs": 512, "targets": 512},
split="train"
)
# Print the first 5 examples.
for _, ex in zip(range(5), iter(dataset)):
print(ex)
# import torch
# dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
# dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
# print(next(iter(dataloader)))
# ================================================================
# src/openpersonen/features/aanschrijfwijze/test/test_aanschrijfwijze.py
# from maykinmedia/open-personen (license: RSA-MD)
# ================================================================
from django.test import TestCase
from openpersonen.features.aanschrijfwijze import get_aanschrijfwijze
class TestGetAanschrijfwijzeWithPrefix(TestCase):
def test_aanschrijfwijze_with_prefix(self):
table_string = """
| aanduidingAanschrijving | samenstelling aanschrijfwijze | voorvoegsel | geslachtsnaam | voornamen | voorvoegsel partner | geslachtsnaam partner | aanschrijfwijze |
| E | VL VV GN | In het | Veld | Henk | van | Velzen | H. In het Veld |
| N | VL VV GN-VP GP | van | Velzen | Ingrid | In het | Veld | I. van Velzen-In het Veld |
| P | VL VP GP | In het | Veld | Suzanne | van | Velzen | S. van Velzen |
| V | VL VP GP-VV GN | van | Velzen | Fred | In het | Veld | F. In het Veld-van Velzen |
"""
# Convert table string to rows and remove empty rows, white spaces, and header row
table_rows = [
[item.strip() for item in row.strip().split("|") if item]
for row in table_string.split("\n")
if row.strip()
][1:]
for row in table_rows:
            (
                aanduiding_aanschrijving,
                _,
                voorvoegsel,
                geslachtsnaam,
                voornamen,
                voorvoegsel_partner,
                geslachtsnaam_partner,
                aanschrijfwijze,
            ) = row
            with self.subTest(aanduiding_aanschrijving=aanduiding_aanschrijving):
                result = get_aanschrijfwijze(
                    voorvoegsel,
                    geslachtsnaam,
                    voornamen,
                    voorvoegsel_partner,
                    geslachtsnaam_partner,
                    aanduiding_aanschrijving,
                    None,
                    None,
                    None,
                )
self.assertEqual(aanschrijfwijze, result)
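Every test in this module shares one idiom for turning the markdown-style table strings into rows: split on newlines, drop blank rows, split cells on `|`, strip whitespace, and discard the header row with `[1:]`. It can be exercised in isolation:

```python
# Standalone check of the table-parsing idiom used throughout these tests.
table = """
| col_a | col_b |
| 1     | x     |
| 2     | y     |
"""
rows = [
    [item.strip() for item in row.strip().split("|") if item]
    for row in table.split("\n")
    if row.strip()
][1:]
print(rows)  # [['1', 'x'], ['2', 'y']]
```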
class TestGetAanschrijfwijzeWithoutPrefix(TestCase):
def test_aanschrijfwijze_without_prefix(self):
table_string = """
| aanduidingAanschrijving | samenstelling aanschrijfwijze | geslachtsnaam | voornamen | aanschrijfwijze |
| E | VL GN | Groenen | Franklin | F. Groenen |
| N | VL GN-GP | Groenen | Franka | F. Groenen-Groenink |
| P | VL GP | Groenink | Johan Frank Robert | J.F.R. Groenen |
| V | VL GP-GN | Groenlo | Franka | F. Groenen-Groenlo |
"""
# Convert table string to rows and remove empty rows, white spaces, and header row
table_rows = [
[item.strip() for item in row.strip().split("|") if item]
for row in table_string.split("\n")
if row.strip()
][1:]
for row in table_rows:
(
aanduiding_aanschrijving,
_,
geslachtsnaam,
voornamen,
aanschrijfwijze,
) = row
geslachtsnaam_partner = None
if aanduiding_aanschrijving == "N":
last_name = aanschrijfwijze.split(" ", 1)[-1]
_, geslachtsnaam_partner = last_name.split("-")
elif aanduiding_aanschrijving == "P":
geslachtsnaam_partner = aanschrijfwijze.split(" ", 1)[-1]
elif aanduiding_aanschrijving == "V":
last_name = aanschrijfwijze.split(" ", 1)[-1]
geslachtsnaam_partner, _ = last_name.split("-")
with self.subTest(aanduiding_aanschrijving=aanduiding_aanschrijving):
result = get_aanschrijfwijze(
None,
geslachtsnaam,
voornamen,
None,
geslachtsnaam_partner,
aanduiding_aanschrijving,
None,
None,
None,
)
self.assertEqual(aanschrijfwijze, result)
class TestGetAanschrijfwijzePersonHasTitlePartnerDoesNot(TestCase):
def test_aanschrijfwijze_person_has_title_partner_does_not(self):
table_string = """
| aanduidingAanschrijving | samenstelling aanschrijfwijze | geslachtsnaam | voornamen | aanschrijfwijze |
| E | VL AT VV GN | Aedel | Hendrik Willem | H.W. graaf van den Aedel |
| N | VL AT VV GN-VP GP | Aedel | Wilhelmina | W. gravin van den Aedel-van der Veen |
| P | VL VP GP | Aedel | Frederique | F. van der Veen |
| V | VL VP GP-AT VV GN | Aedel | Emma Louise | E.L. van der Veen-gravin van den Aedel |
"""
# Convert table string to rows and remove empty rows, white spaces, and header row
table_rows = [
[item.strip() for item in row.strip().split("|") if item]
for row in table_string.split("\n")
if row.strip()
][1:]
for row in table_rows:
(
aanduiding_aanschrijving,
_,
geslachtsnaam,
voornamen,
aanschrijfwijze,
) = row
last_name_prefix = None
partner_last_name_prefix = None
partner_last_name = None
if aanduiding_aanschrijving == "E":
last_name = aanschrijfwijze.split(" ", 1)[-1]
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
elif aanduiding_aanschrijving == "N":
last_name = aanschrijfwijze.split(" ", 1)[-1]
last_name, partner_last_name = last_name.split("-")
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_aanschrijving == "P":
partner_last_name = aanschrijfwijze.split(" ", 1)[-1]
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_aanschrijving == "V":
last_name = aanschrijfwijze.split(" ", 1)[-1]
partner_last_name, last_name = last_name.split("-")
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
with self.subTest(aanduiding_aanschrijving=aanduiding_aanschrijving):
result = get_aanschrijfwijze(
last_name_prefix,
geslachtsnaam,
voornamen,
partner_last_name_prefix,
partner_last_name,
aanduiding_aanschrijving,
None,
None,
None,
)
self.assertEqual(aanschrijfwijze, result)
class TestGetAanschrijfwijzePersonHasPredicatePartnerDoesNot(TestCase):
def test_aanschrijfwijze_person_has_predicate_partner_does_not(self):
table_string = """
| adellijkeTitel_predikaat | aanduidingNaamgebruik | aanschrijfwijze |
| Jonkheer | Eigen | jonkheer T. van Hoogh |
| Jonkvrouw | Eigen | jonkvrouw T. van Hoogh |
| Jonkvrouw | Partner na eigen | jonkvrouw T. van Hoogh-in het Veld |
| Jonkvrouw | Partner | T. in het Veld |
| Jonkheer | Partner | T. in het Veld |
| Jonkvrouw | Partner voor eigen | T. in het Veld-jonkvrouw van Hoogh |
| Jonkheer | Partner na eigen | jonkheer T. van Hoogh-in het Veld |
"""
aanduiding_naamgebruik_to_enumeration = {
"Eigen": "E",
"Partner na eigen": "N",
"Partner": "P",
"Partner voor eigen": "V",
}
# Convert table string to rows and remove empty rows, white spaces, and header row
table_rows = [
[item.strip() for item in row.strip().split("|") if item]
for row in table_string.split("\n")
if row.strip()
][1:]
voornamen = "Tom" # Example first name to use
for row in table_rows:
(
title,
aanduiding_naamgebruik,
aanschrijfwijze,
) = row
name = aanschrijfwijze.replace("jonkheer ", "")
name = name.replace("jonkvrouw ", "")
last_name = None
last_name_prefix = None
partner_last_name_prefix = None
partner_last_name = None
if aanduiding_naamgebruik == "Eigen":
last_name = name.split(" ", 1)[-1]
if len(last_name.split(" ")) > 1:
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
last_name = split_last_name[-1]
elif aanduiding_naamgebruik == "Partner na eigen":
last_name = name.split(" ", 1)[-1]
last_name, partner_last_name = last_name.split("-")
if len(last_name.split(" ")) > 1:
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
last_name = split_last_name[-1]
if len(partner_last_name.split(" ")) > 1:
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_naamgebruik == "Partner":
partner_last_name = name.split(" ", 1)[-1]
if len(partner_last_name.split(" ")) > 1:
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_naamgebruik == "Partner voor eigen":
last_name = name.split(" ", 1)[-1]
partner_last_name, last_name = last_name.split("-")
if len(last_name.split(" ")) > 1:
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
last_name = split_last_name[-1]
if len(partner_last_name.split(" ")) > 1:
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
with self.subTest(
title=title, aanduiding_naamgebruik=aanduiding_naamgebruik
):
result = get_aanschrijfwijze(
last_name_prefix,
last_name,
voornamen,
partner_last_name_prefix,
partner_last_name,
aanduiding_naamgebruik_to_enumeration[aanduiding_naamgebruik],
None,
title,
None,
)
self.assertEqual(aanschrijfwijze, result)
class TestGetAanschrijfwijzePartnerHasTitle(TestCase):
def test_aanschrijfwijze_partner_has_title(self):
table_string = """
| geslachtsaanduiding | geslachtsaanduiding partner | adellijkeTitel_predikaat partner | aanduidingAanschrijving | samenstelling aanschrijfwijze | geslachtsnaam | voornamen | aanschrijfwijze |
| V | M | Baron | E | VL VV GN | Veen | Anna Cornelia | A.C. van der Veen |
| V | M | Baron | N | VL VV GN-AP VP GP | Veen | Anna Cornelia | A.C. van der Veen-barones van den Aedel |
| V | M | Baron | P | VL AP VP GP | Veen | Anna Cornelia | A.C. barones van den Aedel |
| V | M | Baron | V | VL AP VP GP-VV GN | Veen | Anna Cornelia | A.C. barones van den Aedel-van der Veen |
| V | M | Prins | E | VL VV GN | Veen | Anna Cornelia | A.C. van der Veen |
| V | M | Prins | N | VL VV GN-AP VP GP | Veen | Anna Cornelia | A.C. van der Veen-prinses van den Aedel |
| V | M | Prins | P | VL AP VP GP | Veen | Anna Cornelia | A.C. prinses van den Aedel |
| V | M | Prins | V | VL AP VP GP-VV GN | Veen | Anna Cornelia | A.C. prinses van den Aedel-van der Veen |
| M | V | Gravin | E | VL VV GN | Veen | Johannes | J. van der Veen |
| M | V | Gravin | N | VL VV GN-VP GP | Veen | Johannes | J. van der Veen-van den Aedel |
| M | V | Gravin | P | VL VP GP | Veen | Johannes | J. van den Aedel |
| M | V | Gravin | V | VL VP GP-VV GN | Veen | Johannes | J. van den Aedel-van der Veen |
| V | M | Ridder | E | VL VV GN | Veen | Marlies | M. van der Veen |
| V | M | Ridder | N | VL VV GN-VP GP | Veen | Marlies | M. van der Veen-van den Aedel |
| V | M | Ridder | P | VL VP GP | Veen | Marlies | M. van den Aedel |
| V | M | Ridder | V | VL VP GP-VV GN | Veen | Marlies | M. van den Aedel-van der Veen |
| V | V | Gravin | E | VL VV GN | Veen | Sarah | S. van der Veen |
| V | V | Gravin | N | VL VV GN-VP GP | Veen | Sarah | S. van der Veen-van den Aedel |
| V | V | Gravin | P | VL VP GP | Veen | Sarah | S. van den Aedel |
| V | V | Gravin | V | VL VP GP-VV GN | Veen | Sarah | S. van den Aedel-van der Veen |
| M | M | Baron | E | VL VV GN | Veen | Willem | W. van der Veen |
| M | M | Baron | N | VL VV GN-VP GP | Veen | Willem | W. van der Veen-van den Aedel |
| M | M | Baron | P | VL VP GP | Veen | Willem | W. van den Aedel |
| M | M | Baron | V | VL VP GP-VV GN | Veen | Willem | W. van den Aedel-van der Veen |
"""
# Convert table string to rows and remove empty rows, white spaces, and header row
table_rows = [
[item.strip() for item in row.strip().split("|") if item]
for row in table_string.split("\n")
if row.strip()
][1:]
for row in table_rows:
(
gender,
partner_gender,
partner_title,
aanduiding_aanschrijving,
_,
                _,  # geslachtsnaam column; recomputed below from aanschrijfwijze
first_name,
aanschrijfwijze,
) = row
name = aanschrijfwijze.replace("barones ", "")
name = name.replace("prinses ", "")
last_name = None
last_name_prefix = None
partner_last_name_prefix = None
partner_last_name = None
if aanduiding_aanschrijving == "E":
last_name = name.split(" ", 1)[-1]
if len(last_name.split(" ")) > 1:
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
last_name = split_last_name[-1]
elif aanduiding_aanschrijving == "N":
last_name = name.split(" ", 1)[-1]
last_name, partner_last_name = last_name.split("-")
if len(last_name.split(" ")) > 1:
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
last_name = split_last_name[-1]
if len(partner_last_name.split(" ")) > 1:
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_aanschrijving == "P":
partner_last_name = name.split(" ", 1)[-1]
if len(partner_last_name.split(" ")) > 1:
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_aanschrijving == "V":
last_name = name.split(" ", 1)[-1]
partner_last_name, last_name = last_name.split("-")
if len(last_name.split(" ")) > 1:
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
last_name = split_last_name[-1]
if len(partner_last_name.split(" ")) > 1:
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
with self.subTest(
partner_title=partner_title,
aanduiding_aanschrijving=aanduiding_aanschrijving,
):
result = get_aanschrijfwijze(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
aanduiding_aanschrijving,
gender,
None,
partner_title,
)
self.assertEqual(aanschrijfwijze, result)
class TestGetAanschrijfwijzePartnerHasPredicate(TestCase):
    def test_aanschrijfwijze_partner_has_predicate(self):
table_string = """
| aanduidingAanschrijving | samenstelling aanschrijfwijze | geslachtsnaam | voornamen | aanschrijfwijze |
| E | VL VV GN | Berg | Sjaak | S. van der Berg |
| N | VL VV GN-VP GP | Berg | Peter | P. van der Berg-van Hoogh |
| P | VL VP GP | Berg | Marlies | M. van Hoogh |
| V | VL VP GP-VV GN | Berg | Fleur | F. van Hoogh-van der Berg |
"""
# Convert table string to rows and remove empty rows, white spaces, and header row
table_rows = [
[item.strip() for item in row.strip().split("|") if item]
for row in table_string.split("\n")
if row.strip()
][1:]
for row in table_rows:
(
aanduiding_aanschrijving,
_,
geslachtsnaam,
voornamen,
aanschrijfwijze,
) = row
last_name_prefix = None
partner_last_name_prefix = None
partner_last_name = None
if aanduiding_aanschrijving == "E":
last_name = aanschrijfwijze.split(" ", 1)[-1]
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
elif aanduiding_aanschrijving == "N":
last_name = aanschrijfwijze.split(" ", 1)[-1]
last_name, partner_last_name = last_name.split("-")
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_aanschrijving == "P":
partner_last_name = aanschrijfwijze.split(" ", 1)[-1]
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
elif aanduiding_aanschrijving == "V":
last_name = aanschrijfwijze.split(" ", 1)[-1]
partner_last_name, last_name = last_name.split("-")
split_last_name = last_name.split(" ")
last_name_prefix = " ".join(split_last_name[:-1])
split_partner_last_name = partner_last_name.split(" ")
partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
partner_last_name = split_partner_last_name[-1]
with self.subTest(aanduiding_aanschrijving=aanduiding_aanschrijving):
result = get_aanschrijfwijze(
last_name_prefix,
geslachtsnaam,
voornamen,
partner_last_name_prefix,
partner_last_name,
aanduiding_aanschrijving,
None,
None,
None,
)
self.assertEqual(aanschrijfwijze, result)


class TestGetAanschrijfwijzePersonAndPartnerHasTitle(TestCase):
    def test_aanschrijfwijze_person_and_partner_has_title(self):
        table_string = """
        | aanduidingAanschrijving | samenstelling aanschrijfwijze | geslachtsnaam | voornamen | aanschrijfwijze |
        | E | VL AT VV GN | Aedel | Hendrik Willem | H.W. graaf van den Aedel |
        | N | VL AT VV GN-AP VP GP | Aedel | Wilhelmina | W. gravin van den Aedel-barones van Hoogh |
        | P | VL AP VP GP | Aedel | Frederique | F. barones van Hoogh |
        | V | VL AP VP GP-AT VV GN | Aedel | Emma Louise | E.L. barones van Hoogh-gravin van den Aedel |
        """
        # Convert table string to rows and remove empty rows, whitespace, and the header row
        table_rows = [
            [item.strip() for item in row.strip().split("|") if item]
            for row in table_string.split("\n")
            if row.strip()
        ][1:]
        for row in table_rows:
            (
                aanduiding_aanschrijving,
                _,
                geslachtsnaam,
                voornamen,
                aanschrijfwijze,
            ) = row
            last_name_prefix = None
            partner_last_name_prefix = None
            partner_last_name = None
            if aanduiding_aanschrijving == "E":
                last_name = aanschrijfwijze.split(" ", 1)[-1]
                split_last_name = last_name.split(" ")
                last_name_prefix = " ".join(split_last_name[:-1])
            elif aanduiding_aanschrijving == "N":
                last_name = aanschrijfwijze.split(" ", 1)[-1]
                last_name, partner_last_name = last_name.split("-")
                split_last_name = last_name.split(" ")
                last_name_prefix = " ".join(split_last_name[:-1])
                split_partner_last_name = partner_last_name.split(" ")
                partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
                partner_last_name = split_partner_last_name[-1]
            elif aanduiding_aanschrijving == "P":
                partner_last_name = aanschrijfwijze.split(" ", 1)[-1]
                split_partner_last_name = partner_last_name.split(" ")
                partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
                partner_last_name = split_partner_last_name[-1]
            elif aanduiding_aanschrijving == "V":
                last_name = aanschrijfwijze.split(" ", 1)[-1]
                partner_last_name, last_name = last_name.split("-")
                split_last_name = last_name.split(" ")
                last_name_prefix = " ".join(split_last_name[:-1])
                split_partner_last_name = partner_last_name.split(" ")
                partner_last_name_prefix = " ".join(split_partner_last_name[:-1])
                partner_last_name = split_partner_last_name[-1]
            with self.subTest(aanduiding_aanschrijving=aanduiding_aanschrijving):
                result = get_aanschrijfwijze(
                    last_name_prefix,
                    geslachtsnaam,
                    voornamen,
                    partner_last_name_prefix,
                    partner_last_name,
                    aanduiding_aanschrijving,
                    None,
                    None,
                    None,
                )
                self.assertEqual(aanschrijfwijze, result)
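Both test classes parse their scenario tables with the same inline list comprehension. That parsing step can be factored into a small standalone helper; the sketch below is a hypothetical refactoring (the `parse_table` name is not part of the test suite), shown to make the comprehension's behavior explicit:

```python
def parse_table(table_string):
    """Split a pipe-delimited table string into stripped cell rows.

    Empty lines and empty cells are dropped, and the first (header) row
    is removed, mirroring the comprehension used in the tests above.
    """
    rows = [
        [item.strip() for item in row.strip().split("|") if item]
        for row in table_string.split("\n")
        if row.strip()
    ]
    return rows[1:]


rows = parse_table("""
| a | b |
| 1 | 2 |
| 3 | 4 |
""")
# rows == [["1", "2"], ["3", "4"]]
```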


# cu_utils/transform.py (aerdem4/rapids-kaggle-utils, MIT license)
from numba import cuda, float32
import math


def cu_mean_transform(x, y_out):
    res = cuda.shared.array(1, dtype=float32)
    res[0] = 0
    cuda.syncthreads()
    for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
        cuda.atomic.add(res, 0, x[i])
    cuda.syncthreads()
    for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
        y_out[i] = res[0] / len(x)


def cu_max_transform(x, y_out):
    res = cuda.shared.array(1, dtype=float32)
    res[0] = -math.inf
    cuda.syncthreads()
    for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
        cuda.atomic.max(res, 0, x[i])
    cuda.syncthreads()
    for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
        y_out[i] = res[0]


def cu_min_transform(x, y_out):
    res = cuda.shared.array(1, dtype=float32)
    res[0] = math.inf
    cuda.syncthreads()
    for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
        cuda.atomic.min(res, 0, x[i])
    cuda.syncthreads()
    for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
        y_out[i] = res[0]


def get_cu_shift_transform(shift_by, null_val=-1):
    def cu_shift_transform(x, y_out):
        for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
            y_out[i] = null_val
            if 0 <= i - shift_by < len(x):
                y_out[i] = x[i - shift_by]
    return cu_shift_transform


def get_cu_rolling_mean_transform(window, null_val=-1):
    def cu_rolling_mean_transform(x, y_out):
        for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
            y_out[i] = 0
            if i >= window - 1:
                for j in range(cuda.threadIdx.y, window, cuda.blockDim.y):
                    cuda.atomic.add(y_out, i, x[i - j])
                y_out[i] /= window
            else:
                y_out[i] = null_val
    return cu_rolling_mean_transform


def get_cu_rolling_max_transform(window, null_val=-1):
    def cu_rolling_max_transform(x, y_out):
        for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
            y_out[i] = -math.inf
            if i >= window - 1:
                for j in range(cuda.threadIdx.y, window, cuda.blockDim.y):
                    cuda.atomic.max(y_out, i, x[i - j])
            else:
                y_out[i] = null_val
    return cu_rolling_max_transform


def get_cu_rolling_min_transform(window, null_val=-1):
    def cu_rolling_min_transform(x, y_out):
        for i in range(cuda.threadIdx.x, len(x), cuda.blockDim.x):
            y_out[i] = math.inf
            if i >= window - 1:
                for j in range(cuda.threadIdx.y, window, cuda.blockDim.y):
                    cuda.atomic.min(y_out, i, x[i - j])
            else:
                y_out[i] = null_val
    return cu_rolling_min_transform
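For reference, the per-group semantics that the shift and trailing-window kernels above implement can be stated in plain Python. The helper names below are illustrative only (they are not part of cu_utils), and this sketch ignores the CUDA thread-striding, which only affects how work is divided, not the result:

```python
def ref_shift(x, shift_by, null_val=-1):
    # Mirrors cu_shift_transform: y[i] = x[i - shift_by] where the source
    # index is in range, otherwise null_val.
    return [
        x[i - shift_by] if 0 <= i - shift_by < len(x) else null_val
        for i in range(len(x))
    ]


def ref_rolling_mean(x, window, null_val=-1):
    # Mirrors cu_rolling_mean_transform: mean over the trailing `window`
    # values x[i - window + 1 .. i]; positions without a full window get null_val.
    return [
        sum(x[i - j] for j in range(window)) / window if i >= window - 1 else null_val
        for i in range(len(x))
    ]


print(ref_shift([1, 2, 3, 4], 1))         # [-1, 1, 2, 3]
print(ref_rolling_mean([1, 2, 3, 4], 2))  # [-1, 1.5, 2.5, 3.5]
```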


# harness/determined/_swagger/client/api/shells_api.py (determined-ai, Apache-2.0 license)
# coding: utf-8
"""
Determined API (Beta)
Determined helps deep learning teams train models more quickly, easily share GPU resources, and effectively collaborate. Determined allows deep learning engineers to focus on building and training models at scale, without needing to worry about DevOps or writing custom code for common tasks like fault tolerance or experiment tracking. You can think of Determined as a platform that bridges the gap between tools like TensorFlow and PyTorch --- which work great for a single researcher with a single GPU --- to the challenges that arise when doing deep learning at scale, as teams, clusters, and data sets all increase in size. # noqa: E501
OpenAPI spec version: 0.1
Contact: community@determined.ai
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from determined._swagger.client.api_client import ApiClient
class ShellsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
    def determined_get_shell(self, shell_id, **kwargs):  # noqa: E501
        """Get the requested shell.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_get_shell(shell_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str shell_id: The id of the shell. (required)
        :return: V1GetShellResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.determined_get_shell_with_http_info(shell_id, **kwargs)  # noqa: E501
        else:
            (data) = self.determined_get_shell_with_http_info(shell_id, **kwargs)  # noqa: E501
            return data

    def determined_get_shell_with_http_info(self, shell_id, **kwargs):  # noqa: E501
        """Get the requested shell.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_get_shell_with_http_info(shell_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str shell_id: The id of the shell. (required)
        :return: V1GetShellResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['shell_id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method determined_get_shell" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'shell_id' is set
        if ('shell_id' not in params or
                params['shell_id'] is None):
            raise ValueError("Missing the required parameter `shell_id` when calling `determined_get_shell`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'shell_id' in params:
            path_params['shellId'] = params['shell_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

        return self.api_client.call_api(
            '/api/v1/shells/{shellId}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1GetShellResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def determined_get_shells(self, **kwargs):  # noqa: E501
        """Get a list of shells.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_get_shells(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str sort_by: Sort shells by the given field. - SORT_BY_UNSPECIFIED: Returns shells in an unsorted list. - SORT_BY_ID: Returns shells sorted by id. - SORT_BY_DESCRIPTION: Returns shells sorted by description. - SORT_BY_START_TIME: Return shells sorted by start time.
        :param str order_by: Order shells in either ascending or descending order. - ORDER_BY_UNSPECIFIED: Returns records in no specific order. - ORDER_BY_ASC: Returns records in ascending order. - ORDER_BY_DESC: Returns records in descending order.
        :param int offset: Skip the number of shells before returning results. Negative values denote number of shells to skip from the end before returning results.
        :param int limit: Limit the number of shells. A value of 0 denotes no limit.
        :param list[str] users: Limit shells to those that are owned by the specified users.
        :return: V1GetShellsResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.determined_get_shells_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.determined_get_shells_with_http_info(**kwargs)  # noqa: E501
            return data

    def determined_get_shells_with_http_info(self, **kwargs):  # noqa: E501
        """Get a list of shells.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_get_shells_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str sort_by: Sort shells by the given field. - SORT_BY_UNSPECIFIED: Returns shells in an unsorted list. - SORT_BY_ID: Returns shells sorted by id. - SORT_BY_DESCRIPTION: Returns shells sorted by description. - SORT_BY_START_TIME: Return shells sorted by start time.
        :param str order_by: Order shells in either ascending or descending order. - ORDER_BY_UNSPECIFIED: Returns records in no specific order. - ORDER_BY_ASC: Returns records in ascending order. - ORDER_BY_DESC: Returns records in descending order.
        :param int offset: Skip the number of shells before returning results. Negative values denote number of shells to skip from the end before returning results.
        :param int limit: Limit the number of shells. A value of 0 denotes no limit.
        :param list[str] users: Limit shells to those that are owned by the specified users.
        :return: V1GetShellsResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['sort_by', 'order_by', 'offset', 'limit', 'users']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method determined_get_shells" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'sort_by' in params:
            query_params.append(('sortBy', params['sort_by']))  # noqa: E501
        if 'order_by' in params:
            query_params.append(('orderBy', params['order_by']))  # noqa: E501
        if 'offset' in params:
            query_params.append(('offset', params['offset']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'users' in params:
            query_params.append(('users', params['users']))  # noqa: E501
            collection_formats['users'] = 'multi'  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

        return self.api_client.call_api(
            '/api/v1/shells', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1GetShellsResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def determined_kill_shell(self, shell_id, **kwargs):  # noqa: E501
        """Kill the requested shell.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_kill_shell(shell_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str shell_id: The id of the shell. (required)
        :return: V1KillShellResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.determined_kill_shell_with_http_info(shell_id, **kwargs)  # noqa: E501
        else:
            (data) = self.determined_kill_shell_with_http_info(shell_id, **kwargs)  # noqa: E501
            return data

    def determined_kill_shell_with_http_info(self, shell_id, **kwargs):  # noqa: E501
        """Kill the requested shell.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_kill_shell_with_http_info(shell_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str shell_id: The id of the shell. (required)
        :return: V1KillShellResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['shell_id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method determined_kill_shell" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'shell_id' is set
        if ('shell_id' not in params or
                params['shell_id'] is None):
            raise ValueError("Missing the required parameter `shell_id` when calling `determined_kill_shell`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'shell_id' in params:
            path_params['shellId'] = params['shell_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

        return self.api_client.call_api(
            '/api/v1/shells/{shellId}/kill', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1KillShellResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def determined_launch_shell(self, body, **kwargs):  # noqa: E501
        """Launch a shell.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_launch_shell(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param V1LaunchShellRequest body: (required)
        :return: V1LaunchShellResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.determined_launch_shell_with_http_info(body, **kwargs)  # noqa: E501
        else:
            (data) = self.determined_launch_shell_with_http_info(body, **kwargs)  # noqa: E501
            return data

    def determined_launch_shell_with_http_info(self, body, **kwargs):  # noqa: E501
        """Launch a shell.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.determined_launch_shell_with_http_info(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param V1LaunchShellRequest body: (required)
        :return: V1LaunchShellResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['body']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method determined_launch_shell" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `determined_launch_shell`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

        return self.api_client.call_api(
            '/api/v1/shells', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1LaunchShellResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
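Every generated method above follows the same dispatch convention: synchronous by default, and when `async_req=True` it returns a thread-like object whose `get()` yields the response. The toy class below illustrates that pattern in isolation; `ToyApi` and its methods are hypothetical and not part of the Determined client (which delegates to `ApiClient.call_api` rather than a local executor):

```python
from concurrent.futures import ThreadPoolExecutor


class ToyApi:
    """Toy illustration of the swagger-codegen sync/async dispatch pattern."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1)

    def _call(self, thing_id):
        # Stand-in for the real HTTP round trip.
        return {"id": thing_id}

    def get_thing(self, thing_id, **kwargs):
        # Mirrors the generated methods: async_req=True returns a future
        # whose .result() yields the response; otherwise the call is sync.
        if kwargs.get("async_req"):
            return self._pool.submit(self._call, thing_id)
        return self._call(thing_id)


api = ToyApi()
assert api.get_thing("s1") == {"id": "s1"}
assert api.get_thing("s2", async_req=True).result() == {"id": "s2"}
```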


# sleeppy/__init__.py (yiorg/sleeppy, MIT license)
from sleeppy.sleep import SleepPy, ColeKripke, band_pass_filter, bin2df, activity_index
from sleeppy.version import __version__


# build/lib.win-amd64-3.6/alphapose/version.py (Apache-2.0 license)
# GENERATED VERSION FILE
# TIME: Sat May 9 21:49:05 2020
__version__ = '0.3.0+38e00c6'
short_version = '0.3.0'


# google/ads/google_ads/v4/proto/common/simulation_pb2.py (Apache-2.0 license)
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/ads/googleads_v4/proto/common/simulation.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import wrappers_pb2 as google_dot_protobuf_dot_wrappers__pb2
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
  name='google/ads/googleads_v4/proto/common/simulation.proto',
  package='google.ads.googleads.v4.common',
  syntax='proto3',
  serialized_options=_b('\n\"com.google.ads.googleads.v4.commonB\017SimulationProtoP\001ZDgoogle.golang.org/genproto/googleapis/ads/googleads/v4/common;common\242\002\003GAA\252\002\036Google.Ads.GoogleAds.V4.Common\312\002\036Google\\Ads\\GoogleAds\\V4\\Common\352\002\"Google::Ads::GoogleAds::V4::Common'),
  serialized_pb=_b(
      '\n5google/ads/googleads_v4/proto/common/simulation.proto\x12\x1egoogle.ads.googleads.v4.common\x1a\x1egoogle/protobuf/wrappers.proto\x1a\x1cgoogle/api/annotations.proto\"l\n\x1e\x42idModifierSimulationPointList\x12J\n\x06points\x18\x01 \x03(\x0b\x32:.google.ads.googleads.v4.common.BidModifierSimulationPoint\"b\n\x19\x43pcBidSimulationPointList\x12\x45\n\x06points\x18\x01 \x03(\x0b\x32\x35.google.ads.googleads.v4.common.CpcBidSimulationPoint\"b\n\x19\x43pvBidSimulationPointList\x12\x45\n\x06points\x18\x01 \x03(\x0b\x32\x35.google.ads.googleads.v4.common.CpvBidSimulationPoint\"h\n\x1cTargetCpaSimulationPointList\x12H\n\x06points\x18\x01 \x03(\x0b\x32\x38.google.ads.googleads.v4.common.TargetCpaSimulationPoint\"j\n\x1dTargetRoasSimulationPointList\x12I\n\x06points\x18\x01 \x03(\x0b\x32\x39.google.ads.googleads.v4.common.TargetRoasSimulationPoint\"\xd2\x06\n\x1a\x42idModifierSimulationPoint\x12\x32\n\x0c\x62id_modifier\x18\x01 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12:\n\x14\x62iddable_conversions\x18\x02 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12@\n\x1a\x62iddable_conversions_value\x18\x03 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12+\n\x06\x63licks\x18\x04 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0b\x63ost_micros\x18\x05 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0bimpressions\x18\x06 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x39\n\x14top_slot_impressions\x18\x07 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x41\n\x1bparent_biddable_conversions\x18\x08 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12G\n!parent_biddable_conversions_value\x18\t \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12\x32\n\rparent_clicks\x18\n \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x37\n\x12parent_cost_micros\x18\x0b \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x37\n\x12parent_impressions\x18\x0c \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12@\n\x1bparent_top_slot_impressions\x18\r '
      '\x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x42\n\x1dparent_required_budget_micros\x18\x0e \x01(\x0b\x32\x1b.google.protobuf.Int64Value\"\x96\x03\n\x15\x43pcBidSimulationPoint\x12\x33\n\x0e\x63pc_bid_micros\x18\x01 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12:\n\x14\x62iddable_conversions\x18\x02 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12@\n\x1a\x62iddable_conversions_value\x18\x03 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12+\n\x06\x63licks\x18\x04 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0b\x63ost_micros\x18\x05 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0bimpressions\x18\x06 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x39\n\x14top_slot_impressions\x18\x07 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\"\xdc\x01\n\x15\x43pvBidSimulationPoint\x12\x33\n\x0e\x63pv_bid_micros\x18\x01 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0b\x63ost_micros\x18\x02 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0bimpressions\x18\x03 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12*\n\x05views\x18\x04 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\"\x9c\x03\n\x18TargetCpaSimulationPoint\x12\x36\n\x11target_cpa_micros\x18\x01 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12:\n\x14\x62iddable_conversions\x18\x02 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12@\n\x1a\x62iddable_conversions_value\x18\x03 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12+\n\x06\x63licks\x18\x04 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0b\x63ost_micros\x18\x05 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0bimpressions\x18\x06 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x39\n\x14top_slot_impressions\x18\x07 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\"\x98\x03\n\x19TargetRoasSimulationPoint\x12\x31\n\x0btarget_roas\x18\x01 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12:\n\x14\x62iddable_conversions\x18\x02 \x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12@\n\x1a\x62iddable_conversions_value\x18\x03 '
      '\x01(\x0b\x32\x1c.google.protobuf.DoubleValue\x12+\n\x06\x63licks\x18\x04 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0b\x63ost_micros\x18\x05 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x30\n\x0bimpressions\x18\x06 \x01(\x0b\x32\x1b.google.protobuf.Int64Value\x12\x39\n\x14top_slot_impressions\x18\x07 \x01(\x0b\x32\x1b.google.protobuf.Int64ValueB\xea\x01\n\"com.google.ads.googleads.v4.commonB\x0fSimulationProtoP\x01ZDgoogle.golang.org/genproto/googleapis/ads/googleads/v4/common;common\xa2\x02\x03GAA\xaa\x02\x1eGoogle.Ads.GoogleAds.V4.Common\xca\x02\x1eGoogle\\Ads\\GoogleAds\\V4\\Common\xea\x02\"Google::Ads::GoogleAds::V4::Commonb\x06proto3'),
  dependencies=[google_dot_protobuf_dot_wrappers__pb2.DESCRIPTOR,google_dot_api_dot_annotations__pb2.DESCRIPTOR,])


_BIDMODIFIERSIMULATIONPOINTLIST = _descriptor.Descriptor(
  name='BidModifierSimulationPointList',
  full_name='google.ads.googleads.v4.common.BidModifierSimulationPointList',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='points', full_name='google.ads.googleads.v4.common.BidModifierSimulationPointList.points', index=0,
      number=1, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=151,
  serialized_end=259,
)


_CPCBIDSIMULATIONPOINTLIST = _descriptor.Descriptor(
  name='CpcBidSimulationPointList',
  full_name='google.ads.googleads.v4.common.CpcBidSimulationPointList',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='points', full_name='google.ads.googleads.v4.common.CpcBidSimulationPointList.points', index=0,
      number=1, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=261,
  serialized_end=359,
)


_CPVBIDSIMULATIONPOINTLIST = _descriptor.Descriptor(
  name='CpvBidSimulationPointList',
  full_name='google.ads.googleads.v4.common.CpvBidSimulationPointList',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='points', full_name='google.ads.googleads.v4.common.CpvBidSimulationPointList.points', index=0,
      number=1, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=361,
  serialized_end=459,
)


_TARGETCPASIMULATIONPOINTLIST = _descriptor.Descriptor(
  name='TargetCpaSimulationPointList',
  full_name='google.ads.googleads.v4.common.TargetCpaSimulationPointList',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='points', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPointList.points', index=0,
      number=1, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=461,
  serialized_end=565,
)


_TARGETROASSIMULATIONPOINTLIST = _descriptor.Descriptor(
  name='TargetRoasSimulationPointList',
  full_name='google.ads.googleads.v4.common.TargetRoasSimulationPointList',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='points', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPointList.points', index=0,
      number=1, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=567,
  serialized_end=673,
)
_BIDMODIFIERSIMULATIONPOINT = _descriptor.Descriptor(
name='BidModifierSimulationPoint',
full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='bid_modifier', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.bid_modifier', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.biddable_conversions', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions_value', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.biddable_conversions_value', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='clicks', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.clicks', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='cost_micros', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.cost_micros', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='impressions', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.impressions', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='top_slot_impressions', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.top_slot_impressions', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_biddable_conversions', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_biddable_conversions', index=7,
number=8, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_biddable_conversions_value', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_biddable_conversions_value', index=8,
number=9, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_clicks', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_clicks', index=9,
number=10, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_cost_micros', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_cost_micros', index=10,
number=11, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_impressions', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_impressions', index=11,
number=12, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_top_slot_impressions', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_top_slot_impressions', index=12,
number=13, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='parent_required_budget_micros', full_name='google.ads.googleads.v4.common.BidModifierSimulationPoint.parent_required_budget_micros', index=13,
number=14, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=676,
serialized_end=1526,
)
_CPCBIDSIMULATIONPOINT = _descriptor.Descriptor(
name='CpcBidSimulationPoint',
full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='cpc_bid_micros', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.cpc_bid_micros', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.biddable_conversions', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions_value', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.biddable_conversions_value', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='clicks', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.clicks', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='cost_micros', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.cost_micros', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='impressions', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.impressions', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='top_slot_impressions', full_name='google.ads.googleads.v4.common.CpcBidSimulationPoint.top_slot_impressions', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1529,
serialized_end=1935,
)
_CPVBIDSIMULATIONPOINT = _descriptor.Descriptor(
name='CpvBidSimulationPoint',
full_name='google.ads.googleads.v4.common.CpvBidSimulationPoint',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='cpv_bid_micros', full_name='google.ads.googleads.v4.common.CpvBidSimulationPoint.cpv_bid_micros', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='cost_micros', full_name='google.ads.googleads.v4.common.CpvBidSimulationPoint.cost_micros', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='impressions', full_name='google.ads.googleads.v4.common.CpvBidSimulationPoint.impressions', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='views', full_name='google.ads.googleads.v4.common.CpvBidSimulationPoint.views', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1938,
serialized_end=2158,
)
_TARGETCPASIMULATIONPOINT = _descriptor.Descriptor(
name='TargetCpaSimulationPoint',
full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='target_cpa_micros', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.target_cpa_micros', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.biddable_conversions', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions_value', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.biddable_conversions_value', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='clicks', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.clicks', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='cost_micros', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.cost_micros', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='impressions', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.impressions', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='top_slot_impressions', full_name='google.ads.googleads.v4.common.TargetCpaSimulationPoint.top_slot_impressions', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=2161,
serialized_end=2573,
)
_TARGETROASSIMULATIONPOINT = _descriptor.Descriptor(
name='TargetRoasSimulationPoint',
full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='target_roas', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.target_roas', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.biddable_conversions', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='biddable_conversions_value', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.biddable_conversions_value', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='clicks', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.clicks', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='cost_micros', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.cost_micros', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='impressions', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.impressions', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='top_slot_impressions', full_name='google.ads.googleads.v4.common.TargetRoasSimulationPoint.top_slot_impressions', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=2576,
serialized_end=2984,
)
_BIDMODIFIERSIMULATIONPOINTLIST.fields_by_name['points'].message_type = _BIDMODIFIERSIMULATIONPOINT
_CPCBIDSIMULATIONPOINTLIST.fields_by_name['points'].message_type = _CPCBIDSIMULATIONPOINT
_CPVBIDSIMULATIONPOINTLIST.fields_by_name['points'].message_type = _CPVBIDSIMULATIONPOINT
_TARGETCPASIMULATIONPOINTLIST.fields_by_name['points'].message_type = _TARGETCPASIMULATIONPOINT
_TARGETROASSIMULATIONPOINTLIST.fields_by_name['points'].message_type = _TARGETROASSIMULATIONPOINT
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['bid_modifier'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['biddable_conversions'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['biddable_conversions_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['clicks'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['cost_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['top_slot_impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_biddable_conversions'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_biddable_conversions_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_clicks'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_cost_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_top_slot_impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_BIDMODIFIERSIMULATIONPOINT.fields_by_name['parent_required_budget_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['cpc_bid_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['biddable_conversions'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['biddable_conversions_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['clicks'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['cost_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPCBIDSIMULATIONPOINT.fields_by_name['top_slot_impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPVBIDSIMULATIONPOINT.fields_by_name['cpv_bid_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPVBIDSIMULATIONPOINT.fields_by_name['cost_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPVBIDSIMULATIONPOINT.fields_by_name['impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_CPVBIDSIMULATIONPOINT.fields_by_name['views'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['target_cpa_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['biddable_conversions'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['biddable_conversions_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['clicks'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['cost_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETCPASIMULATIONPOINT.fields_by_name['top_slot_impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['target_roas'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['biddable_conversions'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['biddable_conversions_value'].message_type = google_dot_protobuf_dot_wrappers__pb2._DOUBLEVALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['clicks'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['cost_micros'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
_TARGETROASSIMULATIONPOINT.fields_by_name['top_slot_impressions'].message_type = google_dot_protobuf_dot_wrappers__pb2._INT64VALUE
DESCRIPTOR.message_types_by_name['BidModifierSimulationPointList'] = _BIDMODIFIERSIMULATIONPOINTLIST
DESCRIPTOR.message_types_by_name['CpcBidSimulationPointList'] = _CPCBIDSIMULATIONPOINTLIST
DESCRIPTOR.message_types_by_name['CpvBidSimulationPointList'] = _CPVBIDSIMULATIONPOINTLIST
DESCRIPTOR.message_types_by_name['TargetCpaSimulationPointList'] = _TARGETCPASIMULATIONPOINTLIST
DESCRIPTOR.message_types_by_name['TargetRoasSimulationPointList'] = _TARGETROASSIMULATIONPOINTLIST
DESCRIPTOR.message_types_by_name['BidModifierSimulationPoint'] = _BIDMODIFIERSIMULATIONPOINT
DESCRIPTOR.message_types_by_name['CpcBidSimulationPoint'] = _CPCBIDSIMULATIONPOINT
DESCRIPTOR.message_types_by_name['CpvBidSimulationPoint'] = _CPVBIDSIMULATIONPOINT
DESCRIPTOR.message_types_by_name['TargetCpaSimulationPoint'] = _TARGETCPASIMULATIONPOINT
DESCRIPTOR.message_types_by_name['TargetRoasSimulationPoint'] = _TARGETROASSIMULATIONPOINT
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
BidModifierSimulationPointList = _reflection.GeneratedProtocolMessageType('BidModifierSimulationPointList', (_message.Message,), dict(
DESCRIPTOR = _BIDMODIFIERSIMULATIONPOINTLIST,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """A container for simulation points for simulations of type BID\_MODIFIER.
Attributes:
points:
Projected metrics for a series of bid modifier amounts.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.BidModifierSimulationPointList)
))
_sym_db.RegisterMessage(BidModifierSimulationPointList)
CpcBidSimulationPointList = _reflection.GeneratedProtocolMessageType('CpcBidSimulationPointList', (_message.Message,), dict(
DESCRIPTOR = _CPCBIDSIMULATIONPOINTLIST,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """A container for simulation points for simulations of type CPC\_BID.
Attributes:
points:
Projected metrics for a series of CPC bid amounts.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.CpcBidSimulationPointList)
))
_sym_db.RegisterMessage(CpcBidSimulationPointList)
CpvBidSimulationPointList = _reflection.GeneratedProtocolMessageType('CpvBidSimulationPointList', (_message.Message,), dict(
DESCRIPTOR = _CPVBIDSIMULATIONPOINTLIST,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """A container for simulation points for simulations of type CPV\_BID.
Attributes:
points:
Projected metrics for a series of CPV bid amounts.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.CpvBidSimulationPointList)
))
_sym_db.RegisterMessage(CpvBidSimulationPointList)
TargetCpaSimulationPointList = _reflection.GeneratedProtocolMessageType('TargetCpaSimulationPointList', (_message.Message,), dict(
DESCRIPTOR = _TARGETCPASIMULATIONPOINTLIST,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """A container for simulation points for simulations of type TARGET\_CPA.
Attributes:
points:
Projected metrics for a series of target CPA amounts.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.TargetCpaSimulationPointList)
))
_sym_db.RegisterMessage(TargetCpaSimulationPointList)
TargetRoasSimulationPointList = _reflection.GeneratedProtocolMessageType('TargetRoasSimulationPointList', (_message.Message,), dict(
DESCRIPTOR = _TARGETROASSIMULATIONPOINTLIST,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """A container for simulation points for simulations of type TARGET\_ROAS.
Attributes:
points:
Projected metrics for a series of target ROAS amounts.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.TargetRoasSimulationPointList)
))
_sym_db.RegisterMessage(TargetRoasSimulationPointList)
BidModifierSimulationPoint = _reflection.GeneratedProtocolMessageType('BidModifierSimulationPoint', (_message.Message,), dict(
DESCRIPTOR = _BIDMODIFIERSIMULATIONPOINT,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """Projected metrics for a specific bid modifier amount.
Attributes:
bid_modifier:
The simulated bid modifier upon which projected metrics are
based.
biddable_conversions:
Projected number of biddable conversions. Only search
advertising channel type supports this field.
biddable_conversions_value:
Projected total value of biddable conversions. Only search
advertising channel type supports this field.
clicks:
Projected number of clicks.
cost_micros:
Projected cost in micros.
impressions:
Projected number of impressions.
top_slot_impressions:
Projected number of top slot impressions. Only search
advertising channel type supports this field.
parent_biddable_conversions:
Projected number of biddable conversions for the parent
resource. Only search advertising channel type supports this
field.
parent_biddable_conversions_value:
Projected total value of biddable conversions for the parent
resource. Only search advertising channel type supports this
field.
parent_clicks:
Projected number of clicks for the parent resource.
parent_cost_micros:
Projected cost in micros for the parent resource.
parent_impressions:
Projected number of impressions for the parent resource.
parent_top_slot_impressions:
Projected number of top slot impressions for the parent
resource. Only search advertising channel type supports this
field.
parent_required_budget_micros:
Projected minimum daily budget that must be available to the
parent resource to realize this simulation.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.BidModifierSimulationPoint)
))
_sym_db.RegisterMessage(BidModifierSimulationPoint)
CpcBidSimulationPoint = _reflection.GeneratedProtocolMessageType('CpcBidSimulationPoint', (_message.Message,), dict(
DESCRIPTOR = _CPCBIDSIMULATIONPOINT,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """Projected metrics for a specific CPC bid amount.
Attributes:
cpc_bid_micros:
The simulated CPC bid upon which projected metrics are based.
biddable_conversions:
Projected number of biddable conversions.
biddable_conversions_value:
Projected total value of biddable conversions.
clicks:
Projected number of clicks.
cost_micros:
Projected cost in micros.
impressions:
Projected number of impressions.
top_slot_impressions:
Projected number of top slot impressions. Only search
advertising channel type supports this field.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.CpcBidSimulationPoint)
))
_sym_db.RegisterMessage(CpcBidSimulationPoint)
CpvBidSimulationPoint = _reflection.GeneratedProtocolMessageType('CpvBidSimulationPoint', (_message.Message,), dict(
DESCRIPTOR = _CPVBIDSIMULATIONPOINT,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """Projected metrics for a specific CPV bid amount.
Attributes:
cpv_bid_micros:
The simulated CPV bid upon which projected metrics are based.
cost_micros:
Projected cost in micros.
impressions:
Projected number of impressions.
views:
Projected number of views.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.CpvBidSimulationPoint)
))
_sym_db.RegisterMessage(CpvBidSimulationPoint)
TargetCpaSimulationPoint = _reflection.GeneratedProtocolMessageType('TargetCpaSimulationPoint', (_message.Message,), dict(
DESCRIPTOR = _TARGETCPASIMULATIONPOINT,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """Projected metrics for a specific target CPA amount.
Attributes:
target_cpa_micros:
The simulated target CPA upon which projected metrics are
based.
biddable_conversions:
Projected number of biddable conversions.
biddable_conversions_value:
Projected total value of biddable conversions.
clicks:
Projected number of clicks.
cost_micros:
Projected cost in micros.
impressions:
Projected number of impressions.
top_slot_impressions:
Projected number of top slot impressions. Only search
advertising channel type supports this field.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.TargetCpaSimulationPoint)
))
_sym_db.RegisterMessage(TargetCpaSimulationPoint)
TargetRoasSimulationPoint = _reflection.GeneratedProtocolMessageType('TargetRoasSimulationPoint', (_message.Message,), dict(
DESCRIPTOR = _TARGETROASSIMULATIONPOINT,
__module__ = 'google.ads.googleads_v4.proto.common.simulation_pb2'
,
__doc__ = """Projected metrics for a specific target ROAS amount.
Attributes:
target_roas:
The simulated target ROAS upon which projected metrics are
based.
biddable_conversions:
Projected number of biddable conversions.
biddable_conversions_value:
Projected total value of biddable conversions.
clicks:
Projected number of clicks.
cost_micros:
Projected cost in micros.
impressions:
Projected number of impressions.
top_slot_impressions:
Projected number of top slot impressions. Only Search
advertising channel type supports this field.
""",
# @@protoc_insertion_point(class_scope:google.ads.googleads.v4.common.TargetRoasSimulationPoint)
))
_sym_db.RegisterMessage(TargetRoasSimulationPoint)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 50.926606 | 4,647 | 0.779454 | 5,315 | 44,408 | 6.209407 | 0.055503 | 0.034421 | 0.039875 | 0.052116 | 0.822471 | 0.805139 | 0.780444 | 0.76887 | 0.731327 | 0.699088 | 0 | 0.036977 | 0.118807 | 44,408 | 871 | 4,648 | 50.985075 | 0.806399 | 0.026279 | 0 | 0.662893 | 1 | 0.002516 | 0.378485 | 0.249046 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008805 | 0 | 0.008805 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
dd62ea9a982c43a8742af25a563e9c4af7bccb29 | 85,075 | py | Python | pulp_lineup_optimizer.py | CandyOates/dfs_lu_opt | b382b908c65c958d6bb8606576f430f064395c66 | [
"Apache-2.0"
] | null | null | null | pulp_lineup_optimizer.py | CandyOates/dfs_lu_opt | b382b908c65c958d6bb8606576f430f064395c66 | [
"Apache-2.0"
] | null | null | null | pulp_lineup_optimizer.py | CandyOates/dfs_lu_opt | b382b908c65c958d6bb8606576f430f064395c66 | [
"Apache-2.0"
] | null | null | null | import pulp
import numpy as np
# min_prob_con_factory for a FanDuel NBA slate: caps each position's total in the
# candidate weight vector at the FanDuel roster shape (2 PG / 2 SG / 2 SF / 2 PF / 1 C).
def fd_nba_minprob_con_factory(x, prob, vs, table):
prob += pulp.lpSum([float(table.Position.iloc[n]=='PG')*vs[n] for n in range(len(vs))]) <= 2, "PG_Total_UB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='SG')*vs[n] for n in range(len(vs))]) <= 2, "SG_Total_UB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='SF')*vs[n] for n in range(len(vs))]) <= 2, "SF_Total_UB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='PF')*vs[n] for n in range(len(vs))]) <= 2, "PF_Total_UB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='C')*vs[n] for n in range(len(vs))]) <= 1, "C_Total_UB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='PG')*vs[n] for n in range(len(vs))]) >= 0, "PG_Total_LB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='SG')*vs[n] for n in range(len(vs))]) >= 0, "SG_Total_LB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='SF')*vs[n] for n in range(len(vs))]) >= 0, "SF_Total_LB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='PF')*vs[n] for n in range(len(vs))]) >= 0, "PF_Total_LB"
prob += pulp.lpSum([float(table.Position.iloc[n]=='C')*vs[n] for n in range(len(vs))]) >= 0, "C_Total_LB"
return prob
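The factory above builds each position cap from 0/1 indicator coefficients over the candidate weight vector. A minimal pure-Python sketch of that coefficient logic (the `positions` and `w` values below are made-up illustrations, not taken from any real slate):

```python
# Hypothetical NBA position list; the factory derives one indicator vector like
# pg_coeffs per position and caps its dot product with the candidate weights
# (e.g. at most 2 PGs).
positions = ['PG', 'PG', 'SG', 'SF', 'PF', 'C', 'C']
pg_coeffs = [float(p == 'PG') for p in positions]

# A made-up candidate ownership/weight vector for the same seven players.
w = [1.0, 0.5, 1.0, 1.0, 1.0, 1.0, 0.0]
pg_total = sum(c * wi for c, wi in zip(pg_coeffs, w))
print(pg_total)  # 1.5, which satisfies the PG_Total_UB cap of 2
```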
class RobustCongruentLineupOptimizer(object):
def __init__(self, table, roster_slots, penalty, salary_cap=None, min_prob_con_factory=None):
"""
Makes Lineup Optimization easy.
max_x min_w {s'(x-w) - .5*penalty*||x-w||^2}
sum(w) = roster_slots
Input:
table - pd.DataFrame
Contains relevant data on each player. Must include the columns
['Pos','Proj','Salary','Team','OwnLB','OwnUB']. If you want the resulting
table sorted, include a column ['PosNum'] which assigns a numeric value to
each position; for example, in the NBA, 1 through 5 for PG through C.
roster_slots - int
the total number of spaces in the roster
penalty - float
the weight assigned to the penalty term for weight misalignment. Preliminary tests suggest 10 is a
good value for the NBA. A penalty of 0 is equivalent to the MILP formulation solved by the
LineupOptimizer class.
salary_cap - float
The total salary limit of the team, if there is any. Default is None.
min_prob_con_factory - function
Takes the most recent candidate solution of the maximization problem as an array of floats or ints,
a pulp.LpProblem, a list of pulp.LpVariable, and a pd.DataFrame and returns a pulp.LpProblem with
the appropriate constraints for the minimization problem for the specific
contest type and site. If None (the default), the only constraints in the
minimization problem are that the weights sum to roster_slots and stay within
their bounds.
"""
self.nslots = roster_slots
self.table = table
self.salary_cap = salary_cap
self.pen = penalty
if self.pen == 0:
raise Exception('penalty must be nonzero.')
self.min_prob_con_factory = min_prob_con_factory
self.players = table.index.tolist()
self.prob = pulp.LpProblem('Lineup Optimization', pulp.LpMaximize)
self.player_vars = pulp.LpVariable.dicts("p", self.players, 0, 1, 'Binary')
self.rev_player_vars = dict(zip([str(x) for x in self.player_vars.values()], \
self.player_vars.keys()))
self.z = pulp.LpVariable('z')
mu = np.array(self.table.Proj) / (2.*self.pen)
self.m = np.empty(len(mu))
projs = self.table.Proj.loc[self.players].tolist()
for n in range(len(self.players)):
t = mu.copy()  # copy so perturbing coordinate n does not mutate mu
el = self.table.OwnLB.loc[self.players[n]]
u = self.table.OwnUB.loc[self.players[n]]
t[n] = el
fl = t.dot(projs) - .5*self.pen*t.dot(t)
t[n] = u
fu = t.dot(projs) - .5*self.pen*t.dot(t)
self.m[n] = (fu - fl) / (u - el)  # secant slope of the objective in coordinate n
# Objective
self.prob += pulp.lpSum([self.table.Proj[k]*self.player_vars[k] for k in self.players]) - self.z*self.pen, 'Aggregate Projected Score'
# Salary Cap Constraint
if self.salary_cap is not None:
self.prob += pulp.lpSum([self.table.Salary[k]*self.player_vars[k] for k in self.players]) <= self.salary_cap, 'Salary Cap Constraint'
self.const_tracker = {}
def _addQuadraticConstraint(self, x, iteration):
"""
Add a linear approximation to the quadratic constraint.
x - binary array
solution of the last MILP iteration
iteration - int
the iteration number of the last MILP iteration
"""
con_name = 'QUADCON%d'%iteration
y = np.array([x[n] - self.table.RobustOwn.loc[k] for n, k in enumerate(self.players)], dtype=float)
self.prob += 2.*pulp.lpSum([y[n]*self.player_vars[k] for n, k in enumerate(self.players)]) - self.z <= \
y.dot(y) + 2.*sum([y[n]*self.table.RobustOwn.loc[k] for n, k in enumerate(self.players)]), con_name
def _removeQuadraticConstraints(self, ct):
"""
Remove all linear approximation quadratic constraints.
"""
for k in range(ct):
con_name = 'QUADCON%d'%k
self.removeConstraint(con_name)
def removeConstraint(self, con_name):
"""
Remove the constraint with name given by con_name.
"""
try:
del self.prob.constraints[con_name]
return True
except KeyError:
return False
def addPositionConstraint(self, pos, con_type, bound, con_name=None):
"""
Constrain the number of position appearances.
Inputs:
pos - string
position of player, if multiple, concatenated with '/'
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Pos <con_type> <pos>'
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Pos %s %s' % (con_type,pos)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
pos = pos.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def _solve_minimization(self, x):
minprob = pulp.LpProblem("min_prob", pulp.LpMinimize)
varis = [pulp.LpVariable("y%d"%n, lowBound=x[n]-self.table.OwnUB.loc[self.players[n]], upBound=x[n]-self.table.OwnLB.loc[self.players[n]]) for n in range(len(self.players))]
minprob += pulp.lpSum([self.m[n]*varis[n] for n in range(len(self.players))]), "LinearizedObjective"
if self.min_prob_con_factory is not None:
minprob = self.min_prob_con_factory(x, minprob, varis, self.table.loc[self.players])
minprob.solve()
if pulp.LpStatus[minprob.status] != 'Optimal':
raise Exception('Minimization subproblem did not converge')
self.table['RobustOwn'] = np.array([x[n]-float(v.value()) for n, v in enumerate(varis)], dtype=float)
def _solve_maximization(self):
self.prob.solve()
x = [int(self.player_vars[k].varValue == 1) for k in self.players]
xold = len(x) * [0]
ct = 0
# Add cutting planes until two consecutive binary solutions coincide; for 0/1
# vectors with nslots ones, xold . x == nslots exactly when xold == x.
while np.array(xold).dot(x) != self.nslots:
xold = x
self._addQuadraticConstraint(x, ct)
ct += 1
self.prob.solve()
x = [int(self.player_vars[k].varValue == 1) for k in self.players]
self._removeQuadraticConstraints(ct)
return x
def solve(self):
"""
Solves the problem.
Returns:
status - the solver's status, or 'Oscillating' if the outer loop cycled
between two lineups
lineup - the input table subset to the players in the lineup. Sorted if a
'PosNum' column was given.
"""
self.prob.solve()
x = [int(self.player_vars[k].varValue == 1) for k in self.players]
xold = len(x) * [0]
xoldold = len(x) * [0]
while np.array(xold).dot(x) != self.nslots and np.array(xoldold).dot(x) != self.nslots:
xoldold = xold
xold = x
self._solve_minimization(x)
x = self._solve_maximization()
I = []
for v in self.prob.variables():
if v.varValue == 1 and str(v)[0] == 'p':
player = self.rev_player_vars[v.getName()]
I.append(player)
lu = self.table.loc[I]
try:
lu = lu.sort_values(['PosNum','Salary'], ascending=[True,False])
except Exception:
pass
if np.array(xoldold).dot(x) != self.nslots:
return 'Oscillating', lu
return pulp.LpStatus[self.prob.status], lu
def addTableConstraint(self, column, con_type, bound, con_name=None):
"""
Add a constraint on the dot product of a column of the input table with the player selection variables.
Inputs:
column - string
name of the column in the table to use
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Column %s' % column
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addTeamLimitConstraint(self, teams, con_type, bound):
"""
Constrain the number of players from the combination of the teams. Multiple teams are entered
delimited by a '/' as in 'CLE/DET'
Inputs:
teams - string
teams to be constrained delimited by '/'
see addTableConstraint for con_type and bound
"""
teams = teams.split('/')
con_name = 'Team %s Limit %s' % (teams, con_type)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addCustomConstraint(self, func, con_type, bound, con_name=None):
"""
Add a custom constraint built by passing the input table into func. This is a
more general version of addTableConstraint.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addTableConstraint for con_type, bound, and con_name
"""
series = func(self.table)
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
return self.addSeriesConstraint(series, con_type, bound, con_name)
def addTableColumns(self, series, names):
"""
Add the series to self.table with the corresponding column names.
"""
for serie, name in zip(series, names):
self.table[name] = serie
return True
def addSeriesConstraint(self, series, con_type, bound, con_name=None):
"""
Add a constraint on the dot product of a pd.Series with the player selection
variables.
Inputs:
series - pd.Series
the constraint coefficients, with indices matching those of self.table
see addTableConstraint for con_type, bound, and con_name
"""
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def updateObjective(self, series):
"""
Replace the current objective function with the one specified by series.
"""
self.prob.setObjective(pulp.lpSum([series[k]*self.player_vars[k] for k in self.players]) - self.pen*self.z)
def disallowLineup(self, lineup):
"""
Take a lineup DataFrame (subset of self.table) and don't allow this specific lineup.
"""
players = lineup.index.tolist()
con_name = 'BlockLineup %s' % str(players)
self.prob += pulp.lpSum([(k in players)*self.player_vars[k] for k in self.players]) <= self.nslots-1, con_name
def addPlayerConstraint(self, players, con_type, bound, con_name=None):
"""
Make a constraint based on appearances of players.
Inputs:
players - string
the names of players as found in the index of self.table delimited by '/'
see addTableConstraint for con_type, bound, and con_name
"""
if con_name is None:
con_name = 'Players %s' % players
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
players = players.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
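The cuts added by `_addQuadraticConstraint` are tangent planes of the convex penalty q(v) = ||v - w||^2 at the previous binary solution, so each cut under-estimates the quadratic everywhere and is exact at the point that generated it. A small numpy check of that identity, using made-up vectors (not taken from any real slate):

```python
import numpy as np

# Made-up target weights w and a previous binary solution x_prev.
w = np.array([0.6, 0.3, 0.9])
x_prev = np.array([1.0, 0.0, 1.0])
y = x_prev - w  # same y as in _addQuadraticConstraint

def quad(v):
    # true quadratic penalty q(v) = ||v - w||^2
    return float(np.dot(v - w, v - w))

def cut(v):
    # the linear cut, rearranged from 2*y.v - z <= y.y + 2*y.w, i.e. z >= cut(v)
    return float(2.0 * y.dot(v) - y.dot(y) - 2.0 * y.dot(w))

v = np.array([0.0, 1.0, 1.0])
assert cut(v) <= quad(v) + 1e-12                 # tangent never exceeds the quadratic
assert abs(cut(x_prev) - quad(x_prev)) < 1e-12   # and is exact at x_prev
```

Because every cut lies below the true penalty, the relaxed problem over-estimates the objective, and adding cuts at each new solution tightens it until the iterates repeat.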
class CongruentLineupOptimizer(object):
def __init__(self, table, roster_slots, penalty, salary_cap=None):
"""
Makes Lineup Optimization easy.
max s'(x-w) - .5*penalty*||x-w||^2
Input:
table - pd.DataFrame
Contains relevant data on each player. Must include the columns
['Pos','Proj','Salary','Team','Own']. If you want the resulting table sorted,
include a column ['PosNum'] which assigns a numeric value to each position;
for example, in the NBA, 1 through 5 for PG through C.
roster_slots - int
the total number of spaces in the roster
penalty - float
the weight assigned to the penalty term for weight misalignment. Preliminary tests suggest 10 is a
good value for the NBA. A penalty of 0 is equivalent to the MILP formulation solved by the
LineupOptimizer class.
salary_cap - float
The total salary limit of the team, if there is any. Default is None.
"""
self.nslots = roster_slots
self.table = table
self.salary_cap = salary_cap
self.pen = penalty
self.players = table.index.tolist()
self.prob = pulp.LpProblem('Lineup Optimization', pulp.LpMaximize)
self.player_vars = pulp.LpVariable.dicts("p", self.players, 0, 1, 'Binary')
self.rev_player_vars = dict(zip([str(x) for x in self.player_vars.values()], \
self.player_vars.keys()))
self.z = pulp.LpVariable('z')
# Objective
self.prob += pulp.lpSum([self.table.Proj[k]*self.player_vars[k] for k in self.players]) - self.z*self.pen, 'Aggregate Projected Score'
# Salary Cap Constraint
if self.salary_cap is not None:
self.prob += pulp.lpSum([self.table.Salary[k]*self.player_vars[k] for k in self.players]) <= self.salary_cap, 'Salary Cap Constraint'
self.const_tracker = {}
def _addQuadraticConstraint(self, x, iteration):
"""
Add a linear approximation to the quadratic constraint.
x - binary array
solution of the last MILP iteration
iteration - int
the iteration number of the last MILP iteration
"""
con_name = 'QUADCON%d'%iteration
y = np.array([x[n] - self.table.Own.loc[k] for n, k in enumerate(self.players)], dtype=float)
self.prob += 2.*pulp.lpSum([y[n]*self.player_vars[k] for n, k in enumerate(self.players)]) - self.z <= \
y.dot(y) + 2.*sum([y[n]*self.table.Own.loc[k] for n, k in enumerate(self.players)]), con_name
def _removeQuadraticConstraints(self, ct):
"""
Remove all linear approximation quadratic constraints.
"""
for k in range(ct):
con_name = 'QUADCON%d'%k
self.removeConstraint(con_name)
def removeConstraint(self, con_name):
"""
Remove the constraint with name given by con_name.
"""
try:
del self.prob.constraints[con_name]
return True
except KeyError:
return False
def addPositionConstraint(self, pos, con_type, bound, con_name=None):
"""
Constrain the number of position appearances.
Inputs:
pos - string
position of player, if multiple, concatenated with '/'
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Pos <con_type> <pos>'
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Pos %s %s' % (con_type,pos)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
pos = pos.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def solve(self):
"""
Solves the problem.
Returns:
status - the solver's status
lineup - the input table subset to the players in the lineup. Sorted if a
'PosNum' column was given.
"""
self.prob.solve()
x = [int(self.player_vars[k].varValue == 1) for k in self.players]
xold = len(x) * [0]
ct = 0
# Add cutting planes until two consecutive binary solutions coincide; for 0/1
# vectors with nslots ones, xold . x == nslots exactly when xold == x.
while np.array(xold).dot(x) != self.nslots:
xold = x
self._addQuadraticConstraint(x, ct)
ct += 1
self.prob.solve()
x = [int(self.player_vars[k].varValue == 1) for k in self.players]
self._removeQuadraticConstraints(ct)
I = []
for v in self.prob.variables():
if v.varValue == 1 and str(v)[0] == 'p':
player = self.rev_player_vars[v.getName()]
I.append(player)
lu = self.table.loc[I]
try:
lu = lu.sort_values(['PosNum','Salary'], ascending=[True,False])
except Exception:
pass
return pulp.LpStatus[self.prob.status], lu
def addTableConstraint(self, column, con_type, bound, con_name=None):
"""
Add a constraint on the dot product of a column of the input table with the player selection variables.
Inputs:
column - string
name of the column in the table to use
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Column %s' % column
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addTeamLimitConstraint(self, teams, con_type, bound):
"""
Constrain the number of players from the combination of the teams. Multiple teams are entered
delimited by a '/' as in 'CLE/DET'
Inputs:
teams - string
teams to be constrained delimited by '/'
see addTableConstraint for con_type and bound
"""
teams = teams.split('/')
con_name = 'Team %s Limit %s' % (teams, con_type)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addCustomConstraint(self, func, con_type, bound, con_name=None):
"""
Add a custom constraint built by passing the input table into func. This is a
more general version of addTableConstraint.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addTableConstraint for con_type, bound, and con_name
"""
series = func(self.table)
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
return self.addSeriesConstraint(series, con_type, bound, con_name)
def addTableColumns(self, series, names):
"""
Add the series to self.table with the corresponding column names.
"""
for serie, name in zip(series, names):
self.table[name] = serie
return True
def addSeriesConstraint(self, series, con_type, bound, con_name=None):
"""
Add a constraint on the dot product of a pd.Series with the player selection
variables.
Inputs:
series - pd.Series
the constraint coefficients, with indices matching those of self.table
see addTableConstraint for con_type, bound, and con_name
"""
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def updateObjective(self, series):
"""
Replace the current objective function with the one specified by series.
"""
self.prob.setObjective(pulp.lpSum([series[k]*self.player_vars[k] for k in self.players]) - self.pen*self.z)
def disallowLineup(self, lineup):
"""
Take a lineup DataFrame (subset of self.table) and don't allow this specific lineup.
"""
players = lineup.index.tolist()
con_name = 'BlockLineup %s' % str(players)
self.prob += pulp.lpSum([(k in players)*self.player_vars[k] for k in self.players]) <= self.nslots-1, con_name
def addPlayerConstraint(self, players, con_type, bound, con_name=None):
"""
Make a constraint based on appearances of players.
Inputs:
players - string
the names of players as found in the index of self.table delimited by '/'
see addTableConstraint for con_type, bound, and con_name
"""
if con_name is None:
con_name = 'Players %s' % players
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
players = players.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
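The solve loops above stop when `np.array(xold).dot(x)` equals `self.nslots`: two 0/1 selection vectors that each pick exactly `nslots` players are identical exactly when their dot product equals `nslots`. A toy illustration (the vectors below are arbitrary examples):

```python
nslots = 3
a = [1, 0, 1, 1, 0]
b = [1, 0, 1, 1, 0]   # same lineup as a
c = [1, 1, 1, 0, 0]   # overlaps a in only two slots

def dot(u, v):
    # plain dot product of two equal-length lists
    return sum(ui * vi for ui, vi in zip(u, v))

print(dot(a, b), dot(a, c))  # 3 2 -> a == b has converged, a != c has not
```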
#################################################
class LineupOptimizer(object):
def __init__(self, table, roster_slots, salary_cap=None):
"""
Makes Lineup Optimization easy.
Input:
table - pd.DataFrame
Contains relevant data on each player. Must include the columns
['Pos','Proj','Salary','Team']. If you want the resulting table sorted,
include a column ['PosNum'] which assigns a numeric value to each position;
for example, in the NBA, 1 through 5 for PG through C.
roster_slots - int
the total number of spaces in the roster
salary_cap - float
The total salary limit of the team, if there is any. Default is None.
"""
self.nslots = roster_slots
self.table = table
self.salary_cap = salary_cap
self.players = table.index.tolist()
self.prob = pulp.LpProblem('Lineup Optimization', pulp.LpMaximize)
self.player_vars = pulp.LpVariable.dicts("p", self.players, 0, 1, 'Binary')
self.rev_player_vars = dict(zip([str(x) for x in self.player_vars.values()], \
self.player_vars.keys()))
# Objective
self.prob += pulp.lpSum([self.table.Proj[k]*self.player_vars[k] for k in self.players]), 'Aggregate Projected Score'
# Salary Cap Constraint
if self.salary_cap is not None:
self.prob += pulp.lpSum([self.table.Salary[k]*self.player_vars[k] for k in self.players]) <= self.salary_cap, 'Salary Cap Constraint'
self.const_tracker = {}
def removeConstraint(self, con_name):
"""
Remove the constraint with name given by con_name.
"""
try:
del self.prob.constraints[con_name]
return True
except KeyError:
return False
def addPositionConstraint(self, pos, con_type, bound, con_name=None):
"""
Constrain the number of position appearances.
Inputs:
pos - string
position of player, if multiple, concatenated with '/'
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Pos <con_type> <pos>'
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Pos %s %s' % (con_type,pos)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
pos = pos.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def solve(self):
"""
Solves the problem.
Returns:
status - the solver's status
lineup - the input table subset to the players in the lineup. Sorted if a
'PosNum' column was given.
"""
self.prob.solve()
I = []
for v in self.prob.variables():
if v.varValue == 1 and str(v)[0] == 'p':
player = self.rev_player_vars[v.getName()]
I.append(player)
lu = self.table.loc[I]
try:
lu = lu.sort_values(['PosNum','Salary'], ascending=[True,False])
except Exception:
pass
return pulp.LpStatus[self.prob.status], lu
def addTableConstraint(self, column, con_type, bound, con_name=None):
"""
Add a constraint on the dot product of a column of the input table with the player selection variables.
Inputs:
column - string
name of the column in the table to use
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Column %s' % column
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addTeamLimitConstraint(self, teams, con_type, bound):
"""
Constrain the number of players from the combination of the teams. Multiple teams are entered
delimited by a '/' as in 'CLE/DET'
Inputs:
teams - string
teams to be constrained delimited by '/'
see addTableConstraint for con_type and bound
"""
teams = teams.split('/')
con_name = 'Team %s Limit %s' % (teams, con_type)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addCustomConstraint(self, func, con_type, bound, con_name=None):
"""
Add a custom constraint built by passing the input table into func. This is a
more general version of addTableConstraint.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addTableConstraint for con_type, bound, and con_name
"""
series = func(self.table)
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
return self.addSeriesConstraint(series, con_type, bound, con_name)
def addTableColumns(self, series, names):
"""
Add the series to self.table with the corresponding column names.
"""
for serie, name in zip(series, names):
self.table[name] = serie
return True
def addSeriesConstraint(self, series, con_type, bound, con_name=None):
"""
Add a constraint on the dot product of a pd.Series with the player selection
variables.
Inputs:
series - pd.Series
the constraint coefficients, with indices matching those of self.table
see addTableConstraint for con_type, bound, and con_name
"""
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def updateObjective(self, series):
"""
Replace the current objective function with the one specified by series.
"""
self.prob.setObjective(pulp.lpSum([series[k]*self.player_vars[k] for k in self.players]))
def disallowLineup(self, lineup):
"""
Take a lineup DataFrame (subset of self.table) and don't allow this specific lineup.
"""
players = lineup.index.tolist()
con_name = 'BlockLineup %s' % str(players)
self.prob += pulp.lpSum([(k in players)*self.player_vars[k] for k in self.players]) <= self.nslots-1, con_name
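The cut added by disallowLineup works because re-selecting every blocked player would make the left-hand side equal nslots; bounding it by nslots-1 forces at least one substitution. A brute-force check on a hypothetical 4-player, 2-slot pool:

```python
from itertools import combinations

players = ['A', 'B', 'C', 'D']  # hypothetical pool
blocked = {'A', 'B'}            # the lineup to exclude
nslots = 2

# keep every size-2 roster satisfying the cut: sum_{k in blocked} x_k <= nslots - 1
feasible = [set(c) for c in combinations(players, nslots)
            if len(set(c) & blocked) <= nslots - 1]
print(feasible)  # all C(4,2) rosters except {'A', 'B'}
```

Only the exact blocked lineup violates the cut; any roster sharing fewer than nslots players with it remains feasible.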
def addPlayerConstraint(self, players, con_type, bound, con_name=None):
"""
Make a constraint based on appearances of players.
Inputs:
players - string
the names of players as found in the index of self.table delimited by '/'
see addColumnConstraints
"""
if con_name is None:
con_name = 'Players %s ' % players
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
players = players.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
class DNormLineupOptimizer(object):
def __init__(self, table, roster_slots, Gamma, salary_cap=None):
"""
Makes Robust Lineup Optimization easy using the D-Norm with parameter Gamma.
U = {x | xj = xj^ - (xj^-xj_)wj for all j, wj in [0,1], sum(wj) <= Gamma}
Input:
table - pd.DataFrame
Contains relevant data on each player. Must include the columns ['Proj','LB','UB','Pos','Salary','Team']. If you want
the resulting table sorted, include a column ['PosNum'] which assigns a numeric value to each position.
For example, in NBA 1 through 5 for PG through C.
roster_slots - int
the total number of spaces in the roster
Gamma - float
The D-Norm cardinality (budget of uncertainty) parameter
salary_cap - float
The total salary limit of the team, if there is any. Default is None.
"""
self.nslots = roster_slots
self.table = table
self.salary_cap = salary_cap
self.players = table.index.tolist()
n = len(self.players)
self.prob = pulp.LpProblem('Lineup Optimization', pulp.LpMaximize)
self.player_vars = pulp.LpVariable.dicts("p", self.players, 0, 1, 'Binary')
self.rev_player_vars = dict(zip([str(x) for x in self.player_vars.values()], \
self.player_vars.keys()))
y = [pulp.LpVariable('y%d'%d, lowBound=0, cat='Continuous') for d in range(4*n+1)]
# Objective
self.prob += pulp.lpSum([self.table.UB.loc[k]*self.player_vars[k] for k in self.players]) \
- pulp.lpSum(y[2*n:3*n]) - Gamma*y[-1], 'Objective'
# Robust Dual Equality
for k in xrange(n):
self.prob += self.player_vars[self.players[k]] + y[k] - y[n+k] == 0, 'RobustEqX_%d' % k
delta = self.table.UB - self.table.LB
for k in xrange(n):
dlta = delta.loc[self.players[k]]
try:
self.prob += dlta*y[k] - dlta*y[n+k] + y[2*n+k] - y[3*n+k] + y[-1] == 0, \
'RobustDualEq0_%d' % k
except Exception, exc:
print exc.message
print dlta
raise(exc)
# Salary Cap Constraint
if self.salary_cap is not None:
self.prob += pulp.lpSum([self.table.Salary[k]*self.player_vars[k] for k in self.players]) <= self.salary_cap, 'Salary Cap Constraint'
self.const_tracker = {}
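The dual equalities above linearize the adversary's inner problem over U. For intuition, the worst case for a fixed lineup can also be computed greedily: the adversary spends its budget Gamma on the largest UB-LB gaps first. A standalone sketch with made-up numbers, not part of the class:

```python
def robust_score(ub, lb, selected, gamma):
    """Worst-case lineup score over U = {x_j = UB_j - (UB_j - LB_j)*w_j,
    w_j in [0, 1], sum_j w_j <= gamma}: the adversary pushes the largest
    gaps down first until the budget gamma is exhausted."""
    gaps = sorted((ub[j] - lb[j] for j in selected), reverse=True)
    nominal = sum(ub[j] for j in selected)
    budget, drop = float(gamma), 0.0
    for g in gaps:
        w = min(1.0, budget)   # fraction of this gap the adversary can afford
        drop += g * w
        budget -= w
        if budget <= 0:
            break
    return nominal - drop

# hypothetical projections for three selected players
ub, lb = [10.0, 8.0, 6.0], [7.0, 5.0, 5.0]
score = robust_score(ub, lb, [0, 1, 2], gamma=1.5)
print(score)  # -> 19.5 (nominal 24 minus worst-case drop 3 + 0.5*3)
```

Gamma = 0 recovers the nominal UB sum; the LP dual above encodes this same inner maximum for every candidate lineup simultaneously.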
def addPositionConstraint(self, pos, con_type, bound, con_name=None):
"""
Constrain the number of position appearances.
Inputs:
pos - string
position of player, if multiple, concatenated with '/'
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Pos %s %s' % (con_type,pos)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
pos = pos.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def solve(self):
"""
Solves the problem.
Returns:
status - the solver's status
lineup - the input table subset to the players in the lineup. Sorts the lineup if a 'PosNum'
column was given.
"""
self.prob.solve()
# read the selection directly from the player variable dict
I = [p for p in self.players if self.player_vars[p].varValue == 1]
lu = self.table.loc[I]
try:
lu = lu.sort(['PosNum','Salary'], ascending=[True,False])
except Exception, exc:
pass
return pulp.LpStatus[self.prob.status], lu
def addTableConstraint(self, column, con_type, bound, con_name=None):
"""
Add a constraint from the dot product of a column of the input table with the selection variables.
Inputs:
column - string
name of the column in the table to use
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = 'Column %s' % column
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addTeamLimitConstraint(self, teams, con_type, bound):
"""
Constrain the number of players from the combination of the teams. Multiple teams are entered
delimited by a '/' as in 'CLE/DET'
Inputs:
teams - string
teams to be constrained delimited by '/'
see addColumnConstraint
"""
teams = teams.split('/')
con_name = 'Team %s Limit %s' % (teams, con_type)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addCustomConstraint(self, func, con_type, bound, con_name=None):
"""
Add a custom constraint built by applying func to the input table. This is a more
general version of addColumnConstraints.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addColumnConstraints
"""
series = func(self.table)
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
return self.addSeriesConstraint(series, con_type, bound, con_name)
def addTableColumns(self, series, names):
"""
Add the series to self.table with the corresponding column names.
"""
for serie, name in zip(series, names):
self.table[name] = serie
return True
def addSeriesConstraint(self, series, con_type, bound, con_name=None):
"""
Add a custom constraint which results from passing the inputed table into func. This is a more
general version of addColumnConstraints.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addColumnConstraints
"""
if con_name is None:
con_name = 'Custom Func'
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def updateObjective(self, series):
"""
Replace the current objective with a plain linear objective built from series.
Note: the robust dual terms (the y variables and Gamma) are dropped, so the new objective is no longer robust.
"""
self.prob.setObjective(pulp.lpSum([series[k]*self.player_vars[k] for k in self.players]))
def disallowLineup(self, lineup):
"""
Take a lineup DataFrame (subset of self.table) and don't allow this specific lineup.
"""
players = lineup.index.tolist()
con_name = 'BlockLineup %s' % str(players)
self.prob += pulp.lpSum([(k in players)*self.player_vars[k] for k in self.players]) <= self.nslots-1, con_name
def addPlayerConstraint(self, players, con_type, bound, con_name=None):
"""
Make a constraint based on appearances of players.
Inputs:
players - string
the names of players as found in the index of self.table delimited by '/'
see addColumnConstraints
"""
if con_name is None:
con_name = 'Players %s ' % players
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
players = players.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[k] for k in players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
class MultiLineupOptimizer(object):
def __init__(self, table, roster_slots, nlineups=1, salary_cap=None):
"""
Makes Lineup Optimization easy. Sets multiple lineups simultaneously, maximizing the sum of
the projected scores of all of the lineups.
Note for use:
If you want the top N highest-projected lineups, just use the LineupOptimizer in a for-loop
which calls disallowLineup(lu) on each output optimal lineup.
This optimizer is only beneficial when there are constraints on the number of appearances of
players across lineups; otherwise it will just return the same lineup several times. For
example, if we want 3 lineups and each player may appear at most twice, this is the
appropriate optimizer to use.
Input:
table - pd.DataFrame
Contains relevant data on each player. Must include the columns ['Proj','Pos','Salary','Team']. If you want
the resulting table sorted, include a column ['PosNum'] which assigns a numeric value to each position.
For example, in NBA 1 through 5 for PG through C.
roster_slots - int
the total number of spaces in the roster
nlineups - int
the number of lineups to set simultaneously
salary_cap - float
The total salary limit of the team, if there is any. Default is None.
"""
self.nslots = roster_slots
self.table = table
self.salary_cap = salary_cap
self.nlineups = nlineups
self.players = table.index.tolist()
self.prob = pulp.LpProblem('Lineup Optimization', pulp.LpMaximize)
self.player_vars = []
for k in xrange(self.nlineups):
self.player_vars.append(pulp.LpVariable.dicts("p%d" % k, self.players, 0, 1, 'Binary'))
self.rev_player_vars = dict(zip([str(x) for y in self.player_vars for x in y.values()], \
[x for y in self.player_vars for x in y.keys()]))
# Objective
self.prob += pulp.lpSum([self.table.Proj[k]*self.player_vars[j][k] for k in self.players for j in range(self.nlineups)]), 'Aggregate Projected Score'
# Salary Cap Constraint
if self.salary_cap is not None:
for j in xrange(self.nlineups):
self.prob += pulp.lpSum([self.table.Salary[k]*self.player_vars[j][k] for k in self.players]) <= self.salary_cap, 'Salary Cap Constraint %d' % j
self.const_tracker = {}
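As the note above says, joint optimization only pays off when appearance caps bind. A brute-force sketch of that trade-off on a hypothetical five-player pool, enumerating lineups directly rather than solving the LP:

```python
from itertools import combinations, combinations_with_replacement
from collections import Counter

proj = {'A': 9, 'B': 8, 'C': 7, 'D': 6, 'E': 5}  # hypothetical projections
slates = list(combinations(sorted(proj), 2))     # all two-player lineups

def best_total(nlineups=3, max_appearances=2):
    """Max aggregate projection over nlineups lineups with a per-player cap."""
    best = 0
    for pick in combinations_with_replacement(slates, nlineups):
        counts = Counter(p for lu in pick for p in lu)
        if max(counts.values()) > max_appearances:
            continue  # some player appears too often across the lineups
        best = max(best, sum(proj[p] for lu in pick for p in lu))
    return best
```

With no binding cap the best lineup just repeats; with a cap of two, the optimum mixes lineups such as {A,B}, {A,C}, {B,C}. The class above reaches the same kind of optimum with a single solve via the summed objective and aggregate constraints.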
def addPositionConstraint(self, pos, con_type, bound, lineup=0, con_name=None):
"""
Constrain the number of position appearances.
Inputs:
pos - string
position of player, if multiple, concatenated with '/'
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
lineup - int
index of the lineup to set the constraint to.
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = '%d Pos %s %s' % (lineup,con_type,pos)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
pos = pos.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def solve(self):
"""
Solves the problem.
Returns:
status - the solver's status
lineups - a list of tables, one per lineup, each the input table subset to that lineup's
players. Each lineup is sorted if a 'PosNum' column was given.
"""
self.prob.solve()
lus = []
for k in xrange(self.nlineups):
# read each lineup from its own variable dict instead of relying on the
# name-sorted ordering of self.prob.variables()
I = [p for p in self.players if self.player_vars[k][p].varValue == 1]
lu = self.table.loc[I]
try:
lu = lu.sort(['PosNum','Salary'], ascending=[True,False])
except Exception, exc:
pass
lus.append(lu)
return pulp.LpStatus[self.prob.status], lus
def addTableConstraint(self, column, con_type, bound, lineup=0, con_name=None):
"""
Add a constraint from the dot product of a column of the input table with the selection variables.
Inputs:
column - string
name of the column in the table to use
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
lineup - int
index of the lineup to apply the constraint to
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = '%d Column %s' % (lineup,column)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addTeamLimitConstraint(self, teams, con_type, bound, lineup=0):
"""
Constrain the number of players from the combination of the teams. Multiple teams are entered
delimited by a '/' as in 'CLE/DET'
Inputs:
teams - string
teams to be constrained delimited by '/'
see addColumnConstraint
"""
teams = teams.split('/')
con_name = '%d Team %s Limit %s' % (lineup, teams, con_type)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addCustomConstraint(self, func, con_type, bound, lineup=0, con_name=None):
"""
Add a custom constraint built by applying func to the input table. This is a more
general version of addColumnConstraints.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addColumnConstraints
"""
series = func(self.table)
if con_name is None:
con_name = '%d Custom Func ' % lineup
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
return self.addSeriesConstraint(series, con_type, bound, lineup, con_name)
def addTableColumns(self, series, names):
"""
Add the series to self.table with the corresponding column names.
"""
for serie, name in zip(series, names):
self.table[name] = serie
return True
def addSeriesConstraint(self, series, con_type, bound, lineup=0, con_name=None):
"""
Add a custom constraint which results from passing the inputed table into func. This is a more
general version of addColumnConstraints.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addColumnConstraints
"""
if con_name is None:
con_name = '%d Custom Func ' % lineup
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def updateObjective(self, series):
"""
Replace the current objective function with the one specified by series.
"""
self.prob.setObjective(pulp.lpSum([series[k]*self.player_vars[j][k] for k in self.players for j in range(self.nlineups)]))
def disallowLineup(self, lineup):
"""
Take a lineup DataFrame (subset of self.table) and don't allow this specific lineup.
"""
players = lineup.index.tolist()
con_name = '%%d BlockLineup %s' % str(players)
for j in xrange(self.nlineups):
self.prob += pulp.lpSum([(k in players)*self.player_vars[j][k] for k in self.players]) <= self.nslots-1, con_name % j
def addPlayerConstraint(self, players, con_type, bound, lineup=0, con_name=None):
"""
Make a constraint based on appearances of players.
Inputs:
players - string
the names of players as found in the index of self.table delimited by '/'
lineup - int
if lineup >= 0, then it is the index of the lineup to which the constraint should
be added. if lineup==-1, then the aggregate of all lineups will be constrained.
e.g. addPlayerConstraint('LeBron James/Anthony Davis', 'eq', 3, lineup=-1,
con_name='LJ/AD eq 3') will make it so that in total of all the lineups,
LeBron James + Anthony Davis will occur exactly 3 times.
see addColumnConstraints
"""
if con_name is None:
con_name = '%d Players %s ' % (lineup, players)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
players = players.split('/')
if lineup >= 0:
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[lineup][k] for k in players]) == bound,\
con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[lineup][k] for k in players]) <= bound,\
con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[lineup][k] for k in players]) >= bound,\
con_name
return True
elif lineup == -1:
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[j][k] for k in players
for j in range(self.nlineups)]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[j][k] for k in players
for j in range(self.nlineups)]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[j][k] for k in players
for j in range(self.nlineups)]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
####################################################################################################
class RobustMultiLineupOptimizer(object):
def __init__(self, table, roster_slots, Gamma, nlineups=1, salary_cap=None):
"""
Makes Lineup Optimization easy. Sets multiple lineups simultaneously, maximizing the sum of
the projected scores of all of the lineups. Uses the D-Norm from RobustLineupOptimizer.
Note for use:
If you want the top N highest-projected lineups, just use the LineupOptimizer in a for-loop
which calls disallowLineup(lu) on each output optimal lineup.
This optimizer is only beneficial when there are constraints on the number of appearances of
players across lineups; otherwise it will just return the same lineup several times. For
example, if we want 3 lineups and each player may appear at most twice, this is the
appropriate optimizer to use.
Input:
table - pd.DataFrame
Contains relevant data on each player. Must include the columns ['LB','UB','Pos','Salary','Team']. If you want
the resulting table sorted, include a column ['PosNum'] which assigns a numeric value to each position.
For example, in NBA 1 through 5 for PG through C.
roster_slots - int
the total number of spaces in the roster
Gamma - float
Robustness parameter. See RobustLineupOptimizer for more information.
nlineups - int
the number of lineups to set simultaneously
salary_cap - float
The total salary limit of the team, if there is any. Default is None.
"""
self.nslots = roster_slots
self.table = table
self.salary_cap = salary_cap
self.nlineups = nlineups
self.players = table.index.tolist()
n = len(self.players)
self.prob = pulp.LpProblem('Lineup Optimization', pulp.LpMaximize)
self.player_vars = []
for k in xrange(self.nlineups):
self.player_vars.append(pulp.LpVariable.dicts("p%d" % k, self.players, 0, 1, 'Binary'))
self.rev_player_vars = dict(zip([str(x) for y in self.player_vars for x in y.values()], \
[x for y in self.player_vars for x in y.keys()]))
y = [pulp.LpVariable('y%d'%d, lowBound=0, cat='Continuous') for d in range(4*n+1)]
# Objective
self.prob += pulp.lpSum([self.table.UB.loc[k]*self.player_vars[j][k] for k in self.players for j in range(self.nlineups)]) \
- pulp.lpSum(y[2*n:3*n]) - Gamma*y[-1], 'Objective'
# Robust Dual Equality
for k in xrange(n):
self.prob += pulp.lpSum([self.player_vars[j][self.players[k]]
for j in range(self.nlineups)]) + y[k] - y[n+k] == 0, 'RobustEqX_%d' % k
delta = self.table.UB - self.table.LB
for k in xrange(n):
dlta = delta.loc[self.players[k]]
try:
self.prob += dlta*y[k] - dlta*y[n+k] + y[2*n+k] - y[3*n+k] + y[-1] == 0, \
'RobustDualEq0_%d' % k
except Exception, exc:
print exc.message
print dlta
raise(exc)
# Salary Cap Constraint
if self.salary_cap is not None:
for j in xrange(self.nlineups):
self.prob += pulp.lpSum([self.table.Salary[k]*self.player_vars[j][k] for k in self.players]) <= self.salary_cap, 'Salary Cap Constraint %d' % j
self.const_tracker = {}
def addPositionConstraint(self, pos, con_type, bound, lineup=0, con_name=None):
"""
Constrain the number of position appearances.
Inputs:
pos - string
position of player, if multiple, concatenated with '/'
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
lineup - int
index of the lineup to set the constraint to.
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = '%d Pos %s %s' % (lineup,con_type,pos)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
pos = pos.split('/')
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table['Pos'].loc[k] in pos)*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def solve(self):
"""
Solves the problem.
Returns:
status - the solver's status
lineups - a list of tables, one per lineup, each the input table subset to that lineup's
players. Each lineup is sorted if a 'PosNum' column was given.
"""
self.prob.solve()
lus = []
for k in xrange(self.nlineups):
# read each lineup from its own variable dict instead of relying on the
# name-sorted ordering of self.prob.variables()
I = [p for p in self.players if self.player_vars[k][p].varValue == 1]
lu = self.table.loc[I]
try:
lu = lu.sort(['PosNum','Salary'], ascending=[True,False])
except Exception, exc:
pass
lus.append(lu)
return pulp.LpStatus[self.prob.status], lus
def addTableConstraint(self, column, con_type, bound, lineup=0, con_name=None):
"""
Add a constraint from the dot product of a column of the input table with the selection variables.
Inputs:
column - string
name of the column in the table to use
con_type - string
{'le','eq','ge'}
bound - float
the rhs of the constraint
lineup - int
index of the lineup to apply the constraint to
con_name - string
the name to assign the constraint. If not given, defaults to 'Column %s' % column
Returns:
True if successful, False otherwise
"""
if con_name is None:
con_name = '%d Column %s' % (lineup,column)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.table[column].loc[k]*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addTeamLimitConstraint(self, teams, con_type, bound, lineup=0):
"""
Constrain the number of players from the combination of the teams. Multiple teams are entered
delimited by a '/' as in 'CLE/DET'
Inputs:
teams - string
teams to be constrained delimited by '/'
see addColumnConstraint
"""
teams = teams.split('/')
con_name = '%d Team %s Limit %s' % (lineup, teams, con_type)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([(self.table.loc[k]['Team'] in teams)*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise(Exception('%s is not a valid input for con_type' % con_type))
def addCustomConstraint(self, func, con_type, bound, lineup=0, con_name=None):
"""
Add a custom constraint built by applying func to the input table. This is a more
general version of addColumnConstraints.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addColumnConstraints
"""
series = func(self.table)
if con_name is None:
con_name = '%d Custom Func ' % lineup
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
return self.addSeriesConstraint(series, con_type, bound, lineup, con_name)
def addTableColumns(self, series, names):
"""
Add the series to self.table with the corresponding column names.
"""
for serie, name in zip(series, names):
self.table[name] = serie
return True
def addSeriesConstraint(self, series, con_type, bound, lineup=0, con_name=None):
"""
Add a custom constraint which results from passing the inputed table into func. This is a more
general version of addColumnConstraints.
Inputs:
func - function
Takes self.table as an input and returns a pd.Series with indices matching that of self.table
see addColumnConstraints
"""
if con_name is None:
con_name = '%d Custom Func %%d' % lineup
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
if con_type == 'eq':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[lineup][k] for k in self.players]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[lineup][k] for k in self.players]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([series.loc[k]*self.player_vars[lineup][k] for k in self.players]) >= bound, con_name
return True
raise ValueError('%s is not a valid input for con_type' % con_type)
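# Once the solver fixes the binary player_vars, an 'le' series constraint is
# just a weighted-sum bound. A standalone sketch with made-up data (plain
# dicts stand in for the pd.Series and the LP variables):

```python
series = {'A': 8000, 'B': 7500, 'C': 6000}   # e.g. a salary column (made up)
chosen = {'A': 1, 'B': 0, 'C': 1}            # a candidate 0/1 assignment
total = sum(series[k] * chosen[k] for k in series)
satisfies_le = total <= 15000                # the 'le' bound check
```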
def updateObjective(self, series):
"""
Replace the current objective function with the one specified by series.
"""
self.prob.setObjective(pulp.lpSum([series[k]*self.player_vars[j][k] for k in self.players for j in range(self.nlineups)]))
def disallowLineup(self, lineup):
"""
Take a lineup DataFrame (a subset of self.table) and forbid this specific lineup.
"""
players = lineup.index.tolist()
con_name = '%%d BlockLineup %s' % str(players)
for j in range(self.nlineups):
self.prob += pulp.lpSum([(k in players)*self.player_vars[j][k] for k in self.players]) <= self.nslots-1, con_name % j
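# The constraint above says: of the blocked players, at most nslots-1 may
# appear together, so the exact blocked lineup becomes infeasible while any
# lineup sharing fewer players stays legal. A standalone sketch (function
# name is hypothetical):

```python
def lineup_allowed(candidate, blocked, nslots):
    # A lineup is excluded only if it contains all nslots blocked players;
    # overlapping on up to nslots-1 players is still allowed.
    return len(set(candidate) & set(blocked)) <= nslots - 1
```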
def addPlayerConstraint(self, players, con_type, bound, lineup=0, con_name=None):
"""
Make a constraint based on appearances of players.
Inputs:
players - string
the names of players as found in the index of self.table delimited by '/'
lineup - int
if lineup >= 0, then it is the index of the lineup to which the constraint should
be added. If lineup == -1, then the aggregate of all lineups is constrained.
e.g. addPlayerConstraint('LeBron James/Anthony Davis', 'eq', 3, lineup=-1,
con_name='LJ/AD eq 3') ensures that, across all lineups combined,
LeBron James and Anthony Davis appear exactly 3 times in total.
see addColumnConstraints
"""
if con_name is None:
con_name = '%d Players %s %%d' % (lineup, players)
try:
self.const_tracker[con_name] += 1
except KeyError:
self.const_tracker[con_name] = 0
con_name = '%s%d' % (con_name , self.const_tracker[con_name])
players = players.split('/')
if lineup >= 0:
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[lineup][k] for k in players]) == bound,\
con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[lineup][k] for k in players]) <= bound,\
con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[lineup][k] for k in players]) >= bound,\
con_name
return True
elif lineup == -1:
if con_type == 'eq':
self.prob += pulp.lpSum([self.player_vars[j][k] for k in players
for j in range(self.nlineups)]) == bound, con_name
return True
elif con_type == 'le':
self.prob += pulp.lpSum([self.player_vars[j][k] for k in players
for j in range(self.nlineups)]) <= bound, con_name
return True
elif con_type == 'ge':
self.prob += pulp.lpSum([self.player_vars[j][k] for k in players
for j in range(self.nlineups)]) >= bound, con_name
return True
raise ValueError('%s is not a valid input for con_type' % con_type)
# kite_api/kite.py
"""
##This file includes the modification where lines 233-294 were added so that orders
##are executed even when action == 'do nothing'
action = action from the neural net
action_1 = action of kite.py
action_2 = action from orders_q
action_stag = resulting action from the stagnation params
action_stag_1 = resulting action from the stag() function
#12345 search for the lines where the mods happened
#123456:
1. self.quantity > self.min_quantity
2. actions_queue.append moved to the profits function from orders_q
3. Moved self.quantity += units and -= units to the profits function
#789
1. Muted the 'buy' and 'sell' in actions_queue[-4:]
2. Added units = abs(self.quantity) - self.buffer_quantity
"""
import logging
import statistics
from datetime import datetime
from kiteconnect import KiteConnect
from kite_api.kite_modes import CNCMode, GTTMode, MISMode
#Function 2 -- where investment() is defined for scenarios where you bought but haven't sold yet and want to know whether you are profitable
import statistics as s
##import rocp variable
#import sys
#sys.path.append(".")
#from corrector import Corrector
#corrector = Corrector ("ROCP", "60", "MIS")
#cost_factor_1, cost_factor_2, rocp, pull = corrector._rocp()
#print(rocp)
##Replaced units with units and units
units_factor = 1
#units = 1*units_factor
#units = 1*units_factor
#units = 1*units_factor
max_total_sell = 50
STAG = {
"MIS":1
,
"CNC":1
}
LT_2 = {
"MIS":0
,
"CNC":0
}
PULL = {
"MIS":0
,
"CNC":0
}
COST_FACTORS = {
"MIS": {
"BUY": 0.00125,
"SELL": 0.00125,
},
"CNC": {
"BUY": 0.003,
"SELL": 0.0025,
},
"CNC_GTT": {
"BUY": 0.0035,
"SELL": 0.0035,
},
}
D_Factor = 0 #15.94
cost_factor_4 = 0
logging.basicConfig(level=logging.DEBUG)
class KiteTrader:
def __init__(self, agent, client, order_frequency,
product_mode="MIS"):
self.agent = agent
self.max_buy = self.agent.max_buy
self.max_sell = self.agent.max_sell
self.balance = self.agent.initial_money
self.queue_size = self.agent.window_size + 1
self._queue = []
#1234
self.action_window = 10
self.o_window = 5
self.buy_price_queue= []
self.sell_price_queue= []
self.actions_queue = []
# action_2 = "None"
# action_4 = "None"
# action_1 = "None"
# cost = 0
self.close_data = 0
self.min_quantity = 5
self.target_profit = float(0)
self.units_1 = 1
self.units = 0
# trade_data = 0
# units_factor = 1
# units = 1*units_factor
# units = 1*units_factor
# units = 1
self.quantity = 0
self.buffer_quantity = 1
self.inventory = []
#123
self.inventory_sell = []
self.bought_price = 0
self.sold_price = 0
#1234
# action_1= ""
# action_2 = ""
# trade_data = ""
self.kite_client = client
self.order_frequency_min = order_frequency
self.product_mode = product_mode
if self.product_mode == 'CNC':
self.order_maker = CNCMode(self.kite_client)
elif self.product_mode == 'CNC_GTT':
self.order_maker = GTTMode(self.kite_client)
# else MIS mode
else:
self.order_maker = MISMode(self.kite_client)
def reset(self):
self.balance = self.agent.initial_money
self.inventory = []
self.quantity = 0
def trade(self, data):
action_1 = "None"
action_stag = "None_wait"
self.close_data = data[0]
if len(self._queue) >= self.queue_size:
self._queue.pop(0)
self._queue.append(self.close_data)
if len(self._queue) < self.queue_size:
return {
'status': 'data not enough to trade',
'action': 'fail',
'timestamp': str(datetime.now()),
}
predicted_action, buy = self.agent.predict(self._queue)
cost = self._queue[-1]
action, self.units = self._trade_on_prediction(predicted_action, buy, cost)
mode = self.product_mode.split('_')[0]
if action == 'buy':
if LT_2[self.product_mode] == 1:
if PULL[self.product_mode] == 1:
if STAG[self.product_mode] == 0:
action_stag = "buy_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif STAG[self.product_mode] == 1:
action_1 = "buy"
buy_price = round(cost * (1 - COST_FACTORS[self.product_mode]["BUY"]),1) - cost_factor_4
trade_data = buy_price
self.orders_q(action_1, trade_data)
# action_2 = self.orders_q(action_1, trade_data)
# if action_2 == "buy":
# try:
# order_id = self.order_maker.buy(buy_price, units, self.close_data)
# except Exception as e:
# msg = "Order placement failed: {}".format(e)
# logging.error(msg)
# return {
# 'error': msg,
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# #123 Added self.sold_price
# total_buy, self.sold_price = self.commit_buy(units, buy_price)
# #123 Added self.sold_price
# try:
# invest = ((total_buy - self.sold_price) / self.sold_price) * 100
# except:
# invest = 0
# msg = 'place an order %s to buy %d units at price %f' % (order_id, units, total_buy)
# print(msg)
# return {
# 'status': msg,
# 'units': units,
# #123
# 'investment': invest,
# ##improve the the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price . because you can't use the logic of 'if len(self.inventory) < 0:' here in return statement (29_oct_20)
# 'gain': total_buy - self.sold_price,
# 'action': 'buy',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# 'order_id': order_id,
# }
# else:
# print('do nothing')
#sell_call
elif action == 'sell':
if LT_2[self.product_mode] == 2:
if PULL[self.product_mode] == 2 :
if STAG[self.product_mode] == 0:
action_stag = "sell_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif STAG[self.product_mode] == 1:
action_1= "sell"
sell_price = round(cost * (1 + COST_FACTORS[self.product_mode]["SELL"]),1) + cost_factor_4
trade_data = sell_price
self.orders_q(action_1, trade_data)
# action_2 = self.orders_q(action_1, trade_data)
# if action_2 == "sell":
# try:
# order_id = self.order_maker.sell(sell_price, units, self.close_data)
# except Exception as e:
# msg = "Order placement failed: {}".format(e)
# logging.error(msg)
# return {
# 'error': msg,
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# ##do we need a bought price ? Needs to remove it (29_oct_20)
# total_sell, self.bought_price = self.commit_sell(units, sell_price)
# ##Ading a logic to improve the code when inventory <0
# ##
# try:
# invest = ((total_sell - self.bought_price) / self.bought_price) * 100
# except:
# invest = 0
# msg = 'place an order %s to sell %d units at price %f' % (order_id, units, total_sell)
# print(msg)
# return {
# 'status': msg,
# 'units': units,
# 'investment': invest,
# ##improve the the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price . because you can't use the logic of 'if len(self.inventory) < 0:' here in return statement (29_oct_20)
# 'gain': total_sell - self.bought_price,
# 'balance': self.balance,
# 'action': 'sell',
# 'timestamp': str(datetime.now()),
# 'order': order_id,
# }
# else:
# print('do nothing')
##Added another pair of Pull & 'buy' and 'sell' api to this
##Removed
#return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
#}
elif LT_2[self.product_mode] == 1:
if PULL[self.product_mode] == 1:
if STAG[self.product_mode] == 0:
action_stag = "buy_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif STAG[self.product_mode] == 1:
action_1= "buy"
buy_price = round(cost * (1 - COST_FACTORS[self.product_mode]["BUY"]),1) - cost_factor_4
trade_data = buy_price
self.orders_q(action_1, trade_data)
# action_2 = self.orders_q(action_1, trade_data)
# if action_2 == "buy":
# try:
# order_id = self.order_maker.buy(buy_price, units, self.close_data)
# except Exception as e:
# msg = "Order placement failed: {}".format(e)
# logging.error(msg)
# return {
# 'error': msg,
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# #123 Added self.sold_price
# total_buy, self.sold_price = self.commit_buy(units, buy_price)
# #123 Added self.sold_price
# try:
# invest = ((total_buy - self.sold_price) / self.sold_price) * 100
# except:
# invest = 0
# msg = 'place an order %s to buy %d units at price %f' % (order_id, units, total_buy)
# print(msg)
# return {
# 'status': msg,
# 'units': units,
# #123
# 'investment': invest,
# ##improve the the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price . because you can't use the logic of 'if len(self.inventory) < 0:' here in return statement (29_oct_20)
# 'gain': total_buy - self.sold_price,
# 'action': 'buy',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# 'order_id': order_id,
# }
# else:
# print('do nothing')
#sell_call
elif LT_2[self.product_mode] == 2:
if PULL[self.product_mode] == 2:
if STAG[self.product_mode] == 0:
action_stag = "sell_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif STAG[self.product_mode] == 1:
action_1 = "sell"
sell_price = round(cost * (1 + COST_FACTORS[self.product_mode]["SELL"]),1) + cost_factor_4
trade_data = sell_price
self.orders_q(action_1, trade_data)
# action_2 = self.orders_q(action_1, trade_data)
# if action_2 == "sell":
# try:
# order_id = self.order_maker.sell(sell_price, units, self.close_data)
# except Exception as e:
# msg = "Order placement failed: {}".format(e)
# logging.error(msg)
# return {
# 'error': msg,
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# ##do we need a bought price ? Needs to remove it (29_oct_20)
# total_sell, self.bought_price = self.commit_sell(units, sell_price)
# ##Ading a logic to improve the code when inventory <0
# ##
# try:
# invest = ((total_sell - self.bought_price) / self.bought_price) * 100
# except:
# invest = 0
# msg = 'place an order %s to sell %d units at price %f' % (order_id, units, total_sell)
# print(msg)
# return {
# 'status': msg,
# 'units': units,
# 'investment': invest,
# ##improve the the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price . because you can't use the logic of 'if len(self.inventory) < 0:' here in return statement (29_oct_20)
# 'gain': total_sell - self.bought_price,
# 'balance': self.balance,
# 'action': 'sell',
# 'timestamp': str(datetime.now()),
# 'order': order_id,
# }
# else:
# print('do nothing')
## Added a buy and sell call simultaneously to LT_2 = 0 added PULL == 1, 2, -1, -2
elif LT_2[self.product_mode] == 0:
if PULL[self.product_mode] == -1 or PULL[self.product_mode] == 1 :
if STAG[self.product_mode] == 0:
action_stag = "buy_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif STAG[self.product_mode] == 1:
action_1= "buy"
buy_price = round(cost * (1 - COST_FACTORS[self.product_mode]["BUY"]),1) - cost_factor_4
trade_data = buy_price
self.orders_q(action_1, trade_data)
# action_2 = self.orders_q(action_1, trade_data)
# if action_2 == "buy":
# try:
# order_id = self.order_maker.buy(buy_price, units, self.close_data)
# except Exception as e:
# msg = "Order placement failed: {}".format(e)
# logging.error(msg)
# return {
# 'error': msg,
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# #123 Added self.sold_price
# total_buy, self.sold_price = self.commit_buy(units, buy_price)
# #123 Added self.sold_price
# try:
# invest = ((total_buy - self.sold_price) / self.sold_price) * 100
# except:
# invest = 0
# msg = 'place an order %s to buy %d units at price %f' % (order_id, units, total_buy)
# print(msg)
# return {
# 'status': msg,
# 'units': units,
# #123
# 'investment': invest,
# ##improve the the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price . because you can't use the logic of 'if len(self.inventory) < 0:' here in return statement (29_oct_20)
# 'gain': total_buy - self.sold_price,
# 'action': 'buy',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# 'order_id': order_id,
# }
# else:
# print('do nothing')
#sell_call
elif PULL[self.product_mode] == -2 or PULL[self.product_mode] == 2 :
if STAG[self.product_mode] == 0:
action_stag = "sell_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif STAG[self.product_mode] == 1:
action_1= "sell"
sell_price = round(cost * (1 + COST_FACTORS[self.product_mode]["SELL"]),1) + cost_factor_4
trade_data = sell_price
self.orders_q(action_1, trade_data)
# action_2 = self.orders_q(action_1, trade_data)
# if action_2 == "sell":
# try:
# order_id = self.order_maker.sell(sell_price, units, self.close_data)
# except Exception as e:
# msg = "Order placement failed: {}".format(e)
# logging.error(msg)
# return {
# 'error': msg,
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# ##do we need a bought price ? Needs to remove it (29_oct_20)
# total_sell, self.bought_price = self.commit_sell(units, sell_price)
# ##Ading a logic to improve the code when inventory <0
# ##
# try:
# invest = ((total_sell - self.bought_price) / self.bought_price) * 100
# except:
# invest = 0
# msg = 'place an order %s to sell %d units at price %f' % (order_id, units, total_sell)
# print(msg)
# return {
# 'status': msg,
# 'units': units,
# 'investment': invest,
# ##improve the the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price . because you can't use the logic of 'if len(self.inventory) < 0:' here in return statement (29_oct_20)
# 'gain': total_sell - self.bought_price,
# 'balance': self.balance,
# 'action': 'sell',
# 'timestamp': str(datetime.now()),
# 'order': order_id,
# }
# else:
# print('do nothing')
elif LT_2[self.product_mode] == 1:
if PULL[self.product_mode] == 0:
if STAG[self.product_mode] == 1:
action_stag = "buy_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
elif LT_2[self.product_mode] == 2:
if PULL[self.product_mode] == 0:
if STAG[self.product_mode] == 1:
action_stag = "sell_wait"
trade_data_stag = cost
self.stag(action_stag, trade_data_stag)
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# elif PULL[self.product_mode] == 0:
# return {
# 'status': 'do nothing',
# 'action': 'nothing',
# 'balance': self.balance,
# 'timestamp': str(datetime.now()),
# }
# Is the last data value really a cost?
def _trade_on_prediction(self, action, buy, cost):
if action == 1 and self.balance >= cost:
if buy < 0:
buy = self.units_1 #1
if buy > self.max_buy:
buy = self.units_1 #self.max_buy
else:
buy = self.units_1
buy_units = buy
# buy_units = self.units_1
return "buy", buy_units
#changed from 'len(self.inventory) > 0' to 'len(self.inventory) > max_total_sell' (29_oct_20)
elif action == 2 and len(self.inventory_sell) >= 0 and self.max_sell > 0:
if self.quantity > self.max_sell:
sell_units = self.units_1 #self.max_sell
else:
sell_units = self.units_1 #self.quantity
# sell_units = self.units_1
return "sell", sell_units
return "do nothing", 0
def commit_buy(self, units, buy_price):
# global self.sold_price
# total_buy = units * buy_price * (1 + CORR_FACTOR)
#123456 Moved to profits "buy"
# self.quantity += units
total_buy = units * buy_price
self.balance -= total_buy
# self.inventory.append(total_buy)
# 123 Added logic
# for i in self.inventory():
# if self.inventory[i] < 0:
# self.sold_price = self.inventory.pop(-1)
# elif self.inventory[i] >= 0:
# self.inventory.append(total_buy)
# self.sold_price = 0
# 123 Added logic modified
# for i in self.inventory() and self.inventory_sell():
# if self.inventory[i] >= 0 and self.inventory_sell[i] == 0:
# self.inventory.append(total_buy)
# self.sold_price = 0
# elif self.inventory[i] >= 0 and self.inventory_sell[i] > 0:
# self.inventory.append(total_buy)
# for v in self.inventory_sell():
# if total_buy in v:
# self.sold_price = v.remove(total_buy)
# else:
# self.sold_price = self.inventory_sell.pop(0)
# elif self.inventory[i] > 0 and self.inventory_sell[i] > 0:
self.inventory.append(total_buy)
if len(self.inventory_sell) == 0:
self.sold_price = 0
elif len(self.inventory_sell) > 0:
self.sold_price = self.inventory_sell.pop(0)
# for item in enumerate(self.inventory_sell):
# if int(total_buy) in item:
# self.sold_price = item.remove(total_buy)
# else:
# self.sold_price = self.inventory.pop(0)
# self.quantity += units
return total_buy, self.sold_price
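# commit_buy above records the buy and matches it against the oldest
# outstanding sell (FIFO) so a realized gain can be computed later. A
# standalone sketch of that matching (function name is hypothetical):

```python
def commit_buy_sketch(inventory, inventory_sell, total_buy):
    # Record the buy, then match it against the oldest outstanding
    # sell (FIFO), or 0 if there is none, mirroring commit_buy.
    inventory.append(total_buy)
    sold_price = inventory_sell.pop(0) if inventory_sell else 0
    return total_buy, sold_price
```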
def commit_sell(self, units, sell_price):
# global self.bought_price
# Moved to profits "sell"
# self.quantity -= units
# total_sell = units * sell_price * (1 - CORR_FACTOR)
total_sell = units * sell_price
# For kite CNC
# total_sell = (units * sell_price) - D_FACTOR
self.balance += total_sell
# self.bought_price = self.inventory.pop(0)
# 123 Added logic
# for i in self.inventory():
# if self.inventory[i] > 0:
# self.bought_price = self.inventory.pop(0)
# elif self.inventory[i] <= 0:
# self.inventory.append(-total_sell)
# self.bought_price = 0
# 123 Added logic modified
# Replaced key with i=index for list[] function
# for i in self.inventory_sell() and self.inventory():
# if self.inventory_sell[i] >= 0 and self.inventory[i] == 0:
# self.inventory_sell.append(total_sell)
# self.bought_price = 0
# elif self.inventory_sell[i] >= 0 and self.inventory[i] > 0:
# self.inventory_sell.append(total_sell)
# for v in self.inventory():
# if total_sell in v:
# self.bought_price = v.remove(total_sell)
# else:
# self.bought_price = self.inventory.pop(0)
self.inventory_sell.append(total_sell)
if len(self.inventory) == 0:
self.bought_price = 0
elif len(self.inventory) > 0:
self.bought_price = self.inventory.pop(0)
# for item in enumerate(self.inventory):
# if int(total_sell) in item:
# self.bought_price = item.remove(total_sell)
# else:
# self.bought_price = self.inventory.pop(0)
# self.quantity -= units
return total_sell, self.bought_price
## Check the order status and avoid making repeated orders at the same price.
def orders_q(self, action_1, trade_data):
# global action_2
# self.action_window = 6
# window = 3
# self. self.buy_price_queue= []
# self. self.sell_price_queue= []
# self.actions_queue = []
# action_1= action_1
action_2 = "None"
if len(self.actions_queue) <= self.action_window:
self.actions_queue.append(action_1)
if action_1 == "buy":
print("O_1")
# buy_price = trade_data
if len(self.buy_price_queue) <= self.o_window :
self.buy_price_queue.append(trade_data)
action_2 = "buy"
elif action_1== "sell":
print("O_2")
# sell_price = trade_data
if len(self.buy_price_queue) <= self.o_window :
self.sell_price_queue.append(trade_data)
action_2 = "sell"
elif len(self.actions_queue) >= self.action_window + 1 :
#123456 Moved to next
# self.actions_queue.append(action_1)
# self.actions_queue.pop(0)
if action_1== "buy":
# buy_price = trade_data
if len(self.buy_price_queue) <= self.o_window :
print("O_3")
self.buy_price_queue.append(trade_data)
# self.buy_price_queue.pop(0)
action_2 = "buy"
#123456
self.actions_queue.append(action_2)
# self.actions_queue.pop(0)
elif len(self.buy_price_queue) >= self.o_window + 1 :
# self.buy_price_queue.append(trade_data)
# self.buy_price_queue.pop(0)
# [-2:]
if "buy" in self.actions_queue[-1]:
print("O_4")
#Added [-3:] so that it should not buy again at the same price
##Added difference so that every order has at least a 10 paisa gap
# Added abs() to tackle market fluctuations so it can buy at lower prices if wanted
if trade_data not in self.buy_price_queue[-3:] and (abs(trade_data - self.buy_price_queue[-1]) >= 0.095) :
# if trade_data not in self.buy_price_queue[-3:]: replaced with above
self.buy_price_queue.append(trade_data)
self.buy_price_queue.pop(0)
#Only going to append when you are ready to take action
# self.actions_queue.append(action_1)
# self.actions_queue.pop(0)
action_2 = "buy"
else : action_2 = "do nothing"
# [-2:]
elif "sell" in self.actions_queue[-1]:
print("O_5")
if trade_data not in self.sell_price_queue[-1:]:
self.buy_price_queue.append(trade_data)
self.buy_price_queue.pop(0)
#Only going to append when you are ready to take action
# self.actions_queue.append(action_1)
# self.actions_queue.pop(0)
action_2 = "buy"
else : action_2= "do nothing"
#123456 Added and then moved to profits
# self.actions_queue.append(action_2)
# self.actions_queue.pop(0)
elif action_1 == "sell":
# sell_price = trade_data
if len(self.sell_price_queue) <= self.o_window :
print("O_6")
self.sell_price_queue.append(trade_data)
# self.sell_price_queue.pop(0)
action_2 = "sell"
#123456 Added
self.actions_queue.append(action_2)
# self.actions_queue.pop(0)
elif len(self.sell_price_queue) >= self.o_window + 1 :
#123456 Moved to next
# self.sell_price_queue.append(trade_data)
# self.sell_price_queue.pop(0)
# [-2:]
if "buy" in self.actions_queue[-1]:
print("O_7")
if trade_data not in self.buy_price_queue[-1:]:
self.sell_price_queue.pop(0)
self.sell_price_queue.append(trade_data)
#Only going to append when you are ready to take action
# self.actions_queue.append(action_1)
# self.actions_queue.pop(0)
action_2 = "sell"
else : action_2 = "do nothing"
# [-2:] #Added [-3:] so that it should not sell again at the same price
##Added difference so that every order has at least a 10 paisa gap
# Added abs() to tackle market fluctuations so it can sell at higher prices if wanted
elif "sell" in self.actions_queue[-1] and (abs(trade_data - self.sell_price_queue[-1]) >= 0.095) :
print("O_8")
# elif "sell" in self.actions_queue[-2]: replaced with above
if trade_data not in self.sell_price_queue[-3:]:
self.sell_price_queue.pop(0)
self.sell_price_queue.append(trade_data)
#Only going to append when you are ready to take action
# self.actions_queue.append(action_1)
# self.actions_queue.pop(0)
action_2 = "sell"
else : action_2 = "do nothing"
#123456 Added here and moved to orders_q
# self.actions_queue.append(action_2)
# self.actions_queue.pop(0)
else:
print('Conditions are not met. action_1: ', action_1)
self.profits(action_2, trade_data)
# print(action_1, trade_data)
print("action_1=", action_1, "action_2= ", action_2, +123, self.actions_queue, self.buy_price_queue, self.sell_price_queue)
print("orders_q session------------------------------")
return action_2
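# The O_4/O_8 branches above gate re-quoting: a new order is skipped when the
# price repeats one of the last 3 queued prices or sits within ~10 paisa of
# the latest one. A standalone sketch (function name is hypothetical):

```python
def should_requote(price, recent, min_gap=0.095):
    # Mirrors the dedup check in orders_q: require a fresh price that is
    # at least min_gap (~10 paisa) away from the most recent queued price.
    return price not in recent[-3:] and abs(price - recent[-1]) >= min_gap
```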
def profits(self, action_2, trade_data):
# global action_4, units
# trade_data = 195.5
# buy_price_queue = [195, 195.2, 195.4, 195.5]
# buy_price_queue_window = 3
# average_buy_price = 0
# average_buy_price_window = 1
# sell_price_queue = [195.6, 195.55, 195.5, 195.65]
# sell_price_queue_window = 3
# average_sell_price = 0
# average_sell_price_window = 1
# inventory = []
# sell_inventory = []
# invest = 0
# action_2 = "buy"
# action_4 = "None"
# profits = s.mean(self.sell_price_queue) - s.mean(self.buy_price_queue)
profits = float(0)
average_buy_price = 0
average_sell_price = 0
action_profits = "None"
#12345
# buy_price = 0
# sell_price = 0
new_price = 0
units = 0
# quantity = 4
# self.close_data = 195.5
if action_2 == "buy":
if len(self.buy_price_queue) <= self.o_window:
##12345 inserted a buy call for the action_2 = buy for regular orders to get executed
# buy_price = round(cost * (1 - COST_FACTORS[self.product_mode]["BUY"]),1) - cost_factor_4
action_profits = "buy"
# action_2 = self.orders_q(action_1, trade_data)
buy_price = trade_data
units = self.units
try:
order_id = self.order_maker.buy(buy_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
#123 Added self.sold_price
total_buy, self.sold_price = self.commit_buy(units, buy_price)
#123456 Moved here from commit_buy
self.quantity += units
#123 Added self.sold_price
try:
invest = ((total_buy - self.sold_price) / self.sold_price) * 100
except:
invest = 0
msg = 'place an order %s to buy %d units at price %f' % (order_id, units, total_buy)
print(msg)
return {
'status': msg,
'units': units,
#123
'investment': invest,
##improve the code as 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price, because you can't use the logic of 'if len(self.inventory) < 0:' here in the return statement (29_oct_20)
'gain': total_buy - self.sold_price,
'action': 'buy',
'balance': self.balance,
'timestamp': str(datetime.now()),
'order_id': order_id,
}
elif len(self.buy_price_queue) >= self.o_window + 1:
print("A")
#123456 moved here from orders_q
self.actions_queue.append(action_2)
self.actions_queue.pop(0)
average_buy_price = s.mean(self.buy_price_queue[:-1])
# buy_price_queue.pop(0)
# buy_price_queue.append(trade_data)
# for i in inventory:
# if inventory[i] == 0:
# invest == 0
# elif inventory[i] > 0:
# profit = avergae_buy_price < self.close_data
# invest = (((self.close_data - average_buy_price ))/(average_buy_price))*100
#789 muted "buy" in self.actions_queue [-4:]:
# if self.quantity > self.min_quantity and "buy" in self.actions_queue [-4:]:
if self.quantity > self.min_quantity:
profits = trade_data - average_buy_price
if profits >= 0.25:
new_price = trade_data
# return buy_price
elif profits < 0.25:
self.target_profit = 0.3 - profits
new_price = trade_data + self.target_profit
##12345 action_profits is added to append the action taken into the action_queue list
action_profits = "sell"
##12345 removes the last action in actions_queue (added in the orders_q function, lines 699, 711, 712) and replaces it with the current action; the same applies to the price
self.actions_queue.pop(-1)
self.actions_queue.append(action_profits)
self.buy_price_queue.pop(-1)
self.sell_price_queue.append(new_price)
sell_price = new_price
#789
units = abs(self.quantity) - self.buffer_quantity
try:
#pulling cost as buy_price and sell_price respectively
# sell_price = new_price
# units = self.quantity
order_id = self.order_maker.sell(sell_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
##do we need a bought price? Needs to be removed (29_oct_20)
total_sell, self.bought_price = self.commit_sell(units, sell_price)
#Moved here from commit_sell
self.quantity -= units
##Adding logic to improve the code when inventory < 0
##
try:
invest = ((total_sell - self.bought_price) / self.bought_price) * 100
except:
invest = 0
msg = 'place an order %s to sell %d units at price %f' % (order_id, units, total_sell)
print(msg)
return {
'status': msg,
'units': units,
'investment': invest,
'gain': total_sell - self.bought_price,
'balance': self.balance,
'action': 'sell',
'timestamp': str(datetime.now()),
'order': order_id,
}
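# The branch above picks a sell quote from the unrealized profit: sell at
# market when profit already meets the 0.25 threshold, otherwise pad the
# quote so profit-at-fill reaches roughly 0.3. A standalone sketch of that
# pricing rule (function name is hypothetical):

```python
def exit_price(trade_data, average_buy_price, min_profit=0.25, pad=0.3):
    # Mirrors the profits branch: quote at market if profit suffices,
    # otherwise add (pad - profits) to lift the fill to the target.
    profits = trade_data - average_buy_price
    if profits >= min_profit:
        return trade_data
    return trade_data + (pad - profits)
```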
##12345 inserted the below function and buy call
# Replaced 0 with - self.min_quantity in elif 0 >= self.quantity < self.min_quantity :
# Replaced with below
# elif -self.min_quantity >= self.quantity <= self.min_quantity :
# elif self.quantity >= -self.min_quantity and self.quantity <= self.min_quantity :
elif self.quantity <= self.min_quantity :
print("B")
action_profits = "buy"
# action_2 = self.orders_q(action_1, trade_data)
buy_price = trade_data
units = self.units
try:
order_id = self.order_maker.buy(buy_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
#123 Added self.sold_price
total_buy, self.sold_price = self.commit_buy(units, buy_price)
#123456 Moved here from commit_buy
self.quantity += units
#123 Added self.sold_price
try:
invest = ((total_buy - self.sold_price) / self.sold_price) * 100
except ZeroDivisionError:
invest = 0
msg = 'placed order %s to buy %d units for a total of %f' % (order_id, units, total_buy)
print(msg)
return {
'status': msg,
'units': units,
#123
'investment': invest,
##Improve the code: use 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price, because the logic of 'if len(self.inventory) < 0:' cannot be used here in the return statement (29_oct_20)
'gain': total_buy - self.sold_price,
'action': 'buy',
'balance': self.balance,
'timestamp': str(datetime.now()),
'order_id': order_id,
}
# else : print('do nothing') #action_4 = "do nothing"
# elif inventory[i] < 0:
# invest = ((total_sell - self.bought_price) / self.bought_price) * 10
# invest = (((self.close_data - average_sell_price )*units)/(average_sell_price*units))*100
elif action_2 == "sell":
if len(self.sell_price_queue) <= self.o_window:
##12345 Inserted a sell call so that regular orders are executed when action_2 == "sell"
# sell_price = round(cost * (1 + COST_FACTORS[self.product_mode]["SELL"]),1) + cost_factor_4
action_profits = "sell"
# action_2 = self.orders_q(action_1, trade_data)
sell_price = trade_data
units = self.units
try:
order_id = self.order_maker.sell(sell_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
## Do we need a bought price? Needs to be removed (29_oct_20)
total_sell, self.bought_price = self.commit_sell(units, sell_price)
#123456 Moved here from commit_sell
self.quantity -= units
## Adding logic to improve the code when inventory < 0
try:
invest = ((total_sell - self.bought_price) / self.bought_price) * 100
except ZeroDivisionError:
invest = 0
msg = 'placed order %s to sell %d units for a total of %f' % (order_id, units, total_sell)
print(msg)
return {
'status': msg,
'units': units,
'investment': invest,
##Improve the code: use 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price, because the logic of 'if len(self.inventory) < 0:' cannot be used here in the return statement (29_oct_20)
'gain': total_sell - self.bought_price,
'balance': self.balance,
'action': 'sell',
'timestamp': str(datetime.now()),
'order': order_id,
}
elif len(self.sell_price_queue) >= self.o_window + 1:
print("C")
#123456 Moved here from orders_q
self.actions_queue.append(action_2)
self.actions_queue.pop(0)
average_sell_price = s.mean(self.sell_price_queue[:-1])
# sell_price_queue.pop(0)
# sell_price_queue.append(trade_data)
#789 Muted "sell" in self.actions_queue [-4:]:
# if self.quantity < -self.min_quantity and "sell" in self.actions_queue [-4:]:
if self.quantity < (-self.min_quantity):
##Replaced self.close_data with trade_data
#self._queue[-1] replaced with trade_data
profits = average_sell_price - trade_data
if profits >= 0.25:
new_price = trade_data
# return buy_price
elif profits < 0.25:
self.target_profit = 0.3 - profits
new_price = trade_data + self.target_profit
#12345 added the new variable action_profits to define the actions taken
action_profits = "buy"
##12345 Remove the last action in actions_queue (added in orders_q(), lines 699, 711 and 712) and replace it with the current action; the same applies for the price
self.actions_queue.pop(-1)
self.actions_queue.append(action_profits)
self.sell_price_queue.pop(-1)
self.buy_price_queue.append(new_price)
buy_price = new_price
#789
units = abs(self.quantity) - self.buffer_quantity
try:
# buy_price = new_price
# units = self.quantity
order_id = self.order_maker.buy(buy_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
#123 Added self.sold_price
total_buy, self.sold_price = self.commit_buy(units, buy_price)
#123456 Moved here from commit_buy
self.quantity += units
#123 Added self.sold_price
try:
invest = ((total_buy - self.sold_price) / self.sold_price) * 100
except ZeroDivisionError:
invest = 0
msg = 'placed order %s to buy %d units for a total of %f' % (order_id, units, total_buy)
print(msg)
return {
'status': msg,
'units': units,
#123
'investment': invest,
##Improve the code: use 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price, because the logic of 'if len(self.inventory) < 0:' cannot be used here in the return statement (29_oct_20)
'gain': total_buy - self.sold_price,
'action': 'buy',
'balance': self.balance,
'timestamp': str(datetime.now()),
'order_id': order_id,
}
##12345 inserted the below function and sell call
# Replaced with below
# elif self.min_quantity >= self.quantity >= -self.min_quantity :
elif self.quantity >= (-self.min_quantity) and self.quantity <= self.min_quantity :
print("D")
action_profits = "sell"
# action_2 = self.orders_q(action_1, trade_data)
sell_price = trade_data
units = self.units
try:
# sell_price = trade_data
# units = self.units
order_id = self.order_maker.sell(sell_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
## Do we need a bought price? Needs to be removed (29_oct_20)
total_sell, self.bought_price = self.commit_sell(units, sell_price)
## Adding logic to improve the code when inventory < 0
#123456 Moved here from commit_sell
self.quantity -= units
try:
invest = ((total_sell - self.bought_price) / self.bought_price) * 100
except ZeroDivisionError:
invest = 0
msg = 'placed order %s to sell %d units for a total of %f' % (order_id, units, total_sell)
print(msg)
return {
'status': msg,
'units': units,
'investment': invest,
##Improve the code: use 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price, because the logic of 'if len(self.inventory) < 0:' cannot be used here in the return statement (29_oct_20)
'gain': total_sell - self.bought_price,
'balance': self.balance,
'action': 'sell',
'timestamp': str(datetime.now()),
'order': order_id,
}
print('units=', units, 'quantity =', self.quantity, 'profits =', profits, 'trade_data =', trade_data, 'average_buy_price =', average_buy_price, 'average_sell_price =', average_sell_price, "new_price=", new_price, "target_profits=", self.target_profit)  #1234
print("action_profits=", action_profits)
print("Profit session------------------------------")
# return action_4
def stag(self, action_stag, trade_data_stag):
action_stag_1 = "None"
units = 0
if action_stag == "buy_wait":
if len(self.buy_price_queue) <= self.o_window:
print("stag_nothing")
elif len(self.buy_price_queue) >= self.o_window + 1:
# Muted the line below because the market can enter a stagnation phase at any time, regardless of profits. Or should we consider profits?
# average_buy_price = s.mean(self.buy_price_queue[:-1])
#789 Muted the "buy" in self.actions_queue [-4:]:
# if self.quantity > self.min_quantity and "buy" in self.actions_queue [-4:]:
if self.quantity > self.min_quantity:
#12345 added the new variable action_stag_1 to define the actions taken
action_stag_1 = "sell"
#12345 Since the actions come straight through, bypassing the orders_q() function, we do not need to pop(-1) for either actions_queue or the price queue
# self.actions_queue.pop(-1)
self.actions_queue.append(action_stag_1)
# sell_price_queue.pop(-1)
self.sell_price_queue.append(trade_data_stag)
sell_price = trade_data_stag
#789
units = abs(self.quantity) - self.buffer_quantity
try:
# sell_price = trade_data_stag
# units = self.quantity
order_id = self.order_maker.sell(sell_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
## Do we need a bought price? Needs to be removed (29_oct_20)
total_sell, self.bought_price = self.commit_sell(units, sell_price)
#123456 Moved here from commit_sell
self.quantity -= units
## Adding logic to improve the code when inventory < 0
try:
invest = ((total_sell - self.bought_price) / self.bought_price) * 100
except ZeroDivisionError:
invest = 0
msg = 'placed order %s to sell %d units for a total of %f' % (order_id, units, total_sell)
print(msg)
return {
'status': msg,
'units': units,
'investment': invest,
'gain': total_sell - self.bought_price,
'balance': self.balance,
'action': 'sell',
'timestamp': str(datetime.now()),
'order': order_id,
}
else:
print("stag_nothing")
elif action_stag == "sell_wait":
if len(self.sell_price_queue) <= self.o_window:
print("stag_nothing")
elif len(self.sell_price_queue) >= self.o_window + 1:
# Muted the line below because the market can enter a stagnation phase at any time, regardless of profits. Or should we consider profits?
# average_sell_price = s.mean(self.sell_price_queue[:-1])
# sell_price_queue.pop(0)
# sell_price_queue.append(trade_data)
#789 Muted --"sell" in self.actions_queue [-4:]:
# if self.quantity < -self.min_quantity and "sell" in self.actions_queue [-4:]:
if self.quantity < (-self.min_quantity):
#12345 added the new variable action_stag_1 to define the actions taken
action_stag_1 = "buy"
#12345 Since the actions come straight through, bypassing the orders_q() function, we do not need to pop(-1) for either actions_queue or the price queue
# self.actions_queue.pop(-1)
self.actions_queue.append(action_stag_1)
# sell_price_queue.pop(-1)
self.buy_price_queue.append(trade_data_stag)
buy_price = trade_data_stag
units = abs(self.quantity) - self.buffer_quantity
try:
# buy_price = trade_data_stag
# units = self.quantity
order_id = self.order_maker.buy(buy_price, units, self.close_data)
except Exception as e:
msg = "Order placement failed: {}".format(e)
logging.error(msg)
return {
'error': msg,
'balance': self.balance,
'timestamp': str(datetime.now()),
}
#123 Added self.sold_price
total_buy, self.sold_price = self.commit_buy(units, buy_price)
#123456 Moved here from commit_buy
self.quantity += units
#123 Added self.sold_price
try:
invest = ((total_buy - self.sold_price) / self.sold_price) * 100
except ZeroDivisionError:
invest = 0
msg = 'placed order %s to buy %d units for a total of %f' % (order_id, units, total_buy)
print(msg)
return {
'status': msg,
'units': units,
#123
'investment': invest,
##Improve the code: use 'gain' = total_sell - (cost*units) or 'gain': total_sell - self.bought_price, because the logic of 'if len(self.inventory) < 0:' cannot be used here in the return statement (29_oct_20)
'gain': total_buy - self.sold_price,
'action': 'buy',
'balance': self.balance,
'timestamp': str(datetime.now()),
'order_id': order_id,
}
else:
print("stag_nothing")
print("action_stag=",action_stag, "action_stag_1=", action_stag_1, "units=", units)
print("STAG session------------------------------")
| 50.663166 | 268 | 0.414132 | 6,240 | 67,534 | 4.28125 | 0.048878 | 0.030657 | 0.030882 | 0.027737 | 0.827513 | 0.792813 | 0.769306 | 0.742766 | 0.725435 | 0.707019 | 0 | 0.028565 | 0.501851 | 67,534 | 1,333 | 269 | 50.663166 | 0.765531 | 0.403382 | 0 | 0.676223 | 0 | 0 | 0.051353 | 0.0028 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015177 | false | 0 | 0.010118 | 0 | 0.065767 | 0.053963 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
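The repricing branches in the trading code above (both when closing a long position in the "buy" branch and a short position in the "sell" branch) apply one rule: use the current trade price if accrued profit already meets the 0.25 threshold, otherwise shift the price by the distance still missing to the 0.3 target. A minimal sketch of that rule as a pure function; the name `reprice` and the default arguments are illustrative, not part of the original class:

```python
def reprice(trade_data: float, profits: float,
            min_profit: float = 0.25, target: float = 0.3) -> float:
    """Limit price for the closing order, mirroring the if/elif branches above."""
    if profits >= min_profit:
        # Profit threshold already met: trade at the current price.
        return trade_data
    # Otherwise shift the price by the margin still missing to the target.
    return trade_data + (target - profits)
```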
6616f8f0d7e5998ef32ad6ce5a58c7f1b225c30d | 691 | py | Python | tests/test_utils.py | tommyip/iBrainfuck | d20114f294ded1d7110486d1fe08bef28819e669 | [
"Apache-2.0"
] | 3 | 2016-11-06T16:10:30.000Z | 2017-05-08T04:59:58.000Z | tests/test_utils.py | tommyip/iBrainfuck | d20114f294ded1d7110486d1fe08bef28819e669 | [
"Apache-2.0"
] | null | null | null | tests/test_utils.py | tommyip/iBrainfuck | d20114f294ded1d7110486d1fe08bef28819e669 | [
"Apache-2.0"
] | null | null | null | from ..utils import extract_loops, in_nD_list
def test_extract_loops():
assert extract_loops(["++>-[++<-]+-[[+]-][++->[--<+]]"]) == \
["++>-", ["++<-"], "+-", [["+"], "-"], ["++->", ["--<+"]]]
assert extract_loops(["[[++[>+++<-]]]."]) == [[["++", [">+++<-"]]], "."]
assert extract_loops([]) == []
def test_find_in_nD_list():
assert in_nD_list(["++--,.", ["++>-", [">++"]], [["+"], [["-"]]]], "]") is False
assert in_nD_list(["++--,.", ["++>-", [">++"]], [["+"], [["-", "+-[]"]]]], "]") is True
assert in_nD_list(["++--++>>>..,,...++---"], "]") is False
assert in_nD_list([[[[[[[["]"]]]]]]]], "]") is True
assert in_nD_list([], "+") is False
| 30.043478 | 91 | 0.364689 | 57 | 691 | 4.035088 | 0.263158 | 0.121739 | 0.243478 | 0.304348 | 0.734783 | 0.734783 | 0.734783 | 0.734783 | 0.447826 | 0.447826 | 0 | 0 | 0.164978 | 691 | 22 | 92 | 31.409091 | 0.398614 | 0 | 0 | 0 | 0 | 0 | 0.195369 | 0.073806 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
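The assertions above fully pin down the expected behavior of `in_nD_list`: it reports whether a token occurs inside any string at any nesting depth. A minimal recursive implementation consistent with those tests might look like this (a sketch only; the actual `utils.in_nD_list` in iBrainfuck may differ):

```python
def in_nD_list(nested, token):
    """Return True if `token` occurs in any string of an arbitrarily nested list."""
    for item in nested:
        if isinstance(item, list):
            if in_nD_list(item, token):  # recurse into sub-lists
                return True
        elif token in item:  # substring check on string leaves
            return True
    return False
```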
b0b564ef9c344ae71001b56d5656d49b3871ff13 | 5,587 | py | Python | backend/forum/base/tests/views/api/test_v1_archived_topics_start.py | karolyi/forum-django | a498be3123deb836e0108258c493b88c645b2163 | [
"MIT"
] | 7 | 2016-09-20T11:49:49.000Z | 2017-06-24T23:51:56.000Z | backend/forum/base/tests/views/api/test_v1_archived_topics_start.py | karolyi/forum-django | a498be3123deb836e0108258c493b88c645b2163 | [
"MIT"
] | 17 | 2019-12-22T10:41:48.000Z | 2021-11-17T10:58:50.000Z | backend/forum/base/tests/views/api/test_v1_archived_topics_start.py | karolyi/forum-django | a498be3123deb836e0108258c493b88c645b2163 | [
"MIT"
] | 1 | 2016-09-20T11:50:57.000Z | 2016-09-20T11:50:57.000Z | from django.http.response import HttpResponse
from django.test import Client, TestCase
from django.urls.base import reverse
from forum.testutils.html_result_parser import TopicListingParser
class V1ArchivedTopicStartTestCase(TestCase):
"""
    Testing `v1_archived_topics_start`.
"""
fixtures = [
'topic-tests-user', 'topic-tests-topic', 'topic-tests-topic-archived',
'topic-tests-comments-staffonly', 'topic-tests-comments-normal',
'topic-tests-comments-highlighted', 'topic-tests-comments-disabled',
'topic-tests-comments-archived']
def _get_parser(self, response: HttpResponse) -> TopicListingParser:
"""
Start and return the parser for each test case.
"""
parser = TopicListingParser(test=self, response=response)
parser.parse_as_archived_page_start()
return parser
def test_lists_topics_for_anonymous(self):
"""
        Should list topics for `AnonymousUser`.
"""
client = Client()
response = client.get(reverse(
viewname='forum:rest-api:v1-archived-topics-start'))
parser = self._get_parser(response=response)
parser.assert_topic_listed(
topic_type='archived',
topic_name='Archived enabled non-staff topic 1 html name',
slug='archived-enabled-non-staff-topic-1',
username='SuperStaffUser', total_comments=2,
preview_contains='Archived enabled non-staff second comment HTML')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-non-staff-topic-1')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-enabled-staff-topic-1')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-staff-topic-1')
parser.assert_no_more_topics_listed()
def test_lists_topics_for_valid_user(self):
"""
        Should list topics for `ValidUser`.
"""
client = Client()
client.login(username='ValidUser', password='ValidPassword')
response = client.get(reverse(
viewname='forum:rest-api:v1-archived-topics-start'))
parser = self._get_parser(response=response)
parser.assert_topic_listed(
topic_type='archived',
topic_name='Archived enabled non-staff topic 1 html name',
slug='archived-enabled-non-staff-topic-1',
username='SuperStaffUser', total_comments=2,
preview_contains='Archived enabled non-staff second comment HTML')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-non-staff-topic-1')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-enabled-staff-topic-1')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-staff-topic-1')
parser.assert_no_more_topics_listed()
def test_lists_topics_for_staff_user(self):
"""
        Should list topics for `StaffUser`.
"""
client = Client()
client.login(username='StaffUser', password='ValidPassword')
response = client.get(reverse(
viewname='forum:rest-api:v1-archived-topics-start'))
parser = self._get_parser(response=response)
parser.assert_topic_listed(
topic_type='archived',
topic_name='Archived enabled non-staff topic 1 html name',
slug='archived-enabled-non-staff-topic-1',
username='SuperStaffUser', total_comments=2,
preview_contains='Archived enabled non-staff second comment HTML')
parser.assert_topic_listed(
topic_type='archived',
topic_name='Archived enabled staff topic 1 html name',
slug='archived-enabled-staff-topic-1',
username='StaffUser', total_comments=1,
preview_contains='Archived enabled staff first comment HTML')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-non-staff-topic-1')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-staff-topic-1')
parser.assert_no_more_topics_listed()
def test_lists_topics_for_superuser(self):
"""
        Should list topics for `SuperUser`.
"""
client = Client()
client.login(username='SuperUser', password='ValidPassword')
response = client.get(reverse(
viewname='forum:rest-api:v1-archived-topics-start'))
parser = self._get_parser(response=response)
parser.assert_topic_listed(
topic_type='archived',
topic_name='Archived enabled non-staff topic 1 html name',
slug='archived-enabled-non-staff-topic-1',
username='SuperStaffUser', total_comments=2,
preview_contains='Archived enabled non-staff second comment HTML')
parser.assert_topic_listed(
topic_type='archived',
topic_name='Archived enabled staff topic 1 html name',
slug='archived-enabled-staff-topic-1',
username='StaffUser', total_comments=1,
preview_contains='Archived enabled staff first comment HTML')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-non-staff-topic-1')
parser.assert_topic_not_listed(
topic_type='archived', slug='archived-disabled-staff-topic-1')
parser.assert_no_more_topics_listed()
| 45.056452 | 78 | 0.658672 | 633 | 5,587 | 5.609795 | 0.129542 | 0.061954 | 0.06815 | 0.103633 | 0.800901 | 0.75528 | 0.73951 | 0.73951 | 0.73951 | 0.73951 | 0 | 0.00795 | 0.234473 | 5,587 | 123 | 79 | 45.422764 | 0.822305 | 0.040988 | 0 | 0.783505 | 0 | 0 | 0.319878 | 0.162426 | 0 | 0 | 0 | 0 | 0.206186 | 1 | 0.051546 | false | 0.030928 | 0.041237 | 0 | 0.123711 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
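The four test cases above exercise the same visibility matrix: an archived topic is listed only when it is enabled, and staff-only topics additionally require a staff (or super) user. That rule can be condensed into a small predicate; the helper below is a hypothetical sketch, not part of the forum codebase:

```python
def topic_listed(enabled: bool, staff_only: bool, viewer_is_staff: bool) -> bool:
    """Mirror the assertions above: disabled topics are never listed,
    and staff topics are listed only for staff viewers."""
    return enabled and (not staff_only or viewer_is_staff)
```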
9fd210f5699667d16d6d65d3e21a56cdb1c2eba7 | 171,160 | py | Python | config/tech.py | CDE-UNIBE/qcat-api-scripts | a91d36ec6fed530cdc9926cd1f221442953aa1c0 | [
"Apache-2.0"
] | 2 | 2018-09-07T11:42:34.000Z | 2018-09-10T13:54:03.000Z | config/tech.py | CDE-UNIBE/qcat-api-scripts | a91d36ec6fed530cdc9926cd1f221442953aa1c0 | [
"Apache-2.0"
] | null | null | null | config/tech.py | CDE-UNIBE/qcat-api-scripts | a91d36ec6fed530cdc9926cd1f221442953aa1c0 | [
"Apache-2.0"
] | 2 | 2017-10-02T06:36:57.000Z | 2018-03-23T13:01:01.000Z | """
Sample configuration for extracting CSV data from SLM Technologies in QCAT.
The file will be updated continuously until all variables of the database
are included.
Copy and rename this file to csv.py and don't forget to set or provide your
API token.
Run this configuration as follows:
$(env)python config/qcat_data_csv.py config/tech.py -v -t ADD HERE THE TOKEN
"""
config = {
'api_filter_params': '?type=technologies',
'output_file': 'tech.csv',
'qcat_attributes': [
# 1 General information
# 1.1 Name of the SLM Technology
{
'path': [
'section_general_information',
'children',
'tech__1',
'children',
'tech__1__1',
'children',
'qg_name',
'children',
'name',
'value',
'value',
],
'name': '1_1_tech_name',
},
# 1.5 Reference to Questionnaire(s) on SLM approach
{
'path': [
'section_general_information',
'children',
'tech__1',
'children',
'tech__1__5',
'value',
'code',
],
'name': '1_5_tech_ref_to_approach_code',
},
{
'path': [
'section_general_information',
'children',
'tech__1',
'children',
'tech__1__5',
'value',
'name',
],
'name': '1_5_tech_ref_to_approach_name',
},
{
'path': [
'section_general_information',
'children',
'tech__1',
'children',
'tech__1__5',
'value',
'url',
],
'name': '1_5_tech_ref_to_approach_url',
},
# Description of the SLM Technology
# 2.1 Short description of the Technology
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
'tech__2__1',
'children',
'tech_qg_1',
'children',
'tech_definition',
'value',
'value',
],
'name': '2_1_tech_definition'
},
# 2.2 Detailed description of the Technology
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
'tech__2__2',
'children',
'tech_qg_2',
'children',
'tech_description',
'value',
'value',
],
'name': '2_2_tech_description'
},
# 2.5 Country / region / location where the Technology has been applied and which are covered by this assessment
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__5",
"children",
"qg_location",
"children",
"country",
"value",
"value"
],
'name': '2_5_tech_country'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__5",
"children",
"qg_location",
"children",
"state_province",
"value",
"value"
],
'name': '2_5_tech_province'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__5",
"children",
"qg_location",
"children",
"further_location",
"value",
"value"
],
'name': '2_5_tech_further_location'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__5",
"children",
"tech_qg_3",
"children",
"tech_sites_considered",
"value"
],
'name': '2_5_tech_sites_considered'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__5",
"children",
"qg_location_map",
"children",
"location_map",
"value"
],
'name': '2_5_tech_location_map'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__5",
"children",
"tech_qg_225",
"children",
"location_comments",
"value"
],
'name': '2_5_tech_location_comments'
},
# 2.6 Date of implementation
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__6",
"children",
"tech_qg_160",
"children",
"tech_implementation_year",
"value",
"value",
],
'name': '2_6_tech_implementation_year'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__6",
"children",
"tech_qg_160",
"children",
"tech_implementation_decades",
"value",
"values",
],
'name': '2_6_tech_implementation_decades'
},
# 2.7 Introduction of the Technology
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__7",
"children",
"tech_qg_5",
"children",
"tech_who_implemented",
"value",
"values",
],
'name': '2_7_tech_who_implemented'
},
{
'path': [
'section_specifications',
'children',
'tech__2',
'children',
"tech__2__7",
"children",
"tech_qg_5",
"children",
"tech_who_implemented_other",
"value",
"value",
],
'name': '2_7_tech_who_implemented_other'
},
# 3 Classification of the SLM Technology
# 3.1 Main purpose(s) of the Technology
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__1',
'children',
'tech_qg_6',
'children',
'tech_main_purpose',
'value',
'values',
],
'name': '3_1_tech_main_purpose'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__1',
'children',
'tech_qg_6',
'children',
'tech_main_purpose_other',
'value',
'value',
],
'name': '3_1_tech_main_purpose_other'
},
# 3.2 Current land use types where the Technology is applied
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__2',
'children',
'tech_qg_9',
'children',
'tech_landuse',
'value',
'values',
],
'name': '3_2_tech_landuse_type'
},
# 3.3 Further information about land use
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__3',
'children',
'tech_qg_19',
'children',
'tech_watersupply',
'value',
'values',
],
'name': '3_2_tech_watersupply'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__3',
'children',
'tech_qg_19',
'children',
'tech_watersupply_other',
'value',
'value',
],
'name': '3_2_tech_watersupply_other'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__3',
'children',
'tech_qg_19',
'children',
'tech_watersupply_comments',
'value',
'value',
],
'name': '3_2_tech_watersupply_comments'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__3',
'children',
'tech_qg_19',
'children',
'tech_growing_seasons',
'value',
'values',
],
'name': '3_2_tech_growing_seasons'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__3',
'children',
'tech_qg_19',
'children',
'tech_growing_seasons_specify',
'value',
'value',
],
'name': '3_2_tech_growing_seasons_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__3',
'children',
'tech_qg_19',
'children',
'tech_livestock_density',
'value',
'value',
],
'name': '3_2_tech_livestock_density'
},
# 3.4 SLM group to which the Technology belongs
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__4',
'children',
'tech_qg_20',
'children',
'tech_slm_group',
'value',
'values',
],
'name': '3_4_tech_slm_group'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__4',
'children',
'tech_qg_20',
'children',
'tech_slm_group_other',
'value',
'value',
],
'name': '3_4_tech_slm_group_other'
},
# 3.5 Spread of the Technology
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__5',
'children',
'tech_qg_4',
'children',
'tech_spread_tech',
'value',
'values',
],
'name': '3_5_tech_spread_tech'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__5',
'children',
'tech_qg_4',
'children',
'tech_spread_area',
'value',
'values',
],
'name': '3_5_tech_spread_area'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__5',
'children',
'tech_qg_4',
'children',
'tech_spread_tech_comments',
'value',
'value',
],
'name': '3_5_tech_spread_tech_comments'
},
# 3.6 SLM measures comprising the Technology
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__6',
'children',
'tech_qg_8',
'children',
'tech_measures',
'value',
'values',
],
'name': '3_6_tech_measures'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__6',
'children',
'tech_qg_26',
'children',
'tech_measures_comments',
'value',
'value',
],
'name': '3_6_tech_measures_comments'
},
# 3.7 Main types of land degradation addressed by the Technology
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__7',
'children',
'tech_qg_27',
'children',
'tech_degradation',
'value',
'values',
],
'name': '3_7_tech_degradation'
},
# 3.8 Prevention, reduction, or restoration of land degradation
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__8',
'children',
'tech_qg_35',
'children',
'tech_prevention',
'value',
'values',
],
'name': '3_8_tech_prevention'
},
{
'path': [
'section_specifications',
'children',
'tech__3',
'children',
'tech__3__8',
'children',
'tech_qg_35',
'children',
'tech_prevention_comments',
'value',
'value',
],
'name': '3_8_tech_prevention_comments'
},
# 4 Technical specifications, implementation activities, inputs, and costs
# 4.1 Technical drawing of the Technology
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__1',
'children',
'tech_qg_185',
'children',
'tech_drawing',
'value',
'value',
],
'name': '4_1_tech_drawing'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__1',
'children',
'tech_qg_185',
'children',
'tech_drawing_author',
'value',
'value',
],
'name': '4_1_tech_drawing_author'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__1',
'children',
'tech_qg_185',
'children',
'tech_drawing_date',
'value',
'value',
],
'name': '4_1_tech_drawing_date'
},
# 4.2 Technical specifications/ explanations of technical drawing
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__2',
'children',
'tech_qg_161',
'children',
'tech_specifications',
'value',
'value',
],
'name': '4_2_tech_specifications'
},
# 4.3 General information regarding the calculation of inputs and costs
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_217',
'children',
'tech_cost_calculation_base',
'value',
],
'name': '4_3_tech_cost_calculation_base'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_163',
'children',
'tech_perarea_size',
'value',
],
'name': '4_3_tech_perarea_size'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_163',
'children',
'tech_area_unit_conversion',
'value',
],
'name': '4_3_tech_area_unit_conversion'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_162',
'children',
'tech_input_perunit_unit',
'value',
],
'name': '4_3_tech_input_perunit_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_162',
'children',
'tech_input_perunit_volume',
'value',
],
'name': '4_3_tech_input_perunit_volume'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_164',
'children',
'tech_input_dollar',
'value',
'values',
],
'name': '4_3_tech_input_dollar'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_164',
'children',
'tech_input_national_currency',
'value',
'value',
],
'name': '4_3_tech_input_national_currency'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_164',
'children',
'tech_input_exchange_rate',
'value',
'value',
],
'name': '4_3_tech_input_exchange_rate'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__3',
'children',
'tech_qg_164',
'children',
'tech_input_average_wage',
'value',
'value',
],
'name': '4_3_tech_input_average_wage'
},
# 4.4 Establishment activities
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__4',
'children',
'tech_qg_165',
'children',
'tech_est_activity',
'value',
'value',
],
'name': '4_4_tech_est_activity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__4',
'children',
'tech_qg_165',
'children',
'tech_est_type',
'value',
'value',
],
'name': '4_4_tech_est_type'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__4',
'children',
'tech_qg_165',
'children',
'tech_est_timing',
'value',
'value',
],
'name': '4_4_tech_est_timing'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__4',
'children',
'tech_qg_166',
'children',
'tech_est_comments',
'value',
'value',
],
'name': '4_4_tech_est_comments'
},
# 4.5 Costs and inputs needed for establishment
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_42',
'children',
'tech_input_est_total_estimation',
'value',
'value',
],
'name': '4_5_tech_input_est_total_estimation'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_36',
'children',
'tech_input_est_specify',
'value',
'value',
],
'name': '4_5_tech_input_est_labour_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_36',
'children',
'tech_input_est_unit',
'value',
'value',
],
'name': '4_5_tech_input_est_labour_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_36',
'children',
'tech_input_est_quantity',
'value',
'value',
],
'name': '4_5_tech_input_est_labour_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_36',
'children',
'tech_input_est_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_labour_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_36',
'children',
'tech_input_est_total_costs_pi',
'value',
'value',
],
'name': '4_5_tech_input_est_labour_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_36',
'children',
'tech_input_est_percentage_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_labour_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_37',
'children',
'tech_input_est_specify',
'value',
'value',
],
'name': '4_5_tech_input_est_equipment_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_37',
'children',
'tech_input_est_unit',
'value',
'value',
],
'name': '4_5_tech_input_est_equipment_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_37',
'children',
'tech_input_est_quantity',
'value',
'value',
],
'name': '4_5_tech_input_est_equipment_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_37',
'children',
'tech_input_est_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_equipment_costs',
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_37',
'children',
'tech_input_est_total_costs_pi',
'value',
'value',
],
'name': '4_5_tech_input_est_equipment_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_37',
'children',
'tech_input_est_percentage_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_equipment_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_38',
'children',
'tech_input_est_specify',
'value',
'value',
],
'name': '4_5_tech_input_est_plantmaterial_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_38',
'children',
'tech_input_est_unit',
'value',
'value',
],
'name': '4_5_tech_input_est_plantmaterial_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_38',
'children',
'tech_input_est_quantity',
'value',
'value',
],
'name': '4_5_tech_input_est_plantmaterial_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_38',
'children',
'tech_input_est_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_plantmaterial_costs',
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_38',
'children',
'tech_input_est_total_costs_pi',
'value',
'value',
],
'name': '4_5_tech_input_est_plantmaterial_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_38',
'children',
'tech_input_est_percentage_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_plantmaterial_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_218',
'children',
'tech_input_est_specify',
'value',
'value',
],
'name': '4_5_tech_input_est_fertilizerandbiocides_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_218',
'children',
'tech_input_est_unit',
'value',
'value',
],
'name': '4_5_tech_input_est_fertilizerandbiocides_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_218',
'children',
'tech_input_est_quantity',
'value',
'value',
],
'name': '4_5_tech_input_est_fertilizerandbiocides_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_218',
'children',
'tech_input_est_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_fertilizerandbiocides_costs',
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_218',
'children',
'tech_input_est_total_costs_pi',
'value',
'value',
],
'name': '4_5_tech_input_est_fertilizerandbiocides_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_218',
'children',
'tech_input_est_percentage_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_fertilizerandbiocides_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_39',
'children',
'tech_input_est_specify',
'value',
'value',
],
'name': '4_5_tech_input_est_constructionmaterial_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_39',
'children',
'tech_input_est_unit',
'value',
'value',
],
'name': '4_5_tech_input_est_constructionmaterial_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_39',
'children',
'tech_input_est_quantity',
'value',
'value',
],
'name': '4_5_tech_input_est_constructionmaterial_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_39',
'children',
'tech_input_est_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_constructionmaterial_costs',
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_39',
'children',
'tech_input_est_total_costs_pi',
'value',
'value',
],
'name': '4_5_tech_input_est_constructionmaterial_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_39',
'children',
'tech_input_est_percentage_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_constructionmaterial_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_40',
'children',
'tech_input_est_specify',
'value',
'value',
],
'name': '4_5_tech_input_est_other_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_40',
'children',
'tech_input_est_unit',
'value',
'value',
],
'name': '4_5_tech_input_est_other_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_40',
'children',
'tech_input_est_quantity',
'value',
'value',
],
'name': '4_5_tech_input_est_other_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_40',
'children',
'tech_input_est_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_other_costs',
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_40',
'children',
'tech_input_est_total_costs_pi',
'value',
'value',
],
'name': '4_5_tech_input_est_other_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_40',
'children',
'tech_input_est_percentage_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_other_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_222',
'children',
'tech_input_est_total_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_total_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_93',
'children',
'tech_input_est_remaining_costs',
'value',
'value',
],
'name': '4_5_tech_input_est_remaining_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__5',
'children',
'tech_qg_93',
'children',
'tech_input_est_comments',
'value',
'value',
],
'name': '4_5_tech_input_est_comments'
},
# 4.6 Maintenance/ recurrent activities
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__6',
'children',
'tech_qg_43',
'children',
'tech_maint_activity',
'value',
'value',
],
'name': '4_6_tech_maint_activity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__6',
'children',
'tech_qg_43',
'children',
'tech_maint_type',
'value',
'value',
],
'name': '4_6_tech_maint_type'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__6',
'children',
'tech_qg_43',
'children',
'tech_maint_timing',
'value',
'value',
],
'name': '4_6_tech_maint_timing'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__6',
'children',
'tech_qg_44',
'children',
'tech_maint_comments',
'value',
'value',
],
'name': '4_6_tech_maint_comments'
},
# 4.7 Costs and inputs needed for maintenance/ recurrent activities (per year)
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_50',
'children',
'tech_input_maint_total_estimation',
'value',
'value',
],
'name': '4_7_tech_input_maint_total_estimation'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_45',
'children',
'tech_input_maint_specify',
'value',
'value',
],
'name': '4_7_tech_input_maint_labour_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_45',
'children',
'tech_input_maint_unit',
'value',
'value',
],
'name': '4_7_tech_input_maint_labour_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_45',
'children',
'tech_input_maint_quantity',
'value',
'value',
],
'name': '4_7_tech_input_maint_labour_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_45',
'children',
'tech_input_maint_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_labour_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_45',
'children',
'tech_input_maint_total_costs_pi',
'value',
'value',
],
'name': '4_7_tech_input_maint_labour_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_45',
'children',
'tech_input_maint_percentage_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_labour_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_46',
'children',
'tech_input_maint_specify',
'value',
'value',
],
'name': '4_7_tech_input_maint_equipment_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_46',
'children',
'tech_input_maint_unit',
'value',
'value',
],
'name': '4_7_tech_input_maint_equipment_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_46',
'children',
'tech_input_maint_quantity',
'value',
'value',
],
'name': '4_7_tech_input_maint_equipment_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_46',
'children',
'tech_input_maint_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_equipment_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_46',
'children',
'tech_input_maint_total_costs_pi',
'value',
'value',
],
'name': '4_7_tech_input_maint_equipment_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_46',
'children',
'tech_input_maint_percentage_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_equipment_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_47',
'children',
'tech_input_maint_specify',
'value',
'value',
],
'name': '4_7_tech_input_maint_plantmaterial_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_47',
'children',
'tech_input_maint_unit',
'value',
'value',
],
'name': '4_7_tech_input_maint_plantmaterial_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_47',
'children',
'tech_input_maint_quantity',
'value',
'value',
],
'name': '4_7_tech_input_maint_plantmaterial_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_47',
'children',
'tech_input_maint_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_plantmaterial_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_47',
'children',
'tech_input_maint_total_costs_pi',
'value',
'value',
],
'name': '4_7_tech_input_maint_plantmaterial_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_47',
'children',
'tech_input_maint_percentage_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_plantmaterial_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_219',
'children',
'tech_input_maint_specify',
'value',
'value',
],
'name': '4_7_tech_input_maint_fertilizerandbiocides_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_219',
'children',
'tech_input_maint_unit',
'value',
'value',
],
'name': '4_7_tech_input_maint_fertilizerandbiocides_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_219',
'children',
'tech_input_maint_quantity',
'value',
'value',
],
'name': '4_7_tech_input_maint_fertilizerandbiocides_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_219',
'children',
'tech_input_maint_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_fertilizerandbiocides_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_219',
'children',
'tech_input_maint_total_costs_pi',
'value',
'value',
],
'name': '4_7_tech_input_maint_fertilizerandbiocides_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_219',
'children',
'tech_input_maint_percentage_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_fertilizerandbiocides_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_48',
'children',
'tech_input_maint_specify',
'value',
'value',
],
'name': '4_7_tech_input_maint_constructionmaterial_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_48',
'children',
'tech_input_maint_unit',
'value',
'value',
],
'name': '4_7_tech_input_maint_constructionmaterial_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_48',
'children',
'tech_input_maint_quantity',
'value',
'value',
],
'name': '4_7_tech_input_maint_constructionmaterial_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_48',
'children',
'tech_input_maint_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_constructionmaterial_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_48',
'children',
'tech_input_maint_total_costs_pi',
'value',
'value',
],
'name': '4_7_tech_input_maint_constructionmaterial_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_48',
'children',
'tech_input_maint_percentage_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_constructionmaterial_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_49',
'children',
'tech_input_maint_specify',
'value',
'value',
],
'name': '4_7_tech_input_maint_other_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_49',
'children',
'tech_input_maint_unit',
'value',
'value',
],
'name': '4_7_tech_input_maint_other_unit'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_49',
'children',
'tech_input_maint_quantity',
'value',
'value',
],
'name': '4_7_tech_input_maint_other_quantity'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_49',
'children',
'tech_input_maint_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_other_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_49',
'children',
'tech_input_maint_total_costs_pi',
'value',
'value',
],
'name': '4_7_tech_input_maint_other_total_costs_pi'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_49',
'children',
'tech_input_maint_percentage_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_other_percentage_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_223',
'children',
'tech_input_maint_total_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_total_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_52',
'children',
'tech_input_maint_remaining_costs',
'value',
'value',
],
'name': '4_7_tech_input_maint_remaining_costs'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__7',
'children',
'tech_qg_52',
'children',
'tech_input_maint_comments',
'value',
'value',
],
'name': '4_7_tech_input_maint_comments'
},
{
'path': [
'section_specifications',
'children',
'tech__4',
'children',
'tech__4__8',
'children',
'tech_qg_95',
'children',
'tech_input_determinate_factors',
'value',
'value',
],
'name': '4_8_tech_input_determinate_factors'
},
# 5 Natural and human environment
# 5.1 Annual rainfall
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__1',
'children',
'tech_qg_54',
'children',
'tech_rainfall',
'value',
'values',
],
'name': '5_1_tech_rainfall'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__1',
'children',
'tech_qg_54',
'children',
'tech_rainfall_annual',
'value',
'value',
],
'name': '5_1_tech_rainfall_annual'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__1',
'children',
'tech_qg_54',
'children',
'tech_rainfall_specifications',
'value',
'value',
],
'name': '5_1_tech_rainfall_specifications'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__1',
'children',
'tech_qg_54',
'children',
'tech_rainfall_meteostation',
'value',
'value',
],
'name': '5_1_tech_rainfall_meteostation'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__1',
'children',
'tech_qg_55',
'children',
'tech_agroclimatic_zone',
'value',
'values',
],
'name': '5_1_tech_agroclimatic_zone'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__1',
'children',
'tech_qg_55',
'children',
'tech_agroclimatic_zone_specifications',
'value',
'value',
],
'name': '5_1_tech_agroclimatic_zone_specifications'
},
# 5.2 Topography
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__2',
'children',
'tech_qg_56',
'children',
'tech_slopes',
'value',
'values',
],
'name': '5_2_tech_slopes'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__2',
'children',
'tech_qg_56',
'children',
'tech_landforms',
'value',
'values',
],
'name': '5_2_tech_landforms'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__2',
'children',
'tech_qg_56',
'children',
'tech_altitudinalzone',
'value',
'values',
],
'name': '5_2_tech_altitudinalzone'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__2',
'children',
'tech_qg_57',
'children',
'tech_convex_concave',
'value',
'values',
],
'name': '5_2_tech_convex_concave'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__2',
'children',
'tech_qg_57',
'children',
'tech_topography_comments',
'value',
'value',
],
'name': '5_2_tech_topography_comments'
},
# 5.3 Soils
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__3',
'children',
'tech_qg_58',
'children',
'tech_soil_depth',
'value',
'values',
],
'name': '5_3_tech_soil_depth'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__3',
'children',
'tech_qg_58',
'children',
'tech_soil_texture_topsoil',
'value',
'values',
],
'name': '5_3_tech_soil_texture_topsoil'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__3',
'children',
'tech_qg_58',
'children',
'tech_soil_texture_20cm',
'value',
'values',
],
'name': '5_3_tech_soil_texture_20cm'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__3',
'children',
'tech_qg_58',
'children',
'tech_topsoil_organic',
'value',
'values',
],
'name': '5_3_tech_topsoil_organic'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__3',
'children',
'tech_qg_59',
'children',
'tech_soil_comments',
'value',
'value',
],
'name': '5_3_tech_soil_comments'
},
# 5.4 Water availability and quality
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_60',
'children',
'tech_groundwater',
'value',
'values',
],
'name': '5_4_tech_groundwater'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_60',
'children',
'tech_surfacewater',
'value',
'values',
],
'name': '5_4_tech_surfacewater'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_60',
'children',
'tech_waterquality',
'value',
'values',
],
'name': '5_4_tech_waterquality'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_61',
'children',
'tech_salinization',
'value',
],
'name': '5_4_tech_salinization'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_62',
'children',
'tech_salinization_specification',
'value',
],
'name': '5_4_tech_salinization_specification'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_63',
'children',
'tech_flooding',
'value',
],
'name': '5_4_tech_flooding'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_64',
'children',
'tech_flooding_regularity',
'value',
],
'name': '5_4_tech_flooding_regularity'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__4',
'children',
'tech_qg_65',
'children',
'tech_water_comments',
'value',
],
'name': '5_4_tech_water_comments'
},
# 5.5 Biodiversity
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__5',
'children',
'tech_qg_66',
'children',
'tech_speciesdiversity',
'value',
'values',
],
'name': '5_5_tech_speciesdiversity'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__5',
'children',
'tech_qg_66',
'children',
'tech_habitatdiversity',
'value',
'values',
],
'name': '5_5_tech_habitatdiversity'
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__5',
'children',
'tech_qg_67',
'children',
'tech_biodiversity_comments',
'value',
],
'name': '5_5_tech_biodiversity_comments'
},
# 5.6 Characteristics of the land user applying technology
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_sedentary_nomadic',
'value',
'values',
],
'name': '5_6_tech_sedentary_nomadic',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_sedentary_other',
'value',
'value',
],
'name': '5_6_tech_sedentary_other',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_market_orientation',
'value',
'values',
],
'name': '5_6_tech_market_orientation',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_offfarm_income',
'value',
'values',
],
'name': '5_6_tech_offfarm_income',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_wealth',
'value',
'values',
],
'name': '5_6_tech_wealth',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_individuals',
'value',
'values',
],
'name': '5_6_tech_individuals',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_mechanisation',
'value',
'values',
],
'name': '5_6_tech_mechanisation',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_gender',
'value',
'values',
],
'name': '5_6_tech_gender',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__6',
'children',
'tech_qg_71',
'children',
'tech_age_landusers',
'value',
'values',
],
'name': '5_6_tech_age_landusers',
},
# 5.7 Average area of land owned or leased by land users applying the Technology
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__7',
'children',
'tech_qg_72',
'children',
'tech_land_size',
'value',
'values',
],
'name': '5_7_tech_land_size',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__7',
'children',
'tech_qg_72',
'children',
'tech_land_size_relative',
'value',
'values',
],
'name': '5_7_tech_land_size_relative',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__7',
'children',
'tech_qg_72',
'children',
'tech_land_comments',
'value',
'value',
],
'name': '5_7_tech_land_comments',
},
# 5.8 Land ownership, land use rights, and water use rights
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_73',
'children',
'tech_ownership',
'value',
'values',
],
'name': '5_8_tech_ownership',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_73',
'children',
'tech_ownership_other',
'value',
'value',
],
'name': '5_8_tech_ownership_other',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_73',
'children',
'tech_landuserights',
'value',
'values',
],
'name': '5_8_tech_landuserights',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_73',
'children',
'tech_landrights_other',
'value',
'value',
],
'name': '5_8_tech_landrights_other',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_73',
'children',
'tech_wateruserights',
'value',
'values',
],
'name': '5_8_tech_wateruserights',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_73',
'children',
'tech_waterrights_other',
'value',
'value',
],
'name': '5_8_tech_waterrights_other',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__8',
'children',
'tech_qg_75',
'children',
'tech_ownership_comments',
'value',
'value',
],
'name': '5_8_tech_ownership_comments',
},
# 5.9 Access to services and infrastructure
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_health',
'value',
'value',
],
'name': '5_9_tech_access_health',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_education',
'value',
'value',
],
'name': '5_9_tech_access_education',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_techassistance',
'value',
'value',
],
'name': '5_9_tech_access_techassistance',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_employment',
'value',
'value',
],
'name': '5_9_tech_access_employment',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_markets',
'value',
'value',
],
'name': '5_9_tech_access_markets',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_energy',
'value',
'value',
],
'name': '5_9_tech_access_energy',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_roads',
'value',
'value',
],
'name': '5_9_tech_access_roads',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_water',
'value',
'value',
],
'name': '5_9_tech_access_water',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_226',
'children',
'tech_access_financial',
'value',
'value',
],
'name': '5_9_tech_access_financial',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_227',
'children',
'tech_access_other_specify',
'value',
'value',
],
'name': '5_9_tech_access_other_specify',
},
{
'path': [
'section_specifications',
'children',
'tech__5',
'children',
'tech__5__9',
'children',
'tech_qg_227',
'children',
'tech_access_other_measure',
'value',
'value',
],
'name': '5_9_tech_access_other_measure',
},
# 6 Impacts and concluding statements
# 6.1 On-site impacts the Technology has shown
# Socio-economic impacts (sei) - Production
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_76',
'children',
'tech_impacts_cropproduction',
'value',
'value',
],
'name': '6_1_sei_production_cropproduction'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_78',
'children',
'tech_impacts_cropquality',
'value',
'value',
],
'name': '6_1_sei_production_cropquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_79',
'children',
'tech_impacts_fodderproduction',
'value',
'value',
],
'name': '6_1_sei_production_fodderproduction'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_80',
'children',
'tech_impacts_fodderquality',
'value',
'value',
],
'name': '6_1_sei_production_fodderquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_81',
'children',
'tech_impacts_animalproduction',
'value',
'value',
],
'name': '6_1_sei_production_animalproduction'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_82',
'children',
'tech_impacts_woodproduction',
'value',
'value',
],
'name': '6_1_sei_production_woodproduction'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_83',
'children',
'tech_impacts_forestquality',
'value',
'value',
],
'name': '6_1_sei_production_forestquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_84',
'children',
'tech_impacts_nonwoodforestproduction',
'value',
'value',
],
'name': '6_1_sei_production_nonwoodforestproduction'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_85',
'children',
'tech_impacts_productionfailure',
'value',
'value',
],
'name': '6_1_sei_production_productionfailure'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_86',
'children',
'tech_impacts_productdiversity',
'value',
'value',
],
'name': '6_1_sei_production_productdiversity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_87',
'children',
'tech_impacts_productionarea',
'value',
'value',
],
'name': '6_1_sei_production_productionarea'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_88',
'children',
'tech_impacts_landmanagement',
'value',
'value',
],
'name': '6_1_sei_production_landmanagement'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_production',
'children',
'tech_qg_90',
'children',
'tech_impacts_energygeneration',
'value',
'value',
],
'name': '6_1_sei_production_energygeneration'
},
# Socio-economic impacts (sei) - Water availability and quality
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_91',
'children',
'tech_impacts_drinkingwateravailability',
'value',
'value',
],
'name': '6_1_sei_water_drinkingwateravailability'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_97',
'children',
'tech_impacts_drinkingwaterquality',
'value',
'value',
],
'name': '6_1_sei_water_drinkingwaterquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_98',
'children',
'tech_impacts_livestockwateravailability',
'value',
'value',
],
'name': '6_1_sei_water_livestockwateravailability'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_99',
'children',
'tech_impacts_livestockwaterquality',
'value',
'value',
],
'name': '6_1_sei_water_livestockwaterquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_100',
'children',
'tech_impacts_irrigationwateravailability',
'value',
'value',
],
'name': '6_1_sei_water_irrigationwateravailability'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_101',
'children',
'tech_impacts_irrigationwaterquality',
'value',
'value',
],
'name': '6_1_sei_water_irrigationwaterquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_water',
'children',
'tech_qg_102',
'children',
'tech_impacts_demandirrigationwater',
'value',
'value',
],
'name': '6_1_sei_water_demandirrigationwater'
},
# Socio-economic impacts (sei) - Income and costs
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_income',
'children',
'tech_qg_103',
'children',
'tech_impacts_expenses',
'value',
'value',
],
'name': '6_1_sei_income_expenses'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_income',
'children',
'tech_qg_104',
'children',
'tech_impacts_farmincome',
'value',
'value',
],
'name': '6_1_sei_income_farmincome'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_income',
'children',
'tech_qg_105',
'children',
'tech_impacts_diversityincome',
'value',
'value',
],
'name': '6_1_sei_income_diversityincome'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_income',
'children',
'tech_qg_106',
'children',
'tech_impacts_economicdisparities',
'value',
'value',
],
'name': '6_1_sei_income_economicdisparities'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_income',
'children',
'tech_qg_107',
'children',
'tech_impacts_workload',
'value',
'value',
],
'name': '6_1_sei_income_workload'
},
# Socio-economic impacts (sei) - Other
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_other',
'children',
'tech_qg_186',
'children',
'tech_impacts_other_specify',
'value',
'value',
],
'name': '6_1_sei_other_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__socioeconomic',
'children',
'tech__8__1__socioeconomic_other',
'children',
'tech_qg_186',
'children',
'tech_impacts_other_measure',
'value',
'value',
],
'name': '6_1_sei_other'
},
# Socio-cultural impacts (sci)
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_108',
'children',
'tech_impacts_foodsecurity',
'value',
'value',
],
'name': '6_1_sci_foodsecurity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_109',
'children',
'tech_impacts_health',
'value',
'value',
],
'name': '6_1_sci_healthsituation'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_110',
'children',
'tech_impacts_landuse',
'value',
'value',
],
'name': '6_1_sci_landusewaterrights'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_111',
'children',
'tech_impacts_cultural',
'value',
'value',
],
'name': '6_1_sci_culturalopportunities'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_112',
'children',
'tech_impacts_recreational',
'value',
'value',
],
'name': '6_1_sci_recreationalopportunities'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_113',
'children',
'tech_impacts_community',
'value',
'value',
],
'name': '6_1_sci_communityinstitutions'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_114',
'children',
'tech_impacts_national',
'value',
'value',
],
'name': '6_1_sci_nationalinstitutions'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_115',
'children',
'tech_impacts_slmknowledge',
'value',
'value',
],
'name': '6_1_sci_slmknowledge'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_116',
'children',
'tech_impacts_conflictmitigation',
'value',
'value',
],
'name': '6_1_sci_conflictmitigation'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_117',
'children',
'tech_impacts_situationdisadvantaged',
'value',
'value',
],
'name': '6_1_sci_situationdisadvantaged'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_187',
'children',
'tech_impacts_other_specify',
'value',
'value',
],
'name': '6_1_sci_other_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__sociocultural',
'children',
'tech__8__1__sociocultural',
'children',
'tech_qg_187',
'children',
'tech_impacts_other_measure',
'value',
'value',
],
'name': '6_1_sci_other'
},
# Ecological impacts (ei)
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_118',
'children',
'tech_impacts_waterquantity',
'value',
'value',
],
'name': '6_1_ei_waterquantity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_119',
'children',
'tech_impacts_waterquality',
'value',
'value',
],
'name': '6_1_ei_waterquality'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_120',
'children',
'tech_impacts_harvestingwater',
'value',
'value',
],
'name': '6_1_ei_harvestingwater'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_121',
'children',
'tech_impacts_surfacerunoff',
'value',
'value',
],
'name': '6_1_ei_surfacerunoff'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_122',
'children',
'tech_impacts_waterdrainage',
'value',
'value',
],
'name': '6_1_ei_waterdrainage'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_123',
'children',
'tech_impacts_groundwater',
'value',
'value',
],
'name': '6_1_ei_groundwater'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_water',
'children',
'tech_qg_124',
'children',
'tech_impacts_evaporation',
'value',
'value',
],
'name': '6_1_ei_evaporation'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_125',
'children',
'tech_impacts_soilmoisture',
'value',
'value',
],
'name': '6_1_ei_soilmoisture'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_126',
'children',
'tech_impacts_soilcover',
'value',
'value',
],
'name': '6_1_ei_soilcover'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_127',
'children',
'tech_impacts_soilloss',
'value',
'value',
],
'name': '6_1_ei_soilloss'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_128',
'children',
'tech_impacts_soilaccumulation',
'value',
'value',
],
'name': '6_1_ei_soilaccumulation'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_129',
'children',
'tech_impacts_soilcrusting',
'value',
'value',
],
'name': '6_1_ei_soilcrusting'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_130',
'children',
'tech_impacts_soilcompaction',
'value',
'value',
],
'name': '6_1_ei_soilcompaction'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_131',
'children',
'tech_impacts_nutrientcycling',
'value',
'value',
],
'name': '6_1_ei_nutrientcycling'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_132',
'children',
'tech_impacts_soilsalinity',
'value',
'value',
],
'name': '6_1_ei_soilsalinity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_133',
'children',
'tech_impacts_soilorganicmatter',
'value',
'value',
],
'name': '6_1_ei_soilorganicmatter'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_soil',
'children',
'tech_qg_134',
'children',
'tech_impacts_soilacidity',
'value',
'value',
],
'name': '6_1_ei_soilacidity'
},
# Ecological impact: Biodiversity
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_167',
'children',
'tech_impacts_vegcover',
'value',
'value',
],
'name': '6_1_ei_vegetationcover'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_140',
'children',
'tech_impacts_biomass',
'value',
'value',
],
'name': '6_1_ei_biomass'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_136',
'children',
'tech_impacts_plantdiversity',
'value',
'value',
],
'name': '6_1_ei_plantdiversity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_137',
'children',
'tech_impacts_invasivespecies',
'value',
'value',
],
'name': '6_1_ei_invasivespecies'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_135',
'children',
'tech_impacts_animaldiversity',
'value',
'value',
],
'name': '6_1_ei_animaldiversity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_138',
'children',
'tech_impacts_beneficialspecies',
'value',
'value',
],
'name': '6_1_ei_beneficialspecies'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_139',
'children',
'tech_impacts_habitatdiversity',
'value',
'value',
],
'name': '6_1_ei_habitatdiversity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_biodiversity',
'children',
'tech_qg_145',
'children',
'tech_impacts_pestcontrol',
'value',
'value',
],
'name': '6_1_ei_pestcontrol'
},
# Ecological impacts: climate
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_142',
'children',
'tech_impacts_floodimpacts',
'value',
'value',
],
'name': '6_1_ei_floodimpacts'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_228',
'children',
'tech_impacts_landslides',
'value',
'value',
],
'name': '6_1_ei_landslides'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_220',
'children',
'tech_impacts_droughtimpacts',
'value',
'value',
],
'name': '6_1_ei_droughtimpacts'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_221',
'children',
'tech_impacts_cyclones',
'value',
'value',
],
'name': '6_1_ei_cyclones'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_141',
'children',
'tech_impacts_emissioncarbon',
'value',
'value',
],
'name': '6_1_ei_emissioncarbon'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_143',
'children',
'tech_impacts_firerisk',
'value',
'value',
],
'name': '6_1_ei_firerisk'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_144',
'children',
'tech_impacts_windvelocity',
'value',
'value',
],
'name': '6_1_ei_windvelocity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_188',
'children',
'tech_impacts_microclimate',
'value',
'value',
],
'name': '6_1_ei_microclimate_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_climate',
'children',
'tech_qg_188',
'children',
'tech_impacts_other_measure',
'value',
'value',
],
'name': '6_1_ei_microclimate'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__1',
'children',
'tech__8__1__ecological',
'children',
'tech__8__1__ecological_other',
'children',
'tech_qg_189',
'children',
'tech_impacts_other_specify',
'value',
'value',
],
'name': '6_1_ei_other_specify'
},
# 6.2 Off-site impacts the Technology has shown
# Water availability
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_146',
'children',
'tech_impacts_wateravailability',
'value',
'value',
],
'name': '6_2_tech_impacts_wateravailability'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_146',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_wa'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_146',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_wa'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_146',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_wa'
},
# Reliable and stable stream flows
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_148',
'children',
'tech_impacts_reliableflows',
'value',
'value',
],
'name': '6_2_tech_impacts_reliableflows'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_148',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_rf'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_148',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_rf'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_148',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_rf'
},
# Downstream flooding
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_147',
'children',
'tech_impacts_downstreamflooding',
'value',
'value',
],
'name': '6_2_tech_impacts_downstreamflooding'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_147',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_dsf'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_147',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_dsf'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_147',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_dsf'
},
# Downstream siltation
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_149',
'children',
'tech_impacts_downstreamsiltation',
'value',
'value',
],
'name': '6_2_tech_impacts_downstreamsiltation'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_149',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_dss'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_149',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_dss'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_149',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_dss'
},
# Groundwater / river pollution
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_150',
'children',
'tech_impacts_groundwaterpollution',
'value',
'value',
],
'name': '6_2_tech_impacts_groundwaterpollution'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_150',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_gwp'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_150',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_gwp'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_150',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_gwp'
},
# Buffering / filtering capacity
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_151',
'children',
'tech_impacts_bufferingcapacity',
'value',
'value',
],
'name': '6_2_tech_impacts_bufferingcapacity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_151',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_gc'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_151',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_gc'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_151',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_gc'
},
# Wind transported sediments
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_152',
'children',
'tech_impacts_windtransportedsediments',
'value',
'value',
],
'name': '6_2_tech_impacts_windtransportedsediments'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_152',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_wts'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_152',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_wts'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_152',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_wts'
},
# Damage on neighbours' fields
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_153',
'children',
'tech_impacts_damageneighbourfield',
'value',
'value',
],
'name': '6_2_tech_impacts_damageneighbourfield'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_153',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_dnf'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_153',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_dnf'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_153',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_dnf'
},
# Damage on public / private infrastructure
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_154',
'children',
'tech_impacts_damageinfrastructure',
'value',
'value',
],
'name': '6_2_tech_impacts_damageinfrastructure'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_154',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_dpi'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_154',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_dpi'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_154',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_dpi'
},
# Impact of greenhouse gases
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_155',
'children',
'tech_impacts_impactgreenhousegases',
'value',
'value',
],
'name': '6_2_tech_impacts_impactgreenhousegases'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_155',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before_ighg'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_155',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after_ighg'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_155',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify_ighg'
},
# Other off-site impacts
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_other_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_other_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_other_labelleft',
'value',
'value',
],
'name': '6_2_tech_impacts_other_labelleft'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_other_measure',
'value',
'value',
],
'name': '6_2_tech_impacts_other_measure'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_other_labelright',
'value',
'value',
],
'name': '6_2_tech_impacts_other_labelright'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_quant_before',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_before'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_quant_after',
'value',
'value',
],
'name': '6_2_tech_impacts_quant_after'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_190',
'children',
'tech_impacts_specify',
'value',
'value',
],
'name': '6_2_tech_impacts_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__2',
'children',
'tech_qg_229',
'children',
'tech_impacts_comments',
'value',
'value',
],
'name': '6_2_tech_impacts_comments'
},
# 6.3 Exposure and sensitivity of the Technology to gradual climate change and climate-related extremes/disasters (as perceived by land users)
# Annual temperature
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_168',
'children',
'tech_exposure_incrdecr',
'value',
],
'name': '6_3_tech_annualtemperature_exposure_incrdecr'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_168',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_annualtemperature_exposure_sensitivity'
},
# Seasonal temperature
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_169',
'children',
'tech_exposure_season',
'value',
],
'name': '6_3_tech_seasonaltemperature_exposure_season'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_169',
'children',
'tech_exposure_incrdecr',
'value',
],
'name': '6_3_tech_seasonaltemperature_exposure_incrdecr'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_169',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_seasonaltemperature_exposure_sensitivity'
},
# Annual rainfall
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_170',
'children',
'tech_exposure_incrdecr',
'value',
],
'name': '6_3_tech_annualrainfall_exposure_incrdecr'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_170',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_annualrainfall_exposure_sensitivity'
},
# Seasonal rainfall
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_171',
'children',
'tech_exposure_season',
'value',
],
'name': '6_3_tech_seasonalrainfall_exposure_season'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_171',
'children',
'tech_exposure_incrdecr',
'value',
],
'name': '6_3_tech_seasonalrainfall_exposure_incrdecr'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_171',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_seasonalrainfall_exposure_sensitivity'
},
# Other gradual climate change
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_172',
'children',
'tech_exposure_other_specify',
'value',
],
'name': '6_3_tech_othergradclimatechange_otherspecifiy'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_172',
'children',
'tech_exposure_incrdecr',
'value',
],
'name': '6_3_tech_othergradclimatechange_exposure_incrdecr'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__gradual',
'children',
'tech__8__3__gradual',
'children',
'tech_qg_172',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_othergradclimatechange_exposure_sensitivity'
},
# Climate-related extremes (disasters)
# Meteorological disasters
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_177',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_tropicalstrom_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_178',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_extratropicalcyclone_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_179',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_localrainstorm_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_193',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_localthunderstrom_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_194',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_localhailstorm_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_195',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_localsnowstorm_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_196',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_localsandduststorm_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_197',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_localwindstorm_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__meteorological',
'children',
'tech_qg_198',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_tornado_exposure_sensitivity'
},
# Climatological disasters
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__climatological',
'children',
'tech_qg_199',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_heatwave_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__climatological',
'children',
'tech_qg_200',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_coldwave_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__climatological',
'children',
'tech_qg_201',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_extremewinter_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__climatological',
'children',
'tech_qg_202',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_drought_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__climatological',
'children',
'tech_qg_203',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_forestfire_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__climatological',
'children',
'tech_qg_204',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_landfire_exposure_sensitivity'
},
# Hydrological disasters
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__hydrological',
'children',
'tech_qg_205',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_generalriverflood_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__hydrological',
'children',
'tech_qg_206',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_flashflood_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__hydrological',
'children',
'tech_qg_207',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_stromsurgecostalflood_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__hydrological',
'children',
'tech_qg_208',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_landslide_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__hydrological',
'children',
'tech_qg_209',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_avalanche_exposure_sensitivity'
},
# Biological
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__biological',
'children',
'tech_qg_210',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_biologicaldesaster_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__biological',
'children',
'tech_qg_211',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_insectinfestation_exposure_sensitivity'
},
# Other climate
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__otherclimate',
'children',
'tech_qg_212',
'children',
'tech_exposure_other_specify',
'value',
],
'name': '6_3_tech_climarelated_other_specify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__climaterelated',
'children',
'tech__8__3__otherclimate',
'children',
'tech_qg_212',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_climarelated_exposure_sensitivity'
},
# Other climate-related consequences
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__other',
'children',
'tech__8__3__other',
'children',
'tech_qg_213',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_other_extendedgrowingperiod_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__other',
'children',
'tech__8__3__other',
'children',
'tech_qg_214',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_other_reducedgrowingperiod_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__other',
'children',
'tech__8__3__other',
'children',
'tech_qg_215',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_other_sealevelrise_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__other',
'children',
'tech__8__3__other',
'children',
'tech_qg_216',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_other_other_specifyy'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__other',
'children',
'tech__8__3__other',
'children',
'tech_qg_216',
'children',
'tech_exposure_sensitivity',
'value',
],
'name': '6_3_tech_other_other_exposure_sensitivity'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
'tech__6__3',
'children',
'tech__8__3__comments',
'children',
'tech_qg_180',
'children',
'tech_tolerance_comments',
'value',
],
'name': '6_3_tech_other_tolerance_comments'
},
# 6.4 Cost-benefit analysis
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__4",
"children",
"tech_qg_181",
"children",
"tech_costbenefit_est_short",
"value",
"values"
],
'name': '6_4_tech_costbenefit_est_short'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__4",
"children",
"tech_qg_181",
"children",
"tech_costbenefit_est_long",
"value",
"values"
],
'name': '6_4_tech_costbenefit_establishment_est_long'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__4",
"children",
"tech_qg_182",
"children",
"tech_costbenefit_est_short",
"value",
"values"
],
'name': '6_4_tech_costbenefit_establishment_est_short'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__4",
"children",
"tech_qg_182",
"children",
"tech_costbenefit_est_long",
"value",
"values"
],
'name': '6_4_tech_costbenefit_maintenance_est_long'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__4",
"children",
"tech_qg_183",
"children",
"tech_costbenefit_comments",
"value",
"value"
],
'name': '6_4_tech_costbenefit_maintenance_comments'
},
# 6.5 Adoption of technology
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__5",
"children",
"tech_qg_191",
"children",
"tech_adoption_percentage",
"value",
"values"
],
'name': '6_5_tech_adoption_percentage'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__5",
"children",
"tech_qg_191",
"children",
"tech_adoption_quantify",
"value",
"value"
],
'name': '6_5_tech_adoption_quantify'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__5",
"children",
"tech_qg_191",
"children",
"tech_adoption_spontaneously",
"value",
"values"
],
'name': '6_5_tech_adoption_spontaneously'
},
{
'path': [
'section_specifications',
'children',
'tech__6',
'children',
"tech__6__5",
"children",
"tech_qg_191",
"children",
"tech_adoption_comments",
"value",
"value"
],
'name': '6_5_tech_adoption_comments'
},
],
}
# ==== File: lazy/io/cachez/_json.py (repo: trisongz/lazycls, license: MIT) ====
import json as _defaultjson
import orjson as _orjson
import simdjson as _simdjson

from .static import *
from .base import Disk, _zlib
from .config import CachezConfigz

_parser = _simdjson.Parser()

## OrJSON is only stable for non-float/int types like strings
## Since we already use orjson, we include this

__all__ = ('JSONDisk', 'OrJSONDisk')


class JSONDisk(Disk):
    "Cache key and value using JSON serialization with zlib compression."

    def __init__(self, directory, compress_level: int = CachezConfigz.compression_lvl, **kwargs):
        """Initialize JSON disk instance.

        Keys and values are compressed using the zlib library. The
        `compress_level` is an integer from 0 to 9 controlling the level of
        compression; 1 is fastest and produces the least compression, 9 is
        slowest and produces the most compression, and 0 is no compression.

        :param str directory: directory path
        :param int compress_level: zlib compression level (defaults to CachezConfigz.compression_lvl)
        :param kwargs: super class arguments
        """
        self.compress_level = compress_level
        super().__init__(directory, **kwargs)

    def put(self, key):
        json_bytes = _defaultjson.dumps(key).encode('utf-8')
        data = _zlib.compress(json_bytes, self.compress_level)
        return super().put(data)

    def get(self, key, raw):
        data = super().get(key, raw)
        return _defaultjson.loads(_zlib.decompress(data).decode('utf-8'))

    def store(self, value, read, key=UNKNOWN):
        if not read:
            json_bytes = _defaultjson.dumps(value).encode('utf-8')
            value = _zlib.compress(json_bytes, self.compress_level)
        return super().store(value, read, key=key)

    def fetch(self, mode, filename, value, read):
        data = super().fetch(mode, filename, value, read)
        if not read:
            data = _defaultjson.loads(_zlib.decompress(data).decode('utf-8'))
        return data


class OrJSONDisk(Disk):
    "Cache key and value using orjson serialization with zlib compression."

    def __init__(self, directory, compress_level: int = CachezConfigz.compression_lvl, **kwargs):
        """Initialize orjson disk instance.

        Keys and values are compressed using the zlib library. The
        `compress_level` is an integer from 0 to 9 controlling the level of
        compression; 1 is fastest and produces the least compression, 9 is
        slowest and produces the most compression, and 0 is no compression.

        :param str directory: directory path
        :param int compress_level: zlib compression level (defaults to CachezConfigz.compression_lvl)
        :param kwargs: super class arguments
        """
        self.compress_level = compress_level
        super().__init__(directory, **kwargs)

    def put(self, key):
        json_bytes = _orjson.dumps(key)
        data = _zlib.compress(json_bytes, self.compress_level)
        return super().put(data)

    def get(self, key, raw):
        data = super().get(key, raw)
        return _orjson.loads(_zlib.decompress(data))

    def store(self, value, read, key=UNKNOWN):
        if not read:
            json_bytes = _orjson.dumps(value)
            value = _zlib.compress(json_bytes, self.compress_level)
        return super().store(value, read, key=key)

    def fetch(self, mode, filename, value, read):
        data = super().fetch(mode, filename, value, read)
        if not read:
            data = _orjson.loads(_zlib.decompress(data))
        return data
# ==== File: test/models/test_models.py (repo: parmeet/text, license: BSD-3-Clause) ====
import torchtext
import torch

from ..common.torchtext_test_case import TorchtextTestCase
from ..common.assets import get_asset_path


class TestModels(TorchtextTestCase):
    def test_xlmr_base_output(self):
        asset_name = "xlmr.base.output.pt"
        asset_path = get_asset_path(asset_name)
        xlmr_base = torchtext.models.XLMR_BASE_ENCODER
        model = xlmr_base.get_model()
        model = model.eval()
        model_input = torch.tensor([[0, 43523, 52005, 3647, 13293, 113307, 40514, 2]])
        actual = model(model_input)
        expected = torch.load(asset_path)
        torch.testing.assert_close(actual, expected)

    def test_xlmr_base_jit_output(self):
        asset_name = "xlmr.base.output.pt"
        asset_path = get_asset_path(asset_name)
        xlmr_base = torchtext.models.XLMR_BASE_ENCODER
        model = xlmr_base.get_model()
        model = model.eval()
        model_jit = torch.jit.script(model)
        model_input = torch.tensor([[0, 43523, 52005, 3647, 13293, 113307, 40514, 2]])
        actual = model_jit(model_input)
        expected = torch.load(asset_path)
        torch.testing.assert_close(actual, expected)

    def test_xlmr_large_output(self):
        asset_name = "xlmr.large.output.pt"
        asset_path = get_asset_path(asset_name)
        xlmr_base = torchtext.models.XLMR_LARGE_ENCODER
        model = xlmr_base.get_model()
        model = model.eval()
        model_input = torch.tensor([[0, 43523, 52005, 3647, 13293, 113307, 40514, 2]])
        actual = model(model_input)
        expected = torch.load(asset_path)
        torch.testing.assert_close(actual, expected)

    def test_xlmr_large_jit_output(self):
        asset_name = "xlmr.large.output.pt"
        asset_path = get_asset_path(asset_name)
        xlmr_base = torchtext.models.XLMR_LARGE_ENCODER
        model = xlmr_base.get_model()
        model = model.eval()
        model_jit = torch.jit.script(model)
        model_input = torch.tensor([[0, 43523, 52005, 3647, 13293, 113307, 40514, 2]])
        actual = model_jit(model_input)
        expected = torch.load(asset_path)
        torch.testing.assert_close(actual, expected)

    def test_xlmr_transform(self):
        xlmr_base = torchtext.models.XLMR_BASE_ENCODER
        transform = xlmr_base.transform()
        test_text = "XLMR base Model Comparison"
        actual = transform([test_text])
        expected = [[0, 43523, 52005, 3647, 13293, 113307, 40514, 2]]
        torch.testing.assert_close(actual, expected)

    def test_xlmr_transform_jit(self):
        xlmr_base = torchtext.models.XLMR_BASE_ENCODER
        transform = xlmr_base.transform()
        transform_jit = torch.jit.script(transform)
        test_text = "XLMR base Model Comparison"
        actual = transform_jit([test_text])
        expected = [[0, 43523, 52005, 3647, 13293, 113307, 40514, 2]]
        torch.testing.assert_close(actual, expected)
# ==== File: ttemplates/pc_utils/__init__.py (repo: wroblewskipawel/pytorch-seg-tools, license: MIT) ====
from . import pc_dataset
from . import pc_tf
from . import pc_trainer
# ==== File: error_generator/strategies/missing_value/implicit_missing_value/__init__.py (repo: BigDaMa/error-generator, license: Apache-2.0) ====
from .implicit_missing_value import Implicit_Missing_Value
# ==== File: feature/test/feature_test.py (repo: lunlun1992/vmaf, license: Apache-2.0) ====
__copyright__ = "Copyright 2016, Netflix, Inc."
__license__ = "Apache, Version 2.0"
import os
import re
import subprocess
import unittest
import config
REMOVE_LOG = 1 # for debug, make this 0
def read_log(log_filename, type):
scores = []
idx = 0
with open(log_filename, 'rt') as log_file:
for line in log_file.readlines():
mo = re.match("{type}: ([0-9]+) ([0-9.-]+)".format(type=type), line)
if mo:
cur_idx = int(mo.group(1))
assert cur_idx == idx
scores.append(float(mo.group(2)))
idx += 1
score = sum(scores) / float(len(scores))
return score, scores
class FeatureTest(unittest.TestCase):
VMAF = config.ROOT + "/feature/vmaf"
PSNR = config.ROOT + "/feature/psnr"
MOMENT = config.ROOT + "/feature/moment"
SSIM = config.ROOT + "/feature/ssim"
MS_SSIM = config.ROOT + "/feature/ms_ssim"
LOG_FILENAME = config.ROOT + "/workspace/workdir/logFeatureTest"
REF_YUV = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
DIS_YUV = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
YUV_FMT = "yuv420p"
YUV_WIDTH = 576
YUV_HEIGHT = 324
def setUp(self):
if os.path.exists(self.LOG_FILENAME):
os.remove(self.LOG_FILENAME)
def tearDown(self):
if os.path.exists(self.LOG_FILENAME):
os.remove(self.LOG_FILENAME)
if(REMOVE_LOG):
(logPath, logFilePrefix) = os.path.split(self.LOG_FILENAME)
filenames = [filename for filename in os.listdir(logPath) if filename.startswith(logFilePrefix)]
for filename in filenames:
os.remove(os.path.join(logPath, filename))
def test_adm(self):
ADM_LOG = self.LOG_FILENAME + '_adm'
print 'test adm...'
cmd = "{vmaf} adm {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=ADM_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(ADM_LOG, "adm")
self.assertAlmostEquals(score, 0.915509520833, places=4)
score, scores = read_log(ADM_LOG, "adm_num")
self.assertAlmostEquals(score, 6899.24648475, places=4)
score, scores = read_log(ADM_LOG, "adm_den")
self.assertAlmostEquals(score, 7535.29963308, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale0")
self.assertAlmostEquals(score, 280.8532403958333, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale0")
self.assertAlmostEquals(score, 362.16940943749995, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale1")
self.assertAlmostEquals(score, 1288.2366028125, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale1")
self.assertAlmostEquals(score, 1509.703552229167, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale2")
self.assertAlmostEquals(score, 2343.150772125, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale2")
self.assertAlmostEquals(score, 2554.939768562501, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale3")
self.assertAlmostEquals(score, 2987.005869583334, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale3")
self.assertAlmostEquals(score, 3108.4869029375, places=4)
def test_ansnr(self):
ANSNR_LOG = self.LOG_FILENAME + '_ansnr'
print 'test ansnr...'
cmd = "{vmaf} ansnr {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=ANSNR_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(ANSNR_LOG, "ansnr")
self.assertAlmostEquals(score, 23.5095715208, places=4)
score, scores = read_log(ANSNR_LOG, "anpsnr")
self.assertAlmostEquals(score, 34.164776875, places=4)
def test_motion(self):
MOTION_LOG = self.LOG_FILENAME + '_motion'
print 'test motion...'
cmd = "{vmaf} motion {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=MOTION_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(MOTION_LOG, "motion")
self.assertAlmostEquals(score, 4.04982535417, places=4)
def test_vif(self):
VIF_LOG = self.LOG_FILENAME + '_vif'
print 'test vif...'
cmd = "{vmaf} vif {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=VIF_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(VIF_LOG, "vif")
self.assertAlmostEquals(score, 0.4460930625000001, places=4)
self.assertAlmostEquals(scores[0], 0.580304, places=4)
self.assertAlmostEquals(scores[1], 0.492477, places=4)
score, scores = read_log(VIF_LOG, "vif_num")
self.assertAlmostEquals(score, 712650.023478, places=0)
score, scores = read_log(VIF_LOG, "vif_den")
self.assertAlmostEquals(score, 1597314.95249, places=0)
score, scores = read_log(VIF_LOG, "vif_num_scale0")
self.assertAlmostEquals(score, 468101.509766, places=0)
score, scores = read_log(VIF_LOG, "vif_num_scale1")
self.assertAlmostEquals(score, 184971.572266, places=1)
score, scores = read_log(VIF_LOG, "vif_num_scale2")
self.assertAlmostEquals(score, 47588.8323567, places=0)
score, scores = read_log(VIF_LOG, "vif_num_scale3")
self.assertAlmostEquals(score, 11988.1090902, places=1)
score, scores = read_log(VIF_LOG, "vif_den_scale0")
self.assertAlmostEquals(score, 1287822.80208, places=0)
score, scores = read_log(VIF_LOG, "vif_den_scale1")
self.assertAlmostEquals(score, 241255.067708, places=1)
score, scores = read_log(VIF_LOG, "vif_den_scale2")
self.assertAlmostEquals(score, 55149.8169759, places=2)
score, scores = read_log(VIF_LOG, "vif_den_scale3")
self.assertAlmostEquals(score, 13087.2657267, places=2)
def test_all(self):
ALL_LOG = self.LOG_FILENAME + "_all"
print 'test all...'
cmd = "{vmaf} all {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
            w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=ALL_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(ALL_LOG, "vif")
self.assertAlmostEquals(score, 0.4460930625, places=4)
score, scores = read_log(ALL_LOG, "motion")
self.assertAlmostEquals(score, 4.04982535417, places=4)
score, scores = read_log(ALL_LOG, "ansnr")
self.assertAlmostEquals(score, 23.509571520833337, places=4)
score, scores = read_log(ALL_LOG, "adm")
self.assertAlmostEquals(score, 0.915509520833, places=4)
score, scores = read_log(ALL_LOG, "adm_num")
self.assertAlmostEquals(score, 6899.24648475, places=4)
score, scores = read_log(ALL_LOG, "adm_den")
self.assertAlmostEquals(score, 7535.29963308, places=4)
score, scores = read_log(ALL_LOG, "vif_num")
self.assertAlmostEquals(score, 712650.023478, places=0)
score, scores = read_log(ALL_LOG, "vif_den")
self.assertAlmostEquals(score, 1597314.95249, places=0)
score, scores = read_log(ALL_LOG, "anpsnr")
self.assertAlmostEquals(score, 34.164776874999994, places=4)
score, scores = read_log(ALL_LOG, "vif_num_scale0")
self.assertAlmostEquals(score, 468101.509766, places=0)
score, scores = read_log(ALL_LOG, "vif_num_scale1")
self.assertAlmostEquals(score, 184971.572266, places=1)
score, scores = read_log(ALL_LOG, "vif_num_scale2")
self.assertAlmostEquals(score, 47588.8323567, places=0)
score, scores = read_log(ALL_LOG, "vif_num_scale3")
self.assertAlmostEquals(score, 11988.1090902, places=1)
score, scores = read_log(ALL_LOG, "vif_den_scale0")
self.assertAlmostEquals(score, 1287822.80208, places=0)
score, scores = read_log(ALL_LOG, "vif_den_scale1")
self.assertAlmostEquals(score, 241255.067708, places=1)
score, scores = read_log(ALL_LOG, "vif_den_scale2")
self.assertAlmostEquals(score, 55149.8169759, places=2)
score, scores = read_log(ALL_LOG, "vif_den_scale3")
self.assertAlmostEquals(score, 13087.2657267, places=2)
        score, scores = read_log(ALL_LOG, "adm_num_scale0")
        self.assertAlmostEquals(score, 280.8532403958333, places=4)
        score, scores = read_log(ALL_LOG, "adm_den_scale0")
self.assertAlmostEquals(score, 362.16940943749995, places=4)
score, scores = read_log(ALL_LOG, "adm_num_scale1")
self.assertAlmostEquals(score, 1288.2366028125, places=4)
score, scores = read_log(ALL_LOG, "adm_den_scale1")
self.assertAlmostEquals(score, 1509.703552229167, places=4)
score, scores = read_log(ALL_LOG, "adm_num_scale2")
self.assertAlmostEquals(score, 2343.150772125, places=4)
score, scores = read_log(ALL_LOG, "adm_den_scale2")
self.assertAlmostEquals(score, 2554.939768562501, places=4)
score, scores = read_log(ALL_LOG, "adm_num_scale3")
self.assertAlmostEquals(score, 2987.005869583334, places=4)
score, scores = read_log(ALL_LOG, "adm_den_scale3")
self.assertAlmostEquals(score, 3108.4869029375, places=4)
def test_psnr(self):
PSNR_LOG = self.LOG_FILENAME + '_psnr'
print 'test psnr...'
cmd = "{psnr} {fmt} {ref} {dis} {w} {h} > {log}".format(
psnr=self.PSNR, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=PSNR_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(PSNR_LOG, "psnr")
self.assertAlmostEquals(score, 30.7550639792, places=4)
self.assertAlmostEquals(scores[0], 34.760779, places=4)
self.assertAlmostEquals(scores[1], 31.88322, places=4)
def test_2nd_moment(self):
MOMENT_LOG = self.LOG_FILENAME + '_moment'
print 'test 2nd moment...'
cmd = "{moment} 2 {fmt} {dis} {w} {h} > {log}".format(
moment=self.MOMENT, fmt=self.YUV_FMT, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=MOMENT_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(MOMENT_LOG, "1stmoment")
self.assertAlmostEquals(score, 61.332006624999984, places=4)
score, scores = read_log(MOMENT_LOG, "2ndmoment")
self.assertAlmostEquals(score, 4798.659574041666, places=4)
def test_ssim(self):
SSIM_LOG = self.LOG_FILENAME + '_ssim'
print 'test ssim...'
cmd = "{ssim} {fmt} {ref} {dis} {w} {h} > {log}".format(
ssim=self.SSIM, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=SSIM_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(SSIM_LOG, "ssim")
self.assertAlmostEquals(score, 0.863226541666667, places=4)
self.assertAlmostEquals(scores[0], 0.925023, places=4)
self.assertAlmostEquals(scores[1], 0.891992, places=4)
score, scores = read_log(SSIM_LOG, "ssim_l")
self.assertAlmostEquals(score, 0.998147458333333, places=4)
self.assertAlmostEquals(scores[0], 0.999524, places=4)
self.assertAlmostEquals(scores[1], 0.998983, places=4)
score, scores = read_log(SSIM_LOG, "ssim_c")
self.assertAlmostEquals(score, 0.9612679375000001, places=4)
self.assertAlmostEquals(scores[0], 0.979614, places=4)
self.assertAlmostEquals(scores[1], 0.96981, places=4)
score, scores = read_log(SSIM_LOG, "ssim_s")
self.assertAlmostEquals(score, 0.8977363333333335, places=4)
self.assertAlmostEquals(scores[0], 0.943966, places=4)
self.assertAlmostEquals(scores[1], 0.919507, places=4)
def test_ms_ssim(self):
MS_SSIM_LOG = self.LOG_FILENAME + '_msssim'
print 'test ms_ssim...'
cmd = "{ms_ssim} {fmt} {ref} {dis} {w} {h} > {log}".format(
ms_ssim=self.MS_SSIM, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=MS_SSIM_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim")
self.assertAlmostEquals(score, 0.9632498125, places=4)
self.assertAlmostEquals(scores[0], 0.981968, places=4)
self.assertAlmostEquals(scores[1], 0.973366, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale0")
self.assertAlmostEquals(score, 0.998147458333333, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale0")
self.assertAlmostEquals(score, 0.9612679375000001, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale0")
self.assertAlmostEquals(score, 0.8977363333333335, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale1")
self.assertAlmostEquals(score, 0.9989961250000002, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale1")
self.assertAlmostEquals(score, 0.9857694375, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale1")
self.assertAlmostEquals(score, 0.941185875, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale2")
self.assertAlmostEquals(score, 0.9992356458333332, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale2")
self.assertAlmostEquals(score, 0.997034020833, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale2")
self.assertAlmostEquals(score, 0.977992145833, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale3")
self.assertAlmostEquals(score, 0.9992921041666665, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale3")
self.assertAlmostEquals(score, 0.999588104167, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale3")
self.assertAlmostEquals(score, 0.99387125, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale4")
self.assertAlmostEquals(score, 0.9994035625000003, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale4")
self.assertAlmostEquals(score, 0.999907625, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale4")
self.assertAlmostEquals(score, 0.998222583333, places=4)
class FeatureTestYuv422p10le(unittest.TestCase):
VMAF = config.ROOT + "/feature/vmaf"
PSNR = config.ROOT + "/feature/psnr"
MOMENT = config.ROOT + "/feature/moment"
SSIM = config.ROOT + "/feature/ssim"
MS_SSIM = config.ROOT + "/feature/ms_ssim"
LOG_FILENAME = config.ROOT + "/workspace/workdir/logFeatureTestYuv422p10le"
REF_YUV = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv422p10le.yuv"
DIS_YUV = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv422p10le.yuv"
YUV_FMT = "yuv422p10le"
YUV_WIDTH = 576
YUV_HEIGHT = 324
def setUp(self):
if os.path.exists(self.LOG_FILENAME):
os.remove(self.LOG_FILENAME)
def tearDown(self):
if os.path.exists(self.LOG_FILENAME):
os.remove(self.LOG_FILENAME)
        if REMOVE_LOG:
(logPath, logFilePrefix) = os.path.split(self.LOG_FILENAME)
filenames = [filename for filename in os.listdir(logPath) if filename.startswith(logFilePrefix)]
for filename in filenames:
os.remove(os.path.join(logPath, filename))
def test_adm(self):
ADM_LOG = self.LOG_FILENAME + '_adm'
print 'test adm on yuv422p10le...'
cmd = "{vmaf} adm {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=ADM_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(ADM_LOG, "adm")
self.assertAlmostEquals(score, 0.915509520833, places=4)
score, scores = read_log(ADM_LOG, "adm_num")
self.assertAlmostEquals(score, 6899.24648475, places=4)
score, scores = read_log(ADM_LOG, "adm_den")
self.assertAlmostEquals(score, 7535.29963308, places=4)
        score, scores = read_log(ADM_LOG, "adm_num_scale0")
        self.assertAlmostEquals(score, 280.8532403958333, places=4)
        score, scores = read_log(ADM_LOG, "adm_den_scale0")
self.assertAlmostEquals(score, 362.16940943749995, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale1")
self.assertAlmostEquals(score, 1288.2366028125, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale1")
self.assertAlmostEquals(score, 1509.703552229167, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale2")
self.assertAlmostEquals(score, 2343.150772125, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale2")
self.assertAlmostEquals(score, 2554.939768562501, places=4)
score, scores = read_log(ADM_LOG, "adm_num_scale3")
self.assertAlmostEquals(score, 2987.005869583334, places=4)
score, scores = read_log(ADM_LOG, "adm_den_scale3")
self.assertAlmostEquals(score, 3108.4869029375, places=4)
def test_ansnr(self):
        ANSNR_LOG = self.LOG_FILENAME + '_ansnr'
print 'test ansnr on yuv422p10le...'
cmd = "{vmaf} ansnr {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=ANSNR_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(ANSNR_LOG, "ansnr")
self.assertAlmostEquals(score, 23.5095715208, places=4)
score, scores = read_log(ANSNR_LOG, "anpsnr")
self.assertAlmostEquals(score, 34.1902860625, places=4)
def test_motion(self):
MOTION_LOG = self.LOG_FILENAME + '_motion'
print 'test motion on yuv422p10le...'
cmd = "{vmaf} motion {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=MOTION_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(MOTION_LOG, "motion")
self.assertAlmostEquals(score, 4.04982535417, places=4)
def test_vif(self):
VIF_LOG = self.LOG_FILENAME + '_vif'
print 'test vif on yuv422p10le...'
cmd = "{vmaf} vif {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=VIF_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(VIF_LOG, "vif")
self.assertAlmostEquals(score, 0.4460930625, places=4)
self.assertAlmostEquals(scores[0], 0.580304, places=4)
self.assertAlmostEquals(scores[1], 0.492477, places=4)
score, scores = read_log(VIF_LOG, "vif_num")
self.assertAlmostEquals(score, 712650.023478, places=0)
score, scores = read_log(VIF_LOG, "vif_den")
self.assertAlmostEquals(score, 1597314.95249, places=0)
score, scores = read_log(VIF_LOG, "vif_num_scale0")
self.assertAlmostEquals(score, 468101.509766, places=0)
score, scores = read_log(VIF_LOG, "vif_num_scale1")
self.assertAlmostEquals(score, 184971.572266, places=1)
score, scores = read_log(VIF_LOG, "vif_num_scale2")
self.assertAlmostEquals(score, 47588.8323567, places=0)
score, scores = read_log(VIF_LOG, "vif_num_scale3")
self.assertAlmostEquals(score, 11988.1090902, places=1)
score, scores = read_log(VIF_LOG, "vif_den_scale0")
self.assertAlmostEquals(score, 1287822.80208, places=0)
score, scores = read_log(VIF_LOG, "vif_den_scale1")
self.assertAlmostEquals(score, 241255.067708, places=1)
score, scores = read_log(VIF_LOG, "vif_den_scale2")
self.assertAlmostEquals(score, 55149.8169759, places=2)
score, scores = read_log(VIF_LOG, "vif_den_scale3")
self.assertAlmostEquals(score, 13087.2657267, places=2)
def test_all(self):
ALL_LOG = self.LOG_FILENAME + "_all"
print 'test all on yuv422p10le...'
cmd = "{vmaf} all {fmt} {ref} {dis} {w} {h} > {log}".format(
vmaf=self.VMAF, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=ALL_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(ALL_LOG, "vif")
self.assertAlmostEquals(score, 0.4460930625, places=4)
score, scores = read_log(ALL_LOG, "motion")
self.assertAlmostEquals(score, 4.04982535417, places=4)
score, scores = read_log(ALL_LOG, "ansnr")
self.assertAlmostEquals(score, 23.5095715208, places=4)
score, scores = read_log(ALL_LOG, "adm")
self.assertAlmostEquals(score, 0.915509520833, places=4)
score, scores = read_log(ALL_LOG, "adm_num")
self.assertAlmostEquals(score, 6899.24648475, places=4)
score, scores = read_log(ALL_LOG, "adm_den")
self.assertAlmostEquals(score, 7535.29963308, places=4)
score, scores = read_log(ALL_LOG, "vif_num")
self.assertAlmostEquals(score, 712650.023478, places=0)
score, scores = read_log(ALL_LOG, "vif_den")
self.assertAlmostEquals(score, 1597314.95249, places=0)
score, scores = read_log(ALL_LOG, "anpsnr")
self.assertAlmostEquals(score, 34.1902860625, places=4)
score, scores = read_log(ALL_LOG, "vif_num_scale0")
self.assertAlmostEquals(score, 468101.509766, places=0)
score, scores = read_log(ALL_LOG, "vif_num_scale1")
self.assertAlmostEquals(score, 184971.572266, places=1)
score, scores = read_log(ALL_LOG, "vif_num_scale2")
self.assertAlmostEquals(score, 47588.8323567, places=0)
score, scores = read_log(ALL_LOG, "vif_num_scale3")
self.assertAlmostEquals(score, 11988.1090902, places=1)
score, scores = read_log(ALL_LOG, "vif_den_scale0")
self.assertAlmostEquals(score, 1287822.80208, places=0)
score, scores = read_log(ALL_LOG, "vif_den_scale1")
self.assertAlmostEquals(score, 241255.067708, places=1)
score, scores = read_log(ALL_LOG, "vif_den_scale2")
self.assertAlmostEquals(score, 55149.8169759, places=2)
score, scores = read_log(ALL_LOG, "vif_den_scale3")
self.assertAlmostEquals(score, 13087.2657267, places=2)
        score, scores = read_log(ALL_LOG, "adm_num_scale0")
        self.assertAlmostEquals(score, 280.8532403958333, places=4)
        score, scores = read_log(ALL_LOG, "adm_den_scale0")
self.assertAlmostEquals(score, 362.16940943749995, places=4)
score, scores = read_log(ALL_LOG, "adm_num_scale1")
self.assertAlmostEquals(score, 1288.2366028125, places=4)
score, scores = read_log(ALL_LOG, "adm_den_scale1")
self.assertAlmostEquals(score, 1509.703552229167, places=4)
score, scores = read_log(ALL_LOG, "adm_num_scale2")
self.assertAlmostEquals(score, 2343.150772125, places=4)
score, scores = read_log(ALL_LOG, "adm_den_scale2")
self.assertAlmostEquals(score, 2554.939768562501, places=4)
score, scores = read_log(ALL_LOG, "adm_num_scale3")
self.assertAlmostEquals(score, 2987.005869583334, places=4)
score, scores = read_log(ALL_LOG, "adm_den_scale3")
self.assertAlmostEquals(score, 3108.4869029375, places=4)
def test_psnr(self):
PSNR_LOG = self.LOG_FILENAME + '_psnr'
print 'test psnr on yuv422p10le...'
cmd = "{psnr} {fmt} {ref} {dis} {w} {h} > {log}".format(
psnr=self.PSNR, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=PSNR_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(PSNR_LOG, "psnr")
self.assertAlmostEquals(score, 30.7805732917, places=4)
self.assertAlmostEquals(scores[0], 34.786288, places=4)
self.assertAlmostEquals(scores[1], 31.908737, places=4)
def test_ssim(self):
SSIM_LOG = self.LOG_FILENAME + '_ssim'
print 'test ssim on yuv422p10le...'
cmd = "{ssim} {fmt} {ref} {dis} {w} {h} > {log}".format(
ssim=self.SSIM, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=SSIM_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(SSIM_LOG, "ssim")
self.assertAlmostEquals(score, 0.863226541666667, places=4)
self.assertAlmostEquals(scores[0], 0.925023, places=4)
self.assertAlmostEquals(scores[1], 0.891992, places=4)
score, scores = read_log(SSIM_LOG, "ssim_l")
        self.assertAlmostEquals(score, 0.998147458333333, places=4)
self.assertAlmostEquals(scores[0], 0.999524, places=4)
self.assertAlmostEquals(scores[1], 0.998983, places=4)
score, scores = read_log(SSIM_LOG, "ssim_c")
self.assertAlmostEquals(score, 0.9612679375000001, places=4)
self.assertAlmostEquals(scores[0], 0.979614, places=4)
self.assertAlmostEquals(scores[1], 0.96981, places=4)
score, scores = read_log(SSIM_LOG, "ssim_s")
self.assertAlmostEquals(score, 0.8977363333333335, places=4)
self.assertAlmostEquals(scores[0], 0.943966, places=4)
self.assertAlmostEquals(scores[1], 0.919507, places=4)
def test_ms_ssim(self):
MS_SSIM_LOG = self.LOG_FILENAME + '_msssim'
print 'test ms_ssim on yuv422p10le...'
cmd = "{ms_ssim} {fmt} {ref} {dis} {w} {h} > {log}".format(
ms_ssim=self.MS_SSIM, fmt=self.YUV_FMT, ref=self.REF_YUV, dis=self.DIS_YUV,
w=self.YUV_WIDTH, h=self.YUV_HEIGHT, log=MS_SSIM_LOG
)
subprocess.call(cmd, shell=True)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim")
self.assertAlmostEquals(score, 0.9632498125, places=4)
self.assertAlmostEquals(scores[0], 0.981968, places=4)
self.assertAlmostEquals(scores[1], 0.973366, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale0")
self.assertAlmostEquals(score, 0.998147458333333, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale0")
self.assertAlmostEquals(score, 0.9612679375000001, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale0")
self.assertAlmostEquals(score, 0.8977363333333335, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale1")
self.assertAlmostEquals(score, 0.9989961250000002, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale1")
self.assertAlmostEquals(score, 0.9857694375, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale1")
self.assertAlmostEquals(score, 0.941185875, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale2")
self.assertAlmostEquals(score, 0.9992356458333332, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale2")
self.assertAlmostEquals(score, 0.997034020833, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale2")
self.assertAlmostEquals(score, 0.977992145833, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale3")
self.assertAlmostEquals(score, 0.9992921041666665, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale3")
self.assertAlmostEquals(score, 0.9995884375000003, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale3")
self.assertAlmostEquals(score, 0.9938712499999998, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_l_scale4")
self.assertAlmostEquals(score, 0.9994035625000003, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_c_scale4")
self.assertAlmostEquals(score, 0.999907625, places=4)
score, scores = read_log(MS_SSIM_LOG, "ms_ssim_s_scale4")
        self.assertAlmostEquals(score, 0.998222583333, places=4)
class CornerCaseTest(unittest.TestCase):
VMAF = config.ROOT + "/feature/vmaf"
LOG_FILENAME = config.ROOT + "/workspace/workdir/logCornerCaseTest"
CMD_TEMPLATE = """
{vmaf} vif {fmt} {ref} {dis} {w} {h} > {log};
{vmaf} adm {fmt} {ref} {dis} {w} {h} >> {log};
{vmaf} ansnr {fmt} {ref} {dis} {w} {h} >> {log};
{vmaf} motion {fmt} {ref} {dis} {w} {h} >> {log};"""
def setUp(self):
unittest.TestCase.setUp(self)
if os.path.exists(self.LOG_FILENAME):
os.remove(self.LOG_FILENAME)
def tearDown(self):
unittest.TestCase.tearDown(self)
if os.path.exists(self.LOG_FILENAME):
os.remove(self.LOG_FILENAME)
        if REMOVE_LOG:
(logPath, logFilePrefix) = os.path.split(self.LOG_FILENAME)
filenames = [filename for filename in os.listdir(logPath) if filename.startswith(logFilePrefix)]
for filename in filenames:
os.remove(os.path.join(logPath, filename))
def test_checkerboard_identical(self):
print 'test on checkerboard pattern identical...'
LOCAL_LOG_FILENAME = self.LOG_FILENAME + '_checkerboardIdentical'
ref_yuv = config.ROOT + "/resource/yuv/checkerboard_1920_1080_10_3_0_0.yuv"
dis_yuv = config.ROOT + "/resource/yuv/checkerboard_1920_1080_10_3_0_0.yuv"
yuv_fmt = "yuv420p"
yuv_width = 1920
yuv_height = 1080
cmd = self.CMD_TEMPLATE.format(vmaf=self.VMAF, fmt=yuv_fmt, ref=ref_yuv,
dis=dis_yuv, w=yuv_width, h=yuv_height,
log=LOCAL_LOG_FILENAME)
subprocess.call(cmd, shell=True)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm")[0], 1.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "ansnr")[0], 21.1138813333, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "motion")[0], 12.554836666666667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif")[0], 1.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num")[0], 30814.90966033333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den")[0], 30814.90966033333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num")[0], 33021350.5, places=-3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den")[0], 33021387.0625, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "anpsnr")[0], 29.8567246667, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale0")[0], 25757432.0, places=-3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale0")[0], 25757473.3333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale3")[0], 259774.958333, places=1)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale3")[0], 259774.9375, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale0")[0], 5.380622, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale0")[0], 5.380622, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale3")[0], 19622.156901, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale3")[0], 19622.156901, places=3)
def test_checkerboard_shifted_by_1(self):
print 'test on checkerboard pattern shifted by 1...'
LOCAL_LOG_FILENAME = self.LOG_FILENAME + '_checkerboard_shifted_by_1'
ref_yuv = config.ROOT + "/resource/yuv/checkerboard_1920_1080_10_3_0_0.yuv"
dis_yuv = config.ROOT + "/resource/yuv/checkerboard_1920_1080_10_3_1_0.yuv"
yuv_fmt = "yuv420p"
yuv_width = 1920
yuv_height = 1080
cmd = self.CMD_TEMPLATE.format(vmaf=self.VMAF, fmt=yuv_fmt, ref=ref_yuv,
dis=dis_yuv, w=yuv_width, h=yuv_height,
log=LOCAL_LOG_FILENAME)
subprocess.call(cmd, shell=True)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm")[0], 0.81386000000000003, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "ansnr")[0], 7.92623066667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "motion")[0], 12.5548366667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif")[0], 0.156834666667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num")[0], 25079.528779, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den")[0], 30814.90966033333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num")[0], 5178894.51562, places=-1)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den")[0], 33021387.0625, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "anpsnr")[0], 16.669074, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale0")[0], 2908829.0, places=-1)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale0")[0], 25757473.3333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale3")[0], 128957.796875, places=-2)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale3")[0], 259774.9375, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale0")[0], 3.679394, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale0")[0], 5.380622, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale3")[0], 16194.536132666666, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale3")[0], 19622.156901, places=3)
def test_checkerboard_opposite(self):
print 'test on checkerboard pattern opposite...'
LOCAL_LOG_FILENAME = self.LOG_FILENAME + '_checkerboard_opposite'
ref_yuv = config.ROOT + "/resource/yuv/checkerboard_1920_1080_10_3_0_0.yuv"
dis_yuv = config.ROOT + "/resource/yuv/checkerboard_1920_1080_10_3_10_0.yuv"
yuv_fmt = "yuv420p"
yuv_width = 1920
yuv_height = 1080
cmd = self.CMD_TEMPLATE.format(vmaf=self.VMAF, fmt=yuv_fmt, ref=ref_yuv,
dis=dis_yuv, w=yuv_width, h=yuv_height,
log=LOCAL_LOG_FILENAME)
subprocess.call(cmd, shell=True)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "ansnr")[0], -5.758091333333334, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "motion")[0], 12.554836666666667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den")[0], 30814.90966033333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num")[0], 6.66666666667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den")[0], 33021387.0625, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "anpsnr")[0], 2.984752, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale0")[0], 6.66666666667, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale0")[0], 25757473.3333, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale3")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale3")[0], 259774.9375, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale0")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale0")[0], 5.380622, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale3")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale3")[0], 19622.156901, places=3)
def test_flat_identical(self):
print 'test on flat pattern identical...'
LOCAL_LOG_FILENAME = self.LOG_FILENAME + '_flat_identical'
ref_yuv = config.ROOT + "/resource/yuv/flat_1920_1080_0.yuv"
dis_yuv = config.ROOT + "/resource/yuv/flat_1920_1080_0.yuv"
yuv_fmt = "yuv420p"
yuv_width = 1920
yuv_height = 1080
cmd = self.CMD_TEMPLATE.format(vmaf=self.VMAF, fmt=yuv_fmt, ref=ref_yuv,
dis=dis_yuv, w=yuv_width, h=yuv_height,
log=LOCAL_LOG_FILENAME)
subprocess.call(cmd, shell=True)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm")[0], 1.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "ansnr")[0], 60.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "motion")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif")[0], 1.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num")[0], 2754000.15625, places=1)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den")[0], 2754000.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "anpsnr")[0], 60.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale0")[0], 2073600.125, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale0")[0], 2073600.000, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale3")[0], 32400.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale3")[0], 32400.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale0")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale0")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale3")[0], 0.0006910000000000001, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale3")[0], 0.0006910000000000001, places=3)
def test_flat_value10(self):
print 'test on flat pattern of value 10...'
LOCAL_LOG_FILENAME = self.LOG_FILENAME + '_flat_value10'
ref_yuv = config.ROOT + "/resource/yuv/flat_1920_1080_0.yuv"
dis_yuv = config.ROOT + "/resource/yuv/flat_1920_1080_10.yuv"
yuv_fmt = "yuv420p"
yuv_width = 1920
yuv_height = 1080
cmd = self.CMD_TEMPLATE.format(vmaf=self.VMAF, fmt=yuv_fmt, ref=ref_yuv,
dis=dis_yuv, w=yuv_width, h=yuv_height,
log=LOCAL_LOG_FILENAME)
subprocess.call(cmd, shell=True)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm")[0], 1.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "ansnr")[0], 21.899511, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "motion")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif")[0], 1.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den")[0], 0.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num")[0], 2753999.99219, places=1)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den")[0], 2754000.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "anpsnr")[0], 29.045954, places=4)
        self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale0")[0], 2073600.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale0")[0], 2073600.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_num_scale3")[0], 32400.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "vif_den_scale3")[0], 32400.0, places=4)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale0")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale0")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_num_scale3")[0], 0.0, places=3)
self.assertAlmostEquals(read_log(LOCAL_LOG_FILENAME, "adm_den_scale3")[0], 0.0006910000000000001, places=3)
if __name__ == '__main__':
    # exit=False keeps unittest.main() from calling sys.exit(), so the
    # final status line below is actually reached.
    unittest.main(exit=False)
    print 'Done.'
| 56.361905 | 115 | 0.671945 | 5,530 | 41,426 | 4.797107 | 0.053345 | 0.210645 | 0.079727 | 0.095673 | 0.944248 | 0.941232 | 0.931205 | 0.919217 | 0.912319 | 0.907871 | 0 | 0.110775 | 0.202216 | 41,426 | 734 | 116 | 56.438692 | 0.691912 | 0.000531 | 0 | 0.766862 | 0 | 0.029326 | 0.12381 | 0.019009 | 0 | 0 | 0 | 0 | 0.3739 | 0 | null | null | 0 | 0.007331 | null | null | 0.033724 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
05ea3c7d321ea5b5232dbfe5df7ad45558392724 | 154 | py | Python | outcome_adaptive_lasso/__init__.py | vishalbelsare/Outcome-Adaptive-LASSO | 2037e2cfd8b80f220a78fc136145871ed1a99de0 | [
"MIT"
] | 6 | 2019-12-03T15:59:34.000Z | 2021-05-17T08:54:11.000Z | outcome_adaptive_lasso/__init__.py | shenlong95/Outcome-Adaptive-LASSO | 2037e2cfd8b80f220a78fc136145871ed1a99de0 | [
"MIT"
] | 1 | 2021-02-03T10:45:38.000Z | 2021-02-03T10:45:38.000Z | outcome_adaptive_lasso/__init__.py | shenlong95/Outcome-Adaptive-LASSO | 2037e2cfd8b80f220a78fc136145871ed1a99de0 | [
"MIT"
] | 4 | 2020-03-03T23:04:37.000Z | 2022-02-10T05:56:42.000Z | from .outcome_adaptive_lasso import calc_outcome_adaptive_lasso, calc_ate_vanilla_ipw
from .synthetic_data_simulation import generate_synthetic_dataset
| 51.333333 | 86 | 0.909091 | 21 | 154 | 6.095238 | 0.666667 | 0.234375 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 154 | 2 | 87 | 77 | 0.895105 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
af79d3a686a22a898c32810f0a784d8f903a5e9b | 106 | py | Python | python/list/list/src/list/list.py | bg1bgst333/Sample | 68e3a5c26c6d9bc1906ce0cca2fd586f0790fa52 | [
"MIT"
] | 9 | 2016-12-22T20:24:09.000Z | 2021-05-08T08:48:24.000Z | python/list/list/src/list/list.py | bg1bgst333/Sample | 68e3a5c26c6d9bc1906ce0cca2fd586f0790fa52 | [
"MIT"
] | 36 | 2018-08-16T06:43:36.000Z | 2022-03-25T19:01:34.000Z | python/list/list/src/list/list.py | bg1bgst333/Sample | 68e3a5c26c6d9bc1906ce0cca2fd586f0790fa52 | [
"MIT"
] | 9 | 2016-09-03T02:57:31.000Z | 2021-09-09T02:42:26.000Z | #!/usr/bin/python
lst = [10, 20, 30]
print(lst[1])
print("lst[1] = %s" % lst[1])
print("lst = %s" % lst)
| 10.6 | 28 | 0.528302 | 20 | 106 | 2.8 | 0.5 | 0.428571 | 0.321429 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109756 | 0.226415 | 106 | 9 | 29 | 11.777778 | 0.573171 | 0.150943 | 0 | 0 | 0 | 0 | 0.213483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.75 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
bb83251ff7a692defbf085cc3ee5fa4d8f10e8da | 30 | py | Python | terrestrial/api/v1/__init__.py | andrejcermak/terrestrial | 41d0dccd0a6d0d7d1756a8067270a6b375665636 | [
"MIT"
] | 12 | 2018-12-13T15:19:43.000Z | 2021-11-10T13:22:05.000Z | terrestrial/api/v1/__init__.py | andrejcermak/terrestrial | 41d0dccd0a6d0d7d1756a8067270a6b375665636 | [
"MIT"
] | null | null | null | terrestrial/api/v1/__init__.py | andrejcermak/terrestrial | 41d0dccd0a6d0d7d1756a8067270a6b375665636 | [
"MIT"
] | 1 | 2021-02-18T09:39:56.000Z | 2021-02-18T09:39:56.000Z | from .routes import blueprint
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
bbae7365b07436ff4157c851a73652381a41b943 | 17,852 | py | Python | arucoDetection/src/svgfig/svgfig/pathdata.py | LavaHawk0123/Artmis-Drone | b78dcbb28ecdce4d82fc4addb60367e4cc266349 | [
"MIT"
] | null | null | null | arucoDetection/src/svgfig/svgfig/pathdata.py | LavaHawk0123/Artmis-Drone | b78dcbb28ecdce4d82fc4addb60367e4cc266349 | [
"MIT"
] | null | null | null | arucoDetection/src/svgfig/svgfig/pathdata.py | LavaHawk0123/Artmis-Drone | b78dcbb28ecdce4d82fc4addb60367e4cc266349 | [
"MIT"
] | null | null | null | import defaults
############################### convenient functions for making paths
def poly(*data, **kwds):
errstring = "Arguments are: poly((x1,y1), (x2,y2), ..., loop=False)"
loop = False
if "loop" in kwds:
loop = kwds["loop"]
del kwds["loop"]
if len(kwds) > 0:
        raise TypeError(errstring)
if len(data) <= 1:
data = data[0]
try:
output = []
for x, y in data:
if output == []:
output.append(("M", x, y))
else:
output.append(("L", x, y))
if loop and len(data) > 0:
output.append(("Z",))
return output
except (TypeError, ValueError):
        raise TypeError(errstring)
def bezier(*data, **kwds):
errstring = "Arguments are: bezier((x,y,c1x,c1y,c2x,c2y), ..., loop=False)"
loop = False
if "loop" in kwds:
loop = kwds["loop"]
del kwds["loop"]
if len(kwds) > 0:
        raise TypeError(errstring)
try:
output = []
for x, y, c1x, c1y, c2x, c2y in data:
if output == []:
output.append(("M", x, y))
else:
output.append(("C", c1x, c1y, c2x, c2y, x, y))
if loop and len(data) > 0:
output.append(("Z",))
return output
except (TypeError, ValueError):
        raise TypeError(errstring)
def velocity(*data, **kwds):
errstring = "Arguments are: velocity((x,y,vx,vy), ..., loop=False)"
loop = False
if "loop" in kwds:
loop = kwds["loop"]
del kwds["loop"]
if len(kwds) > 0:
        raise TypeError(errstring)
try:
output = []
        indexes = list(range(len(data)))
if loop and len(data) > 0:
indexes.append(0)
for i in indexes:
if output == []:
output.append(("M", data[i][0], data[i][1]))
else:
inext = (i+1) % len(data)
iprev = (i-1) % len(data)
x, y = data[i][0], data[i][1]
c1x, c1y = data[iprev][2]/3. + data[iprev][0], data[iprev][3]/3. + data[iprev][1]
c2x, c2y = data[i][2]/-3. + x, data[i][3]/-3. + y
output.append(("C", c1x, c1y, c2x, c2y, x, y))
if loop and len(data) > 0:
output.append(("Z",))
return output
except (TypeError, ValueError):
        raise TypeError(errstring)
def foreback(*data, **kwds):
errstring = "Arguments are: foreback((x,y,vfx,vfy,vbx,vby), ..., loop=False)"
loop = False
if "loop" in kwds:
loop = kwds["loop"]
del kwds["loop"]
if len(kwds) > 0:
        raise TypeError(errstring)
try:
output = []
        indexes = list(range(len(data)))
if loop and len(data) > 0:
indexes.append(0)
for i in indexes:
if output == []:
output.append(("M", data[i][0], data[i][1]))
else:
inext = (i+1) % len(data)
iprev = (i-1) % len(data)
x, y = data[i][0], data[i][1]
c1x, c1y = data[iprev][4]/3. + data[iprev][0], data[iprev][5]/3. + data[iprev][1]
c2x, c2y = data[i][2]/-3. + x, data[i][3]/-3. + y
output.append(("C", c1x, c1y, c2x, c2y, x, y))
if loop and len(data) > 0:
output.append(("Z",))
return output
except (TypeError, ValueError):
        raise TypeError(errstring)
def smooth(*data, **kwds):
errstring = "Arguments are: smooth((x1,y1), (x2,y2), ..., loop=False)"
loop = False
if "loop" in kwds:
loop = kwds["loop"]
del kwds["loop"]
if len(kwds) > 0:
        raise TypeError(errstring)
try:
x, y = zip(*data)
vx, vy = [0.]*len(data), [0.]*len(data)
        for i in range(len(data)):
inext = (i+1) % len(data)
iprev = (i-1) % len(data)
vx[i] = (x[inext] - x[iprev])/2.
vy[i] = (y[inext] - y[iprev])/2.
if not loop and (i == 0 or i == len(data)-1):
vx[i], vy[i] = 0., 0.
        # velocity takes points as *data, so the zipped points must be unpacked
        # and loop must be passed as a keyword argument
        return velocity(*zip(x, y, vx, vy), loop=loop)
except (TypeError, ValueError):
        raise TypeError(errstring)
############################### pathdata parsers
def parse_whitespace(index, pathdata):
while index < len(pathdata) and pathdata[index] in (" ", "\t", "\r", "\n", ","):
index += 1
return index, pathdata
def parse_command(index, pathdata):
index, pathdata = parse_whitespace(index, pathdata)
if index >= len(pathdata):
return None, index, pathdata
command = pathdata[index]
if "A" <= command <= "Z" or "a" <= command <= "z":
index += 1
return command, index, pathdata
else:
return None, index, pathdata
def parse_number(index, pathdata):
index, pathdata = parse_whitespace(index, pathdata)
if index >= len(pathdata):
return None, index, pathdata
first_digit = pathdata[index]
if "0" <= first_digit <= "9" or first_digit in ("-", "+", "."):
start = index
while index < len(pathdata) and ("0" <= pathdata[index] <= "9" or pathdata[index] in ("-", "+", ".", "e", "E")):
index += 1
end = index
index = end
return float(pathdata[start:end]), index, pathdata
else:
return None, index, pathdata
def parse_boolean(index, pathdata):
index, pathdata = parse_whitespace(index, pathdata)
if index >= len(pathdata):
return None, index, pathdata
first_digit = pathdata[index]
if first_digit in ("0", "1"):
index += 1
return int(first_digit), index, pathdata
else:
return None, index, pathdata
############################### main parsing function (keeps defaults from getting messy)
def parse(pathdata):
if isinstance(pathdata, (list, tuple)):
return pathdata
output = []
index = 0
while True:
command, index, pathdata = parse_command(index, pathdata)
index, pathdata = parse_whitespace(index, pathdata)
if command is None and index == len(pathdata):
break # this is the normal way out of the loop
if command in ("Z", "z"):
output.append((command,))
######################
elif command in ("H", "h", "V", "v"):
errstring = "Pathdata command \"%s\" requires a number at index %d" % (command, index)
num1, index, pathdata = parse_number(index, pathdata)
            if num1 is None:
                raise ValueError(errstring)
while num1 is not None:
output.append((command, num1))
num1, index, pathdata = parse_number(index, pathdata)
######################
elif command in ("M", "m", "L", "l", "T", "t"):
errstring = "Pathdata command \"%s\" requires an x,y pair at index %d" % (command, index)
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
            if num1 is None:
                raise ValueError(errstring)
            while num1 is not None:
                if num2 is None:
                    raise ValueError(errstring)
output.append((command, num1, num2))
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
######################
elif command in ("S", "s", "Q", "q"):
errstring = "Pathdata command \"%s\" requires a cx,cy,x,y quadruplet at index %d" % (command, index)
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
num3, index, pathdata = parse_number(index, pathdata)
num4, index, pathdata = parse_number(index, pathdata)
            if num1 is None:
                raise ValueError(errstring)
            while num1 is not None:
                if num2 is None or num3 is None or num4 is None:
                    raise ValueError(errstring)
output.append((command, num1, num2, num3, num4))
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
num3, index, pathdata = parse_number(index, pathdata)
num4, index, pathdata = parse_number(index, pathdata)
######################
elif command in ("C", "c"):
errstring = "Pathdata command \"%s\" requires a c1x,c1y,c2x,c2y,x,y sextuplet at index %d" % (command, index)
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
num3, index, pathdata = parse_number(index, pathdata)
num4, index, pathdata = parse_number(index, pathdata)
num5, index, pathdata = parse_number(index, pathdata)
num6, index, pathdata = parse_number(index, pathdata)
            if num1 is None:
                raise ValueError(errstring)
            while num1 is not None:
                if num2 is None or num3 is None or num4 is None or num5 is None or num6 is None:
                    raise ValueError(errstring)
output.append((command, num1, num2, num3, num4, num5, num6))
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
num3, index, pathdata = parse_number(index, pathdata)
num4, index, pathdata = parse_number(index, pathdata)
num5, index, pathdata = parse_number(index, pathdata)
num6, index, pathdata = parse_number(index, pathdata)
######################
elif command in ("A", "a"):
errstring = "Pathdata command \"%s\" requires a rx,ry,angle,large-arc-flag,sweep-flag,x,y septuplet at index %d" % (command, index)
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
num3, index, pathdata = parse_number(index, pathdata)
num4, index, pathdata = parse_boolean(index, pathdata)
num5, index, pathdata = parse_boolean(index, pathdata)
num6, index, pathdata = parse_number(index, pathdata)
num7, index, pathdata = parse_number(index, pathdata)
            if num1 is None:
                raise ValueError(errstring)
            while num1 is not None:
                if num2 is None or num3 is None or num4 is None or num5 is None or num6 is None or num7 is None:
                    raise ValueError(errstring)
output.append((command, num1, num2, num3, num4, num5, num6, num7))
num1, index, pathdata = parse_number(index, pathdata)
num2, index, pathdata = parse_number(index, pathdata)
num3, index, pathdata = parse_number(index, pathdata)
num4, index, pathdata = parse_boolean(index, pathdata)
num5, index, pathdata = parse_boolean(index, pathdata)
num6, index, pathdata = parse_number(index, pathdata)
num7, index, pathdata = parse_number(index, pathdata)
return output
############################### transformation function (keeps defaults from getting messy)
def transform(func, pathdata):
x, y, X, Y = None, None, None, None
output = []
for datum in pathdata:
if not isinstance(datum, (tuple, list)):
            raise TypeError("Pathdata elements must be lists/tuples")
command = datum[0]
args = datum[1:]
######################
if command in ("Z", "z"):
x, y, X, Y = None, None, None, None
output.append(("Z",))
######################
elif command in ("H", "h", "V", "v"):
num1 = args[0]
if command == "H" or (command == "h" and x is None):
x = num1
elif command == "h":
x += num1
elif command == "V" or (command == "v" and y is None):
y = num1
elif command == "v":
y += num1
X, Y = func(x, y)
output.append(("L", X, Y))
######################
elif command in ("M", "m", "L", "l", "T", "t"):
num1, num2 = args
if command.isupper() or x is None or y is None:
x, y = num1, num2
else:
x += num1
y += num2
X, Y = func(x, y)
output.append((command.capitalize(), X, Y))
######################
elif command in ("S", "s", "Q", "q"):
num1, num2, num3, num4 = args
if command.isupper() or x is None or y is None:
cx, cy = num1, num2
else:
cx = x + num1
cy = y + num2
if command.isupper() or x is None or y is None:
x, y = num3, num4
else:
x += num3
y += num4
CX, CY = func(cx, cy)
X, Y = func(x, y)
output.append((command.capitalize(), CX, CY, X, Y))
######################
elif command in ("C", "c"):
num1, num2, num3, num4, num5, num6 = args
if command.isupper() or x is None or y is None:
c1x, c1y = num1, num2
else:
c1x = x + num1
c1y = y + num2
if command.isupper() or x is None or y is None:
c2x, c2y = num3, num4
else:
c2x = x + num3
c2y = y + num4
if command.isupper() or x is None or y is None:
x, y = num5, num6
else:
x += num5
y += num6
C1X, C1Y = func(c1x, c1y)
C2X, C2Y = func(c2x, c2y)
X, Y = func(x, y)
output.append((command.capitalize(), C1X, C1Y, C2X, C2Y, X, Y))
######################
elif command in ("A", "a"):
num1, num2, angle, large_arc_flag, sweep_flag, num3, num4 = args
oldx, oldy = x, y
OLDX, OLDY = X, Y
if command.isupper() or x is None or y is None:
x, y = num3, num4
else:
x += num3
y += num4
X, Y = func(x, y)
if x is not None and y is not None:
centerx, centery = (x + oldx)/2., (y + oldy)/2.
CENTERX, CENTERY = (X + OLDX)/2., (Y + OLDY)/2.
rx = centerx + num1
ry = centery + num2
RX, RY = func(rx, ry)
output.append((command.capitalize(), RX - CENTERX, RY - CENTERY, angle, large_arc_flag, sweep_flag, X, Y))
return output
############################### bbox function (keeps defaults from getting messy)
def bbox(pathdata):
x, y = None, None
output = defaults.BBox(None, None, None, None)
for datum in pathdata:
if not isinstance(datum, (tuple, list)):
            raise TypeError("Pathdata elements must be lists/tuples")
command = datum[0]
args = datum[1:]
######################
if command in ("Z", "z"):
pass
######################
elif command in ("H", "h", "V", "v"):
num1 = args[0]
if command == "H" or (command == "h" and x is None):
x = num1
elif command == "h":
x += num1
elif command == "V" or (command == "v" and y is None):
y = num1
elif command == "v":
y += num1
output.insert(x, y)
######################
elif command in ("M", "m", "L", "l", "T", "t"):
num1, num2 = args
if command.isupper() or x is None or y is None:
x, y = num1, num2
else:
x += num1
y += num2
output.insert(x, y)
######################
elif command in ("S", "s", "Q", "q"):
num1, num2, num3, num4 = args
if command.isupper() or x is None or y is None:
cx, cy = num1, num2
else:
cx = x + num1
cy = y + num2
if command.isupper() or x is None or y is None:
x, y = num3, num4
else:
x += num3
y += num4
output.insert(x, y)
######################
elif command in ("C", "c"):
num1, num2, num3, num4, num5, num6 = args
if command.isupper() or x is None or y is None:
c1x, c1y = num1, num2
else:
c1x = x + num1
c1y = y + num2
if command.isupper() or x is None or y is None:
c2x, c2y = num3, num4
else:
c2x = x + num3
c2y = y + num4
if command.isupper() or x is None or y is None:
x, y = num5, num6
else:
x += num5
y += num6
output.insert(x, y)
######################
elif command in ("A", "a"):
num1, num2, angle, large_arc_flag, sweep_flag, num3, num4 = args
oldx, oldy = x, y
if command.isupper() or x is None or y is None:
x, y = num3, num4
else:
x += num3
y += num4
if x is not None and y is not None:
centerx, centery = (x + oldx)/2., (y + oldy)/2.
output.insert(x, y)
return output
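# For a quick sanity check of the tokenize-and-group approach that
# parse_whitespace/parse_command/parse_number implement above, the standalone
# sketch below condenses the same idea into a single regex pass. parse_path is
# a hypothetical helper, not part of svgfig, and it ignores svgfig's
# boolean-only restriction on the two arc flags:

```python
import re

# Matches either a single command letter or a (possibly signed,
# possibly exponent-bearing) number; commas/whitespace are skipped.
_TOKEN = re.compile(r"[A-Za-z]|[-+]?(?:\d+\.?\d*|\.\d+)(?:[eE][-+]?\d+)?")

# Numeric arguments consumed per repetition of each command.
_ARITY = {"M": 2, "L": 2, "H": 1, "V": 1, "C": 6,
          "S": 4, "Q": 4, "T": 2, "A": 7, "Z": 0}

def parse_path(pathdata):
    tokens = _TOKEN.findall(pathdata)
    output, i = [], 0
    while i < len(tokens):
        command = tokens[i]
        arity = _ARITY[command.upper()]
        i += 1
        if arity == 0:
            output.append((command,))
            continue
        # A command may be followed by several argument groups (implicit repeats),
        # mirroring the inner while-loops of parse() above.
        while i < len(tokens) and not tokens[i].isalpha():
            args = tuple(float(t) for t in tokens[i:i + arity])
            output.append((command,) + args)
            i += arity
    return output

print(parse_path("M 0 0 L 10 10 Z"))
# -> [('M', 0.0, 0.0), ('L', 10.0, 10.0), ('Z',)]
```

# Unlike parse(), this sketch does not validate incomplete argument groups;
# it is only meant to illustrate the grammar the real parser walks by index.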
| 32.937269 | 143 | 0.481347 | 2,115 | 17,852 | 4.032624 | 0.074704 | 0.158518 | 0.09497 | 0.104115 | 0.862469 | 0.828937 | 0.796342 | 0.766678 | 0.757064 | 0.715559 | 0 | 0.029655 | 0.370995 | 17,852 | 541 | 144 | 32.998152 | 0.729896 | 0.01462 | 0 | 0.793612 | 0 | 0.007371 | 0.050413 | 0.007152 | 0.004914 | 0 | 0 | 0 | 0 | 0 | null | null | 0.002457 | 0.002457 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
bbe7830f073fbbc840a3ea388cd4e1913145f962 | 1,852 | py | Python | mysql_config/WebMonitoring/generators/resource/tests/test_resource_generate_samples_queries.py | raresraf/rafMetrics | 21eb5e8210364bf70eee746d71c45f3e353dcb10 | [
"MIT"
] | 15 | 2019-11-03T18:01:27.000Z | 2021-05-05T20:54:57.000Z | mysql_config/WebMonitoring/generators/resource/tests/test_resource_generate_samples_queries.py | raresraf/rafMetrics | 21eb5e8210364bf70eee746d71c45f3e353dcb10 | [
"MIT"
] | 392 | 2019-11-09T21:28:01.000Z | 2022-03-31T13:04:45.000Z | mysql_config/WebMonitoring/generators/resource/tests/test_resource_generate_samples_queries.py | raresraf/rafMetrics | 21eb5e8210364bf70eee746d71c45f3e353dcb10 | [
"MIT"
] | 1 | 2021-03-11T18:35:16.000Z | 2021-03-11T18:35:16.000Z | from mysql_config.WebMonitoring.generators.resource.resource_generate_samples_queries import (
resource_generate_samples_queries, )
from mysql_config.WebMonitoring.generators.resource.resource_generate_samples_queries_size import (
resource_generate_samples_queries_size, )
from mysql_config.WebMonitoring.generators.resource.tests.resource_expected_size import (
EXPECTED_DAILY_RESOURCE_GENERATE_SAMPLES_QUERIES_SIZE,
EXPECTED_WEEKLY_RESOURCE_GENERATE_SAMPLES_QUERIES_SIZE,
EXPECTED_MONTHLY_RESOURCE_GENERATE_SAMPLES_QUERIES_SIZE,
)
from mysql_config.WebMonitoring.generators.resource.tests.resource_expected_time import (
EXPECTED_DAILY_RESOURCE_GENERATE_SAMPLES_QUERIES,
EXPECTED_WEEKLY_RESOURCE_GENERATE_SAMPLES_QUERIES,
EXPECTED_MONTHLY_RESOURCE_GENERATE_SAMPLES_QUERIES,
)
def test_resource_generate_samples_queries(capfd):
resource_generate_samples_queries("daily")
out, _ = capfd.readouterr()
assert out == EXPECTED_DAILY_RESOURCE_GENERATE_SAMPLES_QUERIES
resource_generate_samples_queries("weekly")
    out, _ = capfd.readouterr()
assert out == EXPECTED_WEEKLY_RESOURCE_GENERATE_SAMPLES_QUERIES
resource_generate_samples_queries("monthly")
    out, _ = capfd.readouterr()
assert out == EXPECTED_MONTHLY_RESOURCE_GENERATE_SAMPLES_QUERIES
def test_resource_generate_samples_queries_size(capfd):
resource_generate_samples_queries_size("daily")
out, _ = capfd.readouterr()
assert out == EXPECTED_DAILY_RESOURCE_GENERATE_SAMPLES_QUERIES_SIZE
resource_generate_samples_queries_size("weekly")
    out, _ = capfd.readouterr()
assert out == EXPECTED_WEEKLY_RESOURCE_GENERATE_SAMPLES_QUERIES_SIZE
resource_generate_samples_queries_size("monthly")
    out, _ = capfd.readouterr()
assert out == EXPECTED_MONTHLY_RESOURCE_GENERATE_SAMPLES_QUERIES_SIZE
| 43.069767 | 99 | 0.835853 | 214 | 1,852 | 6.672897 | 0.11215 | 0.268908 | 0.386555 | 0.504202 | 0.994398 | 0.907563 | 0.876751 | 0.758403 | 0.716387 | 0.716387 | 0 | 0 | 0.106911 | 1,852 | 42 | 100 | 44.095238 | 0.863362 | 0 | 0 | 0.176471 | 1 | 0 | 0.019438 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
a58edd9f9428fb607be2aa4e0297b2c21f8f7180 | 76 | py | Python | ufdl-annotation-utils/src/ufdl/annotation_utils/__init__.py | waikato-ufdl/ufdl-backend | 776fc906c61eba6c2f2e6324758e7b8a323e30d7 | [
"Apache-2.0"
] | null | null | null | ufdl-annotation-utils/src/ufdl/annotation_utils/__init__.py | waikato-ufdl/ufdl-backend | 776fc906c61eba6c2f2e6324758e7b8a323e30d7 | [
"Apache-2.0"
] | 85 | 2020-07-24T00:04:28.000Z | 2022-02-10T10:35:15.000Z | ufdl-annotation-utils/src/ufdl/annotation_utils/__init__.py | waikato-ufdl/ufdl-backend | 776fc906c61eba6c2f2e6324758e7b8a323e30d7 | [
"Apache-2.0"
] | null | null | null | from ._converted_annotations_iterator import converted_annotations_iterator
| 38 | 75 | 0.934211 | 8 | 76 | 8.25 | 0.625 | 0.606061 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 76 | 1 | 76 | 76 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a5a7c63dcf75e64819d0390579aa40bd58939002 | 5,224 | py | Python | p13.py | daicang/Euler | c00114d588351b6f2c637937558a35738d56345b | [
"MIT"
] | null | null | null | p13.py | daicang/Euler | c00114d588351b6f2c637937558a35738d56345b | [
"MIT"
] | null | null | null | p13.py | daicang/Euler | c00114d588351b6f2c637937558a35738d56345b | [
"MIT"
] | null | null | null | # first 10 digits of sum of 100 50-digit numbers
raw = """37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690"""
numbers = [int(x) for x in raw.split()]
print(str(sum(numbers))[:10])
| 48.37037 | 59 | 0.972435 | 122 | 5,224 | 41.639344 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.981957 | 0.023928 | 5,224 | 107 | 60 | 48.82243 | 0.014317 | 0.008806 | 0 | 0 | 0 | 0 | 0.985124 | 0.965997 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.009804 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a5ada391c44563aa8941eb6deb12971db557924b | 37,523 | py | Python | tests/integration/operators_test/keras_lstm_tests.py | gglin001/popart | 3225214343f6d98550b6620e809a3544e8bcbfc6 | [
"MIT"
] | 61 | 2020-07-06T17:11:46.000Z | 2022-03-12T14:42:51.000Z | tests/integration/operators_test/keras_lstm_tests.py | gglin001/popart | 3225214343f6d98550b6620e809a3544e8bcbfc6 | [
"MIT"
] | 1 | 2021-02-25T01:30:29.000Z | 2021-11-09T11:13:14.000Z | tests/integration/operators_test/keras_lstm_tests.py | gglin001/popart | 3225214343f6d98550b6620e809a3544e8bcbfc6 | [
"MIT"
] | 6 | 2020-07-15T12:33:13.000Z | 2021-11-07T06:55:00.000Z | # Copyright (c) 2021 Graphcore Ltd. All rights reserved.
#
# THIS IS AN AUTOGENERATED FILE, DO NOT EDIT DIRECTLY
#
# To regenerate this file run:
# python popart/tests/popart/operators_test/generate_keras_lstm_models.py
#
# File generated using TensorFlow version 2.3.0
# and keras2onnx version 1.7.0
import numpy as np
from numpy import array, float32
import popart
import onnx
import pytest
# `import test_util` requires adding to sys.path
import sys
from pathlib import Path
sys.path.append(str(Path(__file__).resolve().parent.parent))
import test_util as tu
def _run_comparison_test(data, result, proto, expected_activations,
lstm_op_pattern):
model = onnx.load_from_string(proto)
if expected_activations:
lstms = [i for i in model.graph.node if i.op_type == 'LSTM']
assert len(lstms) == 1
activations = [
i for i in lstms[0].attribute if i.name == 'activations'
]
assert len(activations) == 1
activations = activations[0].strings
assert len(activations) == len(expected_activations)
for expected, actual in zip(expected_activations, activations):
assert expected == actual.decode('utf-8').lower()
outId = model.graph.output[0].name
inId = model.graph.input[0].name
dataFlow = popart.DataFlow(1, {outId: popart.AnchorReturnType("All")})
patterns = popart.Patterns(popart.PatternsLevel.Default)
patterns.enablePattern('LSTMOp', lstm_op_pattern)
session = popart.InferenceSession(fnModel=proto,
dataFlow=dataFlow,
deviceInfo=tu.create_test_device(),
patterns=patterns)
session.prepareDevice()
anchors = session.initAnchorArrays()
stepio = popart.PyStepIO({inId: data}, anchors)
session.run(stepio)
assert np.allclose(anchors[outId], result)
@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_basic(lstm_op_pattern):
data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
[0.4236548, 0.6458941, 0.4375872, 0.891773],
[0.96366274, 0.3834415, 0.79172504, 0.5288949],
[0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
dtype=float32)
result = array([[-0.02019063, -0.193365, 0.00335996, -0.17738362]],
dtype=float32)
    model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xa5\x08\n?\n\nlstm_input\x12\x06lstm_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xa6\x01\n\x06lstm_X\n\x06lstm_W\n\x06lstm_R\n\x06lstm_B\n\x00\n\x00\n\x00\n\x00\x12\x06lstm_Y\x12\x08lstm_Y_h\x12\x08lstm_Y_c\x1a\x04lstm"\x04LSTM*%\n\x0bactivationsJ\x07SigmoidJ\x04TanhJ\x04Tanh\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n3\n\x08lstm_Y_h\x12\x04lstm\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\nsequential*\x93\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02TYi\xbe,\x18\xe8>\x00\x06j\xbex<\x8d=\xd4\x93\xa4\xbePS5>N\x0f\xe0\xbe>\x1b\xab>\xd0\xcb\x1e=\xde\x08\xbb>\xba\x83\xbb\xbe\x80I\xbc=\xf0m\x89=7X\xe9\xbe\x1c?\x97>\xf6\x97\xde\xbe\x08\x1b\xa2=\xa8.\xfc\xbe\x004\x07?H\xc0\xed>\xfc\xb5\x17\xbe\xe01\x8d>L\xd8\xb5\xbe\xfaq\xd2>\xa4P\xb2\xbd\x9c\x90\xb7\xbe4\xd7\xfe\xbe\x90\xc2\x0f\xbd\xbc\xe3\x11>\x84\xf5R\xbe\xe0\xd4\xda=\x90\xfb\x06\xbd<\xf0\xba\xbd@\rw\xbeX8\xc0=\\\x88\xf9>\x9c\xa6\xac>\x1f\xdd\x07\xbf\x00,\x07?\x12\xdes\xbe\x800\xf3\xbbl\xee}>\x98\xaa\xe8>0$m=>\xb1\x0b?\xa0\xee\x94>z\xfa\xc8\xbe@Yw\xbeT\xaf\\>L\xc3I>l\xb6\xe1\xbe\x9d\xd7\xeb\xbe\xae\x1d\xd2\xbe0\xd9\x12=\xc0n\xee=\xbc\xea\xf7\xbd\xe0[k>\xda\xde\x8f>\xb2\xa4\xb4\xbe`G\xf1\xbe\xfc/6>&\xd6\xfd\xbe \xff\xa1<\x08u\x00\xbfB\x06lstm_W*\x93\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\x08\x84i>\xec\xb4\xc2=*\xa1\x0b\xbe\x82\xb4\x14>mo\x1b\xbd\r^\x84\xbe\xcc\xfa\x02\xbf\xb6\xc9"?\x85\xbc/>\x10\x9ed\xbe\x10\xdc\xbd\xbd\xb80L=\x8d\x1f\x17\xbf,\x9d\xff>d\xb1\x00\xbeVE2>\x11\x17\x04?)\xa7&>n\x1a\xa1<\x16\xc0\x8e>N\x1a\xd3\xbd\xb4*c\xbb\xde\xf4#=\r\x8a\xcc>\x12\xae\xbf;\xb4f<\xbe\t\xb7\x1b=\t\xeaJ\xbe\xb82\x16>\x10\x0f\x8f\xbd\xaeD\x9f>\xea\xd6\x07>Y\x85\xa4\xbe.8\x9f\xbe~\xae\xd6>(\x8c;>D\x89\xe9\xbc\xabU\x9f<\xdf!\xf9\xbe\xb4\xc8\xae\xbeI\x90\xdf\xbc\\\xc0\xaa>*`\xfa\xbd\xee\xceE\xbd\xfb\xbe\x1e>\xbe\xb9\xfb\xbcj\xe8\x8f\xbeO5\x1b\xbd\x91r\x97\xbe\xa1z\x99\xbe\xa8\x02\x8d\xbb\xaeH\xad=\xd9\xdb(=\x14\x9d\xf7>\xbce&>\xefXH>\x18\xd8,\xbd\x84\xbf\x0f=\x1e!\xc6<L\xdc;\xbe\x99\xe0e\xbe\xef\xc2#\xbe\x94\xf6}\xbe\xca\xab\xf7\xbdB\x06lstm_R*\x91\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x06lstm_BZ \n\nlstm_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x16\n\x04lstm\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, [], lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_relu_relu(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[0., 0., 0., 0.03986165]], dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xc6\x08\nC\n\x0clstm_1_input\x12\x08lstm_1_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xb3\x01\n\x08lstm_1_X\n\x08lstm_1_W\n\x08lstm_1_R\n\x08lstm_1_B\n\x00\n\x00\n\x00\n\x00\x12\x08lstm_1_Y\x12\nlstm_1_Y_h\x12\nlstm_1_Y_c\x1a\x06lstm_1"\x04LSTM*"\n\x0bactivationsJ\x04ReluJ\x04ReluJ\x04Relu\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n7\n\nlstm_1_Y_h\x12\x06lstm_1\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\x0csequential_1*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xde\xf1\xaa\xbe\x84dQ\xbe$\x07\xd1>\xa4\x917>\x98\xb4\x86>\x1c>\'>\xd4\xaf">0,$>@\xeeZ\xbc\x02G\x0b?\xb4\xe7\x1e>\xb8y.>P\x81\xd2>c\xd0\xfa\xben\xae\xc1>N7\x86>D\xe6\\>\x1a\xab\x01\xbf\x92\xdb\x0b?T\xc5\xe1\xbeb\x0f\x0b\xbf@\x9aQ\xbc\x88{6\xbe,\x13\xfc\xbdVv\x18\xbe\x80\xce\xd7;H\x99\x08>DN\x03\xbf0\xc1\x92=\xa4\xe1\xfd>\xc0\xf9\x08<B\xb2\x03\xbf\x80c\xd0\xbd\xa89\xdb>\xccG\xab>&N\xd9>\x1e#\x08\xbfGW\x82\xbe\x00\x9a\xbc:\xef\xd0\xc8\xbe\xf0\x0ck><1\x85\xbd\x04\xd3r>\xd0\xb9C=\xac\xdc\x16\xbe.\xf4\x84>|,\xa4>8\x88\x1f\xbe\xc8e\xb5=^5\xb9\xbe\x02\x02\x86>\x00\xa2\xce\xbcZ\xd1\xe0>\x0e\xb0\x91\xbe\xaf\xa8\xf1\xbe@0\xef\xbe\xb8\x7f\xce\xbe\x8c\xbe\x04?\xd8\n\xb3\xbe\x9b\x8e\xee\xbeP\xc4I=$q\x07?\x90\xb5>\xbe\x9c\xa0h\xbeB\x08lstm_1_W*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02d<\xaa>S\xf0r\xbe\xfb\xb8y\xbb\xa4\x08\x82=\xc5L\xa8>\xdaT\x88<_\x84t\xbe\x1f\xf1\x0e>\x8b]Z<\xe9!\x80\xbe\x10\xdef>`\x94\x97\xbe\'a\x1e\xbe\xd3\xdc\xf9>GP\x94\xbe\x8e%V\xbe\xdb\xfc\x8a\xbe\xa7\xe6(>\x84\xbf\xbb=5<@\xbe\xafR\xc0\xbe|*\x0f>\xde\xe4\xd1\xbdD\xf7\xa0;MW">\xc9\xe0\x80>])\xd1\xbc\x8c\xc8\xf8\xbe\x12\x1bK=P\xaf\xf3>\xcc\x92n>\xb1\xd5\xb2>D\xc1?>)\xc5\xe7>P\x88\x03?_\xc3<>T\xc4<\xbe\x92\xe5R<P\xbf\xf9\xbd\xee\xda\x80\xbe\xa5\x0c\x0c\xbe\xbe\x8ag\xbd\xb767\xbe\x84\xbe\xe7\xbb\xd7T\x8b\xbd\x16W\xe9\xbd\xd0r\xa9>\x0f\xad\
xbd\xbd\xd2\xb3\x0e\xbf3h\xac\xbd\x0c\xf8F\xbb0"\xc1>\xd1\xf5\x97>X\xee[>\x95\xbe\xf3\xbe+\xf9\xc1=\xbeC\xba\xbd\x9b\xa4+>\x90T\x87=B\x95\xc6\xbeG\x84\xfe\xbdjTp=}z\x93\xbe\xd0\x02W>B\x08lstm_1_R*\x93\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x08lstm_1_BZ"\n\x0clstm_1_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x18\n\x06lstm_1\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['relu', 'relu', 'relu'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_relu_sigmoid(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[0.14236511, 0., 0.17311893, 0.]], dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xc9\x08\nC\n\x0clstm_2_input\x12\x08lstm_2_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xb6\x01\n\x08lstm_2_X\n\x08lstm_2_W\n\x08lstm_2_R\n\x08lstm_2_B\n\x00\n\x00\n\x00\n\x00\x12\x08lstm_2_Y\x12\nlstm_2_Y_h\x12\nlstm_2_Y_c\x1a\x06lstm_2"\x04LSTM*%\n\x0bactivationsJ\x07SigmoidJ\x04ReluJ\x04Relu\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n7\n\nlstm_2_Y_h\x12\x06lstm_2\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\x0csequential_2*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xf5k\x08\xbf\x98\xcd\x80=\xcco >\xfe7\xb6>\x84m\xe4\xbeD\xc3\xc5\xbe\xc0\xc5\x88<\xd4|\xd2>\xc0\x1ds\xbc\x8ah\xc0>\x19\x08\xb2\xbe\xfcV\x97>\x9c\x0bf>\x00\xfc\xb6\xbc\xa0\xba\x11=\x0c;\x1c>\xf2\x90\xa3\xbe\\\xb5o>\xca\xe5\x08\xbe\xb0\xc9n\xbd*%\xa5>\xdet\x8d\xbe\xd2\xb2\x94>@C\x19\xbe\x14\xf5\xef>\xa2\xd3\x00\xbfb4\x01\xbft\xee\xe2\xbe\xe8\xfb\xa6=\x80.\'\xbd\xdaUc\xbe{J\xab\xbev\x9e 
\xbe\xb2\xb2\x01\xbf\xfe\x9d\xbe>\xca\xb7x\xbe\xa0\x06x=(\x96\x00?\xde\xe1\xa4>\xa44>>\xdc\xc5V>,P\x93\xbe\xb0\x18\x7f=\xc4\x08^\xbeD\x14\xb1\xbd\xf0\xeed>\xc4!\xdd\xbedA\xb8\xben\xcb\xa3\xbe\xfe6\x99>\x0e\xcd\x86>\x84Q\xc2>\xfc\xb4\xd4\xbe\xf2D\x0f\xbe\xe4\xb7S\xbe\xf49\x9e>\xd00N\xbe\x80\x8b\x0b?\xfbm\xfd\xbeb\xfc\xb8>\x02\xff\xff\xbe\x12Ln\xbe\xd0\xf00\xbd\xb2\x8e\xaa\xbeB\x08lstm_2_W*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02`&\x85\xbee\xa0\x88\xbe\x04\xda\xb0<v]\x1a?\x80\xdf\xe7\xbd\x85\xd0@\xbe\xa1\x16\n\xbe\x1e~\x86=\xc2)$>\x0b.\xf6>\xcd\xf3\xb9\xbeB\x08E=?D\xcf>\x89\xd9\xed\xbd\x0b\x81\t?\xb2\xfc\xab\xbe\x01\xeb\xae=8Q\xeb\xbdp\x97O\xbeVF\x1d\xbe\xea\xe7\xcb\xbe\x94\x19\xbc\xbe\xa2OX\xbe\x14\x0e\xb3\xbe\xc0I\x95\xbd\x8f\xd8\x0f>\x1fx;>\x9a\xbc\x9f=\xab\xb3)?\x1c\xc4\xc4\xbe9\x8b\xb7\xbe\xb8%G>%\x11\x92\xbd_\x19\xb1\xbe\xd7\xe1\x1a>\xe6\xb8\xba=C\xa7\xf8=B6\x86\xbd\xe4F\x0f\xbe\xba\x06\xd4=\x18\x81M\xbep\x1c\xc9\xbb|o%\xbd8`\n\xbc\xcc\x95\xcb\xbd\xbb\xf9\x97>V\x95z<?\rY>\x89\xa2\x84\xbd\xbf\x9au\xbeU\xf0\x8a\xbd\x0b\x04\xb0\xbe~\xb8\xee\xba\xbfPe>\xfd\xb4\xa1\xbe\x89O\xb0\xbe|tX>\xb8\x9d\x02<\xc8O\xe0=*\xbc$>\xde\xa2\xaf\xbc}\xa4\xeb\xbd\xf8\xf4\xc7\xbeO\xbf\xb0<B\x08lstm_2_R*\x93\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x08lstm_2_BZ"\n\x0clstm_2_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x18\n\x06lstm_2\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['sigmoid', 'relu', 'relu'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_relu_tanh(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[-0., -0., -0., 0.01629485]], dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xc6\x08\nC\n\x0clstm_4_input\x12\x08lstm_4_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xb3\x01\n\x08lstm_4_X\n\x08lstm_4_W\n\x08lstm_4_R\n\x08lstm_4_B\n\x00\n\x00\n\x00\n\x00\x12\x08lstm_4_Y\x12\nlstm_4_Y_h\x12\nlstm_4_Y_c\x1a\x06lstm_4"\x04LSTM*"\n\x0bactivationsJ\x04TanhJ\x04ReluJ\x04Relu\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n7\n\nlstm_4_Y_h\x12\x06lstm_4\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\x0csequential_4*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xf0e\x00=\xa0\x87\xe1\xbe\xd4\xa3R>\xa0\n{=Pd\xce\xbe\xc0o_\xbdF\x85\x90\xbe\x05\xb3\xc1\xbe0\nu>\xfch\xce\xbd\x90<\x84>68\x02\xbf\xa0g\x89=d\xce+>\xe8\xacE\xbdZ\x15\xd3>X_\xd1\xben\xf6\x92\xbe\x066\x9d>\x90\x87\x04=\x04j\xa8>w<\xf2\xbe\x0e\xce\xdd>P\x8an\xbe\x80k\x9a\xbe\x10\xd4\x93=F\x8e\x84\xbe<D\x98>\xf0\xc3\xe2>@\r.\xbd\x82\x95\xe7\xbe\x14\xf4\x88>\x08:\x9c=h\xff\xd4\xbe\xa0\xec\x1f=6G\xcf\xbe\x82#\x00\xbf\x94\nU>\x00\x99\x16\xbb\xbe4\xb1>(\xad\xda\xbd\x93\x9d\x08\xbfL\xa8\x86\xbd\'\xb5\x08\xbf\xb0\xc66\xbd:\x8d\xfc\xbe\x8b\x1b\xf7\xbe\xb0\x10l=\x02\x86\xc7>\x18\xec\xa4\xbd\x90\x98;\xbdLu2\xbe\xee]\xb5\xbe@h\x0b?";\xb5>\xa14\x89\xbe`\x0e\\=\xce\xab\xaf\xbe\x00R\xab\xbd\xbc9\xc9\xbe\xd0\x07\xf2>\xf8\xe1\x17\xbe4\x93g>\x93\x05\xb3\xbeB\x08lstm_4_W*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xd8\x8c\x82>\xa0\xbb\x1a>)\x1e\x80>\xc5}\x11?\xd7m\x10\xbe\x11o]\xbeC\xb4\\\xbcs\x1a\xb6\xbd\xb7\x9f\xd1=J\x13\x04>_\x8e*=\x86[d\xbe\x94-\xc5\xbex8\xf8\xbdD\xea8>aS\x7f>\xecw\xac>\x94\x17\x9e\xbe]\x0c\xed>\x98!\xab\xbd\x1a\xed\xa9\xbd\xed9\xa9>\xce\xb9s\xbe\xe2*\x9d\xbeL\xc0\xfd\xbdU?\x1f\xbeE\x04#\xbe\x16\x17/\xbdi\xa7\\\xbe\xe1 
\xdf\xbc#y\xdc>\xf2\xaa\xcd\xber)\xc4\xbe\xaem)>\xec\x01I\xbd(3\x83\xbd\x04V\xef=i\x91\xef>0\x8c~\xbb\x93v\x93\xbely\xd0\xbdzWS\xbe\x9e\xf7\x92>\x08mG\xbeQ\x9cu\xbe85\xe3>/\xfe\xed>,U\xd8<YT\xd4\xbd\x94NJ\xbe\xb7\x1dh>\xdd\xe0I\xbe\xbe\x06\xd4\xbe1!\x02\xbe\xe8\r$\xbe\xdb0\x15>5!\xc6\xbeb\xed\xcd\xbd ;\x17\xbbo\xa3\x19>\xe3\xf0\xf0\xbd\xa24\xb0>\x12\xb3T>\x1e\xd8\x94>B\x08lstm_4_R*\x93\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x08lstm_4_BZ"\n\x0clstm_4_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x18\n\x06lstm_4\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['tanh', 'relu', 'relu'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_sigmoid_relu(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[0., 0., 0.17980874, 0.]], dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xcc\x08\nC\n\x0clstm_5_input\x12\x08lstm_5_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xb9\x01\n\x08lstm_5_X\n\x08lstm_5_W\n\x08lstm_5_R\n\x08lstm_5_B\n\x00\n\x00\n\x00\n\x00\x12\x08lstm_5_Y\x12\nlstm_5_Y_h\x12\nlstm_5_Y_c\x1a\x06lstm_5"\x04LSTM*(\n\x0bactivationsJ\x04ReluJ\x07SigmoidJ\x07Sigmoid\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n7\n\nlstm_5_Y_h\x12\x06lstm_5\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\x0csequential_5*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02.2\xe3\xbe\xd47\xa2>\xc8\xf5\xa9=\xd4\xa6\xa2\xbd\xf4\xdf\x9d>\xfec\xe5\xbeno\x05\xbf6\x91\t\xbf@\xf1\xb5>\xbb\xaa\xf4\xbe\xb0\x92\xff=\\\xb9\xbc\xbe\x90\xfa(=\x1e\xfd\x06\xbe\x80\xcd\xeb>\xcb\x15\x08\xbf\x97\xe6\x97\xbe\\\xd1)\xbe\xd4#\x19\xbe\xbf\xf0\x92\xbeX\xc1\xf4=Mx\xa9\xbe\xe0d\x19>\xcf\xd5\xc0\xbeL\xfa?>$\x8c:>\xa2\xa9\x9e\xbe@Q\xf2>X\x14v\xbe\xbd\xf4\x03\xbf0\x8eq>P\xf9\n=4\xae\xaf>\xbc\xbf\xef\xbd\xc87\xa4>8\xa7%>\x94$P\xbe\xc8\xf2\x00?D:\xac\xbe\x04\xbb5>\xe8_\x92=\xc8\x84_>\x9e\xc5\xc4\xbe\x84?\xb7>\xc4\'\x0c>XL\xe1=\x03\\\xf4\xbe\xd8\xa1\xc8\xbelw\xbd>\x00\xc7k<\x046\n?j\xf0\xf8\xbe\x80V\xe2\xbc\xdc\xd0\x01\xbe\xc8g!\xbe>\x0c\x9d\xbehz\x9c>\xdcF\x93>\xa0G\xe6\xbc\xc0\x94\x0b=\x0e\x08\xe1>\xa9t\xa0\xbe\xc0\x07\xe9>\xd2\xc2\x13\xbeB\x08lstm_5_W*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xd0\x89\xa2\xbeWTv\xbd;\x0b\xda\xbe\x83[\xb6>\xf9\x90\x9e\xbej\xed)=\nH}\xbe\x90J\r>4c\xbb\xbd\xc8\xa3\xdb=V\x98\xeb\xbe1\x97\x9a>\xe8\xbf\x9b=\x9b\xfei>\xe5_\x13>\xa2\x1e\xa8\xbd\x1f\xcbz>\x05\x83\xc7=\xc0\xfe\x8a\xbe~+\x91\xbe5\xa5$\xbe\xad\x7f\x92\xbe\xe7\xb1\xbe\xbd\x031\x1b\xbeQ\x08\x81\xbe\xa6\xa2\xd2>\x87k\xbc>Q\xc4\xcd=+\xb0p>l\x96\xfb\xbd\xa2-V=\x02?\x8d>\xb5\xaaG\xbdj\xf7\x87>\xe07\x8d>\x853\xb2>\xbbL$\xbeu\xc9\xfa<=\x03\x05\xbe\x02p\xc4\xbd\xb2\xae\x9a>\xa4\xb7\xdf>hf\x90\xbe\x80qP>*\xc
4\xe3=Xs\xbe\xbd\x10\xe0\x8e>\xcd)Q>\x90;\xde\xbcC\xdeM>\xbbN\x1e\xbeos\t\xbf-"\x0c?\xe4p\x05>\xe4\xbd\xc0\xbd)6.>+B\xbc\xbe\x9a\xff\x04>\xdd\xb0\x04>\xb9\x07T=\xc2]\x06\xbe\xa8\x85\x0e?\xe6l\x8a\xbdz\xb3:\xbeB\x08lstm_5_R*\x93\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x08lstm_5_BZ"\n\x0clstm_5_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x18\n\x06lstm_5\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['relu', 'sigmoid', 'sigmoid'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_sigmoid_sigmoid(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[0.34316674, 0.34707376, 0.42337856, 0.39620247]],
                   dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xcf\x08\nC\n\x0clstm_6_input\x12\x08lstm_6_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xbc\x01\n\x08lstm_6_X\n\x08lstm_6_W\n\x08lstm_6_R\n\x08lstm_6_B\n\x00\n\x00\n\x00\n\x00\x12\x08lstm_6_Y\x12\nlstm_6_Y_h\x12\nlstm_6_Y_c\x1a\x06lstm_6"\x04LSTM*+\n\x0bactivationsJ\x07SigmoidJ\x07SigmoidJ\x07Sigmoid\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n7\n\nlstm_6_Y_h\x12\x06lstm_6\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\x0csequential_6*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02V\xfe\x08\xbf\x9c-\xd1\xbd\xfa\x94\x87>\xc8\xa1p>\xc0\xbd\x11<\xe6\xcaR\xbe\xacD|\xbe(tf>Dz\x88>p^b=\xb4\xa4\x7f>N5\xcd\xbe\xe6\xc8\x05?\x1c\xf7\x05?\xc2\xaa\xa1>\xec\xeb\xfb\xbe$\xa9\xf7>hM^\xbe\x80e#>\xa0TA>XhR>\xa0\xef\xeb=$\xec\x7f>\xd5\xf8\x87\xbe\xd2\x0e\x0b?\xf4\xa7\x16\xbe\xfcx\n\xbe\xd2>\xda>\x1a#\xdd>\xe0=\x15>|\xa8\xd9>\xab*\xe5\xbe\x98\x15\xf7>&\n\xac\xbe\xbe\xb4\xf2\xbe\xeaN\xba\xbe\x08\xdbn>sr\x94\xbe\xf0\xf8\x18=\xc4\x13\'>\xac\x1b\x04?\xf0\xf5:=\x04`\x9d\xbd\x18/\xe4\xbd\x00$c;/\xab\x8c\xbe`\x05#>:x\xda\xbeXg\xa0\xbe\x00Le<&$ \xbe\xe0\xf1\xba>$\xa2\xe7\xbe\xd0\xf5\xa4=\xc8\xe57\xbe}\xcf\xe0\xbe\x93f\xa2\xbe\xe8\xc3\x0b>\xb9C\t\xbfP\xffa>h\x12\x81>\xaa\xba\xec\xbe\x90r\xb7\xbd\x8a\x90\x01\xbfB\x08lstm_6_W*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xe0\xb1;>\xa7\xda\xbd\xbd\xecs\x16\xbe\xcd\xfbS>\xc1O1=\xa3]\xae=S\xc0\xb7>$\xd6L\xbe\xd6\xc0\xd1=\x9a\xb7k=>\xb8u>a\t\x9c>\x82\x83\x8b>5\xce9>\x9e\xf5\xf0\xbenZ\x80> 
\x84\x01\xbf\xb8\xb1\x87=\x9e\x00\xda=\xe8\xaf\xed>\xc8\xfd\xd2\xbcB\xbfP=\xd4uX\xbe\xf7\xcd\xe0=\xbf\xe4\x08\xbe\xa7\x92\xb7>\x8a,\r>\xd0\x14\xc3=u#\x03>\xf1\xd1\x88>\xd2=\xda=\x05\xb9\xe8\xbe[^m>\xe4t\x8f>\\\xa0\xe3\xbdx\xd3\x1b\xbc\x15V"?f\r\xae\xbd\x08#4\xbd\x07KC>\x9cd\xcd<\xea\t\xfa\xbd\xa0[v>|*O>\x165\x8e>n\x1c\x8d\xbd\xe4\xa5\xfa>n.\xfe=\xad\x16\x8d\xbc\xe2j\xf3\xbev\xf3\xf6=ZvK>\x92\x85\xfe\xbd\x80\xab\xf3>V5\xfc\xbdk#\x83>\x89\xfcI>c\x8d\xbf>\xcb\x80\xb5>\xbbw\x84>\x1b\xa4\xcc<\xfe\x92T\xbe\x8da\x05\xbe-\x86l>B\x08lstm_6_R*\x93\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x08lstm_6_BZ"\n\x0clstm_6_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x18\n\x06lstm_6\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model,
                         ['sigmoid', 'sigmoid', 'sigmoid'], lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_sigmoid_tanh(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[0.22958073, -0.02352869, -0.24502626, -0.27619946]],
                   dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xcc\x08\nC\n\x0clstm_8_input\x12\x08lstm_8_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xb9\x01\n\x08lstm_8_X\n\x08lstm_8_W\n\x08lstm_8_R\n\x08lstm_8_B\n\x00\n\x00\n\x00\n\x00\x12\x08lstm_8_Y\x12\nlstm_8_Y_h\x12\nlstm_8_Y_c\x1a\x06lstm_8"\x04LSTM*(\n\x0bactivationsJ\x04TanhJ\x07SigmoidJ\x07Sigmoid\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n7\n\nlstm_8_Y_h\x12\x06lstm_8\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\x0csequential_8*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02h\x85\xb5\xbd\xd4#i>:\xe4\x8d>\xf4\xbe\xe8> z8=\xba\xab\x89>H\xf3\x0c>(\x0e\x84>\x9a\x11\xd3>\xf8\xb9\x97\xbe\xfcj\x07?\xe4\x0b\x1e\xbe\x1c\xf4N>\xb8\x7f\xce>\x00\x1c\x12=\xe8\xd51>\xe0,\'\xbd\xb2\xe4\x08?\x909\x04\xbf\xa0\x11\xc8\xbe<\x80$>\xab\x07\x84\xbe1\xa4\xb5\xbe\xc0[\x84<6\xb2\xb9\xbexh\xb1\xbe\x8cE[>PX\xbc>\xfen\xc8>v\x7f\xf4\xbe\xf0\xc1\x12\xbeB\xdb9\xbe\xa8:\x00\xbe`do>\xb6?0\xbe^\xce\x89>\xa1f\x93\xbe\x18ic>f+\xc8\xbe\x81\xdb\xb6\xbe\x1e\xaf\xc3\xbe\xa0\xf3\x06=\xf0\xa2\xcd\xbe@h\xcb=\x80<\xda<\xa0vE>\x88\xce8\xbeES\x95\xbeh~X\xbe8\xdcO\xbd,\xdf\xbc\xbd\x80\xf8\x06\xbe\xfb\xb4\x85\xbe\xd4(\xf8\xbd\xe4\xc6g>\x8c\x17\xdb>\x11\xb1\xe5\xbe\xb1J\xf6\xbe\xc8\xb4\xe9>4\xe6\xf0>*\x88\x86>$\xd1\xf6>l=\x06?\x08J\xe8\xbdB\x08lstm_8_W*\x95\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\x10(\xa9\xbe@\x91e>g\xa9L\xbe\xde\x02\xb5;|;\x9e>kh\xa6=\x98H\xa7\xbd\xef\x15\xa0>!\xc6G\xbb|\xc6\xb5\xbe\x80\xe1L>\x83[.>\xb6\x1d\xf2=\x0ct\x96\xbe\x1bw\x1e>\xae\xd2\xe3\xbe\xa7H\x15\xbe4i\xbc=\xcdV\xe1>\xe2\x05#>2<\xec\xbe:\x86\xf4<bnn\xbeg\n5>j\x89\x11\xbe\xac%\x0f>\x15\xe0\xda=\x8d\xf75>^\xf9\xf1>\xc7\x89\x8e>\xcc\xb8\x0e\xbe\xdf\x1fh=\xf3*\xc7=\x13`\xb1\xbe\x07\xc8\x16>\xf3\x193>@\xeeQ\xbe!\xef\xb1>p9\x99\xbe\xfb\xb7\xac\xbc|\xbai>E\x87&\xbe?\xd4\x90\xbe\x14h\xbd=\xe1\xeb\xdd\xbd[\xba\xa7\xbe\xe4R\xd3\xbe\xf7\xc6
\x8f>3\r\xb9\xbd}\xd4\r>\xc0g\x1b9\xfb\x98\x1f\xbf\x9b\xd8\xcd\xbe\xd4\x0c0\xbe.\xad\x87>\xb4\xf0\x93=\xf3?\x9c\xbc\xf2\x9ee>\xe0\x05\xa7>a\xec>>\'\xff\t\xbe-\xcc\xc6\xbea\xc0\x89\xbe5FD\xbeB\x08lstm_8_R*\x93\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x08lstm_8_BZ"\n\x0clstm_8_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x18\n\x06lstm_8\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['tanh', 'sigmoid', 'sigmoid'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_tanh_relu(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[0., 0., 0., 0.]], dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xd8\x08\nE\n\rlstm_13_input\x12\tlstm_13_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xbb\x01\n\tlstm_13_X\n\tlstm_13_W\n\tlstm_13_R\n\tlstm_13_B\n\x00\n\x00\n\x00\n\x00\x12\tlstm_13_Y\x12\x0blstm_13_Y_h\x12\x0blstm_13_Y_c\x1a\x07lstm_13"\x04LSTM*"\n\x0bactivationsJ\x04ReluJ\x04TanhJ\x04Tanh\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n9\n\x0blstm_13_Y_h\x12\x07lstm_13\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\rsequential_13*\x96\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\\A\x05\xbf\xe0\x14\xb1>"\xc8+\xbed\x87\xc3\xbez\xf1\x04?\xb5\xc4\xe8\xbe\xe0\xa3\xbb\xbc\xa3c\xb4\xbe\xc0\x11\x94\xbc@s\xeb<L\x84\xe6\xbedz\xcd\xbd\xc0?\xe3="\xdf\xfb\xbe\x82\xa3i\xbe\x00\x9fA;`,\xdb\xbe\xcahA\xbe\x10\x7f\xab>\xf0ZD>,\xfe\x8f>\xb5\xb5\x02\xbfl<\xed\xbeE\xbd\xe5\xbe\xbc\x8bk>\x10p7>|8\xef>`\xf5S>`\xce\xc7>\xb0u\xa2\xbe\x94\xfe.>\x90\x12\x03\xbd\xbd\xea\x90\xbe\xe0f\xe0\xbe@\xc7\xb7\xbd\xff\x86\x80\xbe6\xf7T\xbe\x90\x99\xeb=\xc8A\x05?\x06\xb7\xb7\xbe\xa2o\x88>\xcc\xe5\xd7\xbd\xc6ZO\xbe\'\xba\x06\xbfp\x91\x04>\xa4\xe5a>,C\xcf\xbe>\xd5\xec\xbe@\xeb\xd6<\xb4\x0b\xf4\xbe\xe4\x00\x06\xbf&t\xaa\xbe\xa0\xdd\xcb>\xfc\x18\xa8\xbdL\xe7z>\xaeL\x0f\xbe\x92J;\xbe\x98\xb4\x02\xbf\xe2\xc3\xcf>\x0e\x81N\xbe\x88x5>0\xbeZ\xbd\xa4\xca\x82>ho\xe2=B\tlstm_13_W*\x96\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x020O\x8a\xbe\xd8\xf3\xd6\xbc\x8e\xd6\xb6;\xd0E\x8d\xbd|\xbfZ\xbe\xb4\xfb\xb4\xbeZ\x0bK>x 
\x0b\xbf\x0fD\xcc>N\xbb\x8a\xbd\x9a\xcc\xff\xbeQ%\xaf=\xf6w\x07>(8\xa5=j\xdb\x89>\xa3\x00\xd0\xbe8\xde\xee<\xf8\xda\x87\xbe\xb5\x14\xca\xbd@\x11\xe9\xbdZ\xb1\xd5>E\xb0\x8d=m\nL=DF\xbb\xbde\x9ex\xbd\xf3\x9b\xfc<\x10\x9a\xae=!\xe7\x88>\x86D\x91\xbe\t\xc0\xb8\xbd#\x8f\xa4>\xa0~\xa1>\xf0\xec\x9c\xbe\xd7\xb4\x1d>23\x1b\xbe\xc11\x9a\xbe\xef*\xb5\xbd\xeb\xec[\xbdl\xf7\x97>z\xbc->Jx\xb8=`\x89c>\x0f\xd0\xcf=]]1\xbe\xacF\x87\xbd\x03\x1d\xf1>\xfc\xd6\xac=\xaa\xfb\x0b\xbd\x03\xd8\xeb\xbd\xa1\xff\xd1\xbe\xf9\xb3\x94=\x12\xfd\x89\xbd"\x89\xd1\xbeK\x05\\\xbe\xf5\xf6\xd7\xbe\xa5\xbfo>\xe5<\x1f\xbe\xe9^\xa8>L\xb0w>\x1dhs>\xbb\xf9\xb5\xbe\xc8\xf7\xcb>vY\xc3\xbeZ8\x86\xbeB\tlstm_13_R*\x94\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\tlstm_13_BZ#\n\rlstm_13_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x19\n\x07lstm_13\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['relu', 'tanh', 'tanh'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_tanh_sigmoid(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[-0.09767307, -0.18018779, -0.01351621, -0.18635663]],
                   dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xdb\x08\nE\n\rlstm_14_input\x12\tlstm_14_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xbe\x01\n\tlstm_14_X\n\tlstm_14_W\n\tlstm_14_R\n\tlstm_14_B\n\x00\n\x00\n\x00\n\x00\x12\tlstm_14_Y\x12\x0blstm_14_Y_h\x12\x0blstm_14_Y_c\x1a\x07lstm_14"\x04LSTM*%\n\x0bactivationsJ\x07SigmoidJ\x04TanhJ\x04Tanh\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n9\n\x0blstm_14_Y_h\x12\x07lstm_14\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\rsequential_14*\x96\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\x18U\xf2\xbe\xf2\xbe\xb7>\xe7\x85\x97\xbe\xd0\xa6\x95>\x17\x0e\x0c\xbf0\xe4\xb2\xbd\xcc\xc3\x06?@>\x1c\xbd\xcck\x0b\xbe8]\x06\xbe\x02Y\n?\x00\x93\xcf<\x8e\x12\x82\xbe \xf6I=D1\xc7>^R\xb4\xbeN\xfaJ\xbe\xc2\x8e\xd5\xbeV\xfc\xea\xbe\xdc\x171>\xd0r5\xbe\xacv~>\x0e@\xb6>j\x1a\xe0>\x00\xe9\xcc\xbcPM\xe5\xbe`\xe7\xe5><\xa9!\xbe\xe8V\x10>\xe4\xb8,\xbe:M\xc1\xbe\xcc\xc7H>\x10o\xcc>\x92SQ\xbe\xdaa\x00\xbe:\xac\xf7\xbe\xc8\xe8\xaa\xbe\xa6<\x9f\xbe\xb0\xa8\xca=\xb6\xd8m\xbe\x02\xe0\xab>\xb8\x04\xcc\xbe\x9e\xce\x0c\xbe\x94\xa7\xa9\xbe\xd6\xac\xa8>\x14N|>\x926\x12\xbe\xda\xfd\xd8>>k\xb1>\xb2\x9e\x10\xbe\x92\'o\xbes\xce\x83\xbe\xedQ\xbe\xbe\x08|\xe6=\xec\xa3\xe1\xbdr\xd4\xac\xbe\xee\x80\xdb>`\xa2\xd5=\xd0\x12\xf3\xbe\xa4\xf6\xd7\xbeX\xd5\x84=\x02\xda\xe1\xbe\xb0\xe0\n\xbd\xf0y\x12\xbdB\tlstm_14_W*\x96\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xd0K\xff>gne\xbe\x9c\x17\xf4=,\xdf\xb7\xbd\x13\x08V\xbe\xd0i5\xbc\xccq\xa6\xbemd\x05\xbe\xa7C\xa9\xbd\xa9T\xe4\xbe\xd0\xb5\x88=a#8>\xfe\xb6\x12>\xa7\x89\xe3=\xd8\x00\xd3\xbe\x9c\xc1\xa0>\x9a\xad\xb4\xbe\xd7D\xb6>\x19\xfc\x9a<\\\x94\x11>\xd8|\xfd\xbd\x83\xbf\x14\xbeC3\xec=\x18\xda\x11\xbe\xf0\xcf\xca=\xe4\xb9\x14>\xfd>\xd6\xbe\x8a\x9dY\xbeg,j>\x14\xde\xc8<@\x1b~:\xc7Mw\xbe\x9d\xe4r\xbc\x12\x1c\xd6<\x18\xbd\xbc>\x81&\x18\xbfl:j>\x90\xdd\xd0>\x8b\xa1\xeb\xbc\xe6\xe4\xbb;\x
96b\xfe>\x02i\x87\xbebu\xb8\xbd\x82\x15\xa4=\x88\xbcZ\xbe\xdb?\xad\xbe\x11\xd9u\xbel\x86\x10\xbe$5C>N3\x96>{N\xe5>\x0e\x9b\xc4>\xa7\xfe\xee\xbc\xa4t\x8a>\xcc\xc0\xf2\xbd\xf8kU\xbe\x9b\xcaW\xbe&\xe9o\xbe\x1a\xbe\x83=3U\xb6>\xa1\xf0h>\xecjk=\xb5|\x9e\xbeX\x8fV=B\tlstm_14_R*\x94\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\tlstm_14_BZ#\n\rlstm_14_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x19\n\x07lstm_14\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['sigmoid', 'tanh', 'tanh'],
                         lstm_op_pattern)


@pytest.mark.parametrize("lstm_op_pattern", [False, True])
def test_tanh_tanh(lstm_op_pattern):
    data = array([[[0.5488135, 0.71518934, 0.60276335, 0.5448832],
                   [0.4236548, 0.6458941, 0.4375872, 0.891773],
                   [0.96366274, 0.3834415, 0.79172504, 0.5288949],
                   [0.56804454, 0.92559665, 0.07103606, 0.0871293]]],
                  dtype=float32)
    result = array([[-0.06280307, 0.02255315, 0.02322592, 0.04542083]],
                   dtype=float32)
model = b'\x08\x06\x12\nkeras2onnx\x1a\x051.8.1"\x0bonnxmltools(\x002\x00:\xd8\x08\nE\n\rlstm_16_input\x12\tlstm_16_X\x1a\tTranspose"\tTranspose*\x0f\n\x04perm@\x01@\x00@\x02\xa0\x01\x072\x00:\x00\n\xbb\x01\n\tlstm_16_X\n\tlstm_16_W\n\tlstm_16_R\n\tlstm_16_B\n\x00\n\x00\n\x00\n\x00\x12\tlstm_16_Y\x12\x0blstm_16_Y_h\x12\x0blstm_16_Y_c\x1a\x07lstm_16"\x04LSTM*"\n\x0bactivationsJ\x04TanhJ\x04TanhJ\x04Tanh\xa0\x01\x08*\x17\n\tdirection"\x07forward\xa0\x01\x03*\x12\n\x0bhidden_size\x18\x04\xa0\x01\x02:\x00\n9\n\x0blstm_16_Y_h\x12\x07lstm_16\x1a\x07Squeeze"\x07Squeeze*\x0b\n\x04axes@\x00\xa0\x01\x072\x00:\x00\x12\rsequential_16*\x96\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02\xf4D]\xbe\xe2;\xd9>\x80\xb9N\xbc\xc8\xff\xca=\\\x14\xec>\x96.\x9c\xbeZF\xc1>\xfc\x9d+>@H\xcd>R1\x83>\x08K\xf2>4\xfe\xbc>\x08q1\xbe.\xa3\xbe\xbeL\xec\xdf\xbd&\x16=\xbe\xe6\x0b\x0c?\xd6\x80\xbd>\xc0\xfe\xb8>\xa8\xef\xce=\xaa\xf9\x00?\x9c\x01\xc6\xbd`\x96D=\x88\xa2W\xbe\x80\xb1M\xbd\xc0\xb5\xc2\xbe\x00\x89\x14\xbbR$\x0f\xbeV\x02\x92\xbep\xb11\xbd\x10\x86\x16\xbd4\x7f\xf2>S\xe1\x8a\xbeD\xe5\x11>\xc8\xe9\x98\xbe\xe0\xa9\x97\xbch7\xc7\xbe\x98y\xa4\xbex\xa7\xc2=\x06y\x01\xbe\x0ee\xe5>\x96\xc9\x06?\xcc\x1d@>PcA\xbe\x1c\r\xe5>\xf4?\x8c>p8\xcb>B\xa8\xa9>{\xd1\x9e\xbe\xb0\xb9/\xbe\xb8\xa3\x9f\xbeP\xb6\xc6>d\xaam>\xe0\xe8\xa5\xbd\x1c\x95\xbc>\xf8\x05\xf3=\x98V\x80=\x01m\xe6\xbe\x12O\xbc>\xb0\x1e\xa6=\x86>a\xbe\x80V\xf1\xbc\x8c\xe5\xc3>\x06`\x02?B\tlstm_16_W*\x96\x02\x08\x01\x08\x10\x08\x04\x10\x01"\x80\x02P\x0e\x1e\xbeV\xd4&\xbc\x1dh\xb3\xbd\xa9\x95H>f8\xf4\xbe}\xdc\xba\xbe\x7f(>>Cp\x9a>\x98\xc6 
\xbd\xd9\x90Q\xbe\xb8\x1d_<0\xfb\x02>\x81\xee\xcb\xbc\xbe\xe2\xc8\xbc\x00\xac\x0b\xbe\x08\xb4\x19>E\xf2y\xbe\xdc3\x9f\xbey\xaeC>\xd8\x89Y\xbe\xd5\\/\xbb\xc1\x8f\xc7\xbe\xe9\x87<\xbfv-\xb9\xbcZ\xf9:\xbe\xc2V\x8e>M\xde\xb1\xbde\xf1\x8a\xbe\x0fu$>)H\xd1>\xd0\x95\x03\xbe\xa2\x07&>\r[\x8b>\x99\xba\xbc=\x0f\x08#\xbe\x91\x01I>e\x8f\xb3>tj*\xbe\xfa\xde\xe3=;\xca\x17\xben\xdfc\xbe\xe0\xf3\xdb>\x7f<\x11>C7g>\xb0r\xb7>\x8bJu\xbe\xdd\xad\xe2>\x02#}>\xda\xfe7>C\xfb\x16\xbepsE:\xcd;\xea\xbes)\xab>\x18{R=i\xd7F=\x0c\t7\xbe\xe0QO>\x94}F\xbd\xcc\xdf\x80\xbe\xf8\x07\xeb>TI\x85\xbez[&>\xfc\xbc\x03\xbe\x86-~\xbeB\tlstm_16_R*\x94\x01\x08\x01\x08 \x10\x01"\x80\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00B\tlstm_16_BZ#\n\rlstm_16_input\x12\x12\n\x10\x08\x01\x12\x0c\n\x02\x08\x01\n\x02\x08\x04\n\x02\x08\x04b\x19\n\x07lstm_16\x12\x0e\n\x0c\x08\x01\x12\x08\n\x02\x08\x01\n\x02\x08\x04B\x04\n\x00\x10\x0b'
    _run_comparison_test(data, result, model, ['tanh', 'tanh', 'tanh'],
                         lstm_op_pattern)
a5b96ceba0d304e6631593fb075fb2a6501bb90c | 231 | py | Python | mskit/post_analysis/spectronaut.py | gureann/MSKit | 8b360d38288100476740ad808e11b6c1b454dc2c | ["MIT"] | 2 | 2021-11-25T01:06:18.000Z | 2021-12-22T06:34:53.000Z | mskit/post_analysis/spectronaut.py | gureann/MSKit | 8b360d38288100476740ad808e11b6c1b454dc2c | ["MIT"] | null | null | null | mskit/post_analysis/spectronaut.py | gureann/MSKit | 8b360d38288100476740ad808e11b6c1b454dc2c | ["MIT"] | 1 | 2021-04-26T03:00:48.000Z | 2021-04-26T03:00:48.000Z |
from ._spectronaut import SpectronautLibrary
from ._spectronaut import sn_constant
from ._spectronaut import sn_utils
from ._spectronaut.sn_utils import *
from ._spectronaut import mod_process
from ._spectronaut import sn_result
# 593abd0a3cf5863cdf76184e8837945750fde450 | apps/core/tests/tests_navbar.py
# repo: bispojr/observatorio-ufj-covid19 | license: MIT | Python
from django.test import TestCase
from django.test import Client
from selenium import webdriver
from webdriver_manager.firefox import GeckoDriverManager
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options
# Create your tests here.
class NavbarTestCase(TestCase):
    BASE_URL = 'http://127.0.0.1:8000'

    def setUp(self):
        options = Options()
        options.headless = True
        self.driver = webdriver.Firefox(options=options,
                                        executable_path=GeckoDriverManager().install())

    def tearDown(self):
        self.driver.close()

    def assert_navbar_state(self, path, main_active, graficos_active, saiba_mais_active):
        # Shared assertion helper: load the page and check which navbar
        # entries carry the "active" class.
        self.driver.get(self.BASE_URL + path)
        principal = self.driver.find_element_by_xpath('//*[@id="navbarColor01"]/ul/li[1]')
        graficos = self.driver.find_element_by_id("navbarDropdownGraficos")
        saiba_mais = self.driver.find_element_by_id("navbarDropdownConhecaMais")
        assert ("active" in principal.get_attribute("class")) == main_active
        assert ("active" in graficos.get_attribute("class")) == graficos_active
        assert ("active" in saiba_mais.get_attribute("class")) == saiba_mais_active

    def test_pag_default(self):
        self.assert_navbar_state('/', True, False, False)

    def test_pag_grafico_cacu(self):
        self.assert_navbar_state('/graficos/cacu/', False, True, False)

    def test_pag_grafico_chapadao(self):
        self.assert_navbar_state('/graficos/chapadao/', False, True, False)

    def test_pag_grafico_jatai(self):
        self.assert_navbar_state('/graficos/jatai/', False, True, False)

    def test_pag_grafico_mineiros(self):
        self.assert_navbar_state('/graficos/mineiros/', False, True, False)

    def test_pag_grafico_montividiu(self):
        self.assert_navbar_state('/graficos/montividiu/', False, True, False)

    def test_pag_grafico_rioverde(self):
        self.assert_navbar_state('/graficos/rioverde/', False, True, False)

    def test_pag_grafico_santahelena(self):
        self.assert_navbar_state('/graficos/santahelena/', False, True, False)

    # Disabled in the original file:
    # def test_pag_comparacao(self):
    #     self.assert_navbar_state('/comparacao/', False, True, False)

    def test_pag_como_sao_criados(self):
        self.assert_navbar_state('/como-sao-criados/', False, True, False)

    # The "tendencias" pages (and the "navbarDropdownTendencias" element they
    # check) are disabled in the original file:
    # def test_pag_tendencias_jatai(self):
    #     self.assert_navbar_state('/tendencias/jatai/', False, False, False)
    # def test_pag_tendencias_rioverde(self):
    #     self.assert_navbar_state('/tendencias/rioverde/', False, False, False)

    def test_pag_sobre(self):
        self.assert_navbar_state('/sobre/', False, False, True)

    def test_pag_equipe(self):
        self.assert_navbar_state('/equipe/', False, False, True)

    def test_pag_na_midia(self):
        self.assert_navbar_state('/na-midia/', False, False, True)

    def test_pag_colabore(self):
        self.assert_navbar_state('/colabore/', False, False, True)
# 3cba94b8cba438c8430833a83c0ab5c39fa4312a | axis_inspection/axis_inspection/doctype/end_of_service_calculator/end_of_service_calculator.py
# repo: Subramani830/testrepo | license: MIT | Python
# -*- coding: utf-8 -*-
# Copyright (c) 2021, veena and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
class EndofServiceCalculator(Document):
    pass


@frappe.whitelist()
def get_salary_slip(employee):
    return frappe.db.get_list('Salary Slip', filters={'employee': employee, 'docstatus': 1},
                              fields=['name'], order_by='start_date DESC')


@frappe.whitelist()
def update_earnings_and_deductions(parenttype, parent):
    return frappe.db.get_list('Salary Detail', filters={'parenttype': parenttype, 'parent': parent},
                              fields=['salary_component', 'amount', 'parentfield'])


@frappe.whitelist()
def get_amt_salary_year_small(salary, year):
    amt = float(salary) * int(year) * 0.5
    return amt


@frappe.whitelist()
def get_amt_salary_year_large(salary, year):
    remaining_year = int(year) - 5
    amt1 = float(salary) * 5 * 0.5
    amt2 = float(salary) * int(remaining_year)
    total = amt1 + amt2
    return total


@frappe.whitelist()
def get_amt_salary_year_month_small(salary, year, month):
    amt1 = float(salary) * int(year) * 0.5
    amt2 = ((float(salary) / 12) * 0.5) * int(month)
    total = amt1 + amt2
    return total


@frappe.whitelist()
def get_amt_salary_year_month_large(salary, year, month):
    remaining_year = int(year) - 5
    amt1 = float(salary) * 5 * 0.5
    amt2 = float(salary) * int(remaining_year)
    amt3 = (float(salary) / 12) * int(month)
    total = amt1 + amt2 + amt3
    return total


@frappe.whitelist()
def get_amt_salary_year_month_days_small(salary, year, month, days):
    amt1 = float(salary) * int(year) * 0.5
    amt2 = ((float(salary) / 12) * 0.5) * int(month)
    amt3 = ((float(salary) / 359) * 0.5) * int(days)
    total = amt1 + amt2 + amt3
    return total


@frappe.whitelist()
def get_amt_salary_year_month_days_large(salary, year, month, days):
    remaining_year = int(year) - 5
    amt1 = float(salary) * 5 * 0.5
    amt2 = float(salary) * int(remaining_year)
    amt3 = (float(salary) / 12) * int(month)
    amt4 = (float(salary) / 359) * int(days)
    total = amt1 + amt2 + amt3 + amt4
    return total


@frappe.whitelist()
def get_amt_salary_year_days_small(salary, year, days):
    amt1 = float(salary) * int(year) * 0.5
    amt2 = ((float(salary) / 359) * 0.5) * int(days)
    total = amt1 + amt2
    return total


@frappe.whitelist()
def get_amt_salary_year_days_large(salary, year, days):
    remaining_year = int(year) - 5
    amt1 = float(salary) * 5 * 0.5
    amt2 = float(salary) * int(remaining_year)
    amt3 = (float(salary) / 359) * int(days)
    total = amt1 + amt2 + amt3
    return total


@frappe.whitelist()
def resignation_amt_salary_year_small(salary, year):
    amt1 = (float(salary) / 2) * int(year) / 3
    return amt1


@frappe.whitelist()
def resignation_amt_salary_year_large(salary, year):
    remaining_year = int(year) - 5
    amt1 = (float(salary) / 2) * 5 * (2 / 3)
    amt2 = float(salary) * int(remaining_year) * (2 / 3)
    total = amt1 + amt2
    return total


@frappe.whitelist()
def resignation_amt_salary_year_month_small(salary, year, month):
    amt1 = (float(salary) / 2) * int(year) / 3
    amt2 = (float(salary) / 12) / 6 * int(month)
    total = amt1 + amt2
    return total


@frappe.whitelist()
def resignation_amt_salary_year_month_large(salary, year, month):
    remaining_year = int(year) - 5
    amt1 = (float(salary) / 2) * 5 * (2 / 3)
    amt2 = float(salary) * int(remaining_year) * (2 / 3)
    amt3 = float(salary) / 72 * 4 * int(month)
    total = amt1 + amt2 + amt3
    return total


@frappe.whitelist()
def resignation_amt_salary_year_days_small(salary, year, days):
    amt1 = (float(salary) / 2) * int(year) / 3
    amt2 = (float(salary) / 359) / 6 * int(days)
    total = amt1 + amt2
    return total


@frappe.whitelist()
def resignation_amt_salary_year_days_large(salary, year, days):
    remaining_year = int(year) - 5
    amt1 = (float(salary) / 2) * 5 * (2 / 3)
    amt2 = float(salary) * int(remaining_year) * (2 / 3)
    amt3 = (float(salary) / 359) / 6 * 4 * int(days)
    total = amt1 + amt2 + amt3
    return total


@frappe.whitelist()
def resignation_amt_salary_year_month_days_small(salary, year, month, days):
    amt1 = (float(salary) / 2) * int(year) / 3
    amt2 = (float(salary) / 12) / 6 * int(month)
    amt3 = (float(salary) / 359) / 6 * int(days)
    total = amt1 + amt2 + amt3
    return total


@frappe.whitelist()
def resignation_amt_salary_year_month_days_large(salary, year, month, days):
    remaining_year = int(year) - 5
    amt1 = (float(salary) / 2) * 5 * (2 / 3)
    amt2 = float(salary) * int(remaining_year) * (2 / 3)
    amt3 = float(salary) / 72 * 4 * int(month)
    amt4 = (float(salary) / 359) / 6 * 4 * int(days)
    total = amt1 + amt2 + amt3 + amt4
    return total
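The "small" end-of-service rule above is half a salary per year of service; a quick standalone sanity check (the function is restated here without the frappe decorator so it runs on its own, and the salary/year values are purely illustrative):

```python
def get_amt_salary_year_small(salary, year):
    # Restated from the module above: half a salary per full year of service.
    return float(salary) * int(year) * 0.5

print(get_amt_salary_year_small(3000, 4))  # 6000.0
```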
# 3ce93c45323bea174bf3a4d32ac4ccff1fc20198 | food_recommendation/src/utils.py
# repo: junuMoon/TDD_ML | license: MIT | Python
import numpy as np
def sigmoid(x, scale=3):
    return (1 / (1 + np.exp(-x))) * scale
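A quick check of the scaled sigmoid: at x = 0 the plain logistic is 0.5, so the default scale of 3 yields 1.5, and large x saturates toward `scale`. The function is restated below with `math.exp` so the scalar demo has no third-party dependency (behavior matches the `np.exp` version for scalar inputs):

```python
import math

def sigmoid(x, scale=3):
    # Scalar restatement of the module's np.exp version above.
    return (1 / (1 + math.exp(-x))) * scale

print(sigmoid(0))  # 1.5  (0.5 * 3)
```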
# 3cf5a4d2a8e70509de1db381401094f879d82c2e | code/utils/rel.py
# repo: bflashcp3f/ProcBERT | license: MIT | Python
import os
import time
import random
import re
import sys
import string
from utils.utils import *
NEW_ENT_TYPE = ['Collection', 'Company', 'Software', 'Info-Type']
WLP_ENT_NAME = ['Amount', 'Reagent', 'Device', 'Time', 'Speed', 'Action', 'Mention', 'Location', 'Numerical',
'Method', 'Temperature', 'Modifier', 'Concentration', 'Size', 'Generic-Measure', 'Seal',
'Measure-Type', 'Misc', 'pH', 'Unit']
PubMed_ENT_NAME = ['Info-Type', 'Method', 'Reagent', 'Action', 'Software', 'Numerical', 'Modifier', 'Collection',
'Company', 'Amount', 'Mention', 'Generic-Measure', 'Location', 'Time', 'Temperature', 'Measure-Type',
'Device', 'Concentration', 'Size', 'pH', 'Speed', 'Seal', 'Misc', 'Unit']
ChemSyn_ENT_NAME = ['Info-Type', 'Method', 'Reagent', 'Action', 'Software', 'Numerical', 'Modifier', 'Collection',
'Company', 'Amount', 'Mention', 'Generic-Measure', 'Location', 'Time', 'Temperature', 'Measure-Type',
'Device', 'Concentration', 'Size', 'pH', 'Speed', 'Seal', 'Misc', 'Unit']
ENT_NAME = {
'wlp': WLP_ENT_NAME,
'pubmed': PubMed_ENT_NAME,
'chemsyn': ChemSyn_ENT_NAME,
}
ENT_END = {
'wlp': [("[ARG1-" + item + "-END]").lower() for item in WLP_ENT_NAME] +
[("[ARG2-" + item + "-END]").lower() for item in WLP_ENT_NAME],
'pubmed': [("[ARG1-" + item + "-END]").lower() for item in PubMed_ENT_NAME] +
[("[ARG2-" + item + "-END]").lower() for item in PubMed_ENT_NAME],
'chemsyn': [("[ARG1-" + item + "-END]").lower() for item in ChemSyn_ENT_NAME] +
[("[ARG2-" + item + "-END]").lower() for item in ChemSyn_ENT_NAME],
}
ENT_START = {
'wlp': [("[ARG1-" + item + "-START]").lower() for item in WLP_ENT_NAME] +
[("[ARG2-" + item + "-START]").lower() for item in WLP_ENT_NAME],
'pubmed': [("[ARG1-" + item + "-START]").lower() for item in PubMed_ENT_NAME] +
[("[ARG2-" + item + "-START]").lower() for item in PubMed_ENT_NAME],
'chemsyn': [("[ARG1-" + item + "-START]").lower() for item in ChemSyn_ENT_NAME] +
[("[ARG2-" + item + "-START]").lower() for item in ChemSyn_ENT_NAME],
}
NO_RELATION = "no_relation"
WLP_REL_NAME = ['no_relation', 'Acts-on', 'Measure', 'Count', 'Creates', 'Using', 'Site',
'Or', 'Product', 'Setting', 'Coreference-Link', 'Mod-Link', 'Meronym',
'Measure-Type-Link', 'Commands', 'Misc-Link', 'Of-Type']
PubMed_REL_NAME = ['no_relation', 'Acts-on', 'Site', 'Using', 'Product', 'Coreference-Link',
'Meronym', 'Setting', 'Measure', 'Mod-Link', 'Belong-To', 'Measure-Type-Link',
'Or', 'Count', 'Of-Type', 'Commands', 'Creates', 'Misc-Link']
ChemSyn_REL_NAME = ['no_relation', 'Acts-on', 'Site', 'Using', 'Product', 'Coreference-Link',
'Meronym', 'Setting', 'Measure', 'Mod-Link', 'Belong-To', 'Measure-Type-Link',
'Or', 'Count', 'Of-Type', 'Commands', 'Creates', 'Misc-Link']
REL_NAME = {
'wlp': WLP_REL_NAME,
'pubmed': PubMed_REL_NAME,
'chemsyn': ChemSyn_REL_NAME,
}
REL2IDX = {
'wlp': dict([(label, id) for id, label in enumerate(REL_NAME['wlp'])]),
'pubmed': dict([(label, id) for id, label in enumerate(REL_NAME['pubmed'])]),
'chemsyn': dict([(label, id) for id, label in enumerate(REL_NAME['chemsyn'])]),
}
ENT_ID = [("[T" + str(i) + "]").lower() for i in range(2000)]
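The `ENT_START`/`ENT_END` tables built above are plain string templating over the entity-type lists; a standalone spot-check of the resulting marker format (entity names restated to two types for brevity):

```python
# Spot-check of the "[ARG{1,2}-<Type>-START]" marker scheme used above.
ent_names = ['Reagent', 'Action']
starts = [("[ARG1-" + n + "-START]").lower() for n in ent_names] + \
         [("[ARG2-" + n + "-START]").lower() for n in ent_names]
print(starts)
# ['[arg1-reagent-start]', '[arg1-action-start]',
#  '[arg2-reagent-start]', '[arg2-action-start]']
```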
def load_from_txt(data_path, verbose=False, strip=True):
    examples = []
    with open(data_path, encoding='utf-8') as infile:
        while True:
            line = infile.readline()
            if len(line) == 0:
                break
            if strip:
                line = line.strip()
            examples.append(line)
    if verbose:
        print("{} examples read in {} .".format(len(examples), data_path))
    return examples
def decompose_tokenized_text(token_list):
    """
    Split the output of BERT tokenizer into word_list and tag_list
    """
    # The initial tag is 'O'
    tag = 'O'
    word_list = []
    tag_list = []

    # A flag to indicate if it is in the entity now:
    # 0 means out, 1 means start, 2 means in the middle
    in_ent = 0

    for token in token_list:
        if token == '':
            continue

        # Marker tokens toggle the current tag instead of emitting a word
        if token.endswith('-START]'):
            tag = token[1:-1].rsplit('-', 1)[0]
            in_ent = 1
        elif token.endswith('-END]'):
            tag = 'O'
            in_ent = 0
        else:
            word_list.append(token)
            if in_ent == 1:
                tag_list.append('B-' + tag)
                in_ent = 2
            elif in_ent == 2:
                tag_list.append('I-' + tag)
            else:
                tag_list.append(tag)

    return word_list, tag_list
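To make the marker convention concrete, here is the decomposition applied to a small token sequence; the function is restated so the example runs on its own, and the token values are illustrative:

```python
def decompose_tokenized_text(token_list):
    # Restated from the module above: strip marker tokens, emit BIO tags.
    tag, in_ent = 'O', 0
    word_list, tag_list = [], []
    for token in token_list:
        if token == '':
            continue
        if token.endswith('-START]'):
            tag = token[1:-1].rsplit('-', 1)[0]
            in_ent = 1
        elif token.endswith('-END]'):
            tag, in_ent = 'O', 0
        else:
            word_list.append(token)
            if in_ent == 1:
                tag_list.append('B-' + tag)
                in_ent = 2
            elif in_ent == 2:
                tag_list.append('I-' + tag)
            else:
                tag_list.append(tag)
    return word_list, tag_list

tokens = ['[ARG1-Reagent-START]', 'sodium', 'chloride', '[ARG1-Reagent-END]',
          'in', 'water']
words, tags = decompose_tokenized_text(tokens)
print(words)  # ['sodium', 'chloride', 'in', 'water']
print(tags)   # ['B-ARG1-Reagent', 'I-ARG1-Reagent', 'O', 'O']
```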
def index_ent_in_sentence(word_list, tag_list):
    ent_queue, ent_idx_queue, ent_type_queue = [], [], []
    ent_list, ent_idx_list, ent_type_list = [], [], []

    for word_idx in range(len(word_list)):
        if 'B-' in tag_list[word_idx]:
            if ent_queue:
                ent_list.append(' '.join(ent_queue).strip())
                ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1] + 1))
                assert len(set(ent_type_queue)) == 1
                ent_type_list.append(ent_type_queue[0])
                ent_queue, ent_idx_queue, ent_type_queue = [], [], []
            ent_queue.append(word_list[word_idx])
            ent_idx_queue.append(word_idx)
            ent_type_queue.append(tag_list[word_idx][2:])

        if 'I-' in tag_list[word_idx]:
            ent_queue.append(word_list[word_idx])
            ent_idx_queue.append(word_idx)
            ent_type_queue.append(tag_list[word_idx][2:])

        if 'O' == tag_list[word_idx] or word_idx == len(word_list) - 1:
            if ent_queue:
                ent_list.append(' '.join(ent_queue).strip())
                ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1] + 1))
                assert len(set(ent_type_queue)) == 1
                ent_type_list.append(ent_type_queue[0])
                ent_queue, ent_idx_queue, ent_type_queue = [], [], []

    return ent_list, ent_idx_list, ent_type_list
def mask_entities(tokens, entity_offsets, subj_entity_start, subj_entity_end,
                  obj_entity_start, obj_entity_end):
    subj_entity, obj_entity = entity_offsets

    if subj_entity[0] < obj_entity[0]:
        tokens = tokens[:subj_entity[0]] + [subj_entity_start] + tokens[subj_entity[0]:subj_entity[1]] + \
                 [subj_entity_end] + tokens[subj_entity[1]:obj_entity[0]] + [obj_entity_start] + \
                 tokens[obj_entity[0]:obj_entity[1]] + [obj_entity_end] + tokens[obj_entity[1]:]
        subj_entity = (subj_entity[0] + 1, subj_entity[1] + 1)
        obj_entity = (obj_entity[0] + 3, obj_entity[1] + 3)
    else:
        tokens = tokens[:obj_entity[0]] + [obj_entity_start] + tokens[obj_entity[0]:obj_entity[1]] + \
                 [obj_entity_end] + tokens[obj_entity[1]:subj_entity[0]] + [subj_entity_start] + \
                 tokens[subj_entity[0]:subj_entity[1]] + [subj_entity_end] + tokens[subj_entity[1]:]
        obj_entity = (obj_entity[0] + 1, obj_entity[1] + 1)
        subj_entity = (subj_entity[0] + 3, subj_entity[1] + 3)

    return tokens, (subj_entity, obj_entity)
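The offset arithmetic in `mask_entities` (shift the earlier span by 1, the later span by 3, because of the markers inserted before each) is easiest to see on a toy sentence. The function is restated here so the example runs standalone; the tokens and marker strings are illustrative, following the module's `[ARG{1,2}-<Type>-{START,END}]` convention:

```python
def mask_entities(tokens, entity_offsets, subj_start, subj_end, obj_start, obj_end):
    # Restated from the module above (shortened parameter names).
    subj, obj = entity_offsets
    if subj[0] < obj[0]:
        tokens = (tokens[:subj[0]] + [subj_start] + tokens[subj[0]:subj[1]] +
                  [subj_end] + tokens[subj[1]:obj[0]] + [obj_start] +
                  tokens[obj[0]:obj[1]] + [obj_end] + tokens[obj[1]:])
        subj = (subj[0] + 1, subj[1] + 1)  # one marker inserted before subject
        obj = (obj[0] + 3, obj[1] + 3)     # three markers inserted before object
    else:
        tokens = (tokens[:obj[0]] + [obj_start] + tokens[obj[0]:obj[1]] +
                  [obj_end] + tokens[obj[1]:subj[0]] + [subj_start] +
                  tokens[subj[0]:subj[1]] + [subj_end] + tokens[subj[1]:])
        obj = (obj[0] + 1, obj[1] + 1)
        subj = (subj[0] + 3, subj[1] + 3)
    return tokens, (subj, obj)

toks, (subj, obj) = mask_entities(
    ['mix', 'NaCl', 'into', 'tube', '.'], ((1, 2), (3, 4)),
    '[ARG1-Reagent-START]', '[ARG1-Reagent-END]',
    '[ARG2-Location-START]', '[ARG2-Location-END]')
print(toks[subj[0]:subj[1]])  # ['NaCl'] -- shifted offsets still point at the entity
print(toks[obj[0]:obj[1]])    # ['tube']
```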
def score(key, prediction, verbose=False):
    correct_by_relation = Counter()
    guessed_by_relation = Counter()
    gold_by_relation = Counter()

    # Loop over the data to compute a score
    for row in range(len(key)):
        gold = key[row]
        guess = prediction[row]

        if gold == NO_RELATION and guess == NO_RELATION:
            pass
        elif gold == NO_RELATION and guess != NO_RELATION:
            guessed_by_relation[guess] += 1
        elif gold != NO_RELATION and guess == NO_RELATION:
            gold_by_relation[gold] += 1
        elif gold != NO_RELATION and guess != NO_RELATION:
            guessed_by_relation[guess] += 1
            gold_by_relation[gold] += 1
            if gold == guess:
                correct_by_relation[guess] += 1

    # Print verbose information
    if verbose:
        print("Per-relation statistics:")
        relations = gold_by_relation.keys()
        longest_relation = 0
        for relation in sorted(relations):
            longest_relation = max(len(relation), longest_relation)
        for relation in sorted(relations):
            # (compute the score)
            correct = correct_by_relation[relation]
            guessed = guessed_by_relation[relation]
            gold = gold_by_relation[relation]
            prec = 1.0
            if guessed > 0:
                prec = float(correct) / float(guessed)
            recall = 0.0
            if gold > 0:
                recall = float(correct) / float(gold)
            f1 = 0.0
            if prec + recall > 0:
                f1 = 2.0 * prec * recall / (prec + recall)
            # (print the score)
            sys.stdout.write(("{:<" + str(longest_relation) + "}").format(relation))
            sys.stdout.write("  P: ")
            if prec < 0.1: sys.stdout.write(' ')
            if prec < 1.0: sys.stdout.write(' ')
            sys.stdout.write("{:.2%}".format(prec))
            sys.stdout.write("  R: ")
            if recall < 0.1: sys.stdout.write(' ')
            if recall < 1.0: sys.stdout.write(' ')
            sys.stdout.write("{:.2%}".format(recall))
            sys.stdout.write("  F1: ")
            if f1 < 0.1: sys.stdout.write(' ')
            if f1 < 1.0: sys.stdout.write(' ')
            sys.stdout.write("{:.2%}".format(f1))
            sys.stdout.write("  #: %d" % gold)
            sys.stdout.write("\n")
        print("")

    # Print the aggregate score
    if verbose:
        print("Final Score:")
    prec_micro = 1.0
    if sum(guessed_by_relation.values()) > 0:
        prec_micro = float(sum(correct_by_relation.values())) / float(sum(guessed_by_relation.values()))
    recall_micro = 0.0
    if sum(gold_by_relation.values()) > 0:
        recall_micro = float(sum(correct_by_relation.values())) / float(sum(gold_by_relation.values()))
    f1_micro = 0.0
    if prec_micro + recall_micro > 0.0:
        f1_micro = 2.0 * prec_micro * recall_micro / (prec_micro + recall_micro)
    if verbose:
        print("Precision (micro): {:.3%}".format(prec_micro))
        print("   Recall (micro): {:.3%}".format(recall_micro))
        print("       F1 (micro): {:.3%}".format(f1_micro))
    return prec_micro, recall_micro, f1_micro
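The micro-averaged scoring above never counts `no_relation` pairs in any tally: a spurious prediction hurts only precision, a missed relation only recall. A compact standalone recomputation of the same counting rule (the label lists are illustrative, drawn from the module's relation set):

```python
NO_REL = 'no_relation'
gold = ['Acts-on', NO_REL, 'Site', 'Acts-on', NO_REL]
pred = ['Acts-on', 'Site', NO_REL, 'Site', NO_REL]

# Equivalent to score()'s tallies: no_relation never enters the counts.
guessed = sum(1 for p in pred if p != NO_REL)                     # 3
golds = sum(1 for g in gold if g != NO_REL)                       # 3
correct = sum(1 for g, p in zip(gold, pred) if g == p != NO_REL)  # 1

prec = correct / guessed
rec = correct / golds
f1 = 2 * prec * rec / (prec + rec)
print(round(prec, 3), round(rec, 3), round(f1, 3))  # 0.333 0.333 0.333
```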
def index_ent_in_prediction(word_list, tag_list):
ent_queue, ent_idx_queue, ent_type_queue = [], [], []
ent_list, ent_idx_list, ent_type_list = [], [], []
for word_idx in range(len(word_list)):
if 'B-' in tag_list[word_idx]:
if ent_queue:
if len(set(ent_type_queue)) != 1:
print(ent_queue)
print(ent_idx_queue)
print(ent_type_queue)
print(Counter(ent_type_queue).most_common())
print()
else:
ent_list.append(' '.join(ent_queue).strip())
# ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1]+1))
ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1]))
assert len(set(ent_type_queue)) == 1
ent_type_list.append(ent_type_queue[0])
ent_queue, ent_idx_queue, ent_type_queue = [], [], []
ent_queue.append(word_list[word_idx])
ent_idx_queue.append(word_idx)
ent_type_queue.append(tag_list[word_idx][2:])
if tag_list[word_idx].startswith('I-'):
if word_idx == 0 or (word_idx > 0 and tag_list[word_idx][2:] == tag_list[word_idx - 1][2:]):
ent_queue.append(word_list[word_idx])
ent_idx_queue.append(word_idx)
ent_type_queue.append(tag_list[word_idx][2:])
else:
if ent_queue:
if len(set(ent_type_queue)) != 1:
print(ent_queue)
print(ent_idx_queue)
print(ent_type_queue)
print(Counter(ent_type_queue).most_common())
print()
else:
ent_list.append(' '.join(ent_queue).strip())
# ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1]+1))
ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1]))
assert len(set(ent_type_queue)) == 1
ent_type_list.append(ent_type_queue[0])
ent_queue, ent_idx_queue, ent_type_queue = [], [], []
ent_queue.append(word_list[word_idx])
ent_idx_queue.append(word_idx)
ent_type_queue.append(tag_list[word_idx][2:])
if 'O' == tag_list[word_idx] or word_idx == len(word_list) - 1:
if ent_queue:
if len(set(ent_type_queue)) != 1:
print(ent_queue)
print(ent_idx_queue)
print(ent_type_queue)
print(Counter(ent_type_queue).most_common())
print()
else:
ent_list.append(' '.join(ent_queue).strip())
# ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1]+1))
ent_idx_list.append((ent_idx_queue[0], ent_idx_queue[-1]))
assert len(set(ent_type_queue)) == 1
ent_type_list.append(ent_type_queue[0])
ent_queue, ent_idx_queue, ent_type_queue = [], [], []
return ent_list, ent_idx_list, ent_type_list
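The grouping performed by `index_ent_in_prediction` can be illustrated with a compact sketch that collects BIO-tagged tokens into `(text, (start, end_inclusive), type)` spans. This is a simplified, hypothetical helper, assuming well-formed tags ('B-X' opens a span, 'I-X' continues it, 'O' closes it), without the type-consistency diagnostics of the function above:

```python
def bio_spans(words, tags):
    # Collect contiguous BIO-tagged tokens into entity spans with an
    # inclusive end index, matching the (start, end) convention above.
    spans, cur, start, typ = [], [], 0, None
    for i, (w, t) in enumerate(zip(words, tags)):
        if t.startswith('B-'):
            if cur:  # close the previous span before opening a new one
                spans.append((' '.join(cur), (start, i - 1), typ))
            cur, start, typ = [w], i, t[2:]
        elif t.startswith('I-') and cur:
            cur.append(w)
        else:  # 'O' (or a dangling 'I-') closes any open span
            if cur:
                spans.append((' '.join(cur), (start, i - 1), typ))
            cur = []
    if cur:  # flush a span that runs to the end of the sentence
        spans.append((' '.join(cur), (start, len(words) - 1), typ))
    return spans
```

For example, `bio_spans(['add', '5', 'ml', 'water'], ['B-Action', 'B-Amount', 'I-Amount', 'B-Reagent'])` yields `[('add', (0, 0), 'Action'), ('5 ml', (1, 2), 'Amount'), ('water', (3, 3), 'Reagent')]`.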
def get_processed_sentences(data_name, data_class, max_len, batch_size,
tokenizer, down_sample=False, down_sample_rate=0.5):
data_dir = DATA_DIR[data_name][data_class]
print(f"\nLoad data from: {data_dir}")
file_order = FILE_ORDER[data_name][data_class]
# total_sen_num = int(budget / DATA_PRICE[data_name]) if budget else None
rel2idx = REL2IDX[data_name]
all_label_list = []
prepared_sen_over_files = []
prepared_ent_id_over_files = []
sen_over_files = []
pos_pairs_over_files = []
neg_pairs_over_files = []
txt_file_list = load_from_txt(file_order)
sen_count = 0
for txt_file in txt_file_list:
prepared_sen_list = []
prepared_ent_id_list = []
ann_file = txt_file[:-3] + "ann"
if not (os.path.isfile(f"{data_dir}/{txt_file}") and os.path.isfile(f"{data_dir}/{ann_file}")):
print(f"{data_dir}/{txt_file}")
print(f"{data_dir}/{ann_file}")
continue
sen_list = load_from_txt(f"{data_dir}/{txt_file}", strip=False)
sen_len_list = [len(item) for item in sen_list]
ann_list = load_from_txt(f"{data_dir}/{ann_file}")
all_sen_str = ''.join(sen_list)
ent_start_list = []
arg1_rel_arg2_tuple_list = []
intermedia_entity_dict = dict([(item.split('\t')[0],
item.split('\t')[1].split(' ')[0].split(":")[1]) for item in ann_list \
if item[0] == 'E'])
for item in ann_list:
# 'T' means the entity
if item[0] == 'T':
try:
ent_id, label_offset, ent_str = item.split('\t')
except ValueError:
# print('item split problem')
# print(ann_file)
# print(item)
continue
try:
if ';' not in label_offset:
ent_label, ent_start, ent_end = label_offset.split(' ')
ent_start, ent_end = int(ent_start), int(ent_end)
all_label_list.append(ent_label)
else:
continue
except ValueError:
# print('label_offset split problem')
# print(label_offset)
continue
assert ent_str == all_sen_str[ent_start:ent_end] or \
ent_str == all_sen_str[ent_start:ent_end].strip()
ent_start_list.append((ent_start, (ent_str, ent_start, ent_end, ent_label, ent_id)))
# 'E' marks an event line, which encodes the 'Acts-on' relation; handle it here
if item[0] == 'E':
# print(item)
try:
rel_id, action_rel_ent = item.split('\t')
except ValueError:
# print('item split problem')
# print(item)
continue
_, arg1_id = action_rel_ent.split(' ')[0].split(':')
if len(action_rel_ent.split(' ')) == 1:
continue
for item in action_rel_ent.split(' ')[1:]:
rel_str, arg2_id = item.split(':')
if arg2_id[0] == 'E':
if arg2_id not in intermedia_entity_dict:
print("not in intermedia_entity_dict")
print(rel_id, arg1_id, rel_str, arg2_id)
print()
else:
# print(rel_id, arg1_id, rel_str, arg2_id, intermedia_entity_dict[arg2_id])
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, intermedia_entity_dict[arg2_id]))
# print(arg1_id, rel_str, intermedia_entity_dict[arg2_id])
else:
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, arg2_id))
# Handle other relations
if item[0] == 'R':
# print(item)
try:
rel_id, action_rel_ent = item.split('\t')
except ValueError:
# print('item split problem')
# print(item)
continue
rel_str, arg1_id_str, arg2_id_str = action_rel_ent.split(' ')
arg1_id = arg1_id_str[len('Arg1:'):]
arg2_id = arg2_id_str[len('Arg2:'):]
# Make sure the reagents start with 'T'
if arg1_id[0] == 'E':
arg1_id = intermedia_entity_dict[arg1_id]
if arg2_id[0] == 'E':
arg2_id = intermedia_entity_dict[arg2_id]
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, arg2_id))
# Just to split entities by sentence
sen_start_list = [sum(sen_len_list[:index]) for index in range(len(sen_len_list) + 1)]
sen_idx = 0
ent_idx = 0
sen_ent_dict = defaultdict(list)
sorted_ent_start_list = sorted(ent_start_list, key=lambda x: x[0])
while ent_idx < len(sorted_ent_start_list) and sen_idx < len(sen_start_list):
# print(ent_idx, sen_idx)
ent_start, ent_info = sorted_ent_start_list[ent_idx]
if ent_start >= sen_start_list[sen_idx] and \
ent_start < sen_start_list[sen_idx + 1]:
ent_str, ent_start, ent_end, ent_label, ent_id = ent_info
if ent_label in NEW_ENT_TYPE:
ent_idx += 1
continue
# Remove the sentence offset
ent_start, ent_end = ent_start - sen_start_list[sen_idx], \
ent_end - sen_start_list[sen_idx]
sen_ent_dict[sen_idx].append((ent_str, ent_start, ent_end, ent_label, ent_id))
ent_idx += 1
continue
elif ent_start >= sen_start_list[sen_idx + 1]:
sen_idx += 1
else:
print("Bug here")
# # Control the total sentence number
# if total_sen_num:
#
# if sen_count >= total_sen_num:
# break
# else:
# if sen_count + len(sen_list) <= total_sen_num:
# select_num = len(sen_list)
# else:
# select_num = total_sen_num - sen_count
# else:
# select_num = len(sen_list)
#
sen_count += len(sen_list)
# Find the entity position in each sentence
# for sen_idx in range(select_num):
for sen_idx in range(len(sen_list)):
sen_str = sen_list[sen_idx]
# If the sentence doesn't contain any entity
if sen_idx not in sen_ent_dict:
prepared_sen_list.append(sen_str.strip())
prepared_ent_id_list.append([])
continue
ent_list = sen_ent_dict[sen_idx]
span_start, span_end = 0, 0
span_list = []
label_list = []
ent_id_list = [(item[-1], item[0], item[-2]) for item in ent_list]
for ent_str, ent_start, ent_end, ent_label, ent_id in ent_list:
if ent_start > 0:
span_end = ent_start
if sen_str[span_start:span_end].strip():
span_list.append(sen_str[span_start:span_end])
label_list.append('O')
span_list.append(sen_str[ent_start:ent_end])
label_list.append(ent_label)
span_start = ent_end
# Add the last part of sentence
if span_start != len(sen_str.strip()):
span_end = len(sen_str.strip())
span_list.append(sen_str[span_start:span_end])
label_list.append('O')
# Get the label for corresponding span
span_label_list = [item for item in zip(span_list, label_list) if item[0].strip()]
# Add label to the sentence for bert tokenizer
span_modified_list = []
for span, span_label in span_label_list:
if span_label != 'O':
span_modified_list += ['[{}-START]'.format(span_label), span, \
'[{}-END]'.format(span_label)]
else:
span_modified_list.append(span)
prepared_sen = ' '.join(span_modified_list)
prepared_sen_list.append(prepared_sen)
prepared_ent_id_list.append(ent_id_list)
prepared_sen_over_files.append(prepared_sen_list)
prepared_ent_id_over_files.append(prepared_ent_id_list)
ent_id_str_idx_type_list = []
final_sen_list = []
# Link the entity to its id so that we can build up the (arg1, rel, arg2) tuple later
for case_idx, (each_sen, each_ent_list) in enumerate(zip(prepared_sen_list, prepared_ent_id_list)):
ent_id_list = [item[0] for item in each_ent_list]
ent_str_org_list = [re.sub(' +', ' ', item[1]) for item in each_ent_list]
ent_type_org_list = [item[2] for item in each_ent_list]
tmp_word, tmp_tag = decompose_tokenized_text(each_sen.split(' '))
ent_list, ent_idx_list, ent_type_list = index_ent_in_sentence(tmp_word, tmp_tag)
final_sen_list.append(' '.join(tmp_word))
if len(ent_list) != len(each_ent_list):
print(case_idx)
print("tmp_word, tmp_tag: ", tmp_word, tmp_tag, '\n')
print("each_sen: ", each_sen, '\n')
print(len(ent_list), len(each_ent_list), '\n')
print("ent_list: ", ent_list, '\n')
print("each_ent_list: ", each_ent_list, '\n')
assert len(ent_list) == len(each_ent_list)
if ent_list != ent_str_org_list:
print("ent_list: ", ent_list)
print("ent_str_org_list: ", ent_str_org_list)
print([item1 == item2 for item1, item2 in zip(ent_list, ent_str_org_list)])
assert ent_list == ent_str_org_list
if ent_type_org_list != ent_type_list:
print(each_sen.split(' '))
print(tmp_word, tmp_tag)
print("ent_type_org_list: ", ent_type_org_list)
print("ent_type_list: ", ent_type_list)
assert ent_type_org_list == ent_type_list
ent_id_str_idx_type_list.append(list(zip(ent_id_list, ent_list,
ent_idx_list, ent_type_list,
[case_idx] * len(ent_id_list))))
# Build the id to entity dict for each file
ent_id_str_idx_type_dict = dict([(item1[0], item1) for item in ent_id_str_idx_type_list for item1 in item])
ent2senid = dict([(item1[0], item1[-1]) for idx, item in enumerate(ent_id_str_idx_type_list) for item1 in item])
senid2ent = defaultdict(list)
for _, item in enumerate(ent_id_str_idx_type_list):
for item1 in item:
senid2ent[item1[-1]].append(ent_id_str_idx_type_dict[item1[0]])
# Make sure each id links to distinct entity
if len(set([(item1[0], idx) for idx, item in enumerate(ent_id_str_idx_type_list)
for item1 in item])) != len(ent2senid):
print(txt_file)
print([(item1[0], item1[-1]) for idx, item in enumerate(ent_id_str_idx_type_list) for item1 in item])
print(ent2senid)
assert len(set([(item1[0], idx) for idx, item in enumerate(ent_id_str_idx_type_list)
for item1 in item])) == len(ent2senid)
# Get the positive data
sen_id_rel_tuple_pos_dict = defaultdict(list)
for arg1_id, rel_str, arg2_id in arg1_rel_arg2_tuple_list:
if arg1_id not in ent2senid:
# if arg1_id in ent_id_str_idx_type_dict:
# print(txt_file)
# print("The entity {} is not in the dictionary.".format(arg1_id))
continue
if arg2_id not in ent2senid:
# if arg2_id in ent_id_str_idx_type_dict:
# print(txt_file)
# print("The entity {} is not in the dictionary.".format(arg2_id))
continue
if ent2senid[arg1_id] != ent2senid[arg2_id]:
# print(txt_file)
# print("Two entities not in the same sentence.", rel_str, arg1_id, arg2_id,
# ent2senid[arg1_id],
# ent2senid[arg2_id], '\n')
continue
assert arg1_id in ent2senid and arg2_id in ent2senid and ent2senid[arg1_id] == ent2senid[arg2_id]
if ent_id_str_idx_type_dict[arg1_id][3] in NEW_ENT_TYPE or \
ent_id_str_idx_type_dict[arg2_id][3] in NEW_ENT_TYPE:
continue
sen_id_rel_tuple_pos_dict[ent2senid[arg2_id]].append((rel_str,
ent_id_str_idx_type_dict[arg1_id],
ent_id_str_idx_type_dict[arg2_id]))
# Generate negative pairs
sen_id_rel_tuple_neg_dict = defaultdict(list)
# for sen_idx in range(select_num):
for sen_idx in range(len(sen_list)):
pos_pair_id_list = [item[1][0] + ' ' + item[2][0] for item in sen_id_rel_tuple_pos_dict[sen_idx]]
neg_pair_id_list_all = [item1[0] + ' ' + item2[0] for item1 in senid2ent[sen_idx]
for item2 in senid2ent[sen_idx]
if item1 != item2 and
item1[0] + ' ' + item2[0] not in pos_pair_id_list]
random.seed(1234)
neg_pair_id_list = neg_pair_id_list_all
if not neg_pair_id_list:
sen_id_rel_tuple_neg_dict[sen_idx] = neg_pair_id_list
for item in neg_pair_id_list:
arg1_id, arg2_id = item.split(' ')
assert arg1_id in ent2senid and arg2_id in ent2senid and ent2senid[arg1_id] == ent2senid[arg2_id]
if ent_id_str_idx_type_dict[arg1_id][3] in NEW_ENT_TYPE or \
ent_id_str_idx_type_dict[arg2_id][3] in NEW_ENT_TYPE:
continue
sen_id_rel_tuple_neg_dict[ent2senid[arg2_id]].append(('no_relation',
ent_id_str_idx_type_dict[arg1_id],
ent_id_str_idx_type_dict[arg2_id]))
assert len(final_sen_list) == len(sen_id_rel_tuple_pos_dict) and \
len(final_sen_list) == len(sen_id_rel_tuple_neg_dict)
sen_over_files.append(final_sen_list)
pos_pairs_over_files.append(sen_id_rel_tuple_pos_dict)
neg_pairs_over_files.append(sen_id_rel_tuple_neg_dict)
print(f"The number of selected senteces: {sen_count}")
processed_sen_over_files = []
for sen_list, pos_pairs_list, neg_pairs_list in zip(sen_over_files, pos_pairs_over_files, neg_pairs_over_files):
processed_sen_list = []
for sen_idx, sen_str in enumerate(sen_list):
word_list = sen_str.split(' ')
# Process positive data
assert sorted(set([item[1] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]])) == \
sorted(set([item[2] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]]))
sorted_ent_list = sorted(set([item[1] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]]),
key=lambda x: x[2][0])
def mask_all_entities(tokens, sorted_entity_list):
ent_dict = dict([(item[2][0], item) for item in sorted_entity_list])
word_idx = 0
tagged_token_list = []
while word_idx < len(tokens):
if word_idx not in ent_dict:
tagged_token_list.append(tokens[word_idx])
word_idx += 1
else:
ent_id, ent_str, (ent_start, ent_end), ent_type, _ = ent_dict[word_idx]
if ent_str.strip() != ' '.join(tokens[ent_start:ent_end]).strip():
print(ent_str, ' '.join(tokens[ent_start:ent_end]))
assert ent_str.strip() == ' '.join(tokens[ent_start:ent_end]).strip()
tagged_token_list.append("[arg1-" + ent_type.lower() + "-start]")
tagged_token_list.append("[" + ent_id.lower() + "]")
tagged_token_list.append(ent_str)
tagged_token_list.append("[arg1-" + ent_type.lower() + "-end]")
word_idx = ent_end
return tagged_token_list
processed_word_list = mask_all_entities(word_list, sorted_ent_list)
processed_sen_list.append(f"{tokenizer.cls_token} {' '.join(processed_word_list)} {tokenizer.eos_token}")
processed_sen_over_files.append(processed_sen_list)
start_time = time.time()
# tokenized_texts = [tokenizer.tokenize(sent) for sent in processed_sen_list][:500]
tokenized_texts = [[tokenizer.tokenize(sent) for sent in processed_sen_list] for processed_sen_list in
processed_sen_over_files]
print("--- %s seconds ---" % (time.time() - start_time))
# Remove unrelated entities from tokenized text
purified_texts = []
relation_list = []
type_list = []
arg1_ent_start_list = []
arg2_ent_start_list = []
for tokenized_sen_list, pos_pairs_list, neg_pairs_list in zip(tokenized_texts, pos_pairs_over_files,
neg_pairs_over_files):
for sen_idx, word_list in enumerate(tokenized_sen_list):
assert len(tokenized_sen_list) == len(pos_pairs_list) and len(tokenized_sen_list) == len(neg_pairs_list)
# Process positive data
for relation, arg1, arg2 in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]:
# Note that this is for speeding up the fa_me_re experiments
if relation == "no_relation" and down_sample and random.random() > down_sample_rate:
continue
arg1_id, arg1_str, arg1_offset, arg1_type, arg1_senid = arg1
arg2_id, arg2_str, arg2_offset, arg2_type, arg2_senid = arg2
arg1_id = arg1_id.lower()
arg2_id = arg2_id.lower()
if "[{}]".format(arg1_id) not in word_list or "[{}]".format(arg2_id) not in word_list:
print("[{}]".format(arg1_id))
print("[{}]".format(arg2_id))
print(word_list)
assert "[{}]".format(arg1_id) in word_list
assert "[{}]".format(arg2_id) in word_list
arg1_ent_start = "[arg1-" + arg1_type + "-start]"
arg1_ent_end = "[arg1-" + arg1_type + "-end]"
arg2_ent_start = "[arg2-" + arg2_type + "-start]"
arg2_ent_end = "[arg2-" + arg2_type + "-end]"
arg1_ent_start = arg1_ent_start.lower()
arg2_ent_start = arg2_ent_start.lower()
arg1_ent_start_list.append(arg1_ent_start)
arg2_ent_start_list.append(arg2_ent_start)
def purify_word_list(tokens, arg1id, arg2id):
word_idx = 0
arg_tag = 0
purified_token_list = []
while word_idx < len(tokens):
if tokens[word_idx].endswith('-start]'):
if tokens[word_idx + 1] == '[' + arg1id + ']':
purified_token_list.append(tokens[word_idx])
arg_tag = 1
elif tokens[word_idx + 1] == '[' + arg2id + ']':
purified_token_list.append(
tokens[word_idx][:len('[arg')] + '2' + tokens[word_idx][len('[arg1'):])
arg_tag = 2
word_idx += 2
elif tokens[word_idx].endswith('-end]'):
if arg_tag:
purified_token_list.append(
tokens[word_idx][:len('[arg')] + str(arg_tag) + tokens[word_idx][len('[arg1'):])
arg_tag = 0
word_idx += 1
else:
purified_token_list.append(tokens[word_idx])
word_idx += 1
return purified_token_list
purified_word_list = purify_word_list(word_list, arg1_id, arg2_id)
purified_texts.append(purified_word_list)
relation_list.append(relation.rstrip(string.digits))
# print(Counter(relation_list).items())
assert len(purified_texts) == len(arg1_ent_start_list) and len(purified_texts) == len(arg2_ent_start_list)
# Get the input_ids and labels
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in purified_texts],
maxlen=max_len, value=tokenizer.pad_token_id, dtype="long", truncating="post", padding="post")
# attention_masks = [[float(i > 0) for i in ii] for ii in input_ids]
attention_masks = [[float(i != tokenizer.pad_token_id) for i in ii] for ii in input_ids]
labels = [rel2idx[l] for l in relation_list]
# print(purified_texts[0])
# print(relation_list[0])
# print(arg1_ent_start_list[0])
# print(arg2_ent_start_list[0])
arg1_idx_list = [purified_texts[sen_idx].index(arg1_ent_start_list[sen_idx])
if purified_texts[sen_idx].index(arg1_ent_start_list[sen_idx]) < max_len else max_len - 1
for sen_idx in range(len(purified_texts))
]
arg2_idx_list = [purified_texts[sen_idx].index(arg2_ent_start_list[sen_idx])
if purified_texts[sen_idx].index(arg2_ent_start_list[sen_idx]) < max_len else max_len - 1
for sen_idx in range(len(purified_texts))
]
# print(len(input_ids), len(attention_masks), len(labels), len(arg1_idx_list), len(arg2_idx_list))
input_ids = torch.tensor(input_ids)
labels = torch.tensor(labels)
attention_masks = torch.tensor(attention_masks)
arg1_idx_list = torch.tensor(arg1_idx_list)
arg2_idx_list = torch.tensor(arg2_idx_list)
final_data = TensorDataset(input_ids, attention_masks, labels, arg1_idx_list, arg2_idx_list)
if data_class == "train":
final_sampler = RandomSampler(final_data)
else:
final_sampler = SequentialSampler(final_data)
final_dataloader = DataLoader(final_data, sampler=final_sampler, batch_size=batch_size)
return final_dataloader
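The loader above surrounds the two argument entities with typed boundary markers such as `[arg1-action-start] ... [arg1-action-end]`. The marker scheme can be illustrated with a toy helper (hypothetical, simplified: it works on spans directly rather than replaying `mask_all_entities`/`purify_word_list`, and assumes the two spans do not overlap):

```python
def mark_relation_pair(tokens, arg1, arg2):
    # arg1/arg2 are (start, end_exclusive, type) triples. Insert typed
    # start/end markers around each argument span, in the same
    # "[argN-type-start]" / "[argN-type-end]" style as the loader above.
    opens = {arg1[0]: f"[arg1-{arg1[2]}-start]", arg2[0]: f"[arg2-{arg2[2]}-start]"}
    closes = {arg1[1]: f"[arg1-{arg1[2]}-end]", arg2[1]: f"[arg2-{arg2[2]}-end]"}
    out = []
    for i, tok in enumerate(tokens):
        if i in closes:  # close a span before anything else at this index
            out.append(closes[i])
        if i in opens:
            out.append(opens[i])
        out.append(tok)
    if len(tokens) in closes:  # a span may end at the sentence boundary
        out.append(closes[len(tokens)])
    return out
```

For example, `mark_relation_pair(['mix', 'the', 'buffer'], (0, 1, 'action'), (2, 3, 'reagent'))` yields `['[arg1-action-start]', 'mix', '[arg1-action-end]', 'the', '[arg2-reagent-start]', 'buffer', '[arg2-reagent-end]']`.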
def get_processed_sentences_budget(data_name, data_class, budget, max_len, batch_size,
tokenizer, down_sample=False, down_sample_rate=0.5):
data_dir = DATA_DIR[data_name][data_class]
print(f"\nLoad data from: {data_dir}")
file_order = FILE_ORDER[data_name][data_class]
total_sen_num = int(budget / DATA_PRICE[data_name]) if budget else None
rel2idx = REL2IDX['wlp']
all_label_list = []
prepared_sen_over_files = []
prepared_ent_id_over_files = []
sen_over_files = []
pos_pairs_over_files = []
neg_pairs_over_files = []
txt_file_list = load_from_txt(file_order)
sen_count = 0
for txt_file in txt_file_list:
prepared_sen_list = []
prepared_ent_id_list = []
ann_file = txt_file[:-3] + "ann"
if not (os.path.isfile(f"{data_dir}/{txt_file}") and os.path.isfile(f"{data_dir}/{ann_file}")):
print(f"{data_dir}/{txt_file}")
print(f"{data_dir}/{ann_file}")
continue
sen_list = load_from_txt(f"{data_dir}/{txt_file}", strip=False)
sen_len_list = [len(item) for item in sen_list]
ann_list = load_from_txt(f"{data_dir}/{ann_file}")
all_sen_str = ''.join(sen_list)
ent_start_list = []
arg1_rel_arg2_tuple_list = []
intermedia_entity_dict = dict([(item.split('\t')[0],
item.split('\t')[1].split(' ')[0].split(":")[1]) for item in ann_list \
if item[0] == 'E'])
for item in ann_list:
# 'T' means the entity
if item[0] == 'T':
try:
ent_id, label_offset, ent_str = item.split('\t')
except ValueError:
# print('item split problem')
# print(ann_file)
# print(item)
continue
try:
if ';' not in label_offset:
ent_label, ent_start, ent_end = label_offset.split(' ')
ent_start, ent_end = int(ent_start), int(ent_end)
all_label_list.append(ent_label)
else:
continue
except ValueError:
# print('label_offset split problem')
# print(label_offset)
continue
assert ent_str == all_sen_str[ent_start:ent_end] or \
ent_str == all_sen_str[ent_start:ent_end].strip()
ent_start_list.append((ent_start, (ent_str, ent_start, ent_end, ent_label, ent_id)))
# 'E' marks an event line, which encodes the 'Acts-on' relation; handle it here
if item[0] == 'E':
# print(item)
try:
rel_id, action_rel_ent = item.split('\t')
except ValueError:
# print('item split problem')
# print(item)
continue
_, arg1_id = action_rel_ent.split(' ')[0].split(':')
if len(action_rel_ent.split(' ')) == 1:
continue
for item in action_rel_ent.split(' ')[1:]:
rel_str, arg2_id = item.split(':')
if arg2_id[0] == 'E':
if arg2_id not in intermedia_entity_dict:
print("not in intermedia_entity_dict")
print(rel_id, arg1_id, rel_str, arg2_id)
print()
else:
# print(rel_id, arg1_id, rel_str, arg2_id, intermedia_entity_dict[arg2_id])
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, intermedia_entity_dict[arg2_id]))
# print(arg1_id, rel_str, intermedia_entity_dict[arg2_id])
else:
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, arg2_id))
# Handle other relations
if item[0] == 'R':
# print(item)
try:
rel_id, action_rel_ent = item.split('\t')
except ValueError:
# print('item split problem')
# print(item)
continue
rel_str, arg1_id_str, arg2_id_str = action_rel_ent.split(' ')
arg1_id = arg1_id_str[len('Arg1:'):]
arg2_id = arg2_id_str[len('Arg2:'):]
# Make sure the reagents start with 'T'
if arg1_id[0] == 'E':
arg1_id = intermedia_entity_dict[arg1_id]
if arg2_id[0] == 'E':
arg2_id = intermedia_entity_dict[arg2_id]
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, arg2_id))
# Just to split entities by sentence
sen_start_list = [sum(sen_len_list[:index]) for index in range(len(sen_len_list) + 1)]
sen_idx = 0
ent_idx = 0
sen_ent_dict = defaultdict(list)
sorted_ent_start_list = sorted(ent_start_list, key=lambda x: x[0])
while ent_idx < len(sorted_ent_start_list) and sen_idx < len(sen_start_list):
# print(ent_idx, sen_idx)
ent_start, ent_info = sorted_ent_start_list[ent_idx]
if ent_start >= sen_start_list[sen_idx] and \
ent_start < sen_start_list[sen_idx + 1]:
ent_str, ent_start, ent_end, ent_label, ent_id = ent_info
if ent_label in NEW_ENT_TYPE:
ent_idx += 1
continue
# Remove the sentence offset
ent_start, ent_end = ent_start - sen_start_list[sen_idx], \
ent_end - sen_start_list[sen_idx]
sen_ent_dict[sen_idx].append((ent_str, ent_start, ent_end, ent_label, ent_id))
ent_idx += 1
continue
elif ent_start >= sen_start_list[sen_idx + 1]:
sen_idx += 1
else:
print("Bug here")
# Control the total sentence number
if total_sen_num:
if sen_count >= total_sen_num:
break
else:
if sen_count + len(sen_list) <= total_sen_num:
select_num = len(sen_list)
else:
select_num = total_sen_num - sen_count
else:
select_num = len(sen_list)
sen_count += select_num
# Find the entity position in each sentence
for sen_idx in range(select_num):
sen_str = sen_list[sen_idx]
# If the sentence doesn't contain any entity
if sen_idx not in sen_ent_dict:
prepared_sen_list.append(sen_str.strip())
prepared_ent_id_list.append([])
continue
ent_list = sen_ent_dict[sen_idx]
span_start, span_end = 0, 0
span_list = []
label_list = []
ent_id_list = [(item[-1], item[0], item[-2]) for item in ent_list]
for ent_str, ent_start, ent_end, ent_label, ent_id in ent_list:
if ent_start > 0:
span_end = ent_start
if sen_str[span_start:span_end].strip():
span_list.append(sen_str[span_start:span_end])
label_list.append('O')
span_list.append(sen_str[ent_start:ent_end])
label_list.append(ent_label)
span_start = ent_end
# Add the last part of sentence
if span_start != len(sen_str.strip()):
span_end = len(sen_str.strip())
span_list.append(sen_str[span_start:span_end])
label_list.append('O')
# Get the label for corresponding span
span_label_list = [item for item in zip(span_list, label_list) if item[0].strip()]
# Add label to the sentence for bert tokenizer
span_modified_list = []
for span, span_label in span_label_list:
if span_label != 'O':
span_modified_list += ['[{}-START]'.format(span_label), span, \
'[{}-END]'.format(span_label)]
else:
span_modified_list.append(span)
prepared_sen = ' '.join(span_modified_list)
prepared_sen_list.append(prepared_sen)
prepared_ent_id_list.append(ent_id_list)
prepared_sen_over_files.append(prepared_sen_list)
prepared_ent_id_over_files.append(prepared_ent_id_list)
ent_id_str_idx_type_list = []
final_sen_list = []
# Link the entity to its id so that we can build up the (arg1, rel, arg2) tuple later
for case_idx, (each_sen, each_ent_list) in enumerate(zip(prepared_sen_list, prepared_ent_id_list)):
ent_id_list = [item[0] for item in each_ent_list]
ent_str_org_list = [re.sub(' +', ' ', item[1]) for item in each_ent_list]
ent_type_org_list = [item[2] for item in each_ent_list]
tmp_word, tmp_tag = decompose_tokenized_text(each_sen.split(' '))
ent_list, ent_idx_list, ent_type_list = index_ent_in_sentence(tmp_word, tmp_tag)
final_sen_list.append(' '.join(tmp_word))
if len(ent_list) != len(each_ent_list):
print(case_idx)
print("tmp_word, tmp_tag: ", tmp_word, tmp_tag, '\n')
print("each_sen: ", each_sen, '\n')
print(len(ent_list), len(each_ent_list), '\n')
print("ent_list: ", ent_list, '\n')
print("each_ent_list: ", each_ent_list, '\n')
assert len(ent_list) == len(each_ent_list)
if ent_list != ent_str_org_list:
print("ent_list: ", ent_list)
print("ent_str_org_list: ", ent_str_org_list)
print([item1 == item2 for item1, item2 in zip(ent_list, ent_str_org_list)])
assert ent_list == ent_str_org_list
if ent_type_org_list != ent_type_list:
print(each_sen.split(' '))
print(tmp_word, tmp_tag)
print("ent_type_org_list: ", ent_type_org_list)
print("ent_type_list: ", ent_type_list)
assert ent_type_org_list == ent_type_list
ent_id_str_idx_type_list.append(list(zip(ent_id_list, ent_list,
ent_idx_list, ent_type_list,
[case_idx] * len(ent_id_list))))
# Build the id to entity dict for each file
ent_id_str_idx_type_dict = dict([(item1[0], item1) for item in ent_id_str_idx_type_list for item1 in item])
ent2senid = dict([(item1[0], item1[-1]) for idx, item in enumerate(ent_id_str_idx_type_list) for item1 in item])
senid2ent = defaultdict(list)
for _, item in enumerate(ent_id_str_idx_type_list):
for item1 in item:
senid2ent[item1[-1]].append(ent_id_str_idx_type_dict[item1[0]])
# Make sure each id links to distinct entity
if len(set([(item1[0], idx) for idx, item in enumerate(ent_id_str_idx_type_list)
for item1 in item])) != len(ent2senid):
print(txt_file)
print([(item1[0], item1[-1]) for idx, item in enumerate(ent_id_str_idx_type_list) for item1 in item])
print(ent2senid)
assert len(set([(item1[0], idx) for idx, item in enumerate(ent_id_str_idx_type_list)
for item1 in item])) == len(ent2senid)
# Get the positive data
sen_id_rel_tuple_pos_dict = defaultdict(list)
for arg1_id, rel_str, arg2_id in arg1_rel_arg2_tuple_list:
if arg1_id not in ent2senid:
# if arg1_id in ent_id_str_idx_type_dict:
# print(txt_file)
# print("The entity {} is not in the dictionary.".format(arg1_id))
continue
if arg2_id not in ent2senid:
# if arg2_id in ent_id_str_idx_type_dict:
# print(txt_file)
# print("The entity {} is not in the dictionary.".format(arg2_id))
continue
if ent2senid[arg1_id] != ent2senid[arg2_id]:
# print(txt_file)
# print("Two entities not in the same sentence.", rel_str, arg1_id, arg2_id,
# ent2senid[arg1_id],
# ent2senid[arg2_id], '\n')
continue
assert arg1_id in ent2senid and arg2_id in ent2senid and ent2senid[arg1_id] == ent2senid[arg2_id]
if ent_id_str_idx_type_dict[arg1_id][3] in NEW_ENT_TYPE or \
ent_id_str_idx_type_dict[arg2_id][3] in NEW_ENT_TYPE:
continue
sen_id_rel_tuple_pos_dict[ent2senid[arg2_id]].append((rel_str,
ent_id_str_idx_type_dict[arg1_id],
ent_id_str_idx_type_dict[arg2_id]))
# Generate negative pairs
sen_id_rel_tuple_neg_dict = defaultdict(list)
for sen_idx in range(select_num):
# for sen_idx in range(len(sen_list)):
pos_pair_id_list = [item[1][0] + ' ' + item[2][0] for item in sen_id_rel_tuple_pos_dict[sen_idx]]
neg_pair_id_list_all = [item1[0] + ' ' + item2[0] for item1 in senid2ent[sen_idx]
for item2 in senid2ent[sen_idx]
if item1 != item2 and
item1[0] + ' ' + item2[0] not in pos_pair_id_list]
random.seed(1234)
neg_pair_id_list = neg_pair_id_list_all
if not neg_pair_id_list:
sen_id_rel_tuple_neg_dict[sen_idx] = neg_pair_id_list
for item in neg_pair_id_list:
arg1_id, arg2_id = item.split(' ')
assert arg1_id in ent2senid and arg2_id in ent2senid and ent2senid[arg1_id] == ent2senid[arg2_id]
if ent_id_str_idx_type_dict[arg1_id][3] in NEW_ENT_TYPE or \
ent_id_str_idx_type_dict[arg2_id][3] in NEW_ENT_TYPE:
continue
sen_id_rel_tuple_neg_dict[ent2senid[arg2_id]].append(('no_relation',
ent_id_str_idx_type_dict[arg1_id],
ent_id_str_idx_type_dict[arg2_id]))
assert len(final_sen_list) == len(sen_id_rel_tuple_pos_dict) and \
len(final_sen_list) == len(sen_id_rel_tuple_neg_dict)
sen_over_files.append(final_sen_list)
pos_pairs_over_files.append(sen_id_rel_tuple_pos_dict)
neg_pairs_over_files.append(sen_id_rel_tuple_neg_dict)
print(f"The number of selected senteces: {sen_count}")
processed_sen_over_files = []
for sen_list, pos_pairs_list, neg_pairs_list in zip(sen_over_files, pos_pairs_over_files, neg_pairs_over_files):
processed_sen_list = []
for sen_idx, sen_str in enumerate(sen_list):
word_list = sen_str.split(' ')
# Process positive data
assert sorted(set([item[1] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]])) == \
sorted(set([item[2] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]]))
sorted_ent_list = sorted(set([item[1] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]]),
key=lambda x: x[2][0])
def mask_all_entities(tokens, sorted_entity_list):
ent_dict = dict([(item[2][0], item) for item in sorted_entity_list])
word_idx = 0
tagged_token_list = []
while word_idx < len(tokens):
if word_idx not in ent_dict:
tagged_token_list.append(tokens[word_idx])
word_idx += 1
else:
ent_id, ent_str, (ent_start, ent_end), ent_type, _ = ent_dict[word_idx]
if ent_str.strip() != ' '.join(tokens[ent_start:ent_end]).strip():
print(ent_str, ' '.join(tokens[ent_start:ent_end]))
assert ent_str.strip() == ' '.join(tokens[ent_start:ent_end]).strip()
tagged_token_list.append("[arg1-" + ent_type.lower() + "-start]")
tagged_token_list.append("[" + ent_id.lower() + "]")
tagged_token_list.append(ent_str)
tagged_token_list.append("[arg1-" + ent_type.lower() + "-end]")
word_idx = ent_end
return tagged_token_list
processed_word_list = mask_all_entities(word_list, sorted_ent_list)
processed_sen_list.append(f"{tokenizer.cls_token} {' '.join(processed_word_list)} {tokenizer.eos_token}")
processed_sen_over_files.append(processed_sen_list)
start_time = time.time()
# tokenized_texts = [tokenizer.tokenize(sent) for sent in processed_sen_list][:500]
tokenized_texts = [[tokenizer.tokenize(sent) for sent in processed_sen_list] for processed_sen_list in
processed_sen_over_files]
print("--- %s seconds ---" % (time.time() - start_time))
# Remove unrelated entities from tokenized text
purified_texts = []
relation_list = []
type_list = []
arg1_ent_start_list = []
arg2_ent_start_list = []
for tokenized_sen_list, pos_pairs_list, neg_pairs_list in zip(tokenized_texts, pos_pairs_over_files,
neg_pairs_over_files):
for sen_idx, word_list in enumerate(tokenized_sen_list):
assert len(tokenized_sen_list) == len(pos_pairs_list) and len(tokenized_sen_list) == len(neg_pairs_list)
# Process positive data
for relation, arg1, arg2 in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]:
# Note that this is for speeding up the fa_me_re experiments
if relation == "no_relation" and down_sample and random.random() > down_sample_rate:
continue
arg1_id, arg1_str, arg1_offset, arg1_type, arg1_senid = arg1
arg2_id, arg2_str, arg2_offset, arg2_type, arg2_senid = arg2
arg1_id = arg1_id.lower()
arg2_id = arg2_id.lower()
if "[{}]".format(arg1_id) not in word_list or "[{}]".format(arg2_id) not in word_list:
print("[{}]".format(arg1_id))
print("[{}]".format(arg2_id))
print(word_list)
assert "[{}]".format(arg1_id) in word_list
assert "[{}]".format(arg2_id) in word_list
arg1_ent_start = "[arg1-" + arg1_type + "-start]"
arg1_ent_end = "[arg1-" + arg1_type + "-end]"
arg2_ent_start = "[arg2-" + arg2_type + "-start]"
arg2_ent_end = "[arg2-" + arg2_type + "-end]"
arg1_ent_start = arg1_ent_start.lower()
arg2_ent_start = arg2_ent_start.lower()
arg1_ent_start_list.append(arg1_ent_start)
arg2_ent_start_list.append(arg2_ent_start)
def purify_word_list(tokens, arg1id, arg2id):
word_idx = 0
arg_tag = 0
purified_token_list = []
while word_idx < len(tokens):
if tokens[word_idx][-len('-start]'):] == '-start]':
if tokens[word_idx + 1] == '[' + arg1id + ']':
purified_token_list.append(tokens[word_idx])
arg_tag = 1
elif tokens[word_idx + 1] == '[' + arg2id + ']':
purified_token_list.append(
tokens[word_idx][:len('[arg')] + '2' + tokens[word_idx][len('[arg1'):])
arg_tag = 2
word_idx += 2
elif tokens[word_idx][-len('-end]'):] == '-end]':
if arg_tag:
purified_token_list.append(
tokens[word_idx][:len('[arg')] + str(arg_tag) + tokens[word_idx][len('[arg1'):])
arg_tag = 0
word_idx += 1
else:
purified_token_list.append(tokens[word_idx])
word_idx += 1
return purified_token_list
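The marker-renaming logic above is easier to see on a toy tokenized sentence: markers belonging to the two argument entities survive (with the `arg1`/`arg2` digit fixed up from the running `arg_tag`), while every other entity's markers and all `[id]` tokens are dropped. A standalone sketch with made-up ids:

```python
def purify_demo(tokens, arg1id, arg2id):
    word_idx, arg_tag = 0, 0
    out = []
    while word_idx < len(tokens):
        tok = tokens[word_idx]
        if tok.endswith('-start]'):
            if tokens[word_idx + 1] == '[' + arg1id + ']':
                out.append(tok)  # arg1 marker kept as-is
                arg_tag = 1
            elif tokens[word_idx + 1] == '[' + arg2id + ']':
                # Rewrite the generic "[arg1-...]" marker into "[arg2-...]".
                out.append(tok[:len('[arg')] + '2' + tok[len('[arg1'):])
                arg_tag = 2
            word_idx += 2  # always skip the following "[id]" token
        elif tok.endswith('-end]'):
            if arg_tag:  # emit a close marker only for the two arguments
                out.append(tok[:len('[arg')] + str(arg_tag) + tok[len('[arg1'):])
                arg_tag = 0
            word_idx += 1
        else:
            out.append(tok)
            word_idx += 1
    return out

tokens = ['[arg1-action-start]', '[t1]', 'Mix', '[arg1-action-end]', 'the',
          '[arg1-reagent-start]', '[t2]', 'sample', '[arg1-reagent-end]']
print(purify_demo(tokens, 't1', 't2'))
```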
purified_word_list = purify_word_list(word_list, arg1_id, arg2_id)
purified_texts.append(purified_word_list)
relation_list.append(relation.rstrip(string.digits))
# print(Counter(relation_list).items())
assert len(purified_texts) == len(arg1_ent_start_list) and len(purified_texts) == len(arg2_ent_start_list)
# Get the input_ids and labels
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in purified_texts],
maxlen=max_len, value=tokenizer.pad_token_id, dtype="long", truncating="post", padding="post")
# attention_masks = [[float(i > 0) for i in ii] for ii in input_ids]
attention_masks = [[float(i != tokenizer.pad_token_id) for i in ii] for ii in input_ids]
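The pad-and-mask step above (Keras `pad_sequences` with "post" truncation and padding, then a mask that zeroes pad positions) can be reproduced in plain Python. A pad id of 0 is assumed here purely for illustration; RoBERTa-style tokenizers use 1, which is exactly why the mask compares against `tokenizer.pad_token_id` rather than `i > 0`:

```python
def pad_to(ids, max_len, pad_id=0):
    # "post" truncation, then "post" padding, as in pad_sequences above.
    ids = ids[:max_len]
    return ids + [pad_id] * (max_len - len(ids))

batch = [[101, 7, 8, 102], [101, 9, 102]]   # toy token-id sequences
input_ids = [pad_to(seq, 6) for seq in batch]
attention_masks = [[float(i != 0) for i in ii] for ii in input_ids]
print(input_ids)
print(attention_masks)
```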
labels = [rel2idx[l] for l in relation_list]
# print(purified_texts[0])
# print(relation_list[0])
# print(arg1_ent_start_list[0])
# print(arg2_ent_start_list[0])
arg1_idx_list = [purified_texts[sen_idx].index(arg1_ent_start_list[sen_idx])
if purified_texts[sen_idx].index(arg1_ent_start_list[sen_idx]) < max_len else max_len - 1
for sen_idx in range(len(purified_texts))
]
arg2_idx_list = [purified_texts[sen_idx].index(arg2_ent_start_list[sen_idx])
if purified_texts[sen_idx].index(arg2_ent_start_list[sen_idx]) < max_len else max_len - 1
for sen_idx in range(len(purified_texts))
]
# print(len(input_ids), len(attention_masks), len(labels), len(arg1_idx_list), len(arg2_idx_list))
input_ids = torch.tensor(input_ids)
labels = torch.tensor(labels)
attention_masks = torch.tensor(attention_masks)
arg1_idx_list = torch.tensor(arg1_idx_list)
arg2_idx_list = torch.tensor(arg2_idx_list)
final_data = TensorDataset(input_ids, attention_masks, labels, arg1_idx_list, arg2_idx_list)
if data_class == "train":
final_sampler = RandomSampler(final_data)
else:
final_sampler = SequentialSampler(final_data)
final_dataloader = DataLoader(final_data, sampler=final_sampler, batch_size=batch_size)
return final_dataloader
def get_processed_sentences_da_budget(data_name_list, data_class, budget, max_len, batch_size, tokenizer, alpha=1,
down_sample=False, down_sample_rate=0.5):
src_data, tgt_data = data_name_list
total_sen_num = int(budget / DATA_PRICE[tgt_data]) if budget else None
rel2idx = REL2IDX['wlp']
if not tgt_data:
raise ValueError("There must be a target domain")
if total_sen_num:
print(f"Select top {total_sen_num} sentences from the target domain.")
def load_from_dir(data_dir, file_order, total_sen_num=None):
print(f"\nLoad data from: {data_dir}")
all_label_list = []
prepared_sen_over_files = []
prepared_ent_id_over_files = []
sen_over_files = []
pos_pairs_over_files = []
neg_pairs_over_files = []
txt_file_list = load_from_txt(file_order)
sen_count = 0
for txt_file in txt_file_list:
prepared_sen_list = []
prepared_ent_id_list = []
ann_file = txt_file[:-3] + "ann"
if not (os.path.isfile(f"{data_dir}/{txt_file}") and os.path.isfile(f"{data_dir}/{ann_file}")):
print(f"{data_dir}/{txt_file}")
print(f"{data_dir}/{ann_file}")
continue
sen_list = load_from_txt(f"{data_dir}/{txt_file}", strip=False)
sen_len_list = [len(item) for item in sen_list]
ann_list = load_from_txt(f"{data_dir}/{ann_file}")
all_sen_str = ''.join(sen_list)
ent_start_list = []
arg1_rel_arg2_tuple_list = []
intermedia_entity_dict = dict([(item.split('\t')[0],
item.split('\t')[1].split(' ')[0].split(":")[1]) for item in ann_list \
if item[0] == 'E'])
for item in ann_list:
# 'T' means the entity
if item[0] == 'T':
try:
ent_id, label_offset, ent_str = item.split('\t')
except ValueError:
# print('item split problem')
# print(ann_file)
# print(item)
continue
try:
if ';' not in label_offset:
ent_label, ent_start, ent_end = label_offset.split(' ')
ent_start, ent_end = int(ent_start), int(ent_end)
all_label_list.append(ent_label)
else:
continue
except ValueError:
# print('label_offset split problem')
# print(label_offset)
continue
assert ent_str == all_sen_str[ent_start:ent_end] or \
ent_str == all_sen_str[ent_start:ent_end].strip()
ent_start_list.append((ent_start, (ent_str, ent_start, ent_end, ent_label, ent_id)))
# 'E' means the 'Acts-on' relation
# Handle 'Acts-on' relation here
if item[0] == 'E':
# print(item)
try:
rel_id, action_rel_ent = item.split('\t')
except ValueError:
# print('item split problem')
# print(item)
continue
_, arg1_id = action_rel_ent.split(' ')[0].split(':')
if len(action_rel_ent.split(' ')) == 1:
continue
for item in action_rel_ent.split(' ')[1:]:
rel_str, arg2_id = item.split(':')
if arg2_id[0] == 'E':
if arg2_id not in intermedia_entity_dict:
print("not in intermedia_entity_dict")
print(rel_id, arg1_id, rel_str, arg2_id)
print()
else:
# print(rel_id, arg1_id, rel_str, arg2_id, intermedia_entity_dict[arg2_id])
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, intermedia_entity_dict[arg2_id]))
# print(arg1_id, rel_str, intermedia_entity_dict[arg2_id])
else:
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, arg2_id))
# Handle other relations
if item[0] == 'R':
# print(item)
try:
rel_id, action_rel_ent = item.split('\t')
except ValueError:
# print('item split problem')
# print(item)
continue
rel_str, arg1_id_str, arg2_id_str = action_rel_ent.split(' ')
arg1_id = arg1_id_str[len('Arg1:'):]
arg2_id = arg2_id_str[len('Arg2:'):]
# Make sure the reagents start with 'T'
if arg1_id[0] == 'E':
arg1_id = intermedia_entity_dict[arg1_id]
if arg2_id[0] == 'E':
arg2_id = intermedia_entity_dict[arg2_id]
arg1_rel_arg2_tuple_list.append((arg1_id, rel_str, arg2_id))
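The three brat standoff line kinds handled above are 'T' (entity span), 'E' (event trigger with arguments) and 'R' (binary relation). A minimal parse over toy .ann lines (not from the real corpus) shows how the (arg1, rel, arg2) tuples are extracted and how an 'E' argument is resolved to its trigger entity, mirroring `intermedia_entity_dict`:

```python
ann_lines = [
    "T1\tAction 0 3\tMix",
    "T2\tReagent 8 14\tsample",
    "E1\tAction:T1 Acts-on:T2",
    "R1\tMeasure Arg1:T1 Arg2:T2",
]
# 'E' lines map an event id to its trigger entity ('E1' -> 'T1').
event_trigger = {l.split('\t')[0]: l.split('\t')[1].split(' ')[0].split(':')[1]
                 for l in ann_lines if l[0] == 'E'}
entities, tuples = {}, []
for line in ann_lines:
    if line[0] == 'T':
        ent_id, label_offset, ent_str = line.split('\t')
        label, start, end = label_offset.split(' ')
        entities[ent_id] = (label, int(start), int(end), ent_str)
    elif line[0] == 'E':
        _, body = line.split('\t')
        arg1_id = body.split(' ')[0].split(':')[1]
        for part in body.split(' ')[1:]:
            rel_str, arg2_id = part.split(':')
            # Resolve an event argument back to its trigger entity id.
            tuples.append((arg1_id, rel_str, event_trigger.get(arg2_id, arg2_id)))
    elif line[0] == 'R':
        _, body = line.split('\t')
        rel_str, arg1, arg2 = body.split(' ')
        tuples.append((arg1[len('Arg1:'):], rel_str, arg2[len('Arg2:'):]))
print(entities)
print(tuples)
```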
# Just to split entities by sentence
sen_start_list = [sum(sen_len_list[:index]) for index in range(len(sen_len_list) + 1)]
sen_idx = 0
ent_idx = 0
sen_ent_dict = defaultdict(list)
sorted_ent_start_list = sorted(ent_start_list, key=lambda x: x[0])
while ent_idx < len(sorted_ent_start_list) and sen_idx < len(sen_start_list):
# print(ent_idx, sen_idx)
ent_start, ent_info = sorted_ent_start_list[ent_idx]
if ent_start >= sen_start_list[sen_idx] and \
ent_start < sen_start_list[sen_idx + 1]:
ent_str, ent_start, ent_end, ent_label, ent_id = ent_info
if ent_label in NEW_ENT_TYPE:
ent_idx += 1
continue
# Remove the sentence offset
ent_start, ent_end = ent_start - sen_start_list[sen_idx], \
ent_end - sen_start_list[sen_idx]
sen_ent_dict[sen_idx].append((ent_str, ent_start, ent_end, ent_label, ent_id))
ent_idx += 1
continue
elif ent_start >= sen_start_list[sen_idx + 1]:
sen_idx += 1
else:
print("Bug here")
# Control the total sentence number
if total_sen_num:
if sen_count >= total_sen_num:
break
else:
if sen_count + len(sen_list) <= total_sen_num:
select_num = len(sen_list)
else:
select_num = total_sen_num - sen_count
else:
select_num = len(sen_list)
sen_count += select_num
# Find the entity position in each sentence
for sen_idx in range(select_num):
sen_str = sen_list[sen_idx]
# If the sentence doesn't contain any entity
if sen_idx not in sen_ent_dict:
prepared_sen_list.append(sen_str.strip())
prepared_ent_id_list.append([])
continue
ent_list = sen_ent_dict[sen_idx]
span_start, span_end = 0, 0
span_list = []
label_list = []
ent_id_list = [(item[-1], item[0], item[-2]) for item in ent_list]
for ent_str, ent_start, ent_end, ent_label, ent_id in ent_list:
if ent_start > 0:
span_end = ent_start
if sen_str[span_start:span_end].strip():
span_list.append(sen_str[span_start:span_end])
label_list.append('O')
span_list.append(sen_str[ent_start:ent_end])
label_list.append(ent_label)
span_start = ent_end
# Add the last part of sentence
if span_start != len(sen_str.strip()):
span_end = len(sen_str.strip())
span_list.append(sen_str[span_start:span_end])
label_list.append('O')
# Get the label for corresponding span
span_label_list = list([item for item in zip(span_list, label_list) if item[0].strip()])
# Add label to the sentence for bert tokenizer
span_modified_list = []
for span, span_label in span_label_list:
if span_label != 'O':
span_modified_list += ['[{}-START]'.format(span_label), span, \
'[{}-END]'.format(span_label)]
else:
span_modified_list.append(span)
prepared_sen = ' '.join(span_modified_list)
prepared_sen_list.append(prepared_sen)
prepared_ent_id_list.append(ent_id_list)
prepared_sen_over_files.append(prepared_sen_list)
prepared_ent_id_over_files.append(prepared_ent_id_list)
ent_id_str_idx_type_list = []
final_sen_list = []
# Link the entity to its id so that we can build up the (arg1, rel, arg2) tuple later
for case_idx, (each_sen, each_ent_list) in enumerate(zip(prepared_sen_list, prepared_ent_id_list)):
ent_id_list = [item[0] for item in each_ent_list]
ent_str_org_list = [re.sub(' +', ' ', item[1]) for item in each_ent_list]
ent_type_org_list = [item[2] for item in each_ent_list]
tmp_word, tmp_tag = decompose_tokenized_text(each_sen.split(' '))
ent_list, ent_idx_list, ent_type_list = index_ent_in_sentence(tmp_word, tmp_tag)
final_sen_list.append(' '.join(tmp_word))
if len(ent_list) != len(each_ent_list):
print(case_idx)
print("tmp_word, tmp_tag: ", tmp_word, tmp_tag, '\n')
print("each_sen: ", each_sen, '\n')
print(len(ent_list), len(each_ent_list), '\n')
print("ent_list: ", ent_list, '\n')
print("each_ent_list: ", each_ent_list, '\n')
assert len(ent_list) == len(each_ent_list)
if ent_list != ent_str_org_list:
print("ent_list: ", ent_list)
print("ent_str_org_list: ", ent_str_org_list)
print(list([item1 == item2 for item1, item2 in zip(ent_list, ent_str_org_list)]))
assert ent_list == ent_str_org_list
if ent_type_org_list != ent_type_list:
print(each_sen.split(' '))
print(tmp_word, tmp_tag)
print("ent_type_org_list: ", ent_type_org_list)
print("ent_type_list: ", ent_type_list)
assert ent_type_org_list == ent_type_list
ent_id_str_idx_type_list.append(list(zip(ent_id_list, ent_list,
ent_idx_list, ent_type_list,
[case_idx] * len(ent_id_list))))
# Build the id to entity dict for each file
ent_id_str_idx_type_dict = dict([(item1[0], item1) for item in ent_id_str_idx_type_list for item1 in item])
ent2senid = dict(
[(item1[0], item1[-1]) for idx, item in enumerate(ent_id_str_idx_type_list) for item1 in item])
senid2ent = defaultdict(list)
for _, item in enumerate(ent_id_str_idx_type_list):
for item1 in item:
senid2ent[item1[-1]].append(ent_id_str_idx_type_dict[item1[0]])
# Make sure each id links to distinct entity
if len(set([(item1[0], idx) for idx, item in enumerate(ent_id_str_idx_type_list)
for item1 in item])) != len(ent2senid):
print(txt_file)
print([(item1[0], item1[-1]) for idx, item in enumerate(ent_id_str_idx_type_list) for item1 in item])
print(ent2senid)
assert len(set([(item1[0], idx) for idx, item in enumerate(ent_id_str_idx_type_list)
for item1 in item])) == len(ent2senid)
# Get the positive data
sen_id_rel_tuple_pos_dict = defaultdict(list)
for arg1_id, rel_str, arg2_id in arg1_rel_arg2_tuple_list:
if arg1_id not in ent2senid:
# if arg1_id in ent_id_str_idx_type_dict:
# print(txt_file)
# print("The entity {} is not in the dictionary.".format(arg1_id))
continue
if arg2_id not in ent2senid:
# if arg2_id in ent_id_str_idx_type_dict:
# print(txt_file)
# print("The entity {} is not in the dictionary.".format(arg2_id))
continue
if ent2senid[arg1_id] != ent2senid[arg2_id]:
# print(txt_file)
# print("Two entities not in the same sentence.", rel_str, arg1_id, arg2_id,
# ent2senid[arg1_id],
# ent2senid[arg2_id], '\n')
continue
assert arg1_id in ent2senid and arg2_id in ent2senid
assert arg1_id in ent2senid and arg2_id in ent2senid and ent2senid[arg1_id] == ent2senid[arg2_id]
if ent_id_str_idx_type_dict[arg1_id][3] in NEW_ENT_TYPE or \
ent_id_str_idx_type_dict[arg2_id][3] in NEW_ENT_TYPE:
continue
sen_id_rel_tuple_pos_dict[ent2senid[arg2_id]].append((rel_str,
ent_id_str_idx_type_dict[arg1_id],
ent_id_str_idx_type_dict[arg2_id]))
# Generate negative pairs
sen_id_rel_tuple_neg_dict = defaultdict(list)
for sen_idx in range(select_num):
# for sen_idx in range(len(sen_list)):
pos_pair_id_list = [item[1][0] + ' ' + item[2][0] for item in sen_id_rel_tuple_pos_dict[sen_idx]]
neg_pair_id_list_all = [item1[0] + ' ' + item2[0] for item1 in senid2ent[sen_idx]
for item2 in senid2ent[sen_idx]
if item1 != item2 and
item1[0] + ' ' + item2[0] not in pos_pair_id_list]
random.seed(1234)
neg_pair_id_list = neg_pair_id_list_all
if not neg_pair_id_list:
sen_id_rel_tuple_neg_dict[sen_idx] = neg_pair_id_list
for item in neg_pair_id_list:
arg1_id, arg2_id = item.split(' ')
assert arg1_id in ent2senid and arg2_id in ent2senid
assert arg1_id in ent2senid and arg2_id in ent2senid and ent2senid[arg1_id] == ent2senid[arg2_id]
if ent_id_str_idx_type_dict[arg1_id][3] in NEW_ENT_TYPE or \
ent_id_str_idx_type_dict[arg2_id][3] in NEW_ENT_TYPE:
continue
sen_id_rel_tuple_neg_dict[ent2senid[arg2_id]].append(('no_relation',
ent_id_str_idx_type_dict[arg1_id],
ent_id_str_idx_type_dict[arg2_id]))
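Negative pairs are generated above as the ordered cross product of entities within a sentence, minus the annotated positive pairs. On toy ids:

```python
ents = ['t1', 't2', 't3']        # entity ids in one sentence
pos_pairs = {('t1', 't2')}       # annotated (arg1, arg2) pairs
neg_pairs = [(a, b) for a in ents for b in ents
             if a != b and (a, b) not in pos_pairs]
print(neg_pairs)
```

Note the pairs are ordered, so ('t2', 't1') is still a negative candidate even though ('t1', 't2') is positive.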
assert len(final_sen_list) == len(sen_id_rel_tuple_pos_dict) and \
len(final_sen_list) == len(sen_id_rel_tuple_neg_dict)
sen_over_files.append(final_sen_list)
pos_pairs_over_files.append(sen_id_rel_tuple_pos_dict)
neg_pairs_over_files.append(sen_id_rel_tuple_neg_dict)
print(f"The number of selected senteces: {sen_count}")
processed_sen_over_files = []
for sen_list, pos_pairs_list, neg_pairs_list in zip(sen_over_files, pos_pairs_over_files, neg_pairs_over_files):
processed_sen_list = []
for sen_idx, sen_str in enumerate(sen_list):
word_list = sen_str.split(' ')
# Process positive data
assert sorted(set([item[1] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]])) == \
sorted(set([item[2] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]]))
sorted_ent_list = sorted(set([item[1] for item in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]]),
key=lambda x: x[2][0])
def mask_all_entities(tokens, sorted_entity_list):
ent_dict = dict([(item[2][0], item) for item in sorted_entity_list])
word_idx = 0
tagged_token_list = []
while word_idx < len(tokens):
if word_idx not in ent_dict:
tagged_token_list.append(tokens[word_idx])
word_idx += 1
else:
ent_id, ent_str, (ent_start, ent_end), ent_type, _ = ent_dict[word_idx]
if ent_str.strip() != ' '.join(tokens[ent_start:ent_end]).strip():
print(ent_str, ' '.join(tokens[ent_start:ent_end]))
assert ent_str.strip() == ' '.join(tokens[ent_start:ent_end]).strip()
tagged_token_list.append("[arg1-" + ent_type.lower() + "-start]")
tagged_token_list.append("[" + ent_id.lower() + "]")
tagged_token_list.append(ent_str)
tagged_token_list.append("[arg1-" + ent_type.lower() + "-end]")
word_idx = ent_end
return tagged_token_list
processed_word_list = mask_all_entities(word_list, sorted_ent_list)
processed_sen_list.append(
f"{tokenizer.cls_token} {' '.join(processed_word_list)} {tokenizer.eos_token}")
processed_sen_over_files.append(processed_sen_list)
start_time = time.time()
# tokenized_texts = [tokenizer.tokenize(sent) for sent in processed_sen_list][:500]
tokenized_texts = [[tokenizer.tokenize(sent) for sent in processed_sen_list] for processed_sen_list in
processed_sen_over_files]
print("--- %s seconds ---" % (time.time() - start_time))
# Remove unrelated entities from tokenized text
purified_texts = []
relation_list = []
arg1_ent_start_list = []
arg2_ent_start_list = []
for tokenized_sen_list, pos_pairs_list, neg_pairs_list in zip(tokenized_texts, pos_pairs_over_files,
neg_pairs_over_files):
for sen_idx, word_list in enumerate(tokenized_sen_list):
assert len(tokenized_sen_list) == len(pos_pairs_list) and len(tokenized_sen_list) == len(neg_pairs_list)
# Process positive data
for relation, arg1, arg2 in pos_pairs_list[sen_idx] + neg_pairs_list[sen_idx]:
# Note that this is for speeding up the fa_me_re experiments
if relation == "no_relation" and down_sample and random.random() > down_sample_rate:
continue
arg1_id, arg1_str, arg1_offset, arg1_type, arg1_senid = arg1
arg2_id, arg2_str, arg2_offset, arg2_type, arg2_senid = arg2
arg1_id = arg1_id.lower()
arg2_id = arg2_id.lower()
if "[{}]".format(arg1_id) not in word_list or "[{}]".format(arg2_id) not in word_list:
print("[{}]".format(arg1_id))
print("[{}]".format(arg2_id))
print(word_list)
assert "[{}]".format(arg1_id) in word_list
assert "[{}]".format(arg2_id) in word_list
arg1_ent_start = "[arg1-" + arg1_type + "-start]"
arg1_ent_end = "[arg1-" + arg1_type + "-end]"
arg2_ent_start = "[arg2-" + arg2_type + "-start]"
arg2_ent_end = "[arg2-" + arg2_type + "-end]"
arg1_ent_start = arg1_ent_start.lower()
arg2_ent_start = arg2_ent_start.lower()
arg1_ent_start_list.append(arg1_ent_start)
arg2_ent_start_list.append(arg2_ent_start)
def purify_word_list(tokens, arg1id, arg2id):
word_idx = 0
arg_tag = 0
purified_token_list = []
while word_idx < len(tokens):
if tokens[word_idx][-len('-start]'):] == '-start]':
if tokens[word_idx + 1] == '[' + arg1id + ']':
purified_token_list.append(tokens[word_idx])
arg_tag = 1
elif tokens[word_idx + 1] == '[' + arg2id + ']':
purified_token_list.append(
tokens[word_idx][:len('[arg')] + '2' + tokens[word_idx][len('[arg1'):])
arg_tag = 2
word_idx += 2
elif tokens[word_idx][-len('-end]'):] == '-end]':
if arg_tag:
purified_token_list.append(
tokens[word_idx][:len('[arg')] + str(arg_tag) + tokens[word_idx][len('[arg1'):])
arg_tag = 0
word_idx += 1
else:
purified_token_list.append(tokens[word_idx])
word_idx += 1
return purified_token_list
purified_word_list = purify_word_list(word_list, arg1_id, arg2_id)
purified_texts.append(purified_word_list)
relation_list.append(relation.rstrip(string.digits))
# print(Counter(relation_list).items())
assert len(purified_texts) == len(arg1_ent_start_list) and len(purified_texts) == len(arg2_ent_start_list)
return purified_texts, arg1_ent_start_list, arg2_ent_start_list, relation_list
if src_data:
src_data_dir = DATA_DIR[src_data][data_class]
src_file_order = FILE_ORDER[src_data][data_class]
purified_texts_src, arg1_ent_start_list_src, arg2_ent_start_list_src, relation_list_src = load_from_dir(src_data_dir, src_file_order)
else:
purified_texts_src, arg1_ent_start_list_src, arg2_ent_start_list_src, relation_list_src = [], [], [], []
if tgt_data:
tgt_data_dir = DATA_DIR[tgt_data][data_class]
tgt_file_order = FILE_ORDER[tgt_data][data_class]
purified_texts_tgt, arg1_ent_start_list_tgt, arg2_ent_start_list_tgt, relation_list_tgt = load_from_dir(tgt_data_dir, tgt_file_order, total_sen_num)
else:
purified_texts_tgt, arg1_ent_start_list_tgt, arg2_ent_start_list_tgt, relation_list_tgt = [], [], [], []
purified_texts = purified_texts_src + purified_texts_tgt
arg1_ent_start_list = arg1_ent_start_list_src + arg1_ent_start_list_tgt
arg2_ent_start_list = arg2_ent_start_list_src + arg2_ent_start_list_tgt
relation_list = relation_list_src + relation_list_tgt
data_type_list = [[1, 0, alpha]] * len(purified_texts_src) + [[0, 1, 1]] * len(purified_texts_tgt)
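Each example is tagged with a `[is_source, is_target, loss_weight]` row: source-domain examples carry weight `alpha`, target-domain examples weight 1, so a downstream loss can reweight the two domains. For instance, with assumed counts:

```python
alpha = 0.5
n_src, n_tgt = 3, 2  # hypothetical source/target example counts
data_type_list = [[1, 0, alpha]] * n_src + [[0, 1, 1]] * n_tgt
print(data_type_list)
```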
# Get the input_ids and labels
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in purified_texts],
maxlen=max_len, value=tokenizer.pad_token_id, dtype="long", truncating="post",
padding="post")
# attention_masks = [[float(i > 0) for i in ii] for ii in input_ids]
attention_masks = [[float(i != tokenizer.pad_token_id) for i in ii] for ii in input_ids]
labels = [rel2idx[l] for l in relation_list]
# print(purified_texts[0])
# print(arg1_ent_start_list[0])
# print(arg2_ent_start_list[0])
arg1_idx_list = [purified_texts[sen_idx].index(arg1_ent_start_list[sen_idx])
if purified_texts[sen_idx].index(arg1_ent_start_list[sen_idx]) < max_len else max_len - 1
for sen_idx in range(len(purified_texts))
]
arg2_idx_list = [purified_texts[sen_idx].index(arg2_ent_start_list[sen_idx])
if purified_texts[sen_idx].index(arg2_ent_start_list[sen_idx]) < max_len else max_len - 1
for sen_idx in range(len(purified_texts))
]
# print(len(input_ids), len(attention_masks), len(labels), len(arg1_idx_list), len(arg2_idx_list),
# len(data_type_list))
input_ids = torch.tensor(input_ids)
labels = torch.tensor(labels)
attention_masks = torch.tensor(attention_masks)
arg1_idx_list = torch.tensor(arg1_idx_list)
arg2_idx_list = torch.tensor(arg2_idx_list)
data_type_list = torch.tensor(data_type_list)
final_data = TensorDataset(input_ids, attention_masks, labels, arg1_idx_list, arg2_idx_list, data_type_list)
if data_class == "train":
final_sampler = RandomSampler(final_data)
else:
final_sampler = SequentialSampler(final_data)
final_dataloader = DataLoader(final_data, sampler=final_sampler, batch_size=batch_size)
return final_dataloader
# --- File: lg_offliner/src/lg_offliner/__init__.py ---
# (repo: FuriousJulius/lg_ros_nodes, license: Apache-2.0)
from .offliner import LG_OFFLINER_DEBUG_TOPIC_DEFAULT
from .offliner import LG_OFFLINER_OFFLINE_TOPIC_DEFAULT
from .offliner import Checker
from .offliner import ConnectivityResults
from .offliner import process_custom_publishers
from .offliner import main
| 36.625 | 55 | 0.880546 | 40 | 293 | 6.15 | 0.425 | 0.341463 | 0.512195 | 0.162602 | 0.398374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095563 | 293 | 7 | 56 | 41.857143 | 0.928302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
# --- File: tests/test_soccer_environment_scenarios.py ---
# (repo: sc420/pygame-rl, license: MIT)
import random
# Third-party modules
import pytest
# Testing targets
import pygame_rl.scenario.soccer_environment as soccer_environment
class SoccerEnvironmentTest(object):
env = None
state = None
player_index = None
computer_index = None
@classmethod
def setup_class(cls):
# Initialize the environment
env_options = soccer_environment.SoccerEnvironmentOptions(team_size=2)
cls.env = soccer_environment.SoccerEnvironment(env_options=env_options)
# Get the environment state
cls.state = cls.env.state
# Get the agent indexes
cls.player_index = [cls.env.get_agent_index('PLAYER', team_index)
for team_index in range(2)]
cls.computer_index = [cls.env.get_agent_index('COMPUTER', team_index)
for team_index in range(2)]
@pytest.mark.parametrize('seed', range(100))
def test_adjacent(self, seed):
# Set the random seed
random.seed(seed)
# Set the initial positions
self.state.set_agent_pos(self.player_index[0], [6, 0])
self.state.set_agent_pos(self.player_index[1], [6, 1])
self.state.set_agent_pos(self.computer_index[0], [7, 0])
self.state.set_agent_pos(self.computer_index[1], [7, 1])
# Set the agent modes
self.state.set_agent_mode(self.player_index[1], 'OFFENSIVE')
self.state.set_agent_mode(self.computer_index[0], 'OFFENSIVE')
self.state.set_agent_mode(self.computer_index[1], 'OFFENSIVE')
# Give the ball to player 1
ball_possession = self.state.get_ball_possession()
ball_agent_index = ball_possession['agent_index']
self.state.switch_ball(ball_agent_index, self.player_index[0])
# Take the action
self.env.take_cached_action(self.player_index[0], 'STAND')
# Update the state
self.env.update_state()
# Player 1's position should not have changed
assert self.state.get_agent_pos(self.player_index[0]) == [6, 0]
# Player 2's position should have either not changed or moved, but not up
possible_pos = [[6, 1], [5, 1], [6, 2], [7, 1]]
assert self.state.get_agent_pos(self.player_index[1]) in possible_pos
# Computer 1's position should have either not changed or swapped with
# computer 2
possible_pos = [[7, 0], [7, 1]]
assert self.state.get_agent_pos(self.computer_index[0]) in possible_pos
# Computer 2's position should have either not changed, swapped with
# computer 1, or swapped with player 2
possible_pos = [[7, 0], [7, 1], [6, 1]]
assert self.state.get_agent_pos(self.computer_index[1]) in possible_pos
# Computer 2 can't have the ball
ball_possession = self.state.get_ball_possession()
ball_agent_index = ball_possession['agent_index']
assert ball_agent_index != self.computer_index[1]
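Each scenario test above re-seeds Python's `random` module before acting, so the stochastic agent moves are reproducible for every one of the 100 parametrized seeds. Re-seeding restarts the generator's sequence:

```python
import random

random.seed(7)
first = [random.random() for _ in range(3)]
random.seed(7)
second = [random.random() for _ in range(3)]
print(first == second)  # re-seeding replays the same sequence
```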
@pytest.mark.parametrize('seed', range(100))
def test_avoid_opponent(self, seed):
# Set the random seed
random.seed(seed)
# Set the initial positions
self.state.set_agent_pos(self.player_index[0], [1, 2])
self.state.set_agent_pos(self.player_index[1], [1, 3])
self.state.set_agent_pos(self.computer_index[0], [7, 2])
self.state.set_agent_pos(self.computer_index[1], [7, 3])
# Set the agent modes
self.state.set_agent_mode(self.player_index[1], 'DEFENSIVE')
self.state.set_agent_mode(self.computer_index[0], 'DEFENSIVE')
self.state.set_agent_mode(self.computer_index[1], 'DEFENSIVE')
# Give the ball to computer 1
ball_possession = self.state.get_ball_possession()
ball_agent_index = ball_possession['agent_index']
self.state.switch_ball(ball_agent_index, self.computer_index[0])
# Take the action
self.env.take_cached_action(self.player_index[0], 'MOVE_RIGHT')
# Update the state
self.env.update_state()
# Player 2 should approach the nearest opponent goal position against
# computer 1
possible_pos = [[1, 2], [0, 3]]
assert self.state.get_agent_pos(self.player_index[1]) in possible_pos
# Computer 1 should avoid player 1
assert self.state.get_agent_pos(self.computer_index[0]) == [8, 2]
# Computer 2 should approach the nearest opponent goal position against
# player 2
assert self.state.get_agent_pos(self.computer_index[1]) == [8, 3]
@pytest.mark.parametrize('seed', range(100))
def test_advance_to_goal(self, seed):
# Set the random seed
random.seed(seed)
# Set the initial positions
self.state.set_agent_pos(self.player_index[0], [1, 2])
self.state.set_agent_pos(self.player_index[1], [1, 3])
self.state.set_agent_pos(self.computer_index[0], [7, 2])
self.state.set_agent_pos(self.computer_index[1], [7, 3])
# Set the agent modes
self.state.set_agent_mode(self.player_index[1], 'OFFENSIVE')
self.state.set_agent_mode(self.computer_index[0], 'OFFENSIVE')
self.state.set_agent_mode(self.computer_index[1], 'OFFENSIVE')
# Give the ball to computer 1
ball_possession = self.state.get_ball_possession()
ball_agent_index = ball_possession['agent_index']
self.state.switch_ball(ball_agent_index, self.computer_index[0])
# Take the action
self.env.take_cached_action(self.player_index[0], 'MOVE_RIGHT')
# Update the state
self.env.update_state()
# Player 2 should intercept against computer 1
assert self.state.get_agent_pos(self.player_index[1]) == [2, 3]
# Computer 1 should approach the furthest goal position against player 1
assert self.state.get_agent_pos(self.computer_index[0]) == [6, 2]
# Computer 2 should intercept against player 2
assert self.state.get_agent_pos(self.computer_index[1]) == [6, 3]
@pytest.mark.parametrize('seed', range(100))
def test_defend_goal(self, seed):
# Set the random seed
random.seed(seed)
# Set the initial positions
self.state.set_agent_pos(self.player_index[0], [1, 2])
self.state.set_agent_pos(self.player_index[1], [1, 3])
self.state.set_agent_pos(self.computer_index[0], [7, 2])
self.state.set_agent_pos(self.computer_index[1], [7, 3])
# Set the agent modes
self.state.set_agent_mode(self.player_index[1], 'OFFENSIVE')
self.state.set_agent_mode(self.computer_index[0], 'DEFENSIVE')
self.state.set_agent_mode(self.computer_index[1], 'DEFENSIVE')
# Give the ball to player 1
ball_possession = self.state.get_ball_possession()
ball_agent_index = ball_possession['agent_index']
        self.state.switch_ball(ball_agent_index, self.player_index[0])
        # Take the action
        self.env.take_cached_action(self.player_index[0], 'MOVE_RIGHT')
        # Update the state
        self.env.update_state()
        # Player 2 should intercept against computer 1
        assert self.state.get_agent_pos(self.player_index[1]) == [2, 3]
        # Computer 1 should approach the nearest opponent goal position against
        # player 1
        assert self.state.get_agent_pos(self.computer_index[0]) == [8, 2]
        # Computer 2 should approach the nearest opponent goal position against
        # player 1
        possible_pos = [[7, 2], [8, 3]]
        assert self.state.get_agent_pos(self.computer_index[1]) in possible_pos

    @pytest.mark.parametrize('seed', range(100))
    def test_intercept_goal(self, seed):
        # Set the random seed
        random.seed(seed)
        # Set the initial positions
        self.state.set_agent_pos(self.player_index[0], [1, 2])
        self.state.set_agent_pos(self.player_index[1], [1, 3])
        self.state.set_agent_pos(self.computer_index[0], [7, 2])
        self.state.set_agent_pos(self.computer_index[1], [7, 3])
        # Set the agent modes
        self.state.set_agent_mode(self.player_index[1], 'DEFENSIVE')
        self.state.set_agent_mode(self.computer_index[0], 'OFFENSIVE')
        self.state.set_agent_mode(self.computer_index[1], 'OFFENSIVE')
        # Give the ball to player 1
        ball_possession = self.state.get_ball_possession()
        ball_agent_index = ball_possession['agent_index']
        self.state.switch_ball(ball_agent_index, self.player_index[0])
        # Take the action
        self.env.take_cached_action(self.player_index[0], 'MOVE_RIGHT')
        # Update the state
        self.env.update_state()
        # Player 2 should approach the nearest opponent goal position against
        # computer 1
        assert self.state.get_agent_pos(self.player_index[1]) == [0, 3]
        # Computer 1 should intercept against player 1
        assert self.state.get_agent_pos(self.computer_index[0]) == [6, 2]
        # Computer 2 should intercept against player 1
        assert self.state.get_agent_pos(self.computer_index[1]) == [6, 3]

    @pytest.mark.parametrize('seed', range(100))
    def test_negative_reward(self, seed):
        # Set the random seed
        random.seed(seed)
        # Set the initial positions
        self.state.set_agent_pos(self.player_index[0], [1, 2])
        self.state.set_agent_pos(self.player_index[1], [7, 0])
        self.state.set_agent_pos(self.computer_index[0], [3, 2])
        self.state.set_agent_pos(self.computer_index[1], [3, 3])
        # Set the agent modes
        self.state.set_agent_mode(self.player_index[1], 'DEFENSIVE')
        self.state.set_agent_mode(self.computer_index[0], 'OFFENSIVE')
        self.state.set_agent_mode(self.computer_index[1], 'OFFENSIVE')
        # Give the ball to player 1
        ball_possession = self.state.get_ball_possession()
        ball_agent_index = ball_possession['agent_index']
        self.state.switch_ball(ball_agent_index, self.player_index[0])
        # The computer agents should score within 100 steps
        for _ in range(100):
            # Take the action
            self.env.take_cached_action(self.player_index[0], 'STAND')
            # Update the state and get the observation
            observation = self.env.update_state()
            # Teleport player 2 back to the original position so that he can
            # never catch the ball
            self.state.set_agent_pos(self.player_index[1], [7, 0])
            if observation.next_state.is_team_win('COMPUTER'):
                break
        assert observation.reward == pytest.approx(-1.0)

    @pytest.mark.parametrize('seed', range(100))
    def test_positive_reward(self, seed):
        # Set the random seed
        random.seed(seed)
        # Set the initial positions
        self.state.set_agent_pos(self.player_index[0], [5, 2])
        self.state.set_agent_pos(self.player_index[1], [5, 3])
        self.state.set_agent_pos(self.computer_index[0], [3, 2])
        self.state.set_agent_pos(self.computer_index[1], [3, 3])
        # Set the computer agent modes
        self.state.set_agent_mode(self.computer_index[0], 'OFFENSIVE')
        self.state.set_agent_mode(self.computer_index[1], 'OFFENSIVE')
        # Give the ball to player 1
        ball_possession = self.state.get_ball_possession()
        ball_agent_index = ball_possession['agent_index']
        self.state.switch_ball(ball_agent_index, self.player_index[0])
        # The player agent should score in exactly 3 steps
        for _ in range(3):
            # Take the action
            self.env.take_cached_action(self.player_index[0], 'MOVE_RIGHT')
            # Update the state
            observation = self.env.update_state()
        assert observation.next_state.is_team_win('PLAYER')
        assert observation.reward == pytest.approx(1.0)
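The parametrized seeds above pin the environment's randomness so that each test case is reproducible. A minimal, environment-independent sketch of why re-seeding before a rollout makes a stochastic pick deterministic (the helper name is illustrative, not part of this test suite):

```python
import random


def seeded_choice(seed, options):
    # Re-seeding immediately before the draw makes the outcome a pure
    # function of (seed, options), which is what lets tests assert on it.
    random.seed(seed)
    return random.choice(options)
```

Calling `seeded_choice` twice with the same seed always yields the same pick, which is the property the 100-seed parametrization relies on.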
59a0e827e44b412d4cd0fc6575e7334a458d42cb | 17,837 | py | Python | exp/baseline.py | zfjsail/oag-moe-transfer-2020 | 06b4a7dfdf4a0bf7efad9ce104fb7ca5e5800149 | ["MIT"]

from os.path import join
import pickle
import json
from collections import Counter

import numpy as np
import sklearn
from sklearn import svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve

from utils import feature_utils
from utils import data_utils
from utils import settings
from utils.address_normalization import addressNormalization


def load_aff_data():
    file_dir = settings.AFF_DATA_DIR
    pos_pairs = data_utils.load_json(file_dir, "label_data_aff_zhoushao.json")[:600]
    pos_pairs = [({"name": p["affiliation"]}, {"DisplayName": p["label"]}) for p in pos_pairs if p["label"] != "[NIF]"]
    neg_pairs = data_utils.load_json(file_dir, 'train_negative_affi_clean.json')[:600]
    neg_pairs = [(p['aminer_affi'], p['mag_affi']) for p in neg_pairs]
    pairs_add = data_utils.load_json(file_dir, "mag_aminer_hard_correct_zfj_copy.json")
    print("add pairs", len(pairs_add))
    pos_pairs += [(p['aminer_affi'], p['mag_affi']) for p in pairs_add if p["label_zfj"] == "1"]
    neg_pairs += [(p['aminer_affi'], p['mag_affi']) for p in pairs_add if p["label_zfj"] == "0"]
    pos_pairs = pos_pairs[-len(neg_pairs):]
    labels = [1] * len(pos_pairs) + [0] * len(neg_pairs)
    pairs = pos_pairs + neg_pairs  # label balance is important
    return pairs, labels


def load_venue_data():
    file_dir = settings.VENUE_DATA_DIR
    train_data = json.load(open(join(file_dir, 'train_filter.txt'), 'r'))
    return train_data
def fit_tfidf_for_aff():
    corpus = []
    pairs, _ = load_aff_data()
    print(len(pairs), pairs)
    for p in pairs:
        corpus.append(p[0]["name"].lower())
        corpus.append(p[1]["DisplayName"].lower())
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)
    out_dir = join(settings.OUT_DIR, "aff")
    with open(join(out_dir, "aff_tfidf.pkl"), "wb") as wf:
        pickle.dump(vectorizer, wf)


def fit_tfidf_for_venue():
    corpus = []
    pairs = load_venue_data()
    print(len(pairs), pairs)
    for p in pairs:
        # print(p)
        corpus.append(p[2].lower())
        corpus.append(p[1].lower())
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)
    out_dir = join(settings.OUT_DIR, "venue")
    with open(join(out_dir, "venue_tfidf.pkl"), "wb") as wf:
        pickle.dump(vectorizer, wf)
def aff_keyword_method():
    pairs, labels = load_aff_data()
    out_dir = join(settings.OUT_DIR, "aff")
    with open(join(out_dir, "aff_tfidf.pkl"), "rb") as rf:
        vectorizer = pickle.load(rf)
    print(vectorizer.vocabulary_)
    vocab = vectorizer.vocabulary_
    idf = vectorizer.idf_
    pairs, labels = sklearn.utils.shuffle(
        pairs, labels, random_state=42
    )
    features = []
    for i, p in enumerate(pairs):
        aff1 = p[0]["name"].lower()
        aff2 = p[1]["DisplayName"].lower()
        aff1_words = feature_utils.get_words(aff1)
        aff2_words = feature_utils.get_words(aff2)
        # print(aff1_words, aff2_words)
        intersec = Counter(aff1_words) & Counter(aff2_words)
        and_idf = sum([intersec[w] * idf[vocab[w]] for w in intersec if w in vocab])
        union = Counter(aff1_words) | Counter(aff2_words)
        # or_idf = sum([union[w] * idf[vocab[w]] for w in union if w in vocab])
        aff1_idf = sum([idf[vocab[w]] for w in aff1_words if w in vocab])
        aff2_idf = sum([idf[vocab[w]] for w in aff2_words if w in vocab])
        cur_jac = and_idf / (aff1_idf + aff2_idf - and_idf)
        # print(and_idf, aff1_idf, aff2_idf, cur_jac, labels[i])
        features.append(cur_jac)
    # An array (not a list) is needed for the boolean masking below
    features = np.array(features)
    n = len(pairs)
    n_train = int(n * 0.6)
    n_valid = int(n * 0.2)
    features_valid = features[n_train: (n_valid + n_train)]
    labels_valid = labels[n_train: (n_valid + n_train)]
    features_test = features[(n_valid + n_train):]
    labels_test = labels[(n_valid + n_train):]
    precs, recs, thrs = precision_recall_curve(labels_valid, features_valid)
    f1s = 2 * precs * recs / (precs + recs)
    f1s = f1s[:-1]
    thrs = thrs[~np.isnan(f1s)]
    f1s = f1s[~np.isnan(f1s)]
    best_thr = thrs[np.argmax(f1s)]
    print("best thr", best_thr)
    y_pred = np.zeros_like(labels_test)
    y_pred[features_test > best_thr] = 1
    prec, rec, f1, _ = precision_recall_fscore_support(labels_test, y_pred, average="binary")
    auc = roc_auc_score(labels_test, features_test)
    print("AUC: %.4f Prec: %.4f Rec: %.4f F1: %.4f" %
          (auc, prec, rec, f1))
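The per-pair score computed above is an IDF-weighted Jaccard over word multisets. Isolated as a pure-Python sketch (the `idf` dict here stands in for the fitted vectorizer's `vocabulary_`/`idf_` lookup; names are illustrative):

```python
from collections import Counter


def weighted_jaccard(words_a, words_b, idf):
    # Multiset intersection, with each shared word weighted by its IDF;
    # the denominator is the IDF mass of the union (a_idf + b_idf - and_idf).
    inter = Counter(words_a) & Counter(words_b)
    and_idf = sum(inter[w] * idf[w] for w in inter if w in idf)
    a_idf = sum(idf[w] for w in words_a if w in idf)
    b_idf = sum(idf[w] for w in words_b if w in idf)
    denom = a_idf + b_idf - and_idf
    return and_idf / denom if denom else 0.0
```

Rare (high-IDF) shared words such as institution names dominate the score, while stop-word-like tokens contribute little, which is the point of weighting the overlap by IDF.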
def venue_keyword_method():
    pairs = load_venue_data()
    out_dir = join(settings.OUT_DIR, "venue")
    with open(join(out_dir, "venue_tfidf.pkl"), "rb") as rf:
        vectorizer = pickle.load(rf)
    print(vectorizer.vocabulary_)
    vocab = vectorizer.vocabulary_
    idf = vectorizer.idf_
    # pairs = sklearn.utils.shuffle(
    #     pairs, random_state=42
    # )
    train_num = 800
    test_num = 200
    n_pos_set = int((train_num + 2 * test_num) / 2)
    neg_pairs = [p for p in pairs if p[0] == 0]
    pos_pairs = [p for p in pairs if p[0] == 1][-n_pos_set:]
    n_pos = len(pos_pairs)
    neg_pairs = neg_pairs[-n_pos:]
    train_data = pos_pairs + neg_pairs
    train_data = sklearn.utils.shuffle(train_data, random_state=37)
    labels = [x[0] for x in train_data]
    features = []
    for i, p in enumerate(train_data):
        aff1 = p[2].lower()
        aff2 = p[1].lower()
        aff1_words = feature_utils.get_words(aff1)
        aff2_words = feature_utils.get_words(aff2)
        intersec = Counter(aff1_words) & Counter(aff2_words)
        and_idf = sum([intersec[w] * idf[vocab[w]] for w in intersec if w in vocab])
        aff1_idf = sum([idf[vocab[w]] for w in aff1_words if w in vocab])
        aff2_idf = sum([idf[vocab[w]] for w in aff2_words if w in vocab])
        cur_jac = and_idf / (aff1_idf + aff2_idf - and_idf)
        features.append(cur_jac)
    # An array (not a list) is needed for the boolean masking below
    features = np.array(features)
    n_train = train_num
    n_valid = test_num
    features_valid = features[n_train: (n_valid + n_train)]
    labels_valid = labels[n_train: (n_valid + n_train)]
    features_test = features[(n_valid + n_train):]
    labels_test = labels[(n_valid + n_train):]
    print("valid", len(labels_valid), "test", len(labels_test))
    precs, recs, thrs = precision_recall_curve(labels_valid, features_valid)
    f1s = 2 * precs * recs / (precs + recs)
    f1s = f1s[:-1]
    thrs = thrs[~np.isnan(f1s)]
    f1s = f1s[~np.isnan(f1s)]
    best_thr = thrs[np.argmax(f1s)]
    print("best thr", best_thr)
    y_pred = np.zeros_like(labels_test)
    y_pred[features_test > best_thr] = 1
    prec, rec, f1, _ = precision_recall_fscore_support(labels_test, y_pred, average="binary")
    auc = roc_auc_score(labels_test, features_test)
    print("AUC: %.4f Prec: %.4f Rec: %.4f F1: %.4f" %
          (auc, prec, rec, f1))
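Both keyword methods pick their operating threshold by maximising F1 on the validation split. The same search can be sketched without sklearn by sweeping every distinct score as a candidate cut-off (a toy stand-in for `precision_recall_curve`; it uses `>=` where the code above uses a strict `>`):

```python
def best_f1_threshold(scores, labels):
    # Try each distinct score as a candidate cut-off and keep the one
    # that maximises F1 = 2 * precision * recall / (precision + recall).
    best_thr, best_f1 = 0.0, -1.0
    for thr in sorted(set(scores)):
        preds = [1 if s >= thr else 0 for s in scores]
        tp = sum(1 for p, y in zip(preds, labels) if p and y)
        fp = sum(1 for p, y in zip(preds, labels) if p and not y)
        fn = sum(1 for p, y in zip(preds, labels) if not p and y)
        if tp == 0:
            continue
        prec, rec = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * prec * rec / (prec + rec)
        if f1 > best_f1:
            best_thr, best_f1 = thr, f1
    return best_thr, best_f1
```

Choosing the threshold on a held-out validation split, as the functions above do, avoids tuning it directly on the test labels.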
def aff_svm():
    pairs, labels = load_aff_data()
    out_dir = join(settings.OUT_DIR, "aff")
    with open(join(out_dir, "aff_tfidf.pkl"), "rb") as rf:
        vectorizer = pickle.load(rf)
    print(vectorizer.vocabulary_)
    vocab = vectorizer.vocabulary_
    idf = vectorizer.idf_
    pairs, labels = sklearn.utils.shuffle(
        pairs, labels, random_state=42
    )
    features = []
    for i, p in enumerate(pairs):
        aff1 = p[0]["name"].lower()
        aff2 = p[1]["DisplayName"].lower()
        aff1_words = feature_utils.get_words(aff1)
        aff2_words = feature_utils.get_words(aff2)
        # print(aff1_words, aff2_words)
        intersec = Counter(aff1_words) & Counter(aff2_words)
        and_idf = sum([intersec[w] * 1 for w in intersec if w in vocab])
        union = Counter(aff1_words) | Counter(aff2_words)
        # or_idf = sum([union[w] * idf[vocab[w]] for w in union if w in vocab])
        aff1_idf = sum([1 for w in aff1_words if w in vocab])
        aff2_idf = sum([1 for w in aff2_words if w in vocab])
        cur_jac = and_idf / (aff1_idf + aff2_idf - and_idf)
        # Use transform (not fit_transform) so the pretrained IDF weights are kept
        aff_idf_vec = vectorizer.transform([aff1, aff2])
        cos = cosine_similarity(aff_idf_vec)
        # print("cos", cos)
        cur_feat = [cur_jac, cos[0, 1]]
        features.append(cur_feat)
    features = np.array(features)
    n = len(pairs)
    n_train = int(n * 0.6)
    n_valid = int(n * 0.2)
    features_train = features[: n_train + n_valid]
    labels_train = labels[: n_train + n_valid]
    features_test = features[(n_valid + n_train):]
    labels_test = labels[(n_valid + n_train):]
    clf = svm.SVC()
    clf.fit(features_train, labels_train)
    y_pred = clf.predict(features_test)
    prec, rec, f1, _ = precision_recall_fscore_support(labels_test, y_pred, average="binary")
    print(prec, rec, f1)
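The second feature fed to the SVM above is a TF-IDF cosine similarity. On plain dense vectors that is just a normalised dot product (an illustrative sketch, not the sparse-matrix path sklearn's `cosine_similarity` takes):

```python
import math


def cosine(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|); 0.0 for a zero vector by convention.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

Because the measure is scale-invariant, two strings with proportional TF-IDF vectors score 1.0 regardless of their lengths.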
def venue_svm():
    pairs = load_venue_data()
    out_dir = join(settings.OUT_DIR, "venue")
    with open(join(out_dir, "venue_tfidf.pkl"), "rb") as rf:
        vectorizer = pickle.load(rf)
    print(vectorizer.vocabulary_)
    vocab = vectorizer.vocabulary_
    idf = vectorizer.idf_
    train_num = 800
    test_num = 200
    n_pos_set = int((train_num + 2 * test_num) / 2)
    neg_pairs = [p for p in pairs if p[0] == 0]
    pos_pairs = [p for p in pairs if p[0] == 1][-n_pos_set:]
    n_pos = len(pos_pairs)
    neg_pairs = neg_pairs[-n_pos:]
    train_data = pos_pairs + neg_pairs
    train_data = sklearn.utils.shuffle(train_data, random_state=37)
    labels = [x[0] for x in train_data]
    features = []
    for i, p in enumerate(train_data):
        aff1 = p[2].lower()
        aff2 = p[1].lower()
        aff1_words = feature_utils.get_words(aff1)
        aff2_words = feature_utils.get_words(aff2)
        intersec = Counter(aff1_words) & Counter(aff2_words)
        and_idf = sum([intersec[w] * 1 for w in intersec if w in vocab])
        aff1_idf = sum([1 for w in aff1_words if w in vocab])
        aff2_idf = sum([1 for w in aff2_words if w in vocab])
        cur_jac = and_idf / (aff1_idf + aff2_idf - and_idf)
        # Use transform (not fit_transform) so the pretrained IDF weights are kept
        aff_idf_vec = vectorizer.transform([aff1, aff2])
        cos = cosine_similarity(aff_idf_vec)
        # Compare word sequences (split), not raw strings, when building the overlap
        cur_v_mag = aff1.split()
        cur_v_aminer = aff2.split()
        overlap = set(cur_v_mag).intersection(cur_v_aminer)
        new_seq_mag = []
        new_seq_aminer = []
        for w in cur_v_mag:
            if w in overlap:
                new_seq_mag.append(w)
        for w in cur_v_aminer:
            if w in overlap:
                new_seq_aminer.append(w)
        intersec = Counter(new_seq_aminer) & Counter(new_seq_mag)
        and_idf = sum([intersec[w] * 1 for w in intersec if w in vocab])
        aminer_idf_key = sum([1 for w in new_seq_aminer if w in vocab])
        mag_idf_key = sum([1 for w in new_seq_mag if w in vocab])
        denom = aminer_idf_key + mag_idf_key - and_idf
        if denom != 0:
            cur_jac_key = and_idf / denom
            print("key jac", cur_jac_key)
            aff_idf_vec = vectorizer.transform([" ".join(new_seq_aminer), " ".join(new_seq_mag)])
            cos_key = cosine_similarity(aff_idf_vec)
        else:
            cur_jac_key = 0
            cos_key = -1
            # print("here")
        # cur_feat = [cur_jac, cos[0, 1], cur_jac_key, cos_key]
        cur_feat = [cur_jac, cos[0, 1]]
        # print("cur feature", cur_feat)
        features.append(cur_feat)
    features = np.array(features)
    # n = len(pairs)
    n_train = train_num
    n_valid = test_num
    features_train = features[: n_train + n_valid]
    labels_train = labels[: n_train + n_valid]
    features_test = features[(n_valid + n_train):]
    labels_test = labels[(n_valid + n_train):]
    print("n_test", len(labels_test))
    clf = svm.SVC()
    clf.fit(features_train, labels_train)
    y_pred = clf.predict(features_test)
    prec, rec, f1, _ = precision_recall_fscore_support(labels_test, y_pred, average="binary")
    print(prec, rec, f1)
def gen_aff_record_linkage_table():
    pairs, labels = load_aff_data()
    pairs, labels = sklearn.utils.shuffle(
        pairs, labels, random_state=42
    )
    n = len(pairs)
    n_train = int(n * 0.6)
    n_valid = int(n * 0.2)
    aff_to_aid = {}
    cur_idx = 0
    # table1_aff = []
    # table2_aff = []
    out_dir = join(settings.OUT_DIR, "aff")
    wf1 = open(join(out_dir, "aff_train1.csv"), "w")
    wf2 = open(join(out_dir, "aff_train2.csv"), "w")
    wf1.write("name,main_body,uid\n")
    wf2.write("name,main_body,uid\n")
    test_pairs = []
    valid_pairs = []
    neg_cnt = 0
    an = addressNormalization()
    for i, p in enumerate(pairs):
        aff1_short = an.find_inst(p[0]["name"])[1].lower().replace(",", " ")
        aff1 = p[0]["name"].lower().replace(",", " ")
        # aff2_short = an.find_inst(p[1]["DisplayName"])[1].lower()
        aff2 = p[1]["DisplayName"].lower().replace(",", " ")
        label = labels[i]
        # if aff2 in aff_to_aid:
        #     continue
        if label == 1:
            aff_to_aid[aff2] = cur_idx
            aff_to_aid[aff1] = cur_idx
            cur_idx += 1
        else:
            aff_to_aid[aff2] = cur_idx
            aff_to_aid[aff1] = cur_idx + 1
            cur_idx += 2
        if i < n_train:
            wf1.write(aff1 + "," + aff1_short + "," + str(aff_to_aid[aff1]) + "\n")
            wf2.write(aff2 + "," + aff2 + "," + str(aff_to_aid[aff2]) + "\n")
        elif i < n_train + n_valid:
            valid_pairs.append(
                ({"name": aff1, "main_body": aff1_short, "uid": str(aff_to_aid[aff1])},
                 {"name": aff2, "main_body": aff2, "uid": str(aff_to_aid[aff2])})
            )
        else:
            test_pairs.append(
                ({"name": aff1, "main_body": aff1_short, "uid": str(aff_to_aid[aff1])},
                 {"name": aff2, "main_body": aff2, "uid": str(aff_to_aid[aff2])})
            )
            if aff_to_aid[aff1] != aff_to_aid[aff2]:
                neg_cnt += 1
    wf1.close()
    wf2.close()
    print(len(test_pairs), neg_cnt)
    # Dump the validation pairs (not the test pairs) to the validation file
    data_utils.dump_json(valid_pairs, out_dir, "valid_aff_dedupe_pairs.json")
    data_utils.dump_json(test_pairs, out_dir, "test_aff_dedupe_pairs.json")
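The entity-ID bookkeeping above encodes the supervision into the linkage tables: a positive pair shares one `uid`, while a negative pair gets two distinct `uid`s. The scheme in isolation (a hypothetical helper, not part of this module):

```python
def assign_uids(pairs, labels):
    # Matched pairs (label 1) share an entity id; mismatched pairs get
    # two distinct ids, so downstream linkage tools see them as different.
    uid, name_to_uid = 0, {}
    for (a, b), label in zip(pairs, labels):
        if label == 1:
            name_to_uid[a] = name_to_uid[b] = uid
            uid += 1
        else:
            name_to_uid[a], name_to_uid[b] = uid, uid + 1
            uid += 2
    return name_to_uid
```

A record-linkage evaluator can then score a predicted match as correct exactly when the two records carry the same `uid`.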
def gen_venue_record_linkage_table():
    pairs = load_venue_data()
    train_num = 800
    test_num = 200
    n_pos_set = int((train_num + 2 * test_num) / 2)
    neg_pairs = [p for p in pairs if p[0] == 0]
    pos_pairs = [p for p in pairs if p[0] == 1][-n_pos_set:]
    n_pos = len(pos_pairs)
    neg_pairs = neg_pairs[-n_pos:]
    train_data = pos_pairs + neg_pairs
    train_data = sklearn.utils.shuffle(train_data, random_state=37)
    labels = [x[0] for x in train_data]
    # n = len(pairs)
    n_train = train_num
    n_valid = test_num
    aff_to_aid = {}
    cur_idx = 0
    out_dir = join(settings.OUT_DIR, "venue")
    wf1 = open(join(out_dir, "venue_train1.csv"), "w")
    wf2 = open(join(out_dir, "venue_train2.csv"), "w")
    wf1.write("name,main_body,uid\n")
    wf2.write("name,main_body,uid\n")
    test_pairs = []
    valid_pairs = []
    neg_cnt = 0
    # an = addressNormalization()
    for i, p in enumerate(train_data):
        aff1 = p[2].lower().replace(",", " ")
        aff2 = p[1].lower().replace(",", " ")
        label = labels[i]
        if label == 1:
            aff_to_aid[aff2] = cur_idx
            aff_to_aid[aff1] = cur_idx
            cur_idx += 1
        else:
            aff_to_aid[aff2] = cur_idx
            aff_to_aid[aff1] = cur_idx + 1
            cur_idx += 2
        cur_v_mag = aff1.split()
        cur_v_aminer = aff2.split()
        overlap = set(cur_v_mag).intersection(cur_v_aminer)
        new_seq_mag = []
        new_seq_aminer = []
        for w in cur_v_mag:
            if w in overlap:
                new_seq_mag.append(w)
        for w in cur_v_aminer:
            if w in overlap:
                new_seq_aminer.append(w)
        if i < n_train:
            wf1.write(aff1 + "," + " ".join(new_seq_mag) + "," + str(aff_to_aid[aff1]) + "\n")
            wf2.write(aff2 + "," + " ".join(new_seq_aminer) + "," + str(aff_to_aid[aff2]) + "\n")
        elif i < n_train + n_valid:
            valid_pairs.append(
                ({"name": aff1, "main_body": " ".join(new_seq_mag), "uid": str(aff_to_aid[aff1])},
                 {"name": aff2, "main_body": " ".join(new_seq_aminer), "uid": str(aff_to_aid[aff2])})
            )
        else:
            test_pairs.append(
                ({"name": aff1, "main_body": " ".join(new_seq_mag), "uid": str(aff_to_aid[aff1])},
                 {"name": aff2, "main_body": " ".join(new_seq_aminer), "uid": str(aff_to_aid[aff2])})
            )
            if aff_to_aid[aff1] != aff_to_aid[aff2]:
                neg_cnt += 1
    wf1.close()
    wf2.close()
    print(len(test_pairs), neg_cnt)
    # Dump the validation pairs (not the test pairs) to the validation file
    data_utils.dump_json(valid_pairs, out_dir, "valid_venue_dedupe_pairs.json")
    data_utils.dump_json(test_pairs, out_dir, "test_venue_dedupe_pairs.json")
if __name__ == "__main__":
    # pairs, _ = load_aff_data()
    # fit_tfidf_for_aff()
    # aff_keyword_method()
    # aff_svm()
    gen_aff_record_linkage_table()
    # fit_tfidf_for_venue()
    # venue_keyword_method()
    # venue_svm()
    # gen_venue_record_linkage_table()
59cb4e0a3ef6f6651e7a7f459b7f53d63c2f8e01 | 23 | py | Python | test_linter/1/11/111/second.py | noname19871/pyStaticAnalyzer | 05c36829c585c92772b551a5f1d9c0b4626ecb8d | ["MIT"]

def fn():
    return 2
e6076d0828fe46fc70ca30d12cd393c9cde273e7 | 8,972 | py | Python | errors/address.py | CloudCIX/membership | a7a62918c7d7c65dd1bf2068431dbf2ec2573e4b | ["Apache-2.0"]

"""
Error Codes for all of the Methods in the Address Service
"""
# List
membership_address_list_001 = (
'One or more of the sent search fields contains invalid values. Please check the sent parameters and ensure they '
'match the required patterns.'
)
# Create
membership_address_create_101 = 'The "member_id" parameter is invalid. "member_id" is required and must be an integer.'
membership_address_create_102 = 'The "member_id" parameter is invalid. "member_id" must belong to a valid Member.'
membership_address_create_103 = 'The "name" parameter is invalid. "name" is required and must be a string.'
membership_address_create_104 = 'The "name" parameter is invalid. "name" cannot be longer than 250 characters.'
membership_address_create_105 = 'The "address1" parameter is invalid. "address1" is required and must be a string.'
membership_address_create_106 = 'The "address1" parameter is invalid. "address1" cannot be longer than 100 characters.'
membership_address_create_107 = 'The "address2" parameter is invalid. "address2" cannot be longer than 100 characters.'
membership_address_create_108 = 'The "address3" parameter is invalid. "address3" cannot be longer than 100 characters.'
membership_address_create_109 = 'The "city" parameter is invalid. "city" is required and must be a string.'
membership_address_create_110 = 'The "city" parameter is invalid. "city" cannot be longer than 50 characters.'
membership_address_create_111 = (
'The "country_id" parameter is invalid. "country_id" is required and must be an integer.'
)
membership_address_create_112 = 'The "country_id" parameter is invalid. "country_id" must belong to a valid Country.'
membership_address_create_113 = 'The "subdivision_id" parameter is invalid. "subdivision_id" must be an integer.'
membership_address_create_114 = (
'The "subdivision_id" parameter is invalid. "subdivision_id" must belong to a valid Subdivision in the chosen '
'Country.'
)
membership_address_create_115 = 'The "postcode" parameter is invalid. "postcode" cannot be longer than 20 characters.'
membership_address_create_116 = 'The "phones" parameter is invalid. "phones" must be an array.'
membership_address_create_117 = 'The "phones" parameter is invalid. Each item in the array must be an object.'
membership_address_create_118 = (
'The "phones" parameter is invalid. Each item in the array must have both the "name" and "number" keys.'
)
membership_address_create_119 = (
'The "phones" parameter is invalid. One of the sent values for "number" is not a valid phone number.'
)
membership_address_create_120 = 'The "email" parameter is invalid. "email" cannot be longer than 255 characters.'
membership_address_create_121 = 'The "website" parameter is invalid. "website" cannot be longer than 50 characters.'
membership_address_create_122 = 'The "gln" parameter is invalid. "gln" cannot be longer than 13 characters.'
membership_address_create_123 = (
'The "vat_number" parameter is invalid. "vat_number" cannot be longer than 20 characters.'
)
membership_address_create_124 = (
'The "language_id" parameter is invalid. "language_id" is required and must be an integer.'
)
membership_address_create_125 = 'The "language_id" parameter is invalid. "language_id" must belong to a valid Language.'
membership_address_create_126 = (
'The "currency_id" parameter is invalid. "currency_id" is required and must be an integer.'
)
membership_address_create_127 = 'The "currency_id" parameter is invalid. "currency_id" must belong to a valid Currency.'
membership_address_create_128 = (
'The "billing_address_id" parameter is invalid. "billing_address_id" must be an integer.'
)
membership_address_create_129 = (
'The "billing_address_id" parameter is invalid. "billing_address_id" must belong to a valid Address in the '
'specified Member.'
)
membership_address_create_201 = 'You do not have permission to make this request. Your Member must be self-managed.'
membership_address_create_202 = (
'You do not have permission to make this request. You are attempting to make an Address in another self-managed '
'Member.'
)
# Read
membership_address_read_001 = 'The "pk" path parameter is invalid. "pk" must belong to a valid Address record.'
membership_address_read_201 = (
'You do not have permission to make this request. Your Address must be linked to the Address you want to read.'
)
# Update
membership_address_update_001 = 'The "pk" path parameter is invalid. "pk" must belong to a valid Address record.'
membership_address_update_101 = 'The "name" parameter is invalid. "name" is required and must be a string.'
membership_address_update_102 = 'The "name" parameter is invalid. "name" cannot be longer than 250 characters.'
membership_address_update_103 = 'The "address1" parameter is invalid. "address1" is required and must be a string.'
membership_address_update_104 = 'The "address1" parameter is invalid. "address1" cannot be longer than 100 characters.'
membership_address_update_105 = 'The "address2" parameter is invalid. "address2" cannot be longer than 100 characters.'
membership_address_update_106 = 'The "address3" parameter is invalid. "address3" cannot be longer than 100 characters.'
membership_address_update_107 = 'The "city" parameter is invalid. "city" is required and must be a string.'
membership_address_update_108 = 'The "city" parameter is invalid. "city" cannot be longer than 50 characters.'
membership_address_update_109 = (
'The "country_id" parameter is invalid. "country_id" is required and must be an integer.'
)
membership_address_update_110 = 'The "country_id" parameter is invalid. "country_id" must belong to a valid Country.'
membership_address_update_111 = 'The "subdivision_id" parameter is invalid. "subdivision_id" must be an integer.'
membership_address_update_112 = (
'The "subdivision_id" parameter is invalid. "subdivision_id" must belong to a valid Subdivision in the chosen '
'Country.'
)
membership_address_update_113 = 'The "postcode" parameter is invalid. "postcode" cannot be longer than 20 characters.'
membership_address_update_114 = 'The "phones" parameter is invalid. "phones" must be an array.'
membership_address_update_115 = 'The "phones" parameter is invalid. Each item in the array must be an object.'
membership_address_update_116 = (
'The "phones" parameter is invalid. Each item in the array must have both the "name" and "number" keys.'
)
membership_address_update_117 = (
'The "phones" parameter is invalid. One of the sent values for "number" is not a valid phone number.'
)
membership_address_update_118 = 'The "email" parameter is invalid. "email" cannot be longer than 255 characters.'
membership_address_update_119 = 'The "website" parameter is invalid. "website" cannot be longer than 50 characters.'
membership_address_update_120 = 'The "gln" parameter is invalid. "gln" cannot be longer than 13 characters.'
membership_address_update_121 = (
'The "vat_number" parameter is invalid. "vat_number" cannot be longer than 20 characters.'
)
membership_address_update_122 = (
'The "language_id" parameter is invalid. "language_id" is required and must be an integer.'
)
membership_address_update_123 = 'The "language_id" parameter is invalid. "language_id" must belong to a valid Language.'
membership_address_update_124 = (
'The "currency_id" parameter is invalid. "currency_id" is required and must be an integer.'
)
membership_address_update_125 = 'The "currency_id" parameter is invalid. "currency_id" must belong to a valid Currency.'
membership_address_update_126 = (
'The "billing_address_id" parameter is invalid. "billing_address_id" must be an integer.'
)
membership_address_update_127 = (
'The "billing_address_id" parameter is invalid. "billing_address_id" must belong to a valid Address in the '
'specified Member.'
)
membership_address_update_128 = 'The "cloud_region" parameter is invalid. "cloud_region" must be a boolean.'
membership_address_update_129 = (
'The "cloud_region" parameter is invalid. The address member can only have the role cloud_region if they are '
'self-managed.'
)
membership_address_update_201 = (
    'You do not have permission to execute this method. Only Member 1 administrators can change the role of an '
    'Address.'
)
membership_address_update_202 = 'You do not have permission to make this request. Your Member must be self-managed.'
membership_address_update_203 = (
'You do not have permission to make this request. You are attempting to make an Address in another self-managed '
'Member.'
)
membership_address_update_204 = (
'You do not have permission to make this request. Your Address must be linked to the Address you are trying to '
'update.'
)
# Verbose List
membership_verbose_address_list_001 = (
'One or more of the sent search fields contains invalid values. Please check the sent parameters and ensure they '
'match the required patterns.'
)
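Modules of flat message constants like this one are typically consumed by resolving an error identifier to its text at runtime. A hypothetical consumer-side helper (the constant below copies one message from this module; the function and its fallback text are illustrative, not part of the source):

```python
# One constant mirrored from the module, to make the sketch self-contained.
membership_address_read_001 = (
    'The "pk" path parameter is invalid. "pk" must belong to a valid Address record.'
)


def get_error_message(code_name, namespace=None):
    # Look the identifier up in the given namespace (default: this module's
    # globals), falling back to a generic message for unknown codes.
    namespace = globals() if namespace is None else namespace
    return namespace.get(code_name, 'Unknown error code: %s' % code_name)
```

Keeping the codes as module-level names means the API layer can build a response from just the identifier string logged or returned by a view.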
050d1fc4a2b5466ff4c0d1ba1a4bf2a1addedbaa | 2,836 | py | Python | omaha_server/omaha/migrations/0025_auto_20151209_1040.py | makar21/omaha-server | b84cdf6e67d9106e7a86b447204de4f82397b019 | ["Apache-2.0"]

# -*- coding: utf-8 -*-
from django.db import migrations, models
import django_extensions.db.fields


class Migration(migrations.Migration):

    dependencies = [
        ('omaha', '0024_merge'),
    ]

    operations = [
        migrations.AlterField(
            model_name='action',
            name='created',
            field=django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created'),
        ),
        migrations.AlterField(
            model_name='action',
            name='modified',
            field=django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified'),
        ),
        migrations.AlterField(
            model_name='application',
            name='created',
            field=django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created'),
        ),
        migrations.AlterField(
            model_name='application',
            name='modified',
            field=django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified'),
        ),
        migrations.AlterField(
            model_name='channel',
            name='created',
            field=django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created'),
        ),
        migrations.AlterField(
            model_name='channel',
            name='modified',
            field=django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified'),
        ),
        migrations.AlterField(
            model_name='data',
            name='created',
            field=django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created'),
        ),
        migrations.AlterField(
            model_name='data',
            name='modified',
            field=django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified'),
        ),
        migrations.AlterField(
            model_name='platform',
            name='created',
            field=django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created'),
        ),
        migrations.AlterField(
            model_name='platform',
            name='modified',
            field=django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified'),
        ),
        migrations.AlterField(
            model_name='version',
            name='created',
            field=django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created'),
        ),
        migrations.AlterField(
            model_name='version',
            name='modified',
            field=django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified'),
        ),
    ]
| 37.315789 | 112 | 0.623061 | 257 | 2,836 | 6.657588 | 0.143969 | 0.121566 | 0.136762 | 0.18235 | 0.916423 | 0.916423 | 0.849211 | 0.849211 | 0.849211 | 0.849211 | 0 | 0.002397 | 0.264457 | 2,836 | 75 | 113 | 37.813333 | 0.817833 | 0.007405 | 0 | 0.882353 | 0 | 0 | 0.099893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 0.073529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
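The migration above repeats the same created/modified `AlterField` pair for six models. As an illustration only (plain tuples stand in for the `migrations.AlterField(...)` objects so the sketch runs without Django, and `build_timestamp_alterations` is a hypothetical helper, not part of the repo), such an operation list could be generated with a loop:

```python
# Hypothetical helper: generate the repeated created/modified AlterField
# pairs from a list of model names. Plain tuples stand in for Django's
# migrations.AlterField(...) objects so the sketch runs without Django.
MODELS = ('action', 'application', 'channel', 'data', 'platform', 'version')

def build_timestamp_alterations(models):
    operations = []
    for model_name in models:
        # One entry per timestamp field, mirroring the migration above.
        operations.append(('AlterField', model_name, 'created'))
        operations.append(('AlterField', model_name, 'modified'))
    return operations

ops = build_timestamp_alterations(MODELS)
print(len(ops))  # 12 operations, matching the hand-written migration
```

In the real migration, each tuple would instead be a `migrations.AlterField` built from the corresponding `django_extensions` field class.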
05562effcacaddb6461e82d720192f54e43978c5 | 9,884 | py | Python | Make_Plots.py | bernpb/LHL_Final_Project | b5ee105e4f840269d1ec76282511d0620f0ef2c7 | [
"MIT"
] | null | null | null | Make_Plots.py | bernpb/LHL_Final_Project | b5ee105e4f840269d1ec76282511d0620f0ef2c7 | [
"MIT"
] | null | null | null | Make_Plots.py | bernpb/LHL_Final_Project | b5ee105e4f840269d1ec76282511d0620f0ef2c7 | [
"MIT"
] | null | null | null | import streamlit as st
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import pandas as pd
def make_plots(player, data, season):
    """
    Make plots to display for a given player based on their position.
    Inputs:
    - player: player name
    - data: dataframe of weekly player stats
    - season: starting year of the season to plot
    Output:
    - 4 subplots for the given position
    """
    # Make a list of unique players
    players_list = data['Name'].unique().tolist()
    if player in players_list:
        visual = data[data['Name'] == player]
        position = visual.iloc[0]['Position']
        if position == 'QB':
            fig = make_subplots(rows=2, cols=2)
            # subplot_titles = ('Fantasy Points PPR', 'Rushing Yards', 'Passing Yards', 'Receiving Yards'))
            # Display fantasy points
            fig.add_trace(go.Scatter(name='Fantasy Points PPR',
                                     x=visual['Week'],
                                     y=visual['FantasyPointsPPR'],
                                     fill='tozeroy'),
                          row=1,
                          col=1)
            # Display rushing yards
            fig.add_trace(go.Scatter(name='Rushing Yards',
                                     x=visual['Week'],
                                     y=visual['RushingYards'],
                                     fill='tozeroy'),
                          row=1,
                          col=2)
            # Display passing yards
            fig.add_trace(go.Scatter(name='Passing Yards',
                                     x=visual['Week'],
                                     y=visual['PassingYards'],
                                     fill='tozeroy'),
                          row=2,
                          col=1)
            # Display touchdowns
            fig.add_trace(go.Bar(name='Touchdowns',
                                 x=visual['Week'],
                                 y=visual['TotalTouchdowns']),
                          row=2,
                          col=2)
            # Update xaxis properties
            fig.update_xaxes(title_text="Week", row=1, col=1)
            fig.update_xaxes(title_text="Week", row=1, col=2)
            fig.update_xaxes(title_text="Week", row=2, col=1)
            fig.update_xaxes(title_text="Week", row=2, col=2)
            fig.update_layout(template='plotly_dark',
                              title={'text': f'{player} fantasy output by week for the {season}-{season + 1} season.',
                                     'y': 0.9,
                                     'x': 0.135,
                                     'yanchor': 'top',
                                     'xanchor': 'left'},
                              height=600,
                              width=1000)
            return st.plotly_chart(fig, use_container_width=True)
        elif position == 'WR':
            fig = make_subplots(rows=2, cols=2)
            # subplot_titles = ('Fantasy Points PPR', 'Rushing Yards', 'Passing Yards', 'Receiving Yards'))
            # Display fantasy points
            fig.add_trace(go.Scatter(name='Fantasy Points PPR',
                                     x=visual['Week'],
                                     y=visual['FantasyPointsPPR'],
                                     fill='tozeroy'),
                          row=1,
                          col=1)
            # Display receptions
            fig.add_trace(go.Scatter(name='Receptions',
                                     x=visual['Week'],
                                     y=visual['Receptions'],
                                     fill='tozeroy'),
                          row=1,
                          col=2)
            # Display receiving yards
            fig.add_trace(go.Scatter(name='Receiving Yards',
                                     x=visual['Week'],
                                     y=visual['ReceivingYards'],
                                     fill='tozeroy'),
                          row=2,
                          col=1)
            # Display touchdowns
            fig.add_trace(go.Bar(name='Touchdowns',
                                 x=visual['Week'],
                                 y=visual['TotalTouchdowns']),
                          row=2,
                          col=2)
            # Update xaxis properties
            fig.update_xaxes(title_text="Week", row=1, col=1)
            fig.update_xaxes(title_text="Week", row=1, col=2)
            fig.update_xaxes(title_text="Week", row=2, col=1)
            fig.update_xaxes(title_text="Week", row=2, col=2)
            fig.update_layout(template='plotly_dark',
                              title={'text': f'{player} fantasy output by week for the {season}-{season + 1} season.',
                                     'y': 0.9,
                                     'x': 0.135,
                                     'yanchor': 'top',
                                     'xanchor': 'left'},
                              height=600,
                              width=1000)
            return st.plotly_chart(fig, use_container_width=True)
        elif position == 'RB':
            fig = make_subplots(rows=2, cols=2)
            # subplot_titles = ('Fantasy Points PPR', 'Rushing Yards', 'Passing Yards', 'Receiving Yards'))
            # Display fantasy points
            fig.add_trace(go.Scatter(name='Fantasy Points PPR',
                                     x=visual['Week'],
                                     y=visual['FantasyPointsPPR'],
                                     fill='tozeroy'),
                          row=1,
                          col=1)
            # Display rushing yards
            fig.add_trace(go.Scatter(name='Rushing Yards',
                                     x=visual['Week'],
                                     y=visual['RushingYards'],
                                     fill='tozeroy'),
                          row=1,
                          col=2)
            # Display receiving yards
            fig.add_trace(go.Scatter(name='Receiving Yards',
                                     x=visual['Week'],
                                     y=visual['ReceivingYards'],
                                     fill='tozeroy'),
                          row=2,
                          col=2)
            # Display touchdowns
            fig.add_trace(go.Bar(name='Touchdowns',
                                 x=visual['Week'],
                                 y=visual['TotalTouchdowns']),
                          row=2,
                          col=1)
            # Update xaxis properties
            fig.update_xaxes(title_text="Week", row=1, col=1)
            fig.update_xaxes(title_text="Week", row=1, col=2)
            fig.update_xaxes(title_text="Week", row=2, col=1)
            fig.update_xaxes(title_text="Week", row=2, col=2)
            fig.update_layout(template='plotly_dark',
                              title={'text': f'{player} fantasy output by week for the {season}-{season + 1} season.',
                                     'y': 0.9,
                                     'x': 0.135,
                                     'yanchor': 'top',
                                     'xanchor': 'left'},
                              height=600,
                              width=1000)
            return st.plotly_chart(fig, use_container_width=True)
        elif position == 'TE':
            fig = make_subplots(rows=2, cols=2)
            # subplot_titles = ('Fantasy Points PPR', 'Rushing Yards', 'Passing Yards', 'Receiving Yards'))
            # Display fantasy points
            fig.add_trace(go.Scatter(name='Fantasy Points PPR',
                                     x=visual['Week'],
                                     y=visual['FantasyPointsPPR'],
                                     fill='tozeroy'),
                          row=1,
                          col=1)
            # Display rushing yards
            fig.add_trace(go.Scatter(name='Rushing Yards',
                                     x=visual['Week'],
                                     y=visual['RushingYards'],
                                     fill='tozeroy'),
                          row=1,
                          col=2)
            # Display receiving yards
            fig.add_trace(go.Scatter(name='Receiving Yards',
                                     x=visual['Week'],
                                     y=visual['ReceivingYards'],
                                     fill='tozeroy'),
                          row=2,
                          col=1)
            # Display touchdowns
            fig.add_trace(go.Bar(name='Touchdowns',
                                 x=visual['Week'],
                                 y=visual['TotalTouchdowns']),
                          row=2,
                          col=2)
            # Update xaxis properties
            fig.update_xaxes(title_text="Week", row=1, col=1)
            fig.update_xaxes(title_text="Week", row=1, col=2)
            fig.update_xaxes(title_text="Week", row=2, col=1)
            fig.update_xaxes(title_text="Week", row=2, col=2)
            fig.update_layout(template='plotly_dark',
                              title={'text': f'{player} fantasy output by week for the {season}-{season + 1} season.',
                                     'y': 0.9,
                                     'x': 0.135,
                                     'yanchor': 'top',
                                     'xanchor': 'left'},
                              height=600,
                              width=1000)
            return st.plotly_chart(fig, use_container_width=True)
        else:
            fig = make_subplots(rows=2, cols=2)
            # subplot_titles = ('Fantasy Points PPR', 'Rushing Yards', 'Passing Yards', 'Receiving Yards'))
            # Display fantasy points
            fig.add_trace(go.Bar(name='Fantasy Points PPR',
                                 x=visual['Week'],
                                 y=visual['FantasyPointsPPR']),
                          row=1,
                          col=1)
            # Display field goals made
            fig.add_trace(go.Bar(name='Field Goals Made',
                                 x=visual['Week'],
                                 y=visual['FieldGoalsMade']),
                          row=1,
                          col=2)
            # Display extra points made
            fig.add_trace(go.Bar(name='Extra Points Made',
                                 x=visual['Week'],
                                 y=visual['ExtraPointsMade']),
                          row=2,
                          col=2)
            # Display field goal completion percentage
            fig.add_trace(go.Bar(name='Field Goal Completion Percentage',
                                 x=visual['Week'],
                                 y=visual['FieldGoalsMade'] / visual['FieldGoalsAttempted']),
                          row=2,
                          col=1)
            # Update xaxis properties
            fig.update_xaxes(title_text="Week", row=1, col=1)
            fig.update_xaxes(title_text="Week", row=1, col=2)
            fig.update_xaxes(title_text="Week", row=2, col=1)
            fig.update_xaxes(title_text="Week", row=2, col=2)
            fig.update_layout(template='plotly_dark',
                              title={'text': f'{player} fantasy output by week for the {season}-{season + 1} season.',
                                     'y': 0.9,
                                     'x': 0.135,
                                     'yanchor': 'top',
                                     'xanchor': 'left'},
                              height=600,
                              width=1000)
            return st.plotly_chart(fig, use_container_width=True)
    else:
        return st.subheader(f"It doesn't look like '{player}' has played any games this season. We won't be able to provide \
a valid prediction. Try somebody else.") | 36.880597 | 123 | 0.498584 | 1,092 | 9,884 | 4.421245 | 0.124542 | 0.046603 | 0.045568 | 0.053853 | 0.867026 | 0.863297 | 0.839271 | 0.837821 | 0.8343 | 0.8343 | 0 | 0.026205 | 0.374545 | 9,884 | 268 | 124 | 36.880597 | 0.754772 | 0.128389 | 0 | 0.855 | 0 | 0.005 | 0.170626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005 | false | 0.01 | 0.02 | 0 | 0.055 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
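The four position branches in `make_plots` above differ only in which metric occupies each subplot cell, so the duplication could be collapsed into a lookup table. A hedged sketch (`PANELS`, `KICKER_PANELS`, and `panel_metrics` are hypothetical names, and `'FieldGoalCompletionPct'` stands in for the derived `FieldGoalsMade / FieldGoalsAttempted` ratio; the other strings mirror the dataframe columns used in the file):

```python
# Hypothetical lookup: subplot metric per position, in
# (row1-col1, row1-col2, row2-col1, row2-col2) order as laid out above.
PANELS = {
    'QB': ('FantasyPointsPPR', 'RushingYards', 'PassingYards', 'TotalTouchdowns'),
    'WR': ('FantasyPointsPPR', 'Receptions', 'ReceivingYards', 'TotalTouchdowns'),
    'RB': ('FantasyPointsPPR', 'RushingYards', 'TotalTouchdowns', 'ReceivingYards'),
    'TE': ('FantasyPointsPPR', 'RushingYards', 'ReceivingYards', 'TotalTouchdowns'),
}
# Unknown positions (kickers in the file above) fall back to kicking stats.
KICKER_PANELS = ('FantasyPointsPPR', 'FieldGoalsMade',
                 'FieldGoalCompletionPct', 'ExtraPointsMade')

def panel_metrics(position):
    return PANELS.get(position, KICKER_PANELS)

print(panel_metrics('QB'))  # ('FantasyPointsPPR', 'RushingYards', 'PassingYards', 'TotalTouchdowns')
```

A single plotting loop over `panel_metrics(position)` could then replace the four nearly identical branches.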
055ac1fa7305dce126d168121c3a97d6c66793e6 | 8,928 | py | Python | pywt/tests/test_mra.py | tbbharaj/pywt | 9a72143be347481e2276371efb41ae0266b9c808 | [
"MIT"
] | 1,435 | 2015-07-29T18:28:27.000Z | 2022-03-31T10:16:46.000Z | pywt/tests/test_mra.py | tbbharaj/pywt | 9a72143be347481e2276371efb41ae0266b9c808 | [
"MIT"
] | 547 | 2015-07-29T18:10:15.000Z | 2022-03-24T18:42:57.000Z | pywt/tests/test_mra.py | tbbharaj/pywt | 9a72143be347481e2276371efb41ae0266b9c808 | [
"MIT"
] | 421 | 2015-07-30T13:08:25.000Z | 2022-03-24T11:10:07.000Z | #!/usr/bin/env python
import numpy as np
import pytest
from numpy.testing import assert_allclose
import pywt
from pywt import data
# tolerances used in accuracy comparisons
tol_single = 1e-6
tol_double = 1e-13
atol = 1e-7
####
# 1d mra tests
####
@pytest.mark.parametrize('wavelet', ['db2', 'sym4', 'coif5'])
@pytest.mark.parametrize('transform', ['dwt', 'swt'])
@pytest.mark.parametrize('mode', pywt.Modes.modes)
@pytest.mark.parametrize(
'dtype', ['float32', 'float64', 'complex64', 'complex128']
)
def test_mra_roundtrip(wavelet, transform, mode, dtype):
    x = data.ecg()[:64].astype(dtype)
    if x.dtype.kind == 'c':
        # fill some data for the imaginary channel
        x.imag = x[::-1].real
    if transform == 'swt':
        # swt mode only supports periodization
        if mode != 'periodization':
            with pytest.raises(ValueError):
                pywt.mra(x, wavelet, transform=transform, mode=mode)
            return
    coeffs = pywt.mra(x, wavelet, transform=transform, mode=mode)
    assert isinstance(coeffs, list)
    assert isinstance(coeffs[0], np.ndarray)
    # assert all(isinstance(coeffs[i], dict) for i in range(1, len(coeffs)))
    y = pywt.imra(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
@pytest.mark.parametrize('wavelet', ['rbio1.3', 'bior2.4'])
@pytest.mark.parametrize('transform', ['dwt', 'swt'])
def test_mra_warns_on_non_orthogonal(wavelet, transform):
    dtype = np.float64
    x = data.ecg()[:64].astype(dtype)
    assert not pywt.Wavelet(wavelet).orthogonal
    if transform == 'swt':
        # bi-orthogonal wavelets raise a warning for SWT case
        msg = 'norm=True, but the wavelet is not orthogonal'
        with pytest.warns(UserWarning, match=msg):
            coeffs = pywt.mra(x, wavelet, transform=transform)
    else:
        coeffs = pywt.mra(x, wavelet, transform=transform)
    y = pywt.imra(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
@pytest.mark.parametrize('axis', [0, -1, 1, 2, -3])
@pytest.mark.parametrize('ndim', [1, 2, 3])
@pytest.mark.parametrize('transform', ['dwt', 'swt'])
@pytest.mark.parametrize('dtype', [np.float64, np.complex128])
def test_mra_axis(transform, ndim, axis, dtype):
    # Test transforms over a specific axis of 1D, 2D or 3D data
    if ndim == 1:
        x = data.ecg()[:64]
    elif ndim == 2:
        x = data.camera()[:64, :32]
    elif ndim == 3:
        x = data.camera()[:48, :8]
        x = np.stack((x,) * 8, axis=-1)
    x = x.astype(dtype, copy=False)
    # out of range axis
    if axis < -x.ndim or axis >= x.ndim:
        with pytest.raises(np.AxisError):
            pywt.mra(x, 'db1', transform=transform, axis=axis)
        return
    coeffs = pywt.mra(x, 'db1', transform=transform, axis=axis)
    y = pywt.imra(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
####
# 2d mra tests
####
@pytest.mark.parametrize('wavelet', ['db2', 'sym4', 'coif5'])
@pytest.mark.parametrize('transform', ['dwt2', 'swt2'])
@pytest.mark.parametrize('mode', pywt.Modes.modes)
@pytest.mark.parametrize(
'dtype', ['float32', 'float64', 'complex64', 'complex128']
)
def test_mra2_roundtrip(wavelet, transform, mode, dtype):
    x = data.camera()[:32, :16].astype(dtype, copy=False)
    if x.dtype.kind == 'c':
        # fill some data for the imaginary channel
        x.imag = x[::-1, :].real
    if transform == 'swt2':
        # swt mode only supports periodization
        if mode != 'periodization':
            with pytest.raises(ValueError):
                pywt.mra2(x, wavelet, transform=transform, mode=mode)
            return
    coeffs = pywt.mra2(x, wavelet, transform=transform, mode=mode)
    assert isinstance(coeffs, list)
    assert isinstance(coeffs[0], np.ndarray)
    # assert all(isinstance(coeffs[i], dict) for i in range(1, len(coeffs)))
    y = pywt.imra2(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
@pytest.mark.parametrize('wavelet', ['rbio1.3', 'bior2.4'])
@pytest.mark.parametrize('transform', ['dwt2', 'swt2'])
def test_mra2_warns_on_non_orthogonal(wavelet, transform):
    dtype = np.float64
    x = data.camera()[:32, :8].astype(dtype, copy=False)
    assert not pywt.Wavelet(wavelet).orthogonal
    if transform == 'swt2':
        # bi-orthogonal wavelets raise a warning for SWT case
        msg = 'norm=True, but the wavelets used are not orthogonal'
        with pytest.warns(UserWarning, match=msg):
            coeffs = pywt.mra2(x, wavelet, transform=transform)
    else:
        coeffs = pywt.mra2(x, wavelet, transform=transform)
    y = pywt.imra2(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
@pytest.mark.parametrize('transform', ['dwt2', 'swt2'])
@pytest.mark.parametrize('ndim', [2, 3])
@pytest.mark.parametrize('axes', [(0, 1), (-2, -1), (0, 2), (-3, 1), (0, 4)])
@pytest.mark.parametrize('dtype', [np.float64, np.complex128])
def test_mra2_axes(transform, axes, ndim, dtype):
    # Test transforms over various axes of 2D or 3D data.
    x = data.camera()[:32, :16].astype(dtype, copy=False)
    if ndim == 3:
        x = np.stack((x,) * 8, axis=-1)
    # out of range axis
    if any([axis < -x.ndim or axis >= x.ndim for axis in axes]):
        with pytest.raises(np.AxisError):
            pywt.mra2(x, 'db1', transform=transform, axes=axes)
        return
    coeffs = pywt.mra2(x, 'db1', transform=transform, axes=axes)
    y = pywt.imra2(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
####
# nd mra tests
####
@pytest.mark.parametrize('wavelet', ['sym2', ])
@pytest.mark.parametrize('transform', ['dwtn', 'swtn'])
@pytest.mark.parametrize('mode', pywt.Modes.modes)
@pytest.mark.parametrize(
'dtype', ['float32', 'float64', 'complex64', 'complex128']
)
@pytest.mark.parametrize('ndim', [1, 2, 3])
def test_mran_roundtrip(wavelet, transform, mode, dtype, ndim):
    if ndim == 1:
        x = data.ecg()[:48].astype(dtype, copy=False)
    elif ndim == 2:
        x = data.camera()[:16, :8].astype(dtype, copy=False)
    elif ndim == 3:
        x = data.camera()[:16, :8].astype(dtype, copy=False)
        x = np.stack((x,) * 8, axis=-1)
    if x.dtype.kind == 'c':
        # fill some data for the imaginary channel
        x.imag = x[::-1, ...].real
    if transform == 'swtn':
        # swt mode only supports periodization
        if mode != 'periodization':
            with pytest.raises(ValueError):
                pywt.mran(x, wavelet, transform=transform, mode=mode)
            return
    coeffs = pywt.mran(x, wavelet, transform=transform, mode=mode)
    assert isinstance(coeffs, list)
    assert isinstance(coeffs[0], np.ndarray)
    # assert all(isinstance(coeffs[i], dict) for i in range(1, len(coeffs)))
    y = pywt.imran(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
@pytest.mark.parametrize('wavelet', ['rbio1.3', 'bior2.4'])
@pytest.mark.parametrize('transform', ['dwtn', 'swtn'])
def test_mran_warns_on_non_orthogonal(wavelet, transform):
    dtype = np.float64
    x = data.camera()[:32, :8].astype(dtype, copy=False)
    assert not pywt.Wavelet(wavelet).orthogonal
    if transform == 'swtn':
        # bi-orthogonal wavelets raise a warning for SWT case
        msg = 'norm=True, but the wavelets used are not orthogonal'
        with pytest.warns(UserWarning, match=msg):
            coeffs = pywt.mran(x, wavelet, transform=transform)
    else:
        coeffs = pywt.mran(x, wavelet, transform=transform)
    y = pywt.imran(coeffs)
    rtol = tol_single if x.real.dtype.kind == 'f' else tol_double
    assert_allclose(x, y, rtol=rtol, atol=rtol)
@pytest.mark.parametrize(
'axes', [(0, 1), (-2, -1), (0, 2), (-3, 1), (0, 4), (-3, -2, -1),
(0, 2, 1), (0, 5, 1), (0,), (1,), (2,), (-2,), (-3,), (-4,)])
@pytest.mark.parametrize('transform', ['dwtn', 'swtn'])
def test_mran_axes(axes, transform):
    # Test with transforms over 1, 2 or 3 axes of 3d data.
    # Cases with out of range axes are also tested
    dtype = np.float64
    x = data.camera()[:32, :16].astype(dtype, copy=False)
    x3d = np.stack((x,) * 8, axis=-1)
    # out of range axis
    if any([axis < -x.ndim or axis >= x.ndim for axis in axes]):
        with pytest.raises(np.AxisError):
            pywt.mran(x, 'db1', transform='dwtn', axes=axes)
        return
    coeffs = pywt.mran(x3d, 'db1', transform='dwtn', axes=axes)
    y = pywt.imran(coeffs)
    rtol = tol_single if x3d.real.dtype.kind == 'f' else tol_double
    assert_allclose(x3d, y, rtol=rtol, atol=rtol)
| 34.875 | 77 | 0.63004 | 1,254 | 8,928 | 4.440191 | 0.120415 | 0.052083 | 0.109375 | 0.056034 | 0.896372 | 0.868175 | 0.821839 | 0.747486 | 0.703125 | 0.634519 | 0 | 0.031497 | 0.206989 | 8,928 | 255 | 78 | 35.011765 | 0.754944 | 0.107975 | 0 | 0.704545 | 0 | 0 | 0.083881 | 0 | 0 | 0 | 0 | 0 | 0.107955 | 1 | 0.051136 | false | 0 | 0.028409 | 0 | 0.113636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
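Several of the tests above first check whether the requested axes are valid before expecting `np.AxisError`. The validity condition can be read as a small standalone predicate (a sketch only; `axes_in_range` is a hypothetical name, not part of pywt):

```python
def axes_in_range(ndim, axes):
    # True when every axis is a valid index for an array of `ndim`
    # dimensions, i.e. -ndim <= axis < ndim -- the same condition the
    # tests above express as `axis < -x.ndim or axis >= x.ndim`.
    return all(-ndim <= axis < ndim for axis in axes)

print(axes_in_range(3, (0, 2)))   # True: both axes valid for 3-D data
print(axes_in_range(3, (0, 4)))   # False: axis 4 is out of range
```

When the predicate is false, the tests assert that the transform raises `np.AxisError` and return early; otherwise they run the full round-trip.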
05634166d17372a0ffd17fdaeb1288af77fe9fcd | 1,879 | py | Python | restclients/test/kws/key.py | uw-it-cte/uw-restclients | 2b09348bf066e5508304401f93f281805e965af5 | [
"Apache-2.0"
] | null | null | null | restclients/test/kws/key.py | uw-it-cte/uw-restclients | 2b09348bf066e5508304401f93f281805e965af5 | [
"Apache-2.0"
] | null | null | null | restclients/test/kws/key.py | uw-it-cte/uw-restclients | 2b09348bf066e5508304401f93f281805e965af5 | [
"Apache-2.0"
] | null | null | null | from django.test import TestCase
from restclients.kws import KWS
from restclients.exceptions import DataFailureException
from restclients.test import fdao_kws_override
@fdao_kws_override
class KWSTestKeyData(TestCase):
    def test_key_by_id(self):
        kws = KWS()
        key = kws.get_key('ee99defd-baee-43b0-9e1e-f8238dd106bb')
        self.assertEquals(key.algorithm, 'AES128CBC', 'Correct algorithm')
        self.assertEquals(key.cipher_mode, 'CBC', 'Correct cipher mode')
        self.assertEquals(
            key.expiration.isoformat(),
            '2013-04-11T13:44:33', 'Correct expiration')
        self.assertEquals(
            key.key_id,
            'ee99defd-baee-43b0-9e1e-f8238dd106bb', 'Correct key ID')
        self.assertEquals(key.key, 'Uv2JsxggfxF9OQNzIxAzDQ==', 'Correct key')
        self.assertEquals(key.size, 128, 'Correct key size')
        self.assertEquals(
            key.url,
            'https://it-wseval1.s.uw.edu/key/v1/encryption/ee99defd-baee-43b0-9e1e-f8238dd106bb.json',
            'Correct key URL')

    def test_current_key(self):
        kws = KWS()
        key = kws.get_current_key('uw-student-registration')
        self.assertEquals(key.algorithm, 'AES128CBC', 'Correct algorithm')
        self.assertEquals(key.cipher_mode, 'CBC', 'Correct cipher mode')
        self.assertEquals(
            key.expiration.isoformat(),
            '2013-04-11T13:44:33', 'Correct expiration')
        self.assertEquals(
            key.key_id,
            'ee99defd-baee-43b0-9e1e-f8238dd106bb', 'Correct key ID')
        self.assertEquals(key.key, 'Uv2JsxggfxF9OQNzIxAzDQ==', 'Correct key')
        self.assertEquals(key.size, 128, 'Correct key size')
        self.assertEquals(
            key.url,
            'https://it-wseval1.s.uw.edu/key/v1/encryption/ee99defd-baee-43b0-9e1e-f8238dd106bb.json',
            'Correct key URL')
| 42.704545 | 102 | 0.648749 | 216 | 1,879 | 5.569444 | 0.25 | 0.186201 | 0.221114 | 0.083126 | 0.789692 | 0.763092 | 0.731505 | 0.731505 | 0.731505 | 0.731505 | 0 | 0.081323 | 0.227781 | 1,879 | 43 | 103 | 43.697674 | 0.74776 | 0 | 0 | 0.75 | 0 | 0.05 | 0.337946 | 0.095263 | 0 | 0 | 0 | 0 | 0.35 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
056a201cc229317e48a13bea79f1bed9ffd03429 | 2,639 | py | Python | tests/test_ba.py | R3bs/darwin | 3ebb0888ea9f0b688ba6aa1734667e62b721df53 | [
"MIT"
] | null | null | null | tests/test_ba.py | R3bs/darwin | 3ebb0888ea9f0b688ba6aa1734667e62b721df53 | [
"MIT"
] | 28 | 2019-06-12T19:55:23.000Z | 2019-08-06T02:47:26.000Z | tests/test_ba.py | R3bs/darwin | 3ebb0888ea9f0b688ba6aa1734667e62b721df53 | [
"MIT"
] | null | null | null |
import pytest
import darwin
import numpy as np
# define the mapping parameters used
x = (-200,+200)
y = (-1000,+1000)
# discrete used
map1 = (0, 1, 2, 3)
map2 = ('a', 'b', 'c', 'd')
def test_htcondor_ba_continuous(supplyFitnessFunction):
    ba = darwin.Algorithm(darwin.opt.BatAlgorithm)
    ba.maxFrequency = np.random.uniform(0.3, 0.8)
    ba.minFrequency = np.random.uniform(0.2, 0.5)
    ba.pulseRate = np.random.uniform(0.3, 0.98)
    ba.loudness = np.random.uniform(0.5, 1.5)
    ba.particles = np.random.randint(5, 20)
    ba.iterations = np.random.randint(5, 15)
    ba.executionEngine = darwin.drm.HTCondor
    ba.addVariable('x', x)
    ba.addVariable('y', y)
    ba.function = supplyFitnessFunction
    ba.submitFile = 'sanity.submit'
    ba.start()


def test_htcondor_ba_discrete(supplyDiscreteFitnessFunction):
    ba = darwin.Algorithm(darwin.opt.BatAlgorithm)
    ba.maxFrequency = np.random.uniform(0.3, 0.8)
    ba.minFrequency = np.random.uniform(0.2, 0.5)
    ba.pulseRate = np.random.uniform(0.3, 0.98)
    ba.loudness = np.random.uniform(0.5, 1.5)
    ba.particles = np.random.randint(5, 20)
    ba.iterations = np.random.randint(5, 15)
    ba.executionEngine = darwin.drm.HTCondor
    ba.addVariable('map1', map1, discrete=True)
    ba.addVariable('map2', map2, discrete=True)
    ba.function = supplyDiscreteFitnessFunction
    ba.submitFile = 'sanity_discrete.submit'
    ba.start()


def test_local_ba_continuous(supplyFitnessFunction):
    ba = darwin.Algorithm(darwin.opt.BatAlgorithm)
    ba.maxFrequency = np.random.uniform(0.3, 0.8)
    ba.minFrequency = np.random.uniform(0.2, 0.5)
    ba.pulseRate = np.random.uniform(0.3, 0.98)
    ba.loudness = np.random.uniform(0.5, 1.5)
    ba.particles = np.random.randint(5, 20)
    ba.iterations = np.random.randint(5, 15)
    ba.executionEngine = darwin.drm.TaskSpooler
    ba.addVariable('x', x)
    ba.addVariable('y', y)
    ba.function = supplyFitnessFunction
    ba.submitFile = 'sanity.submit'
    ba.start()


def test_local_ba_discrete(supplyDiscreteFitnessFunction):
    ba = darwin.Algorithm(darwin.opt.BatAlgorithm)
    ba.maxFrequency = np.random.uniform(0.3, 0.8)
    ba.minFrequency = np.random.uniform(0.2, 0.5)
    ba.pulseRate = np.random.uniform(0.3, 0.98)
    ba.loudness = np.random.uniform(0.5, 1.5)
    ba.particles = np.random.randint(5, 20)
    ba.iterations = np.random.randint(5, 15)
    ba.executionEngine = darwin.drm.TaskSpooler
    ba.addVariable('map1', map1, discrete=True)
    ba.addVariable('map2', map2, discrete=True)
    ba.function = supplyDiscreteFitnessFunction
    ba.submitFile = 'sanity_discrete.submit'
    ba.start()
| 36.150685 | 61 | 0.693823 | 377 | 2,639 | 4.819629 | 0.161804 | 0.105669 | 0.132086 | 0.140892 | 0.925151 | 0.925151 | 0.925151 | 0.915795 | 0.915795 | 0.915795 | 0 | 0.054225 | 0.161425 | 2,639 | 72 | 62 | 36.652778 | 0.766832 | 0.018189 | 0 | 0.825397 | 0 | 0 | 0.036336 | 0.017008 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063492 | false | 0 | 0.047619 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
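Each of the four tests above repeats the same randomized hyperparameter setup. A hedged sketch of factoring it into one helper (`random_ba_params` is a hypothetical name, not part of darwin's API; the sampling ranges copy the ones used in the tests):

```python
import numpy as np

def random_ba_params(rng=np.random):
    """Sample one Bat Algorithm configuration, mirroring the tests above."""
    return {
        'maxFrequency': rng.uniform(0.3, 0.8),
        'minFrequency': rng.uniform(0.2, 0.5),
        'pulseRate': rng.uniform(0.3, 0.98),
        'loudness': rng.uniform(0.5, 1.5),
        'particles': rng.randint(5, 20),    # randint's upper bound is exclusive
        'iterations': rng.randint(5, 15),
    }

params = random_ba_params()
assert 0.3 <= params['maxFrequency'] <= 0.8
```

Each test body could then reduce to applying the sampled dict to the `ba` object with `setattr`, leaving only the engine, variables, and submit file to differ per test.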
e9914ca971375696b67de47be44428f986be28b6 | 90,761 | py | Python | players/blue/test_classifiers.py | adamsd5/yavalath | 7b027ef1684ed4041281818fb9ac1b6e64c435dc | [
"MIT"
] | null | null | null | players/blue/test_classifiers.py | adamsd5/yavalath | 7b027ef1684ed4041281818fb9ac1b6e64c435dc | [
"MIT"
] | null | null | null | players/blue/test_classifiers.py | adamsd5/yavalath | 7b027ef1684ed4041281818fb9ac1b6e64c435dc | [
"MIT"
] | null | null | null | import unittest
import yavalath_engine
from players import blue_player
import pprint
import collections
import pickle
import timeit
import numpy
import pathlib
import itertools
from players.blue.classifiers import NextMoveClassifier, SpaceProperies
class TestNextMoveClassifier(unittest.TestCase):
def setUp(self):
self.signature_table, self.properties_table = pickle.load(open("signature_table.dat", "rb"))
self.properties_table = numpy.array(self.properties_table)
self.signature_table = numpy.array(self.signature_table)
def test_all_pairs_of_arms(self):
from players.blue_player import SpaceProperies
def test(a, b, c, d, e, f, g):
return blue_player.NextMoveClassifier.find_wins_and_checks_for_token_and_opp_arms((c, b, a), (e, f, g), d)
# NOTE (dadams, 20170625): This unit test cost me 2 hours, and made me re-evaluate my life choices.
self.assertEqual(test(-1, -1, 0, -1, -1, -1, 0), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, -1, 0, -1, -1, -1, 1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, -1, 0, -1, -1, 0, -1), blue_player.SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, -1, 0, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, -1, 0, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, -1, 1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, -1, 1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, -1, 1, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, -1, -1), blue_player.SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, -1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, -1, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, 0, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, 1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, 1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 0, 1, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, -1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, -1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, -1, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, 0, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, 0, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, 1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, -1, 1, 1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, 1, -1, -1, 0), None)
self.assertEqual(test(-1, -1, 0, 1, -1, -1, 1), None)
self.assertEqual(test(-1, -1, 0, 1, -1, 0, -1), None)
self.assertEqual(test(-1, -1, 0, 1, -1, 0, 0), None)
self.assertEqual(test(-1, -1, 0, 1, -1, 0, 1), None)
self.assertEqual(test(-1, -1, 0, 1, -1, 1, -1), None)
self.assertEqual(test(-1, -1, 0, 1, -1, 1, 0), None)
self.assertEqual(test(-1, -1, 0, 1, -1, 1, 1), None)
self.assertEqual(test(-1, -1, 0, 1, 0, -1, -1), None)
self.assertEqual(test(-1, -1, 0, 1, 0, -1, 0), None)
self.assertEqual(test(-1, -1, 0, 1, 0, -1, 1), None)
self.assertEqual(test(-1, -1, 0, 1, 0, 0, -1), None)
self.assertEqual(test(-1, -1, 0, 1, 0, 0, 1), None)
self.assertEqual(test(-1, -1, 0, 1, 0, 1, -1), None)
self.assertEqual(test(-1, -1, 0, 1, 0, 1, 0), None)
self.assertEqual(test(-1, -1, 0, 1, 0, 1, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, 1, 1, -1, -1), None)
self.assertEqual(test(-1, -1, 0, 1, 1, -1, 0), None)
self.assertEqual(test(-1, -1, 0, 1, 1, -1, 1), None)
self.assertEqual(test(-1, -1, 0, 1, 1, 0, -1), None)
self.assertEqual(test(-1, -1, 0, 1, 1, 0, 0), None)
self.assertEqual(test(-1, -1, 0, 1, 1, 0, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 0, 1, 1, 1, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 0, 1, 1, 1, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, -1, -1, -1, 0), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, -1, 1, -1, -1, -1, 1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, -1, 1, -1, -1, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 1, -1, -1, 0, 0), None)
self.assertEqual(test(-1, -1, 1, -1, -1, 0, 1), None)
self.assertEqual(test(-1, -1, 1, -1, -1, 1, -1), None)
self.assertEqual(test(-1, -1, 1, -1, -1, 1, 0), None)
self.assertEqual(test(-1, -1, 1, -1, -1, 1, 1), None)
self.assertEqual(test(-1, -1, 1, -1, 0, -1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 1, -1, 0, -1, 0), None)
self.assertEqual(test(-1, -1, 1, -1, 0, -1, 1), None)
self.assertEqual(test(-1, -1, 1, -1, 0, 0, -1), None)
self.assertEqual(test(-1, -1, 1, -1, 0, 0, 1), None)
self.assertEqual(test(-1, -1, 1, -1, 0, 1, -1), None)
self.assertEqual(test(-1, -1, 1, -1, 0, 1, 0), None)
self.assertEqual(test(-1, -1, 1, -1, 0, 1, 1), None)
self.assertEqual(test(-1, -1, 1, -1, 1, -1, -1), None)
self.assertEqual(test(-1, -1, 1, -1, 1, -1, 0), None)
self.assertEqual(test(-1, -1, 1, -1, 1, -1, 1), None)
self.assertEqual(test(-1, -1, 1, -1, 1, 0, -1), None)
self.assertEqual(test(-1, -1, 1, -1, 1, 0, 0), None)
self.assertEqual(test(-1, -1, 1, -1, 1, 0, 1), None)
self.assertEqual(test(-1, -1, 1, -1, 1, 1, -1), None)
self.assertEqual(test(-1, -1, 1, -1, 1, 1, 0), None)
self.assertEqual(test(-1, -1, 1, 1, -1, -1, 0), None)
self.assertEqual(test(-1, -1, 1, 1, -1, -1, 1), None)
self.assertEqual(test(-1, -1, 1, 1, -1, 0, -1), None)
self.assertEqual(test(-1, -1, 1, 1, -1, 0, 0), None)
self.assertEqual(test(-1, -1, 1, 1, -1, 0, 1), None)
self.assertEqual(test(-1, -1, 1, 1, -1, 1, -1), None)
self.assertEqual(test(-1, -1, 1, 1, -1, 1, 0), None)
self.assertEqual(test(-1, -1, 1, 1, -1, 1, 1), None)
self.assertEqual(test(-1, -1, 1, 1, 0, -1, -1), None)
self.assertEqual(test(-1, -1, 1, 1, 0, -1, 0), None)
self.assertEqual(test(-1, -1, 1, 1, 0, -1, 1), None)
self.assertEqual(test(-1, -1, 1, 1, 0, 0, -1), None)
self.assertEqual(test(-1, -1, 1, 1, 0, 0, 1), None)
self.assertEqual(test(-1, -1, 1, 1, 0, 1, -1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 1, 1, 0, 1, 0), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 1, 1, 0, 1, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, -1, 1, 1, 1, -1, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, 1, 1, -1, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, 1, 1, -1, 1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, 1, 1, 0, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, 1, 1, 0, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, 1, 1, 0, 1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, -1, 1, 1, 1, 1, -1), blue_player.SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, -1, 1, 1, 1, 1, 0), blue_player.SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 0, -1, -1, -1, -1, 0), blue_player.SpaceProperies.BLACK_WIN)
self.assertEqual(test(-1, 0, -1, -1, -1, -1, 1), blue_player.SpaceProperies.BLACK_WIN)
self.assertEqual(test(-1, 0, -1, -1, -1, 0, -1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, -1, -1, -1, 0, 0), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, -1, -1, -1, 0, 1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, -1, -1, -1, 1, -1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, -1, -1, -1, 1, 0), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, -1, -1, -1, 1, 1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, -1, -1, 0, -1, -1), blue_player.SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, -1, 0), blue_player.SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, -1, 1), blue_player.SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, 0, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, 1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, 1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 0, 1, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, -1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, -1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, -1, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, 0, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, 0, 1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, 1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, -1, 1, 1, 0), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, 1, -1, -1, 0), None)
self.assertEqual(test(-1, 0, -1, 1, -1, -1, 1), None)
self.assertEqual(test(-1, 0, -1, 1, -1, 0, -1), None)
self.assertEqual(test(-1, 0, -1, 1, -1, 0, 0), None)
self.assertEqual(test(-1, 0, -1, 1, -1, 0, 1), None)
self.assertEqual(test(-1, 0, -1, 1, -1, 1, -1), None)
self.assertEqual(test(-1, 0, -1, 1, -1, 1, 0), None)
self.assertEqual(test(-1, 0, -1, 1, -1, 1, 1), None)
self.assertEqual(test(-1, 0, -1, 1, 0, -1, -1), None)
self.assertEqual(test(-1, 0, -1, 1, 0, -1, 0), None)
self.assertEqual(test(-1, 0, -1, 1, 0, -1, 1), None)
self.assertEqual(test(-1, 0, -1, 1, 0, 0, -1), None)
self.assertEqual(test(-1, 0, -1, 1, 0, 0, 1), None)
self.assertEqual(test(-1, 0, -1, 1, 0, 1, -1), None)
self.assertEqual(test(-1, 0, -1, 1, 0, 1, 0), None)
self.assertEqual(test(-1, 0, -1, 1, 0, 1, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, 1, 1, -1, -1), None)
self.assertEqual(test(-1, 0, -1, 1, 1, -1, 0), None)
self.assertEqual(test(-1, 0, -1, 1, 1, -1, 1), None)
self.assertEqual(test(-1, 0, -1, 1, 1, 0, -1), None)
self.assertEqual(test(-1, 0, -1, 1, 1, 0, 0), None)
self.assertEqual(test(-1, 0, -1, 1, 1, 0, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, -1, 1, 1, 1, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, -1, 1, 1, 1, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 0, -1, -1, -1, 0), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, 0, -1, -1, -1, 1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, 0, -1, -1, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 0, -1, -1, 0, 0), None)
self.assertEqual(test(-1, 0, 0, -1, -1, 0, 1), None)
self.assertEqual(test(-1, 0, 0, -1, -1, 1, -1), None)
self.assertEqual(test(-1, 0, 0, -1, -1, 1, 0), None)
self.assertEqual(test(-1, 0, 0, -1, -1, 1, 1), None)
self.assertEqual(test(-1, 0, 0, -1, 0, -1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 0, -1, 0, -1, 0), None)
self.assertEqual(test(-1, 0, 0, -1, 0, -1, 1), None)
self.assertEqual(test(-1, 0, 0, -1, 0, 0, -1), None)
self.assertEqual(test(-1, 0, 0, -1, 0, 0, 1), None)
self.assertEqual(test(-1, 0, 0, -1, 0, 1, -1), None)
self.assertEqual(test(-1, 0, 0, -1, 0, 1, 0), None)
self.assertEqual(test(-1, 0, 0, -1, 0, 1, 1), None)
self.assertEqual(test(-1, 0, 0, -1, 1, -1, -1), None)
self.assertEqual(test(-1, 0, 0, -1, 1, -1, 0), None)
self.assertEqual(test(-1, 0, 0, -1, 1, -1, 1), None)
self.assertEqual(test(-1, 0, 0, -1, 1, 0, -1), None)
self.assertEqual(test(-1, 0, 0, -1, 1, 0, 0), None)
self.assertEqual(test(-1, 0, 0, -1, 1, 0, 1), None)
self.assertEqual(test(-1, 0, 0, -1, 1, 1, -1), None)
self.assertEqual(test(-1, 0, 0, -1, 1, 1, 0), None)
self.assertEqual(test(-1, 0, 0, 1, -1, -1, 0), None)
self.assertEqual(test(-1, 0, 0, 1, -1, -1, 1), None)
self.assertEqual(test(-1, 0, 0, 1, -1, 0, -1), None)
self.assertEqual(test(-1, 0, 0, 1, -1, 0, 0), None)
self.assertEqual(test(-1, 0, 0, 1, -1, 0, 1), None)
self.assertEqual(test(-1, 0, 0, 1, -1, 1, -1), None)
self.assertEqual(test(-1, 0, 0, 1, -1, 1, 0), None)
self.assertEqual(test(-1, 0, 0, 1, -1, 1, 1), None)
self.assertEqual(test(-1, 0, 0, 1, 0, -1, -1), None)
self.assertEqual(test(-1, 0, 0, 1, 0, -1, 0), None)
self.assertEqual(test(-1, 0, 0, 1, 0, -1, 1), None)
self.assertEqual(test(-1, 0, 0, 1, 0, 0, -1), None)
self.assertEqual(test(-1, 0, 0, 1, 0, 0, 1), None)
self.assertEqual(test(-1, 0, 0, 1, 0, 1, -1), None)
self.assertEqual(test(-1, 0, 0, 1, 0, 1, 0), None)
self.assertEqual(test(-1, 0, 0, 1, 0, 1, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 0, 1, 1, -1, -1), None)
self.assertEqual(test(-1, 0, 0, 1, 1, -1, 0), None)
self.assertEqual(test(-1, 0, 0, 1, 1, -1, 1), None)
self.assertEqual(test(-1, 0, 0, 1, 1, 0, -1), None)
self.assertEqual(test(-1, 0, 0, 1, 1, 0, 0), None)
self.assertEqual(test(-1, 0, 0, 1, 1, 0, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 0, 1, 1, 1, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 0, 1, 1, 1, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, -1, -1, -1, 0), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, 1, -1, -1, -1, 1), blue_player.SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 0, 1, -1, -1, 0, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 1, -1, -1, 0, 0), None)
self.assertEqual(test(-1, 0, 1, -1, -1, 0, 1), None)
self.assertEqual(test(-1, 0, 1, -1, -1, 1, -1), None)
self.assertEqual(test(-1, 0, 1, -1, -1, 1, 0), None)
self.assertEqual(test(-1, 0, 1, -1, -1, 1, 1), None)
self.assertEqual(test(-1, 0, 1, -1, 0, -1, -1), blue_player.SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 1, -1, 0, -1, 0), None)
self.assertEqual(test(-1, 0, 1, -1, 0, -1, 1), None)
self.assertEqual(test(-1, 0, 1, -1, 0, 0, -1), None)
self.assertEqual(test(-1, 0, 1, -1, 0, 0, 1), None)
self.assertEqual(test(-1, 0, 1, -1, 0, 1, -1), None)
self.assertEqual(test(-1, 0, 1, -1, 0, 1, 0), None)
self.assertEqual(test(-1, 0, 1, -1, 0, 1, 1), None)
self.assertEqual(test(-1, 0, 1, -1, 1, -1, -1), None)
self.assertEqual(test(-1, 0, 1, -1, 1, -1, 0), None)
self.assertEqual(test(-1, 0, 1, -1, 1, -1, 1), None)
self.assertEqual(test(-1, 0, 1, -1, 1, 0, -1), None)
self.assertEqual(test(-1, 0, 1, -1, 1, 0, 0), None)
self.assertEqual(test(-1, 0, 1, -1, 1, 0, 1), None)
self.assertEqual(test(-1, 0, 1, -1, 1, 1, -1), None)
self.assertEqual(test(-1, 0, 1, -1, 1, 1, 0), None)
self.assertEqual(test(-1, 0, 1, 1, -1, -1, 0), None)
self.assertEqual(test(-1, 0, 1, 1, -1, -1, 1), None)
self.assertEqual(test(-1, 0, 1, 1, -1, 0, -1), None)
self.assertEqual(test(-1, 0, 1, 1, -1, 0, 0), None)
self.assertEqual(test(-1, 0, 1, 1, -1, 0, 1), None)
self.assertEqual(test(-1, 0, 1, 1, -1, 1, -1), None)
self.assertEqual(test(-1, 0, 1, 1, -1, 1, 0), None)
self.assertEqual(test(-1, 0, 1, 1, -1, 1, 1), None)
self.assertEqual(test(-1, 0, 1, 1, 0, -1, -1), None)
self.assertEqual(test(-1, 0, 1, 1, 0, -1, 0), None)
self.assertEqual(test(-1, 0, 1, 1, 0, -1, 1), None)
self.assertEqual(test(-1, 0, 1, 1, 0, 0, -1), None)
self.assertEqual(test(-1, 0, 1, 1, 0, 0, 1), None)
self.assertEqual(test(-1, 0, 1, 1, 0, 1, -1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 1, 1, 0, 1, 0), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 1, 1, 0, 1, 1), blue_player.SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 0, 1, 1, 1, -1, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, 1, 1, -1, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, 1, 1, -1, 1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, 1, 1, 0, -1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, 1, 1, 0, 0), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, 1, 1, 0, 1), blue_player.SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 0, 1, 1, 1, 1, -1), blue_player.SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 0, 1, 1, 1, 1, 0), blue_player.SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(-1, 1, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(-1, 1, -1, -1, -1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, -1, -1, -1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, -1, -1, -1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, -1, -1, -1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, -1, -1, -1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, -1, -1, -1, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, -1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, -1, -1, 0, -1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, -1, -1, 0, -1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, -1, -1, 0, 0, -1), None)
self.assertEqual(test(-1, 1, -1, -1, 0, 0, 1), None)
self.assertEqual(test(-1, 1, -1, -1, 0, 1, -1), None)
self.assertEqual(test(-1, 1, -1, -1, 0, 1, 0), None)
self.assertEqual(test(-1, 1, -1, -1, 0, 1, 1), None)
self.assertEqual(test(-1, 1, -1, -1, 1, -1, -1), None)
self.assertEqual(test(-1, 1, -1, -1, 1, -1, 0), None)
self.assertEqual(test(-1, 1, -1, -1, 1, -1, 1), None)
self.assertEqual(test(-1, 1, -1, -1, 1, 0, -1), None)
self.assertEqual(test(-1, 1, -1, -1, 1, 0, 0), None)
self.assertEqual(test(-1, 1, -1, -1, 1, 0, 1), None)
self.assertEqual(test(-1, 1, -1, -1, 1, 1, -1), None)
self.assertEqual(test(-1, 1, -1, -1, 1, 1, 0), None)
self.assertEqual(test(-1, 1, -1, 1, -1, -1, 0), None)
self.assertEqual(test(-1, 1, -1, 1, -1, -1, 1), None)
self.assertEqual(test(-1, 1, -1, 1, -1, 0, -1), None)
self.assertEqual(test(-1, 1, -1, 1, -1, 0, 0), None)
self.assertEqual(test(-1, 1, -1, 1, -1, 0, 1), None)
self.assertEqual(test(-1, 1, -1, 1, -1, 1, -1), None)
self.assertEqual(test(-1, 1, -1, 1, -1, 1, 0), None)
self.assertEqual(test(-1, 1, -1, 1, -1, 1, 1), None)
self.assertEqual(test(-1, 1, -1, 1, 0, -1, -1), None)
self.assertEqual(test(-1, 1, -1, 1, 0, -1, 0), None)
self.assertEqual(test(-1, 1, -1, 1, 0, -1, 1), None)
self.assertEqual(test(-1, 1, -1, 1, 0, 0, -1), None)
self.assertEqual(test(-1, 1, -1, 1, 0, 0, 1), None)
self.assertEqual(test(-1, 1, -1, 1, 0, 1, -1), None)
self.assertEqual(test(-1, 1, -1, 1, 0, 1, 0), None)
self.assertEqual(test(-1, 1, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, -1, 1, 1, -1, -1), None)
self.assertEqual(test(-1, 1, -1, 1, 1, -1, 0), None)
self.assertEqual(test(-1, 1, -1, 1, 1, -1, 1), None)
self.assertEqual(test(-1, 1, -1, 1, 1, 0, -1), None)
self.assertEqual(test(-1, 1, -1, 1, 1, 0, 0), None)
self.assertEqual(test(-1, 1, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 0, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, 0, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, 0, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, -1, -1, 0, 0), None)
self.assertEqual(test(-1, 1, 0, -1, -1, 0, 1), None)
self.assertEqual(test(-1, 1, 0, -1, -1, 1, -1), None)
self.assertEqual(test(-1, 1, 0, -1, -1, 1, 0), None)
self.assertEqual(test(-1, 1, 0, -1, -1, 1, 1), None)
self.assertEqual(test(-1, 1, 0, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, -1, 0, -1, 0), None)
self.assertEqual(test(-1, 1, 0, -1, 0, -1, 1), None)
self.assertEqual(test(-1, 1, 0, -1, 0, 0, -1), None)
self.assertEqual(test(-1, 1, 0, -1, 0, 0, 1), None)
self.assertEqual(test(-1, 1, 0, -1, 0, 1, -1), None)
self.assertEqual(test(-1, 1, 0, -1, 0, 1, 0), None)
self.assertEqual(test(-1, 1, 0, -1, 0, 1, 1), None)
self.assertEqual(test(-1, 1, 0, -1, 1, -1, -1), None)
self.assertEqual(test(-1, 1, 0, -1, 1, -1, 0), None)
self.assertEqual(test(-1, 1, 0, -1, 1, -1, 1), None)
self.assertEqual(test(-1, 1, 0, -1, 1, 0, -1), None)
self.assertEqual(test(-1, 1, 0, -1, 1, 0, 0), None)
self.assertEqual(test(-1, 1, 0, -1, 1, 0, 1), None)
self.assertEqual(test(-1, 1, 0, -1, 1, 1, -1), None)
self.assertEqual(test(-1, 1, 0, -1, 1, 1, 0), None)
self.assertEqual(test(-1, 1, 0, 1, -1, -1, 0), None)
self.assertEqual(test(-1, 1, 0, 1, -1, -1, 1), None)
self.assertEqual(test(-1, 1, 0, 1, -1, 0, -1), None)
self.assertEqual(test(-1, 1, 0, 1, -1, 0, 0), None)
self.assertEqual(test(-1, 1, 0, 1, -1, 0, 1), None)
self.assertEqual(test(-1, 1, 0, 1, -1, 1, -1), None)
self.assertEqual(test(-1, 1, 0, 1, -1, 1, 0), None)
self.assertEqual(test(-1, 1, 0, 1, -1, 1, 1), None)
self.assertEqual(test(-1, 1, 0, 1, 0, -1, -1), None)
self.assertEqual(test(-1, 1, 0, 1, 0, -1, 0), None)
self.assertEqual(test(-1, 1, 0, 1, 0, -1, 1), None)
self.assertEqual(test(-1, 1, 0, 1, 0, 0, -1), None)
self.assertEqual(test(-1, 1, 0, 1, 0, 0, 1), None)
self.assertEqual(test(-1, 1, 0, 1, 0, 1, -1), None)
self.assertEqual(test(-1, 1, 0, 1, 0, 1, 0), None)
self.assertEqual(test(-1, 1, 0, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, -1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, 0, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, 0, 1), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(-1, 1, 0, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 0, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, 1, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(-1, 1, 1, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 1, -1, -1, 0, 0), None)
self.assertEqual(test(-1, 1, 1, -1, -1, 0, 1), None)
self.assertEqual(test(-1, 1, 1, -1, -1, 1, -1), None)
self.assertEqual(test(-1, 1, 1, -1, -1, 1, 0), None)
self.assertEqual(test(-1, 1, 1, -1, -1, 1, 1), None)
self.assertEqual(test(-1, 1, 1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(-1, 1, 1, -1, 0, -1, 0), None)
self.assertEqual(test(-1, 1, 1, -1, 0, -1, 1), None)
self.assertEqual(test(-1, 1, 1, -1, 0, 0, -1), None)
self.assertEqual(test(-1, 1, 1, -1, 0, 0, 1), None)
self.assertEqual(test(-1, 1, 1, -1, 0, 1, -1), None)
self.assertEqual(test(-1, 1, 1, -1, 0, 1, 0), None)
self.assertEqual(test(-1, 1, 1, -1, 0, 1, 1), None)
self.assertEqual(test(-1, 1, 1, -1, 1, -1, -1), None)
self.assertEqual(test(-1, 1, 1, -1, 1, -1, 0), None)
self.assertEqual(test(-1, 1, 1, -1, 1, -1, 1), None)
self.assertEqual(test(-1, 1, 1, -1, 1, 0, -1), None)
self.assertEqual(test(-1, 1, 1, -1, 1, 0, 0), None)
self.assertEqual(test(-1, 1, 1, -1, 1, 0, 1), None)
self.assertEqual(test(-1, 1, 1, -1, 1, 1, -1), None)
self.assertEqual(test(-1, 1, 1, -1, 1, 1, 0), None)
self.assertEqual(test(-1, 1, 1, 1, -1, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, 0, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, -1, 1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, -1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 0, 1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(-1, 1, 1, 1, 1, -1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, -1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, -1, 1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, 0, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, 0, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, 0, 1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, 1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(-1, 1, 1, 1, 1, 1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, 0, -1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, 0, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, 0, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, 1, -1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, 1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, -1, 1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, -1, -1, -1, 0, -1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 0, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, -1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, -1, 1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, -1, 1, -1, -1, 0), None)
self.assertEqual(test(0, -1, -1, 1, -1, -1, 1), None)
self.assertEqual(test(0, -1, -1, 1, -1, 0, -1), None)
self.assertEqual(test(0, -1, -1, 1, -1, 0, 0), None)
self.assertEqual(test(0, -1, -1, 1, -1, 0, 1), None)
self.assertEqual(test(0, -1, -1, 1, -1, 1, -1), None)
self.assertEqual(test(0, -1, -1, 1, -1, 1, 0), None)
self.assertEqual(test(0, -1, -1, 1, -1, 1, 1), None)
self.assertEqual(test(0, -1, -1, 1, 0, -1, -1), None)
self.assertEqual(test(0, -1, -1, 1, 0, -1, 0), None)
self.assertEqual(test(0, -1, -1, 1, 0, -1, 1), None)
self.assertEqual(test(0, -1, -1, 1, 0, 0, -1), None)
self.assertEqual(test(0, -1, -1, 1, 0, 0, 1), None)
self.assertEqual(test(0, -1, -1, 1, 0, 1, -1), None)
self.assertEqual(test(0, -1, -1, 1, 0, 1, 0), None)
self.assertEqual(test(0, -1, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, -1, 1, 1, -1, -1), None)
self.assertEqual(test(0, -1, -1, 1, 1, -1, 0), None)
self.assertEqual(test(0, -1, -1, 1, 1, -1, 1), None)
self.assertEqual(test(0, -1, -1, 1, 1, 0, -1), None)
self.assertEqual(test(0, -1, -1, 1, 1, 0, 0), None)
self.assertEqual(test(0, -1, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 0, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, 0, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, 0, -1, -1, 0, -1), SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, -1, 0, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, -1, 0, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, -1, 1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, -1, 1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, -1, 1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, -1, 0, -1, 0), None)
self.assertEqual(test(0, -1, 0, -1, 0, -1, 1), None)
self.assertEqual(test(0, -1, 0, -1, 0, 0, -1), None)
self.assertEqual(test(0, -1, 0, -1, 0, 0, 1), None)
self.assertEqual(test(0, -1, 0, -1, 0, 1, -1), None)
self.assertEqual(test(0, -1, 0, -1, 0, 1, 0), None)
self.assertEqual(test(0, -1, 0, -1, 0, 1, 1), None)
self.assertEqual(test(0, -1, 0, -1, 1, -1, -1), None)
self.assertEqual(test(0, -1, 0, -1, 1, -1, 0), None)
self.assertEqual(test(0, -1, 0, -1, 1, -1, 1), None)
self.assertEqual(test(0, -1, 0, -1, 1, 0, -1), None)
self.assertEqual(test(0, -1, 0, -1, 1, 0, 0), None)
self.assertEqual(test(0, -1, 0, -1, 1, 0, 1), None)
self.assertEqual(test(0, -1, 0, -1, 1, 1, -1), None)
self.assertEqual(test(0, -1, 0, -1, 1, 1, 0), None)
self.assertEqual(test(0, -1, 0, 1, -1, -1, 0), None)
self.assertEqual(test(0, -1, 0, 1, -1, -1, 1), None)
self.assertEqual(test(0, -1, 0, 1, -1, 0, -1), None)
self.assertEqual(test(0, -1, 0, 1, -1, 0, 0), None)
self.assertEqual(test(0, -1, 0, 1, -1, 0, 1), None)
self.assertEqual(test(0, -1, 0, 1, -1, 1, -1), None)
self.assertEqual(test(0, -1, 0, 1, -1, 1, 0), None)
self.assertEqual(test(0, -1, 0, 1, -1, 1, 1), None)
self.assertEqual(test(0, -1, 0, 1, 0, -1, -1), None)
self.assertEqual(test(0, -1, 0, 1, 0, -1, 0), None)
self.assertEqual(test(0, -1, 0, 1, 0, -1, 1), None)
self.assertEqual(test(0, -1, 0, 1, 0, 0, -1), None)
self.assertEqual(test(0, -1, 0, 1, 0, 0, 1), None)
self.assertEqual(test(0, -1, 0, 1, 0, 1, -1), None)
self.assertEqual(test(0, -1, 0, 1, 0, 1, 0), None)
self.assertEqual(test(0, -1, 0, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, 1, 1, -1, -1), None)
self.assertEqual(test(0, -1, 0, 1, 1, -1, 0), None)
self.assertEqual(test(0, -1, 0, 1, 1, -1, 1), None)
self.assertEqual(test(0, -1, 0, 1, 1, 0, -1), None)
self.assertEqual(test(0, -1, 0, 1, 1, 0, 0), None)
self.assertEqual(test(0, -1, 0, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, 0, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 0, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, 1, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, -1, 1, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 1, -1, -1, 0, 0), None)
self.assertEqual(test(0, -1, 1, -1, -1, 0, 1), None)
self.assertEqual(test(0, -1, 1, -1, -1, 1, -1), None)
self.assertEqual(test(0, -1, 1, -1, -1, 1, 0), None)
self.assertEqual(test(0, -1, 1, -1, -1, 1, 1), None)
self.assertEqual(test(0, -1, 1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, -1, 1, -1, 0, -1, 0), None)
self.assertEqual(test(0, -1, 1, -1, 0, -1, 1), None)
self.assertEqual(test(0, -1, 1, -1, 0, 0, -1), None)
self.assertEqual(test(0, -1, 1, -1, 0, 0, 1), None)
self.assertEqual(test(0, -1, 1, -1, 0, 1, -1), None)
self.assertEqual(test(0, -1, 1, -1, 0, 1, 0), None)
self.assertEqual(test(0, -1, 1, -1, 0, 1, 1), None)
self.assertEqual(test(0, -1, 1, -1, 1, -1, -1), None)
self.assertEqual(test(0, -1, 1, -1, 1, -1, 0), None)
self.assertEqual(test(0, -1, 1, -1, 1, -1, 1), None)
self.assertEqual(test(0, -1, 1, -1, 1, 0, -1), None)
self.assertEqual(test(0, -1, 1, -1, 1, 0, 0), None)
self.assertEqual(test(0, -1, 1, -1, 1, 0, 1), None)
self.assertEqual(test(0, -1, 1, -1, 1, 1, -1), None)
self.assertEqual(test(0, -1, 1, -1, 1, 1, 0), None)
self.assertEqual(test(0, -1, 1, 1, -1, -1, 0), None)
self.assertEqual(test(0, -1, 1, 1, -1, -1, 1), None)
self.assertEqual(test(0, -1, 1, 1, -1, 0, -1), None)
self.assertEqual(test(0, -1, 1, 1, -1, 0, 0), None)
self.assertEqual(test(0, -1, 1, 1, -1, 0, 1), None)
self.assertEqual(test(0, -1, 1, 1, -1, 1, -1), None)
self.assertEqual(test(0, -1, 1, 1, -1, 1, 0), None)
self.assertEqual(test(0, -1, 1, 1, -1, 1, 1), None)
self.assertEqual(test(0, -1, 1, 1, 0, -1, -1), None)
self.assertEqual(test(0, -1, 1, 1, 0, -1, 0), None)
self.assertEqual(test(0, -1, 1, 1, 0, -1, 1), None)
self.assertEqual(test(0, -1, 1, 1, 0, 0, -1), None)
self.assertEqual(test(0, -1, 1, 1, 0, 0, 1), None)
self.assertEqual(test(0, -1, 1, 1, 0, 1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, 1, 1, 0, 1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, 1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, -1, 1, 1, 1, -1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, 1, 1, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, 1, 1, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, 1, 1, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, 1, 1, 0, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, 1, 1, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, -1, 1, 1, 1, 1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, -1, 1, 1, 1, 1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 0, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, 0, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, 0, -1, -1, -1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, -1, -1, -1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, -1, -1, -1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, -1, -1, -1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, -1, -1, -1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, -1, -1, -1, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, -1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 0, -1, -1, 0, -1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 0, -1, -1, 0, -1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 0, -1, -1, 0, 0, -1), None)
self.assertEqual(test(0, 0, -1, -1, 0, 0, 1), None)
self.assertEqual(test(0, 0, -1, -1, 0, 1, -1), None)
self.assertEqual(test(0, 0, -1, -1, 0, 1, 0), None)
self.assertEqual(test(0, 0, -1, -1, 0, 1, 1), None)
self.assertEqual(test(0, 0, -1, -1, 1, -1, -1), None)
self.assertEqual(test(0, 0, -1, -1, 1, -1, 0), None)
self.assertEqual(test(0, 0, -1, -1, 1, -1, 1), None)
self.assertEqual(test(0, 0, -1, -1, 1, 0, -1), None)
self.assertEqual(test(0, 0, -1, -1, 1, 0, 0), None)
self.assertEqual(test(0, 0, -1, -1, 1, 0, 1), None)
self.assertEqual(test(0, 0, -1, -1, 1, 1, -1), None)
self.assertEqual(test(0, 0, -1, -1, 1, 1, 0), None)
self.assertEqual(test(0, 0, -1, 1, -1, -1, 0), None)
self.assertEqual(test(0, 0, -1, 1, -1, -1, 1), None)
self.assertEqual(test(0, 0, -1, 1, -1, 0, -1), None)
self.assertEqual(test(0, 0, -1, 1, -1, 0, 0), None)
self.assertEqual(test(0, 0, -1, 1, -1, 0, 1), None)
self.assertEqual(test(0, 0, -1, 1, -1, 1, -1), None)
self.assertEqual(test(0, 0, -1, 1, -1, 1, 0), None)
self.assertEqual(test(0, 0, -1, 1, -1, 1, 1), None)
self.assertEqual(test(0, 0, -1, 1, 0, -1, -1), None)
self.assertEqual(test(0, 0, -1, 1, 0, -1, 0), None)
self.assertEqual(test(0, 0, -1, 1, 0, -1, 1), None)
self.assertEqual(test(0, 0, -1, 1, 0, 0, -1), None)
self.assertEqual(test(0, 0, -1, 1, 0, 0, 1), None)
self.assertEqual(test(0, 0, -1, 1, 0, 1, -1), None)
self.assertEqual(test(0, 0, -1, 1, 0, 1, 0), None)
self.assertEqual(test(0, 0, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 0, -1, 1, 1, -1, -1), None)
self.assertEqual(test(0, 0, -1, 1, 1, -1, 0), None)
self.assertEqual(test(0, 0, -1, 1, 1, -1, 1), None)
self.assertEqual(test(0, 0, -1, 1, 1, 0, -1), None)
self.assertEqual(test(0, 0, -1, 1, 1, 0, 0), None)
self.assertEqual(test(0, 0, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 0, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, 1, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 0, 1, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 0, 1, -1, -1, 0, 0), None)
self.assertEqual(test(0, 0, 1, -1, -1, 0, 1), None)
self.assertEqual(test(0, 0, 1, -1, -1, 1, -1), None)
self.assertEqual(test(0, 0, 1, -1, -1, 1, 0), None)
self.assertEqual(test(0, 0, 1, -1, -1, 1, 1), None)
self.assertEqual(test(0, 0, 1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 0, 1, -1, 0, -1, 0), None)
self.assertEqual(test(0, 0, 1, -1, 0, -1, 1), None)
self.assertEqual(test(0, 0, 1, -1, 0, 0, -1), None)
self.assertEqual(test(0, 0, 1, -1, 0, 0, 1), None)
self.assertEqual(test(0, 0, 1, -1, 0, 1, -1), None)
self.assertEqual(test(0, 0, 1, -1, 0, 1, 0), None)
self.assertEqual(test(0, 0, 1, -1, 0, 1, 1), None)
self.assertEqual(test(0, 0, 1, -1, 1, -1, -1), None)
self.assertEqual(test(0, 0, 1, -1, 1, -1, 0), None)
self.assertEqual(test(0, 0, 1, -1, 1, -1, 1), None)
self.assertEqual(test(0, 0, 1, -1, 1, 0, -1), None)
self.assertEqual(test(0, 0, 1, -1, 1, 0, 0), None)
self.assertEqual(test(0, 0, 1, -1, 1, 0, 1), None)
self.assertEqual(test(0, 0, 1, -1, 1, 1, -1), None)
self.assertEqual(test(0, 0, 1, -1, 1, 1, 0), None)
self.assertEqual(test(0, 0, 1, 1, -1, -1, 0), None)
self.assertEqual(test(0, 0, 1, 1, -1, -1, 1), None)
self.assertEqual(test(0, 0, 1, 1, -1, 0, -1), None)
self.assertEqual(test(0, 0, 1, 1, -1, 0, 0), None)
self.assertEqual(test(0, 0, 1, 1, -1, 0, 1), None)
self.assertEqual(test(0, 0, 1, 1, -1, 1, -1), None)
self.assertEqual(test(0, 0, 1, 1, -1, 1, 0), None)
self.assertEqual(test(0, 0, 1, 1, -1, 1, 1), None)
self.assertEqual(test(0, 0, 1, 1, 0, -1, -1), None)
self.assertEqual(test(0, 0, 1, 1, 0, -1, 0), None)
self.assertEqual(test(0, 0, 1, 1, 0, -1, 1), None)
self.assertEqual(test(0, 0, 1, 1, 0, 0, -1), None)
self.assertEqual(test(0, 0, 1, 1, 0, 0, 1), None)
self.assertEqual(test(0, 0, 1, 1, 0, 1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 0, 1, 1, 0, 1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 0, 1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 0, 1, 1, 1, -1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, 1, 1, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, 1, 1, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, 1, 1, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, 1, 1, 0, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, 1, 1, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 0, 1, 1, 1, 1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 0, 1, 1, 1, 1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, 1, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(0, 1, -1, -1, -1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, -1, -1, -1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, -1, -1, -1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, -1, -1, -1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, -1, -1, -1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, -1, -1, -1, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, -1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, -1, -1, 0, -1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, -1, -1, 0, -1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, -1, -1, 0, 0, -1), None)
self.assertEqual(test(0, 1, -1, -1, 0, 0, 1), None)
self.assertEqual(test(0, 1, -1, -1, 0, 1, -1), None)
self.assertEqual(test(0, 1, -1, -1, 0, 1, 0), None)
self.assertEqual(test(0, 1, -1, -1, 0, 1, 1), None)
self.assertEqual(test(0, 1, -1, -1, 1, -1, -1), None)
self.assertEqual(test(0, 1, -1, -1, 1, -1, 0), None)
self.assertEqual(test(0, 1, -1, -1, 1, -1, 1), None)
self.assertEqual(test(0, 1, -1, -1, 1, 0, -1), None)
self.assertEqual(test(0, 1, -1, -1, 1, 0, 0), None)
self.assertEqual(test(0, 1, -1, -1, 1, 0, 1), None)
self.assertEqual(test(0, 1, -1, -1, 1, 1, -1), None)
self.assertEqual(test(0, 1, -1, -1, 1, 1, 0), None)
self.assertEqual(test(0, 1, -1, 1, -1, -1, 0), None)
self.assertEqual(test(0, 1, -1, 1, -1, -1, 1), None)
self.assertEqual(test(0, 1, -1, 1, -1, 0, -1), None)
self.assertEqual(test(0, 1, -1, 1, -1, 0, 0), None)
self.assertEqual(test(0, 1, -1, 1, -1, 0, 1), None)
self.assertEqual(test(0, 1, -1, 1, -1, 1, -1), None)
self.assertEqual(test(0, 1, -1, 1, -1, 1, 0), None)
self.assertEqual(test(0, 1, -1, 1, -1, 1, 1), None)
self.assertEqual(test(0, 1, -1, 1, 0, -1, -1), None)
self.assertEqual(test(0, 1, -1, 1, 0, -1, 0), None)
self.assertEqual(test(0, 1, -1, 1, 0, -1, 1), None)
self.assertEqual(test(0, 1, -1, 1, 0, 0, -1), None)
self.assertEqual(test(0, 1, -1, 1, 0, 0, 1), None)
self.assertEqual(test(0, 1, -1, 1, 0, 1, -1), None)
self.assertEqual(test(0, 1, -1, 1, 0, 1, 0), None)
self.assertEqual(test(0, 1, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, -1, 1, 1, -1, -1), None)
self.assertEqual(test(0, 1, -1, 1, 1, -1, 0), None)
self.assertEqual(test(0, 1, -1, 1, 1, -1, 1), None)
self.assertEqual(test(0, 1, -1, 1, 1, 0, -1), None)
self.assertEqual(test(0, 1, -1, 1, 1, 0, 0), None)
self.assertEqual(test(0, 1, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 0, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, 0, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, 0, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, -1, -1, 0, 0), None)
self.assertEqual(test(0, 1, 0, -1, -1, 0, 1), None)
self.assertEqual(test(0, 1, 0, -1, -1, 1, -1), None)
self.assertEqual(test(0, 1, 0, -1, -1, 1, 0), None)
self.assertEqual(test(0, 1, 0, -1, -1, 1, 1), None)
self.assertEqual(test(0, 1, 0, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, -1, 0, -1, 0), None)
self.assertEqual(test(0, 1, 0, -1, 0, -1, 1), None)
self.assertEqual(test(0, 1, 0, -1, 0, 0, -1), None)
self.assertEqual(test(0, 1, 0, -1, 0, 0, 1), None)
self.assertEqual(test(0, 1, 0, -1, 0, 1, -1), None)
self.assertEqual(test(0, 1, 0, -1, 0, 1, 0), None)
self.assertEqual(test(0, 1, 0, -1, 0, 1, 1), None)
self.assertEqual(test(0, 1, 0, -1, 1, -1, -1), None)
self.assertEqual(test(0, 1, 0, -1, 1, -1, 0), None)
self.assertEqual(test(0, 1, 0, -1, 1, -1, 1), None)
self.assertEqual(test(0, 1, 0, -1, 1, 0, -1), None)
self.assertEqual(test(0, 1, 0, -1, 1, 0, 0), None)
self.assertEqual(test(0, 1, 0, -1, 1, 0, 1), None)
self.assertEqual(test(0, 1, 0, -1, 1, 1, -1), None)
self.assertEqual(test(0, 1, 0, -1, 1, 1, 0), None)
self.assertEqual(test(0, 1, 0, 1, -1, -1, 0), None)
self.assertEqual(test(0, 1, 0, 1, -1, -1, 1), None)
self.assertEqual(test(0, 1, 0, 1, -1, 0, -1), None)
self.assertEqual(test(0, 1, 0, 1, -1, 0, 0), None)
self.assertEqual(test(0, 1, 0, 1, -1, 0, 1), None)
self.assertEqual(test(0, 1, 0, 1, -1, 1, -1), None)
self.assertEqual(test(0, 1, 0, 1, -1, 1, 0), None)
self.assertEqual(test(0, 1, 0, 1, -1, 1, 1), None)
self.assertEqual(test(0, 1, 0, 1, 0, -1, -1), None)
self.assertEqual(test(0, 1, 0, 1, 0, -1, 0), None)
self.assertEqual(test(0, 1, 0, 1, 0, -1, 1), None)
self.assertEqual(test(0, 1, 0, 1, 0, 0, -1), None)
self.assertEqual(test(0, 1, 0, 1, 0, 0, 1), None)
self.assertEqual(test(0, 1, 0, 1, 0, 1, -1), None)
self.assertEqual(test(0, 1, 0, 1, 0, 1, 0), None)
self.assertEqual(test(0, 1, 0, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, -1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, 0, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, 0, 1), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(0, 1, 0, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 0, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, 1, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(0, 1, 1, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, 1, -1, -1, 0, 0), None)
self.assertEqual(test(0, 1, 1, -1, -1, 0, 1), None)
self.assertEqual(test(0, 1, 1, -1, -1, 1, -1), None)
self.assertEqual(test(0, 1, 1, -1, -1, 1, 0), None)
self.assertEqual(test(0, 1, 1, -1, -1, 1, 1), None)
self.assertEqual(test(0, 1, 1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(0, 1, 1, -1, 0, -1, 0), None)
self.assertEqual(test(0, 1, 1, -1, 0, -1, 1), None)
self.assertEqual(test(0, 1, 1, -1, 0, 0, -1), None)
self.assertEqual(test(0, 1, 1, -1, 0, 0, 1), None)
self.assertEqual(test(0, 1, 1, -1, 0, 1, -1), None)
self.assertEqual(test(0, 1, 1, -1, 0, 1, 0), None)
self.assertEqual(test(0, 1, 1, -1, 0, 1, 1), None)
self.assertEqual(test(0, 1, 1, -1, 1, -1, -1), None)
self.assertEqual(test(0, 1, 1, -1, 1, -1, 0), None)
self.assertEqual(test(0, 1, 1, -1, 1, -1, 1), None)
self.assertEqual(test(0, 1, 1, -1, 1, 0, -1), None)
self.assertEqual(test(0, 1, 1, -1, 1, 0, 0), None)
self.assertEqual(test(0, 1, 1, -1, 1, 0, 1), None)
self.assertEqual(test(0, 1, 1, -1, 1, 1, -1), None)
self.assertEqual(test(0, 1, 1, -1, 1, 1, 0), None)
self.assertEqual(test(0, 1, 1, 1, -1, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, 0, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, -1, 1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, -1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 0, 1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(0, 1, 1, 1, 1, -1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, -1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, -1, 1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, 0, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, 0, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, 0, 1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, 1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(0, 1, 1, 1, 1, 1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, 0, -1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, 0, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, 0, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, 1, -1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, 1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, -1, 1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, -1, -1, -1, 0, -1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 0, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, -1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, -1, 1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, -1, 1, -1, -1, 0), None)
self.assertEqual(test(1, -1, -1, 1, -1, -1, 1), None)
self.assertEqual(test(1, -1, -1, 1, -1, 0, -1), None)
self.assertEqual(test(1, -1, -1, 1, -1, 0, 0), None)
self.assertEqual(test(1, -1, -1, 1, -1, 0, 1), None)
self.assertEqual(test(1, -1, -1, 1, -1, 1, -1), None)
self.assertEqual(test(1, -1, -1, 1, -1, 1, 0), None)
self.assertEqual(test(1, -1, -1, 1, -1, 1, 1), None)
self.assertEqual(test(1, -1, -1, 1, 0, -1, -1), None)
self.assertEqual(test(1, -1, -1, 1, 0, -1, 0), None)
self.assertEqual(test(1, -1, -1, 1, 0, -1, 1), None)
self.assertEqual(test(1, -1, -1, 1, 0, 0, -1), None)
self.assertEqual(test(1, -1, -1, 1, 0, 0, 1), None)
self.assertEqual(test(1, -1, -1, 1, 0, 1, -1), None)
self.assertEqual(test(1, -1, -1, 1, 0, 1, 0), None)
self.assertEqual(test(1, -1, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, -1, 1, 1, -1, -1), None)
self.assertEqual(test(1, -1, -1, 1, 1, -1, 0), None)
self.assertEqual(test(1, -1, -1, 1, 1, -1, 1), None)
self.assertEqual(test(1, -1, -1, 1, 1, 0, -1), None)
self.assertEqual(test(1, -1, -1, 1, 1, 0, 0), None)
self.assertEqual(test(1, -1, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 0, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, 0, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, 0, -1, -1, 0, -1), SpaceProperies.BLACK_DOUBLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, -1, 0, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, -1, 0, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, -1, 1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, -1, 1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, -1, 1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, -1, 0, -1, 0), None)
self.assertEqual(test(1, -1, 0, -1, 0, -1, 1), None)
self.assertEqual(test(1, -1, 0, -1, 0, 0, -1), None)
self.assertEqual(test(1, -1, 0, -1, 0, 0, 1), None)
self.assertEqual(test(1, -1, 0, -1, 0, 1, -1), None)
self.assertEqual(test(1, -1, 0, -1, 0, 1, 0), None)
self.assertEqual(test(1, -1, 0, -1, 0, 1, 1), None)
self.assertEqual(test(1, -1, 0, -1, 1, -1, -1), None)
self.assertEqual(test(1, -1, 0, -1, 1, -1, 0), None)
self.assertEqual(test(1, -1, 0, -1, 1, -1, 1), None)
self.assertEqual(test(1, -1, 0, -1, 1, 0, -1), None)
self.assertEqual(test(1, -1, 0, -1, 1, 0, 0), None)
self.assertEqual(test(1, -1, 0, -1, 1, 0, 1), None)
self.assertEqual(test(1, -1, 0, -1, 1, 1, -1), None)
self.assertEqual(test(1, -1, 0, -1, 1, 1, 0), None)
self.assertEqual(test(1, -1, 0, 1, -1, -1, 0), None)
self.assertEqual(test(1, -1, 0, 1, -1, -1, 1), None)
self.assertEqual(test(1, -1, 0, 1, -1, 0, -1), None)
self.assertEqual(test(1, -1, 0, 1, -1, 0, 0), None)
self.assertEqual(test(1, -1, 0, 1, -1, 0, 1), None)
self.assertEqual(test(1, -1, 0, 1, -1, 1, -1), None)
self.assertEqual(test(1, -1, 0, 1, -1, 1, 0), None)
self.assertEqual(test(1, -1, 0, 1, -1, 1, 1), None)
self.assertEqual(test(1, -1, 0, 1, 0, -1, -1), None)
self.assertEqual(test(1, -1, 0, 1, 0, -1, 0), None)
self.assertEqual(test(1, -1, 0, 1, 0, -1, 1), None)
self.assertEqual(test(1, -1, 0, 1, 0, 0, -1), None)
self.assertEqual(test(1, -1, 0, 1, 0, 0, 1), None)
self.assertEqual(test(1, -1, 0, 1, 0, 1, -1), None)
self.assertEqual(test(1, -1, 0, 1, 0, 1, 0), None)
self.assertEqual(test(1, -1, 0, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, 1, 1, -1, -1), None)
self.assertEqual(test(1, -1, 0, 1, 1, -1, 0), None)
self.assertEqual(test(1, -1, 0, 1, 1, -1, 1), None)
self.assertEqual(test(1, -1, 0, 1, 1, 0, -1), None)
self.assertEqual(test(1, -1, 0, 1, 1, 0, 0), None)
self.assertEqual(test(1, -1, 0, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, 0, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 0, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, 1, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, -1, 1, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 1, -1, -1, 0, 0), None)
self.assertEqual(test(1, -1, 1, -1, -1, 0, 1), None)
self.assertEqual(test(1, -1, 1, -1, -1, 1, -1), None)
self.assertEqual(test(1, -1, 1, -1, -1, 1, 0), None)
self.assertEqual(test(1, -1, 1, -1, -1, 1, 1), None)
self.assertEqual(test(1, -1, 1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, -1, 1, -1, 0, -1, 0), None)
self.assertEqual(test(1, -1, 1, -1, 0, -1, 1), None)
self.assertEqual(test(1, -1, 1, -1, 0, 0, -1), None)
self.assertEqual(test(1, -1, 1, -1, 0, 0, 1), None)
self.assertEqual(test(1, -1, 1, -1, 0, 1, -1), None)
self.assertEqual(test(1, -1, 1, -1, 0, 1, 0), None)
self.assertEqual(test(1, -1, 1, -1, 0, 1, 1), None)
self.assertEqual(test(1, -1, 1, -1, 1, -1, -1), None)
self.assertEqual(test(1, -1, 1, -1, 1, -1, 0), None)
self.assertEqual(test(1, -1, 1, -1, 1, -1, 1), None)
self.assertEqual(test(1, -1, 1, -1, 1, 0, -1), None)
self.assertEqual(test(1, -1, 1, -1, 1, 0, 0), None)
self.assertEqual(test(1, -1, 1, -1, 1, 0, 1), None)
self.assertEqual(test(1, -1, 1, -1, 1, 1, -1), None)
self.assertEqual(test(1, -1, 1, -1, 1, 1, 0), None)
self.assertEqual(test(1, -1, 1, 1, -1, -1, 0), None)
self.assertEqual(test(1, -1, 1, 1, -1, -1, 1), None)
self.assertEqual(test(1, -1, 1, 1, -1, 0, -1), None)
self.assertEqual(test(1, -1, 1, 1, -1, 0, 0), None)
self.assertEqual(test(1, -1, 1, 1, -1, 0, 1), None)
self.assertEqual(test(1, -1, 1, 1, -1, 1, -1), None)
self.assertEqual(test(1, -1, 1, 1, -1, 1, 0), None)
self.assertEqual(test(1, -1, 1, 1, -1, 1, 1), None)
self.assertEqual(test(1, -1, 1, 1, 0, -1, -1), None)
self.assertEqual(test(1, -1, 1, 1, 0, -1, 0), None)
self.assertEqual(test(1, -1, 1, 1, 0, -1, 1), None)
self.assertEqual(test(1, -1, 1, 1, 0, 0, -1), None)
self.assertEqual(test(1, -1, 1, 1, 0, 0, 1), None)
self.assertEqual(test(1, -1, 1, 1, 0, 1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, 1, 1, 0, 1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, 1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, -1, 1, 1, 1, -1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, 1, 1, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, 1, 1, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, 1, 1, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, 1, 1, 0, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, 1, 1, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, -1, 1, 1, 1, 1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(1, -1, 1, 1, 1, 1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(1, 0, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, 0, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, 0, -1, -1, -1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, -1, -1, -1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, -1, -1, -1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, -1, -1, -1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, -1, -1, -1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, -1, -1, -1, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, -1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, -1, -1, 0, -1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, -1, -1, 0, -1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, -1, -1, 0, 0, -1), None)
self.assertEqual(test(1, 0, -1, -1, 0, 0, 1), None)
self.assertEqual(test(1, 0, -1, -1, 0, 1, -1), None)
self.assertEqual(test(1, 0, -1, -1, 0, 1, 0), None)
self.assertEqual(test(1, 0, -1, -1, 0, 1, 1), None)
self.assertEqual(test(1, 0, -1, -1, 1, -1, -1), None)
self.assertEqual(test(1, 0, -1, -1, 1, -1, 0), None)
self.assertEqual(test(1, 0, -1, -1, 1, -1, 1), None)
self.assertEqual(test(1, 0, -1, -1, 1, 0, -1), None)
self.assertEqual(test(1, 0, -1, -1, 1, 0, 0), None)
self.assertEqual(test(1, 0, -1, -1, 1, 0, 1), None)
self.assertEqual(test(1, 0, -1, -1, 1, 1, -1), None)
self.assertEqual(test(1, 0, -1, -1, 1, 1, 0), None)
self.assertEqual(test(1, 0, -1, 1, -1, -1, 0), None)
self.assertEqual(test(1, 0, -1, 1, -1, -1, 1), None)
self.assertEqual(test(1, 0, -1, 1, -1, 0, -1), None)
self.assertEqual(test(1, 0, -1, 1, -1, 0, 0), None)
self.assertEqual(test(1, 0, -1, 1, -1, 0, 1), None)
self.assertEqual(test(1, 0, -1, 1, -1, 1, -1), None)
self.assertEqual(test(1, 0, -1, 1, -1, 1, 0), None)
self.assertEqual(test(1, 0, -1, 1, -1, 1, 1), None)
self.assertEqual(test(1, 0, -1, 1, 0, -1, -1), None)
self.assertEqual(test(1, 0, -1, 1, 0, -1, 0), None)
self.assertEqual(test(1, 0, -1, 1, 0, -1, 1), None)
self.assertEqual(test(1, 0, -1, 1, 0, 0, -1), None)
self.assertEqual(test(1, 0, -1, 1, 0, 0, 1), None)
self.assertEqual(test(1, 0, -1, 1, 0, 1, -1), None)
self.assertEqual(test(1, 0, -1, 1, 0, 1, 0), None)
self.assertEqual(test(1, 0, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, -1, 1, 1, -1, -1), None)
self.assertEqual(test(1, 0, -1, 1, 1, -1, 0), None)
self.assertEqual(test(1, 0, -1, 1, 1, -1, 1), None)
self.assertEqual(test(1, 0, -1, 1, 1, 0, -1), None)
self.assertEqual(test(1, 0, -1, 1, 1, 0, 0), None)
self.assertEqual(test(1, 0, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 0, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, 0, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, 0, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, 0, -1, -1, 0, 0), None)
self.assertEqual(test(1, 0, 0, -1, -1, 0, 1), None)
self.assertEqual(test(1, 0, 0, -1, -1, 1, -1), None)
self.assertEqual(test(1, 0, 0, -1, -1, 1, 0), None)
self.assertEqual(test(1, 0, 0, -1, -1, 1, 1), None)
self.assertEqual(test(1, 0, 0, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, 0, -1, 0, -1, 0), None)
self.assertEqual(test(1, 0, 0, -1, 0, -1, 1), None)
self.assertEqual(test(1, 0, 0, -1, 0, 0, -1), None)
self.assertEqual(test(1, 0, 0, -1, 0, 0, 1), None)
self.assertEqual(test(1, 0, 0, -1, 0, 1, -1), None)
self.assertEqual(test(1, 0, 0, -1, 0, 1, 0), None)
self.assertEqual(test(1, 0, 0, -1, 0, 1, 1), None)
self.assertEqual(test(1, 0, 0, -1, 1, -1, -1), None)
self.assertEqual(test(1, 0, 0, -1, 1, -1, 0), None)
self.assertEqual(test(1, 0, 0, -1, 1, -1, 1), None)
self.assertEqual(test(1, 0, 0, -1, 1, 0, -1), None)
self.assertEqual(test(1, 0, 0, -1, 1, 0, 0), None)
self.assertEqual(test(1, 0, 0, -1, 1, 0, 1), None)
self.assertEqual(test(1, 0, 0, -1, 1, 1, -1), None)
self.assertEqual(test(1, 0, 0, -1, 1, 1, 0), None)
self.assertEqual(test(1, 0, 0, 1, -1, -1, 0), None)
self.assertEqual(test(1, 0, 0, 1, -1, -1, 1), None)
self.assertEqual(test(1, 0, 0, 1, -1, 0, -1), None)
self.assertEqual(test(1, 0, 0, 1, -1, 0, 0), None)
self.assertEqual(test(1, 0, 0, 1, -1, 0, 1), None)
self.assertEqual(test(1, 0, 0, 1, -1, 1, -1), None)
self.assertEqual(test(1, 0, 0, 1, -1, 1, 0), None)
self.assertEqual(test(1, 0, 0, 1, -1, 1, 1), None)
self.assertEqual(test(1, 0, 0, 1, 0, -1, -1), None)
self.assertEqual(test(1, 0, 0, 1, 0, -1, 0), None)
self.assertEqual(test(1, 0, 0, 1, 0, -1, 1), None)
self.assertEqual(test(1, 0, 0, 1, 0, 0, -1), None)
self.assertEqual(test(1, 0, 0, 1, 0, 0, 1), None)
self.assertEqual(test(1, 0, 0, 1, 0, 1, -1), None)
self.assertEqual(test(1, 0, 0, 1, 0, 1, 0), None)
self.assertEqual(test(1, 0, 0, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 0, 1, 1, -1, -1), None)
self.assertEqual(test(1, 0, 0, 1, 1, -1, 0), None)
self.assertEqual(test(1, 0, 0, 1, 1, -1, 1), None)
self.assertEqual(test(1, 0, 0, 1, 1, 0, -1), None)
self.assertEqual(test(1, 0, 0, 1, 1, 0, 0), None)
self.assertEqual(test(1, 0, 0, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 0, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 0, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, 1, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 0, 1, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, -1, -1, 0, 0), None)
self.assertEqual(test(1, 0, 1, -1, -1, 0, 1), None)
self.assertEqual(test(1, 0, 1, -1, -1, 1, -1), None)
self.assertEqual(test(1, 0, 1, -1, -1, 1, 0), None)
self.assertEqual(test(1, 0, 1, -1, -1, 1, 1), None)
self.assertEqual(test(1, 0, 1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, -1, 0, -1, 0), None)
self.assertEqual(test(1, 0, 1, -1, 0, -1, 1), None)
self.assertEqual(test(1, 0, 1, -1, 0, 0, -1), None)
self.assertEqual(test(1, 0, 1, -1, 0, 0, 1), None)
self.assertEqual(test(1, 0, 1, -1, 0, 1, -1), None)
self.assertEqual(test(1, 0, 1, -1, 0, 1, 0), None)
self.assertEqual(test(1, 0, 1, -1, 0, 1, 1), None)
self.assertEqual(test(1, 0, 1, -1, 1, -1, -1), None)
self.assertEqual(test(1, 0, 1, -1, 1, -1, 0), None)
self.assertEqual(test(1, 0, 1, -1, 1, -1, 1), None)
self.assertEqual(test(1, 0, 1, -1, 1, 0, -1), None)
self.assertEqual(test(1, 0, 1, -1, 1, 0, 0), None)
self.assertEqual(test(1, 0, 1, -1, 1, 0, 1), None)
self.assertEqual(test(1, 0, 1, -1, 1, 1, -1), None)
self.assertEqual(test(1, 0, 1, -1, 1, 1, 0), None)
self.assertEqual(test(1, 0, 1, 1, -1, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, 0, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, 1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, 1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, -1, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, -1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, 1, -1), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, 1, 0), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 0, 1, 1), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(1, 0, 1, 1, 1, -1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, 1, 1, -1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, 1, 1, -1, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, 1, 1, 0, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, 1, 1, 0, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, 1, 1, 0, 1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 0, 1, 1, 1, 1, -1), SpaceProperies.WHITE_WIN)
self.assertEqual(test(1, 0, 1, 1, 1, 1, 0), SpaceProperies.WHITE_WIN)
self.assertEqual(test(1, 1, -1, -1, -1, -1, 0), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, 1, -1, -1, -1, -1, 1), SpaceProperies.BLACK_WIN)
self.assertEqual(test(1, 1, -1, -1, -1, 0, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, -1, -1, -1, 0, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, -1, -1, -1, 0, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, -1, -1, -1, 1, -1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, -1, -1, -1, 1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, -1, -1, -1, 1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, -1, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 1, -1, -1, 0, -1, 0), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 1, -1, -1, 0, -1, 1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 1, -1, -1, 0, 0, -1), None)
self.assertEqual(test(1, 1, -1, -1, 0, 0, 1), None)
self.assertEqual(test(1, 1, -1, -1, 0, 1, -1), None)
self.assertEqual(test(1, 1, -1, -1, 0, 1, 0), None)
self.assertEqual(test(1, 1, -1, -1, 0, 1, 1), None)
self.assertEqual(test(1, 1, -1, -1, 1, -1, -1), None)
self.assertEqual(test(1, 1, -1, -1, 1, -1, 0), None)
self.assertEqual(test(1, 1, -1, -1, 1, -1, 1), None)
self.assertEqual(test(1, 1, -1, -1, 1, 0, -1), None)
self.assertEqual(test(1, 1, -1, -1, 1, 0, 0), None)
self.assertEqual(test(1, 1, -1, -1, 1, 0, 1), None)
self.assertEqual(test(1, 1, -1, -1, 1, 1, -1), None)
self.assertEqual(test(1, 1, -1, -1, 1, 1, 0), None)
self.assertEqual(test(1, 1, -1, 1, -1, -1, 0), None)
self.assertEqual(test(1, 1, -1, 1, -1, -1, 1), None)
self.assertEqual(test(1, 1, -1, 1, -1, 0, -1), None)
self.assertEqual(test(1, 1, -1, 1, -1, 0, 0), None)
self.assertEqual(test(1, 1, -1, 1, -1, 0, 1), None)
self.assertEqual(test(1, 1, -1, 1, -1, 1, -1), None)
self.assertEqual(test(1, 1, -1, 1, -1, 1, 0), None)
self.assertEqual(test(1, 1, -1, 1, -1, 1, 1), None)
self.assertEqual(test(1, 1, -1, 1, 0, -1, -1), None)
self.assertEqual(test(1, 1, -1, 1, 0, -1, 0), None)
self.assertEqual(test(1, 1, -1, 1, 0, -1, 1), None)
self.assertEqual(test(1, 1, -1, 1, 0, 0, -1), None)
self.assertEqual(test(1, 1, -1, 1, 0, 0, 1), None)
self.assertEqual(test(1, 1, -1, 1, 0, 1, -1), None)
self.assertEqual(test(1, 1, -1, 1, 0, 1, 0), None)
self.assertEqual(test(1, 1, -1, 1, 0, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, -1, 1, 1, -1, -1), None)
self.assertEqual(test(1, 1, -1, 1, 1, -1, 0), None)
self.assertEqual(test(1, 1, -1, 1, 1, -1, 1), None)
self.assertEqual(test(1, 1, -1, 1, 1, 0, -1), None)
self.assertEqual(test(1, 1, -1, 1, 1, 0, 0), None)
self.assertEqual(test(1, 1, -1, 1, 1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, -1, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 1, -1, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 1, 0, -1, -1, -1, 0), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, 0, -1, -1, -1, 1), SpaceProperies.BLACK_LOSE)
self.assertEqual(test(1, 1, 0, -1, -1, 0, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, -1, -1, 0, 0), None)
self.assertEqual(test(1, 1, 0, -1, -1, 0, 1), None)
self.assertEqual(test(1, 1, 0, -1, -1, 1, -1), None)
self.assertEqual(test(1, 1, 0, -1, -1, 1, 0), None)
self.assertEqual(test(1, 1, 0, -1, -1, 1, 1), None)
self.assertEqual(test(1, 1, 0, -1, 0, -1, -1), SpaceProperies.BLACK_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, -1, 0, -1, 0), None)
self.assertEqual(test(1, 1, 0, -1, 0, -1, 1), None)
self.assertEqual(test(1, 1, 0, -1, 0, 0, -1), None)
self.assertEqual(test(1, 1, 0, -1, 0, 0, 1), None)
self.assertEqual(test(1, 1, 0, -1, 0, 1, -1), None)
self.assertEqual(test(1, 1, 0, -1, 0, 1, 0), None)
self.assertEqual(test(1, 1, 0, -1, 0, 1, 1), None)
self.assertEqual(test(1, 1, 0, -1, 1, -1, -1), None)
self.assertEqual(test(1, 1, 0, -1, 1, -1, 0), None)
self.assertEqual(test(1, 1, 0, -1, 1, -1, 1), None)
self.assertEqual(test(1, 1, 0, -1, 1, 0, -1), None)
self.assertEqual(test(1, 1, 0, -1, 1, 0, 0), None)
self.assertEqual(test(1, 1, 0, -1, 1, 0, 1), None)
self.assertEqual(test(1, 1, 0, -1, 1, 1, -1), None)
self.assertEqual(test(1, 1, 0, -1, 1, 1, 0), None)
self.assertEqual(test(1, 1, 0, 1, -1, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, 0, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, 1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, 1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, -1, 1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, -1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, 0, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, 1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, 1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 0, 1, 1), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, -1, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, -1, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, -1, 1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, 0, -1), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, 0, 0), SpaceProperies.WHITE_SINGLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, 0, 1), SpaceProperies.WHITE_DOUBLE_CHECK)
self.assertEqual(test(1, 1, 0, 1, 1, 1, -1), SpaceProperies.WHITE_LOSE)
self.assertEqual(test(1, 1, 0, 1, 1, 1, 0), SpaceProperies.WHITE_LOSE)
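# The truth table above encodes Yavalath's core rule: completing a run of four or
# more stones wins, while completing a run of exactly three loses. A minimal,
# framework-free sketch of that rule for a single line of cells (hypothetical
# classify_line helper, not the project's NextMoveClassifier, which also
# distinguishes single and double checks):

```python
def classify_line(cells, move_index, player=1):
    """Classify placing `player` at `move_index` in a 1-D line of cells."""
    line = list(cells)
    line[move_index] = player
    # Longest run of `player` stones passing through the new move.
    run = 1
    i = move_index - 1
    while i >= 0 and line[i] == player:
        run += 1
        i -= 1
    i = move_index + 1
    while i < len(line) and line[i] == player:
        run += 1
        i += 1
    if run >= 4:
        return "WIN"
    if run == 3:
        return "LOSE"
    return None
```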
def test_condition_vector(self):
space_to_index = {space: i for i, space in enumerate(yavalath_engine.HexBoard().spaces)}
board = yavalath_engine.HexBoard()
for space in ['e5', 'e4', 'e6', 'd5', 'f5']:
condition_vec = blue_player.NextMoveClassifier.get_condition_vector_for_space(space)
for direction in range(6):
    for distance in (1, 2, 3):
        # Exponent layout: 3 * direction + (distance - 1), i.e. 3**0 .. 3**17.
        neighbor = board.next_space_in_dir(space, direction, distance)
        self.assertEqual(
            condition_vec[space_to_index[neighbor], 0],
            3 ** (3 * direction + (distance - 1)))
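# Each of the 18 neighbor cells (6 directions x 3 distances) is weighted by a
# distinct power of 3, so a dot product of the condition vector with the board's
# cell values (-1, 0, or 1) yields a unique base-3 signature for the whole
# neighborhood. A standalone numpy sketch of that idea, assuming the exponent
# layout 3 * direction + distance - 1 asserted above:

```python
import numpy as np

# One weight per neighbor cell: direction 0..5, distance 1..3.
weights = np.array([3 ** (3 * d + (dist - 1))
                    for d in range(6)
                    for dist in (1, 2, 3)])

def signature(neighbor_values):
    """Dot the 18 cell values (-1, 0, or 1) with the base-3 weights."""
    return int(np.dot(weights, np.asarray(neighbor_values)))
```

# Because every cell contributes a distinct power of 3 scaled by -1/0/1,
# two different neighborhoods can never produce the same signature.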
def test_signature_and_properties(self):
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)))
self.assertEqual(signature, 0)
self.assertEqual(properties, (None, None))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(1,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)))
self.assertEqual(signature, 1)
self.assertEqual(properties, (None, None))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(0,1,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)))
self.assertEqual(signature, 3)
self.assertEqual(properties, (None, None))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(1,1,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)))
self.assertEqual(signature, 4)
self.assertTupleEqual(properties, (blue_player.SpaceProperies.WHITE_LOSE, None))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(-1,-1,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)))
self.assertEqual(signature, -4)
self.assertTupleEqual(properties, (None, blue_player.SpaceProperies.BLACK_LOSE))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(1,1,0),(0,0,0),(1,1,0),(1,0,0),(0,0,0),(0,0,0)))
self.assertTupleEqual(properties, (blue_player.SpaceProperies.WHITE_WIN, None))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(1,1,0),(1,0,0),(1,1,0),(1,0,0),(1,0,0),(0,0,0)))
self.assertTupleEqual(properties, (blue_player.SpaceProperies.WHITE_WIN, None))
signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties((
(1,1,0),(1,0,0),(-1,-1,0),(1,0,0),(0,1,0),(0,0,0)))
self.assertTupleEqual(properties, (blue_player.SpaceProperies.WHITE_WIN, blue_player.SpaceProperies.BLACK_LOSE))
# TODO: Lots more would be nice, but I think this is enough for now.
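# The expected signatures above follow from base-3 weighting of the arm cells,
# with arm k occupying exponents 3*k through 3*k + 2. A worked check of the
# asserted values (hypothetical arms_signature helper mirroring that layout):

```python
def arms_signature(arms):
    """Sum cell * 3**(3*arm + pos) over six 3-cell arms."""
    return sum(cell * 3 ** (3 * k + p)
               for k, arm in enumerate(arms)
               for p, cell in enumerate(arm))

blank = ((0, 0, 0),) * 5
assert arms_signature(((0, 0, 0),) + blank) == 0       # empty neighborhood
assert arms_signature(((1, 0, 0),) + blank) == 1       # 3**0
assert arms_signature(((0, 1, 0),) + blank) == 3       # 3**1
assert arms_signature(((1, 1, 0),) + blank) == 4       # 1 + 3
assert arms_signature(((-1, -1, 0),) + blank) == -4    # mirrored for black
```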
def test_signature_lookup_table(self):
test_signatures = numpy.array([3**17, 3**16, 3**15])  # Lone powers of 3 (a single stone) should not have any properties
# Smoke check: exercise the lookup path and dump the results for inspection.
move_properties = self.properties_table[self.signature_table[test_signatures]]
pprint.pprint(sorted(test_signatures[numpy.nonzero(test_signatures)].flatten().tolist()))
pprint.pprint(self.signature_table[test_signatures])
pprint.pprint(move_properties)
def test_adhoc_classifications(self):
from players.blue_player import SpaceProperies
game_so_far = ['g1', 'd5', 'e7', 'b5', 'e1', 'e3', 'i4', 'g4', 'b6', 'e4', 'f5', 'a5', 'c5', 'e8', 'c3', 'd3', 'f4', 'd2']
# Classify with the signature-table classifier as well as the old 'gamestate' classifier, so the
# two classifications can be compared.
classifier = blue_player.NextMoveClassifier(game_so_far=game_so_far)
classifier.compute_moves_by_property()
gamestate = blue_player.GameState(game_so_far=game_so_far)
# Compare the WIN and LOSE sets for both. The old classifier doesn't exclude winning moves from
# its losing sets, so subtract the wins away first.
self.assertSetEqual(set(gamestate.white_winning_moves), classifier.moves_by_property[blue_player.SpaceProperies.WHITE_WIN])
self.assertSetEqual(set(gamestate.black_winning_moves), classifier.moves_by_property[blue_player.SpaceProperies.BLACK_WIN])
self.assertSetEqual(set(gamestate.white_losing_moves)-set(gamestate.white_winning_moves), classifier.moves_by_property[blue_player.SpaceProperies.WHITE_LOSE])
self.assertSetEqual(set(gamestate.black_losing_moves)-set(gamestate.black_winning_moves), classifier.moves_by_property[blue_player.SpaceProperies.BLACK_LOSE])
# Now an ad-hoc check on some of the known problem spaces while I was debugging.
signature, properties = classifier.compute_signature_and_properties_for_space('b3')
self.assertTupleEqual((None,None), properties)
signature, properties = classifier.compute_signature_and_properties_for_space('f7')
self.assertTupleEqual((SpaceProperies.WHITE_DOUBLE_CHECK,None), properties)
signature, properties = classifier.compute_signature_and_properties_for_space('f2')
self.assertTupleEqual((SpaceProperies.WHITE_SINGLE_CHECK,SpaceProperies.BLACK_LOSE), properties)
def test_signature_lookup_table_with_random_signatures(self):
SIGNATURE_OFFSET = sum([3**i for i in range(18)]) # Add this to all signatures to make them >= 0.
signature_index = 3**17 + SIGNATURE_OFFSET # A lone power of 3: exactly one occupied neighbor cell.
arms = blue_player.NextMoveClassifier.signature_index_to_arms(signature_index)
#arms = (0,1,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)
new_signature, properties = blue_player.NextMoveClassifier.compute_signature_and_properties(arms) # Compute slowly
self.assertEqual(new_signature, signature_index - SIGNATURE_OFFSET, "The signatures should be the same.")
move_properties = self.properties_table[self.signature_table[signature_index]]
self.assertEqual(properties, move_properties)
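# SIGNATURE_OFFSET is sum(3**i for i in range(18)) == (3**18 - 1) // 2, the
# magnitude of the most negative possible signature (all 18 cells -1). Adding
# it shifts every signature into [0, 3**18), so the signature and properties
# tables can be plain arrays of length 3**18 = 387,420,489:

```python
OFFSET = sum(3 ** i for i in range(18))

assert OFFSET == (3 ** 18 - 1) // 2 == 193_710_244
assert -OFFSET + OFFSET == 0             # smallest shifted index
assert OFFSET + OFFSET == 3 ** 18 - 1    # largest shifted index
```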
def test_board_analysis(self):
game_so_far = ['g1', 'd5', 'e7', 'b5', 'e1', 'e3', 'i4', 'g4', 'b6', 'e4', 'f5', 'a5', 'c5', 'e8', 'c3', 'd3', 'f4', 'd2']
m = NextMoveClassifier.get_board_properties(game_so_far)  # Smoke test: should run without raising.
def test_double_check_issue(self):
game_so_far = ['c4', 'd8', 'c1', 'g7', 'c2', 'c3', 'e2', 'h1', 'f1', 'd2', 'd3', 'a4', 'f4', 'e4'] # , 'f2', 'f3', 'g2'
m = NextMoveClassifier.get_board_properties(game_so_far, verbose=True)
self.assertSetEqual(m[SpaceProperies.WHITE_DOUBLE_CHECK], set(), "In this game, white has no double-checks")
def test_add_remove(self):
c = NextMoveClassifier([], verbose=True)
c.add_move(0, 1)
c.undo_move(0)
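# test_add_remove above only checks that add_move/undo_move run without raising.
# The invariant worth asserting is that undo restores the prior state exactly.
# A generic sketch with a stub class (not the real NextMoveClassifier) shows
# the shape of such a check:

```python
class UndoableBoard:
    """Minimal stand-in: tracks per-space values and supports undo."""

    def __init__(self):
        self.values = {}
        self.history = []

    def add_move(self, space, player):
        # Save the previous value (None if the space was empty).
        self.history.append((space, self.values.get(space)))
        self.values[space] = player

    def undo_move(self, space):
        saved_space, saved_value = self.history.pop()
        assert saved_space == space, "undo out of order"
        if saved_value is None:
            del self.values[space]
        else:
            self.values[space] = saved_value

board = UndoableBoard()
board.add_move("e5", 1)
board.undo_move("e5")
assert board.values == {}  # state fully restored
```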
from django.urls import reverse
from remote_works.account.models import User
def test_staff_can_access_skill_details(
staff_client, staff_user, permission_manage_products, product):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:skill-details', kwargs={'pk': product.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)  # Re-fetch to drop the cached permission set
assert staff_user.has_perm('skill.manage_products')
response = staff_client.post(url)
assert response.status_code == 200
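# Every test in this module repeats one cycle: without the permission a staff
# request redirects (302); after granting it and re-fetching the user (to clear
# Django's cached permission set) the same request succeeds (200). That cycle
# can be captured in a single helper; a framework-free sketch with a stub
# client (the real tests would pass Django's test client and reverse()'d URLs):

```python
class StubClient:
    """Stands in for Django's test client: 302 until access is granted."""

    def __init__(self):
        self.granted = set()

    def get(self, url):
        status = 200 if url in self.granted else 302
        return type("Response", (), {"status_code": status})()

def assert_requires_permission(client, url, grant):
    """Assert `url` redirects before `grant()` runs and succeeds after."""
    assert client.get(url).status_code == 302
    grant()
    assert client.get(url).status_code == 200

client = StubClient()
assert_requires_permission(
    client, "/dashboard/products/",
    grant=lambda: client.granted.add("/dashboard/products/"))
```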
def test_staff_can_access_skill_toggle_is_published(
staff_client, staff_user, permission_manage_products, product):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:skill-publish', kwargs={'pk': product.pk})
response = staff_client.post(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.post(url)
assert response.status_code == 200
def test_staff_can_access_skill_select_type(
staff_client, staff_user, permission_manage_products):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:skill-add-select-type')
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.post(url)
assert response.status_code == 200
def test_staff_can_access_skill_create(
staff_client, staff_user, permission_manage_products, skill_type):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:skill-add', kwargs={'type_pk': skill_type.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.post(url)
assert response.status_code == 200
def test_staff_can_access_skill_edit(
staff_client, staff_user, permission_manage_products, product):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:skill-update', kwargs={'pk': product.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.post(url)
assert response.status_code == 200
def test_staff_can_access_skill_delete(
staff_client, staff_user, permission_manage_products, product):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:skill-delete', kwargs={'pk': product.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_view_skill_list(
staff_client, staff_user, permission_manage_products):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:skill-list'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:skill-list'))
assert response.status_code == 200
def test_staff_can_view_category_list(
staff_client, staff_user, permission_manage_products):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:category-list'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:category-list'))
assert response.status_code == 200
def test_staff_can_view_category_add_root(
staff_client, staff_user, permission_manage_products):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:category-add'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:category-add'))
assert response.status_code == 200
def test_staff_can_view_category_add_subcategory(
staff_client, staff_user, permission_manage_products, category):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:category-add', args=[category.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:category-add', args=[category.pk]))
assert response.status_code == 200
def test_staff_can_view_category_edit(
staff_client, staff_user, permission_manage_products, category):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:category-edit', args=[category.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:category-edit', args=[category.pk]))
assert response.status_code == 200
def test_staff_can_view_category_delete(
staff_client, staff_user, permission_manage_products, category):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:category-delete', args=[category.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:category-delete', args=[category.pk]))
assert response.status_code == 200
def test_staff_can_view_sale_list(
staff_client, staff_user, permission_manage_discounts):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:sale-list'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:sale-list'))
assert response.status_code == 200
def test_staff_can_view_sale_update(
staff_client, staff_user, permission_manage_discounts, sale):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:sale-update', args=[sale.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:sale-update', args=[sale.pk]))
assert response.status_code == 200
def test_staff_can_view_sale_add(
staff_client, staff_user, permission_manage_discounts, sale):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:sale-add'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:sale-add'))
assert response.status_code == 200
def test_staff_can_view_sale_delete(
staff_client, staff_user, permission_manage_discounts, sale):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:sale-delete', args=[sale.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:sale-delete', args=[sale.pk]))
assert response.status_code == 200
def test_staff_can_view_voucher_list(
staff_client, staff_user, permission_manage_discounts):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:voucher-list'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:voucher-list'))
assert response.status_code == 200
def test_staff_can_view_voucher_update(
staff_client, staff_user, permission_manage_discounts, voucher):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:voucher-update', args=[voucher.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:voucher-update', args=[voucher.pk]))
assert response.status_code == 200
def test_staff_can_view_voucher_add(
staff_client, staff_user, permission_manage_discounts):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:voucher-add'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(reverse('dashboard:voucher-add'))
assert response.status_code == 200
def test_staff_can_view_voucher_delete(
staff_client, staff_user, permission_manage_discounts, voucher):
assert not staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:voucher-delete', args=[voucher.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_discounts)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('discount.manage_discounts')
response = staff_client.get(
reverse('dashboard:voucher-delete', args=[voucher.pk]))
assert response.status_code == 200
def test_staff_can_view_task_list(
staff_client, staff_user, permission_manage_orders):
assert not staff_user.has_perm('task.manage_orders')
response = staff_client.get(reverse('dashboard:tasks'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_orders)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('task.manage_orders')
response = staff_client.get(reverse('dashboard:tasks'))
assert response.status_code == 200
def test_staff_can_view_task_details(
staff_client, staff_user, permission_manage_orders, task_with_lines):
assert not staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:task-details', args=[task_with_lines.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_orders)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:task-details', args=[task_with_lines.pk]))
assert response.status_code == 200
def test_staff_can_view_task_add_note(
staff_client, staff_user, permission_manage_orders, task):
assert not staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:task-add-note', args=[task.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_orders)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:task-add-note', args=[task.pk]))
assert response.status_code == 200
def test_staff_can_view_cancel_order(
staff_client, staff_user, permission_manage_orders, task):
assert not staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:task-cancel', args=[task.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_orders)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:task-cancel', args=[task.pk]))
assert response.status_code == 200
def test_staff_can_view_billing_address_edit(
staff_client, staff_user, permission_manage_orders, task):
assert not staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:address-edit', args=[task.pk, 'billing']))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_orders)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('task.manage_orders')
response = staff_client.get(
reverse('dashboard:address-edit', args=[task.pk, 'billing']))
assert response.status_code == 200
def test_staff_can_view_customers_list(
staff_client, staff_user, permission_manage_users):
assert not staff_user.has_perm('account.manage_users')
response = staff_client.get(reverse('dashboard:customers'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_users)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('account.manage_users')
response = staff_client.get(reverse('dashboard:customers'))
assert response.status_code == 200
def test_staff_can_view_customer_details(
staff_client, staff_user, permission_manage_users, customer_user,
task_with_lines):
assert not staff_user.has_perm('account.manage_users')
response = staff_client.get(
reverse('dashboard:customer-details', args=[customer_user.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_users)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('account.manage_users')
response = staff_client.get(
reverse('dashboard:customer-details', args=[customer_user.pk]))
assert response.status_code == 200
response = staff_client.get(
reverse('dashboard:task-details', args=[task_with_lines.pk]))
assert response.status_code == 302
def test_staff_can_view_staff_members_list(
staff_client, staff_user, permission_manage_staff):
assert not staff_user.has_perm('account.manage_staff')
response = staff_client.get(reverse('dashboard:staff-list'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_staff)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('account.manage_staff')
response = staff_client.get(reverse('dashboard:staff-list'))
assert response.status_code == 200
def test_staff_can_view_detail_create_and_delete_staff_members(
staff_client, staff_user, permission_manage_staff):
assert not staff_user.has_perm('account.manage_staff')
response = staff_client.get(reverse('dashboard:staff-create'))
assert response.status_code == 302
response = staff_client.get(
reverse('dashboard:staff-delete', args=[staff_user.pk]))
assert response.status_code == 302
response = staff_client.get(
reverse('dashboard:staff-details', args=[staff_user.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_staff)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('account.manage_staff')
response = staff_client.get(reverse('dashboard:staff-create'))
assert response.status_code == 200
response = staff_client.get(
reverse('dashboard:staff-delete', args=[staff_user.pk]))
assert response.status_code == 200
response = staff_client.get(
reverse('dashboard:staff-details', args=[staff_user.pk]))
assert response.status_code == 200
def test_staff_with_permissions_can_view_skill_types_list(
staff_client, staff_user, permission_manage_products):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:skill-type-list'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:skill-type-list'))
assert response.status_code == 200
def test_staff_with_permissions_can_edit_add_and_delete_skill_types_list(
staff_client, staff_user, permission_manage_products, skill_type):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:skill-type-update', args=[skill_type.pk]))
assert response.status_code == 302
response = staff_client.get(
reverse('dashboard:skill-type-delete', args=[skill_type.pk]))
assert response.status_code == 302
response = staff_client.get(reverse('dashboard:skill-type-add'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse('dashboard:skill-type-update', args=[skill_type.pk]))
assert response.status_code == 200
response = staff_client.get(
reverse('dashboard:skill-type-delete', args=[skill_type.pk]))
assert response.status_code == 200
response = staff_client.get(reverse('dashboard:skill-type-add'))
assert response.status_code == 200
def test_staff_can_access_variant_details(
staff_client, staff_user, permission_manage_products, product):
skill_type = product.skill_type
skill_type.has_variants = True
skill_type.save()
variant = product.variants.get()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:variant-details',
kwargs={
'skill_pk': product.pk,
'variant_pk': variant.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_variant_create(
staff_client, staff_user, permission_manage_products, product):
assert not staff_user.has_perm('skill.manage_products')
url = reverse('dashboard:variant-add', kwargs={'skill_pk': product.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_variant_edit(
staff_client, staff_user, permission_manage_products, product):
variant = product.variants.get()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:variant-update',
kwargs={
'skill_pk': product.pk,
'variant_pk': variant.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_variant_delete(
staff_client, staff_user, permission_manage_products, product):
variant = product.variants.get()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:variant-delete',
kwargs={
'skill_pk': product.pk,
'variant_pk': variant.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_variant_images(
staff_client, staff_user, permission_manage_products, product):
variant = product.variants.get()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:variant-images',
kwargs={
'skill_pk': product.pk,
'variant_pk': variant.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_skill_image_list(
staff_client, staff_user, permission_manage_products, product):
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:skill-image-list', kwargs={'skill_pk': product.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_skill_image_add(
staff_client, staff_user, permission_manage_products, skill):
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:skill-image-add', kwargs={'skill_pk': skill.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_skill_image_update(
staff_client, staff_user, permission_manage_products, skill_with_image):
skill_image = skill_with_image.images.get()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:skill-image-update',
kwargs={
'skill_pk': skill_with_image.pk,
'img_pk': skill_image.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_skill_image_delete(
staff_client, staff_user, permission_manage_products, skill_with_image):
skill_image = skill_with_image.images.get()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:skill-image-delete',
kwargs={
'skill_pk': skill_with_image.pk,
'img_pk': skill_image.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_with_permissions_can_view_products_attributes_list(
staff_client, staff_user, permission_manage_products, color_attribute):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:attributes'))
assert response.status_code == 302
response = staff_client.get(
reverse(
'dashboard:attribute-details', args=[color_attribute.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(reverse('dashboard:attributes'))
assert response.status_code == 200
response = staff_client.get(
reverse(
'dashboard:attribute-details', args=[color_attribute.pk]))
assert response.status_code == 200
def test_staff_with_permissions_can_update_add_and_delete_products_attributes(
staff_client, staff_user, permission_manage_products, color_attribute):
assert not staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse(
'dashboard:attribute-update', args=[color_attribute.pk]))
assert response.status_code == 302
response = staff_client.get(
reverse(
'dashboard:attribute-delete', args=[color_attribute.pk]))
assert response.status_code == 302
response = staff_client.get(reverse('dashboard:attribute-add'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(
reverse(
'dashboard:attribute-update', args=[color_attribute.pk]))
assert response.status_code == 200
response = staff_client.get(
reverse(
'dashboard:attribute-delete', args=[color_attribute.pk]))
assert response.status_code == 200
response = staff_client.get(reverse('dashboard:attribute-add'))
assert response.status_code == 200
def test_staff_can_access_attribute_create(
staff_client, staff_user, permission_manage_products, color_attribute):
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:attribute-value-add',
kwargs={'attribute_pk': color_attribute.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_attribute_edit(
staff_client, staff_user, permission_manage_products, color_attribute):
value = color_attribute.values.first()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:attribute-value-update',
kwargs={
'attribute_pk': color_attribute.pk,
'value_pk': value.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_access_attribute_delete(
staff_client, staff_user, permission_manage_products, color_attribute):
value = color_attribute.values.first()
assert not staff_user.has_perm('skill.manage_products')
url = reverse(
'dashboard:attribute-value-delete',
kwargs={
'attribute_pk': color_attribute.pk,
'value_pk': value.pk})
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_products)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('skill.manage_products')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_with_permissions_can_view_delivery_methods_and_details(
staff_client, staff_user, permission_manage_delivery, delivery_zone):
assert not staff_user.has_perm('delivery.manage_delivery')
response = staff_client.get(reverse('dashboard:delivery-zone-list'))
assert response.status_code == 302
response = staff_client.get(
reverse(
'dashboard:delivery-zone-details', args=[delivery_zone.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_delivery)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('delivery.manage_delivery')
response = staff_client.get(reverse('dashboard:delivery-zone-list'))
assert response.status_code == 200
response = staff_client.get(
reverse(
'dashboard:delivery-zone-details', args=[delivery_zone.pk]))
assert response.status_code == 200
def test_staff_with_permissions_can_update_add_and_delete_delivery_zone(
staff_client, staff_user, permission_manage_delivery, delivery_zone):
assert not staff_user.has_perm('delivery.manage_delivery')
response = staff_client.get(
reverse('dashboard:delivery-zone-update', args=[delivery_zone.pk]))
assert response.status_code == 302
response = staff_client.get(
reverse('dashboard:delivery-zone-delete', args=[delivery_zone.pk]))
assert response.status_code == 302
response = staff_client.get(reverse('dashboard:delivery-zone-add'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_delivery)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('delivery.manage_delivery')
response = staff_client.get(
reverse('dashboard:delivery-zone-update', args=[delivery_zone.pk]))
assert response.status_code == 200
response = staff_client.get(
reverse('dashboard:delivery-zone-delete', args=[delivery_zone.pk]))
assert response.status_code == 200
response = staff_client.get(reverse('dashboard:delivery-zone-add'))
assert response.status_code == 200
def test_staff_with_permissions_can_edit_customer(
staff_client, customer_user, staff_user, permission_manage_users):
assert customer_user.email == 'test@example.com'
response = staff_client.get(
reverse('dashboard:customer-update', args=[customer_user.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_users)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('account.manage_users')
response = staff_client.get(
reverse('dashboard:customer-update', args=[customer_user.pk]))
assert response.status_code == 200
url = reverse('dashboard:customer-update', args=[customer_user.pk])
data = {'email': 'newemail@example.com', 'is_active': True}
staff_client.post(url, data)
customer_user = User.objects.get(pk=customer_user.pk)
assert customer_user.email == 'newemail@example.com'
assert customer_user.is_active
def test_staff_with_permissions_can_add_customer(
staff_client, staff_user, permission_manage_users):
response = staff_client.get(reverse('dashboard:customer-create'))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_users)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('account.manage_users')
response = staff_client.get(reverse('dashboard:customer-create'))
assert response.status_code == 200
url = reverse('dashboard:customer-create')
data = {'email': 'newcustomer@example.com', 'is_active': True}
staff_client.post(url, data)
customer = User.objects.get(email='newcustomer@example.com')
assert customer.is_active
def test_staff_can_view_and_edit_site_settings(
staff_client, staff_user, site_settings, permission_manage_settings):
assert not staff_user.has_perm('site.manage_settings')
response = staff_client.get(
reverse('dashboard:site-update', args=[site_settings.pk]))
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_settings)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('site.manage_settings')
response = staff_client.get(
reverse('dashboard:site-update', args=[site_settings.pk]))
assert response.status_code == 200
def test_staff_can_view_and_edit_taxes_settings(
staff_client, staff_user, site_settings, permission_manage_settings):
assert not staff_user.has_perm('site.manage_settings')
url = reverse('dashboard:configure-taxes')
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_settings)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('site.manage_settings')
response = staff_client.get(url)
assert response.status_code == 200
def test_staff_can_view_menus_and_details(
staff_client, staff_user, permission_manage_menus, menu_item):
menu_list_url = reverse('dashboard:menu-list')
menu_details_url = reverse(
'dashboard:menu-details', args=[menu_item.menu.pk])
menu_item_details_url = reverse(
'dashboard:menu-item-details', args=[menu_item.menu.pk, menu_item.pk])
assert not staff_user.has_perm('menu.manage_menus')
response = staff_client.get(menu_list_url)
assert response.status_code == 302
response = staff_client.get(menu_details_url)
assert response.status_code == 302
response = staff_client.get(menu_item_details_url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_menus)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('menu.manage_menus')
response = staff_client.get(menu_list_url)
assert response.status_code == 200
response = staff_client.get(menu_details_url)
assert response.status_code == 200
response = staff_client.get(menu_item_details_url)
assert response.status_code == 200
def test_staff_can_manage_menus(
staff_client, staff_user, permission_manage_menus, menu_item):
menu_add_url = reverse('dashboard:menu-add')
menu_edit_url = reverse('dashboard:menu-edit', args=[menu_item.menu.pk])
menu_delete_url = reverse(
'dashboard:menu-delete', args=[menu_item.menu.pk])
assert not staff_user.has_perm('menu.manage_menus')
response = staff_client.get(menu_add_url)
assert response.status_code == 302
response = staff_client.get(menu_edit_url)
assert response.status_code == 302
response = staff_client.get(menu_delete_url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_menus)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('menu.manage_menus')
response = staff_client.get(menu_add_url)
assert response.status_code == 200
response = staff_client.get(menu_edit_url)
assert response.status_code == 200
response = staff_client.get(menu_delete_url)
assert response.status_code == 200
def test_staff_can_manage_menu_items(
staff_client, staff_user, permission_manage_menus, menu_item):
menu_item_add_url = reverse(
'dashboard:menu-item-add', args=[menu_item.menu.pk, menu_item.pk])
menu_item_edit_url = reverse(
'dashboard:menu-item-edit', args=[menu_item.menu.pk, menu_item.pk])
menu_item_delete_url = reverse(
'dashboard:menu-item-delete', args=[menu_item.menu.pk, menu_item.pk])
assert not staff_user.has_perm('menu.manage_menus')
response = staff_client.get(menu_item_add_url)
assert response.status_code == 302
response = staff_client.get(menu_item_edit_url)
assert response.status_code == 302
response = staff_client.get(menu_item_delete_url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_menus)
staff_user = User.objects.get(pk=staff_user.pk)
assert staff_user.has_perm('menu.manage_menus')
response = staff_client.get(menu_item_add_url)
assert response.status_code == 200
response = staff_client.get(menu_item_edit_url)
assert response.status_code == 200
response = staff_client.get(menu_item_delete_url)
assert response.status_code == 200
def test_staff_can_remove_user(staff_client, staff_user, permission_manage_users):
url = reverse('dashboard:customer-delete', args=[staff_user.pk])
response = staff_client.get(url)
assert response.status_code == 302
staff_user.user_permissions.add(permission_manage_users)
staff_user = User.objects.get(pk=staff_user.pk)
response = staff_client.get(url)
assert response.status_code == 200
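# Editor's note: every test above repeats one pattern -- the staff request is
# redirected (302) while the permission is missing, the permission is granted,
# the user is re-fetched, and the same URL then returns 200. A hedged sketch of
# a helper that could fold that boilerplate away (a sketch only; the client,
# permission, and refresh arguments mirror the fixtures above but are
# assumptions, not an existing API):
def assert_permission_gated(client, user, permission, url, refresh=lambda u: u):
    # Without the permission the dashboard view should redirect to login.
    response = client.get(url)
    assert response.status_code == 302
    user.user_permissions.add(permission)
    user = refresh(user)  # re-fetch so the permission cache is rebuilt
    # With the permission granted the same URL should render normally.
    response = client.get(url)
    assert response.status_code == 200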
# Source: kozakusek/ipp-2020-testy, z2/part2/interactive/jm/random_fuzzy_arrows_1/584648757.py (MIT)
from part1 import (
gamma_board,
gamma_busy_fields,
gamma_delete,
gamma_free_fields,
gamma_golden_move,
gamma_golden_possible,
gamma_move,
gamma_new,
)
"""
scenario: test_random_actions
uuid: 584648757
"""
"""
random actions, total chaos
"""
board = gamma_new(6, 8, 3, 17)
assert board is not None
assert gamma_move(board, 1, 1, 0) == 1
assert gamma_busy_fields(board, 1) == 1
assert gamma_move(board, 2, 5, 2) == 1
assert gamma_move(board, 2, 0, 3) == 1
assert gamma_busy_fields(board, 2) == 2
assert gamma_move(board, 3, 0, 4) == 1
assert gamma_move(board, 3, 2, 5) == 1
assert gamma_move(board, 1, 7, 0) == 0
assert gamma_busy_fields(board, 1) == 1
assert gamma_free_fields(board, 1) == 43
assert gamma_move(board, 2, 1, 5) == 1
assert gamma_golden_possible(board, 2) == 1
assert gamma_move(board, 3, 2, 5) == 0
assert gamma_move(board, 1, 5, 1) == 1
assert gamma_move(board, 1, 2, 7) == 1
assert gamma_move(board, 2, 0, 2) == 1
assert gamma_move(board, 3, 2, 4) == 1
assert gamma_move(board, 2, 1, 4) == 1
assert gamma_move(board, 3, 4, 6) == 1
assert gamma_move(board, 3, 4, 2) == 1
assert gamma_move(board, 1, 3, 2) == 1
assert gamma_move(board, 2, 5, 5) == 1
assert gamma_move(board, 2, 5, 1) == 0
assert gamma_move(board, 3, 1, 6) == 1
assert gamma_move(board, 3, 2, 0) == 1
assert gamma_move(board, 1, 1, 3) == 1
assert gamma_move(board, 1, 0, 2) == 0
assert gamma_free_fields(board, 1) == 30
assert gamma_move(board, 2, 6, 2) == 0
assert gamma_move(board, 3, 6, 0) == 0
assert gamma_move(board, 1, 4, 4) == 1
assert gamma_golden_possible(board, 1) == 1
assert gamma_move(board, 2, 6, 2) == 0
assert gamma_move(board, 3, 5, 4) == 1
assert gamma_move(board, 3, 1, 3) == 0
assert gamma_free_fields(board, 3) == 28
assert gamma_move(board, 1, 4, 3) == 1
assert gamma_move(board, 1, 1, 5) == 0
assert gamma_move(board, 2, 1, 3) == 0
assert gamma_move(board, 3, 1, 3) == 0
assert gamma_free_fields(board, 3) == 27
assert gamma_move(board, 1, 1, 3) == 0
assert gamma_move(board, 2, 0, 2) == 0
assert gamma_busy_fields(board, 2) == 6
assert gamma_golden_possible(board, 2) == 1
assert gamma_move(board, 3, 3, 1) == 1
assert gamma_move(board, 3, 2, 4) == 0
assert gamma_busy_fields(board, 3) == 9
assert gamma_golden_possible(board, 3) == 1
assert gamma_move(board, 1, 4, 5) == 1
assert gamma_busy_fields(board, 1) == 8
assert gamma_move(board, 2, 6, 2) == 0
assert gamma_move(board, 2, 1, 7) == 1
assert gamma_move(board, 3, 4, 0) == 1
assert gamma_move(board, 2, 6, 3) == 0
assert gamma_move(board, 2, 3, 2) == 0
assert gamma_move(board, 3, 6, 2) == 0
assert gamma_move(board, 3, 3, 2) == 0
assert gamma_free_fields(board, 3) == 23
assert gamma_move(board, 1, 1, 2) == 1
assert gamma_move(board, 2, 2, 5) == 0
assert gamma_move(board, 2, 0, 6) == 1
assert gamma_move(board, 1, 3, 2) == 0
assert gamma_free_fields(board, 1) == 21
assert gamma_move(board, 2, 2, 2) == 1
assert gamma_move(board, 3, 6, 5) == 0
assert gamma_move(board, 3, 5, 3) == 1
assert gamma_golden_possible(board, 3) == 1
assert gamma_move(board, 1, 7, 0) == 0
assert gamma_move(board, 2, 2, 6) == 1
assert gamma_move(board, 3, 7, 0) == 0
assert gamma_move(board, 3, 4, 1) == 1
assert gamma_golden_possible(board, 3) == 1
assert gamma_golden_move(board, 3, 7, 2) == 0
assert gamma_move(board, 1, 6, 3) == 0
assert gamma_move(board, 2, 5, 0) == 1
assert gamma_move(board, 2, 0, 7) == 1
assert gamma_free_fields(board, 2) == 15
assert gamma_move(board, 3, 5, 3) == 0
assert gamma_free_fields(board, 3) == 15
assert gamma_golden_possible(board, 3) == 1
assert gamma_move(board, 1, 0, 5) == 1
assert gamma_move(board, 1, 0, 5) == 0
assert gamma_move(board, 2, 1, 2) == 0
assert gamma_move(board, 2, 4, 0) == 0
assert gamma_move(board, 3, 7, 4) == 0
assert gamma_move(board, 1, 5, 3) == 0
assert gamma_move(board, 2, 7, 5) == 0
assert gamma_busy_fields(board, 2) == 12
assert gamma_move(board, 3, 0, 3) == 0
assert gamma_move(board, 1, 3, 4) == 1
assert gamma_move(board, 1, 1, 0) == 0
assert gamma_busy_fields(board, 1) == 11
assert gamma_move(board, 2, 0, 3) == 0
assert gamma_busy_fields(board, 2) == 12
assert gamma_free_fields(board, 2) == 13
assert gamma_move(board, 3, 1, 7) == 0
assert gamma_move(board, 3, 5, 5) == 0
assert gamma_busy_fields(board, 3) == 12
assert gamma_move(board, 1, 2, 5) == 0
assert gamma_move(board, 3, 7, 5) == 0
assert gamma_move(board, 3, 5, 7) == 1
assert gamma_move(board, 1, 5, 3) == 0
assert gamma_move(board, 2, 2, 5) == 0
assert gamma_move(board, 2, 1, 0) == 0
assert gamma_busy_fields(board, 2) == 12
assert gamma_move(board, 3, 0, 3) == 0
assert gamma_move(board, 3, 2, 4) == 0
assert gamma_move(board, 1, 7, 4) == 0
assert gamma_move(board, 1, 2, 3) == 1
assert gamma_move(board, 2, 7, 3) == 0
assert gamma_move(board, 2, 5, 6) == 1
assert gamma_move(board, 3, 1, 2) == 0
assert gamma_move(board, 1, 7, 4) == 0
assert gamma_move(board, 1, 2, 4) == 0
assert gamma_move(board, 2, 0, 3) == 0
assert gamma_move(board, 3, 4, 0) == 0
assert gamma_move(board, 3, 2, 3) == 0
assert gamma_golden_possible(board, 3) == 1
assert gamma_golden_possible(board, 1) == 1
assert gamma_move(board, 2, 3, 3) == 1
assert gamma_move(board, 2, 1, 7) == 0
assert gamma_golden_possible(board, 2) == 1
assert gamma_move(board, 3, 0, 3) == 0
assert gamma_golden_possible(board, 3) == 1
assert gamma_move(board, 1, 1, 0) == 0
assert gamma_move(board, 1, 4, 1) == 0
assert gamma_free_fields(board, 1) == 9
assert gamma_move(board, 2, 1, 1) == 1
assert gamma_move(board, 2, 3, 1) == 0
board235185295 = gamma_board(board)
assert board235185295 is not None
assert board235185295 == ("221..3\n"
"232.32\n"
"123.12\n"
"323113\n"
"211213\n"
"212132\n"
".2.331\n"
".13.32\n")
del board235185295
board235185295 = None
assert gamma_move(board, 1, 6, 3) == 0
assert gamma_move(board, 1, 4, 2) == 0
assert gamma_move(board, 2, 4, 2) == 0
assert gamma_move(board, 2, 3, 3) == 0
assert gamma_move(board, 3, 6, 3) == 0
assert gamma_move(board, 1, 7, 4) == 0
assert gamma_move(board, 1, 5, 0) == 0
assert gamma_move(board, 2, 0, 6) == 0
assert gamma_move(board, 2, 5, 6) == 0
assert gamma_move(board, 3, 5, 7) == 0
assert gamma_golden_possible(board, 3) == 1
assert gamma_move(board, 1, 1, 2) == 0
assert gamma_move(board, 1, 3, 7) == 1
assert gamma_move(board, 2, 2, 4) == 0
assert gamma_move(board, 3, 0, 0) == 1
assert gamma_move(board, 1, 0, 3) == 0
assert gamma_move(board, 1, 3, 3) == 0
assert gamma_move(board, 2, 1, 2) == 0
assert gamma_move(board, 3, 1, 2) == 0
assert gamma_move(board, 3, 1, 3) == 0
assert gamma_move(board, 1, 5, 3) == 0
board112906211 = gamma_board(board)
assert board112906211 is not None
assert board112906211 == ("2211.3\n"
"232.32\n"
"123.12\n"
"323113\n"
"211213\n"
"212132\n"
".2.331\n"
"313.32\n")
del board112906211
board112906211 = None
assert gamma_move(board, 2, 1, 2) == 0
assert gamma_move(board, 3, 7, 4) == 0
assert gamma_move(board, 1, 1, 0) == 0
assert gamma_free_fields(board, 1) == 6
assert gamma_move(board, 2, 7, 4) == 0
assert gamma_golden_move(board, 2, 7, 2) == 0
assert gamma_move(board, 3, 5, 3) == 0
assert gamma_move(board, 1, 6, 3) == 0
assert gamma_move(board, 1, 2, 5) == 0
assert gamma_busy_fields(board, 1) == 13
assert gamma_move(board, 2, 2, 0) == 0
assert gamma_free_fields(board, 2) == 6
assert gamma_golden_possible(board, 2) == 1
assert gamma_move(board, 3, 0, 7) == 0
assert gamma_move(board, 1, 6, 3) == 0
assert gamma_move(board, 1, 3, 3) == 0
assert gamma_move(board, 2, 5, 3) == 0
assert gamma_busy_fields(board, 2) == 15
assert gamma_move(board, 3, 2, 0) == 0
gamma_delete(board)
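# Editor's note: scripts like the trace above are auto-generated fuzz tests; the
# property they lean on is that gamma_move returns 1 exactly when a move is
# applied, so gamma_busy_fields must equal the count of accepted moves per
# player. A hedged, self-contained sketch of that bookkeeping (an offline
# trace-checking aid, not part of the real gamma engine imported from part1):
def replay_busy_counts(trace):
    """trace: iterable of (player, accepted) pairs taken from gamma_move calls."""
    busy = {}
    for player, accepted in trace:
        if accepted:
            busy[player] = busy.get(player, 0) + 1
    return busy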
# Source: timgates42/flask-pundit, tests/test_usage.py (MIT)
from flask import g, Flask
from flask_pundit import (
FlaskPundit,
verify_authorized,
verify_policy_scoped,
verify_authorized_or_policy_scoped
)
from .models.post import Post
from .models.comment import Comment
from nose.tools import (
assert_raises,
ok_,
eq_)
from unittest import TestCase
from werkzeug.exceptions import BadRequest
import json
class TestUsage(TestCase):
def setUp(self):
self.app = Flask('test')
self.app.debug = True
self.pundit = FlaskPundit(policies_path='tests.policies')
self.pundit.init_app(self.app)
self.client = self.app.test_client()
def test_authorize_with_record_for_admin(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post)
@self.app.route('/test_authorize_admin_get')
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
is_authorized = do_authorize_stuff()
ok_(self.pundit._verify_authorized())
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_get')
eq_(resp.status_code, 200)
def test_authorize_with_record_for_admin_with_params(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post, thing_id=1)
@self.app.route('/test_authorize_admin_get')
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
is_authorized = do_authorize_stuff()
ok_(self.pundit._verify_authorized())
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_get')
eq_(resp.status_code, 403)
def test_authorize_with_record_for_staff(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post)
@self.app.route('/test_authorize_staff_get')
def admin_get_post():
g.user = {'id': 2, 'role': 'staff'}
is_authorized = do_authorize_stuff()
ok_(self.pundit._verify_authorized())
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_staff_get')
eq_(resp.status_code, 403)
def test_authorize_with_model_for_admin(self):
def do_authorize_stuff():
return self.pundit.authorize(Post, action='create')
@self.app.route('/test_authorize_admin_create')
def admin_create_post():
g.user = {'id': 1, 'role': 'admin'}
is_authorized = do_authorize_stuff()
ok_(self.pundit._verify_authorized())
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_create')
eq_(resp.status_code, 200)
def test_authorize_with_policy_class_specified_for_admin(self):
def do_authorize_stuff():
return self.pundit.authorize(Comment)
@self.app.route('/test_authorize_admin_get_comment')
def admin_get_comment():
g.user = {'id': 1, 'role': 'admin'}
is_authorized = do_authorize_stuff()
ok_(self.pundit._verify_authorized())
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_get_comment')
eq_(resp.status_code, 200)
def test_verify_authorized_decorator_success(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post)
@self.app.route('/test_authorize_admin_get')
@verify_authorized
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
is_authorized = do_authorize_stuff()
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_get')
eq_(resp.status_code, 200)
def test_verify_authorized_decorator_raises_exception(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post)
@self.app.route('/test_authorize_admin_get')
@verify_authorized
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
return 'Success', 200
assert_raises(RuntimeError, self.client.get,
'/test_authorize_admin_get')
def test_verify_authorized_decorator_ignores_raised_exception(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post)
@self.app.route('/test_authorize_admin_get')
@verify_authorized
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
raise BadRequest()
is_authorized = do_authorize_stuff()
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_get')
eq_(resp.status_code, 400)
def test_policy_scoped_admin(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Post)
@self.app.route('/test_policy_scope_admin')
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
scoped_posts = do_policy_scope_stuff()
ok_(self.pundit._verify_policy_scoped())
return json.dumps({'posts': scoped_posts})
resp = self.client.get('/test_policy_scope_admin')
eq_(resp.data.decode(), '{"posts": [1, 2]}')
def test_policy_scoped_staff(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Post)
@self.app.route('/test_policy_scope_staff')
def admin_get_post():
g.user = {'id': 2, 'role': 'staff'}
scoped_posts = do_policy_scope_stuff()
ok_(self.pundit._verify_policy_scoped())
return json.dumps({'posts': scoped_posts})
resp = self.client.get('/test_policy_scope_staff')
eq_(resp.data.decode(), '{"posts": [3, 4]}')
def test_policy_scope_with_policy_class_specified_for_admin(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Comment)
@self.app.route('/test_policy_scope_admin_get_comments')
def admin_get_comments():
g.user = {'id': 1, 'role': 'admin'}
scoped_comments = do_policy_scope_stuff()
ok_(self.pundit._verify_policy_scoped())
return json.dumps({'comments': scoped_comments})
resp = self.client.get('/test_policy_scope_admin_get_comments')
eq_(resp.data.decode(), '{"comments": ["Hello"]}')
def test_verify_policy_scoped_decorator_success(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Post)
@self.app.route('/test_policy_scope_admin')
@verify_policy_scoped
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
scoped_posts = do_policy_scope_stuff()
return json.dumps({'posts': scoped_posts})
resp = self.client.get('/test_policy_scope_admin')
eq_(resp.data.decode(), '{"posts": [1, 2]}')
def test_verify_policy_scoped_decorator_raises_exception(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Post)
@self.app.route('/test_policy_scope_admin')
@verify_policy_scoped
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
return json.dumps({'posts': []})
assert_raises(RuntimeError, self.client.get,
'/test_policy_scope_admin')
def test_verify_either_decorator_success_with_authorize(self):
def do_authorize_stuff():
post = Post(1)
return self.pundit.authorize(post)
@self.app.route('/test_authorize_admin_get')
@verify_authorized_or_policy_scoped
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
raise BadRequest()
is_authorized = do_authorize_stuff()
if is_authorized:
return 'Success', 200
else:
return 'Forbidden', 403
resp = self.client.get('/test_authorize_admin_get')
eq_(resp.status_code, 400)
def test_verify_either_decorator_success_with_scope(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Post)
@self.app.route('/test_policy_scope_admin')
@verify_authorized_or_policy_scoped
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
scoped_posts = do_policy_scope_stuff()
return json.dumps({'posts': scoped_posts})
resp = self.client.get('/test_policy_scope_admin')
eq_(resp.data.decode(), '{"posts": [1, 2]}')
def test_verify_either_decorator_raises_exception(self):
def do_policy_scope_stuff():
return self.pundit.policy_scope(Post)
@self.app.route('/test_policy_scope_admin')
@verify_authorized_or_policy_scoped
def admin_get_post():
g.user = {'id': 1, 'role': 'admin'}
return json.dumps({'posts': []})
assert_raises(RuntimeError, self.client.get,
'/test_policy_scope_admin')
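# Editor's note: the verify_* decorators exercised above raise RuntimeError when
# a view returns without having called authorize() or policy_scope(). A hedged,
# framework-free sketch of that pattern (illustrative only -- this is not
# flask_pundit's implementation or API):
def verify_called(marker):
    """Fail the wrapped view unless it sets marker['called'] to True."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            marker['called'] = False
            result = fn(*args, **kwargs)
            if not marker.get('called'):
                raise RuntimeError('view returned without authorizing')
            return result
        return wrapper
    return decorator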
# Source: oscarknagg/olympic-pytorch, olympic/layers.py (MIT)
from torch import nn
import torch.nn.functional as F
class Flatten(nn.Module):
"""Module that flattens N-dimensional Tensor of shape [batch_size, d1, d2, ..., dn]
to 2-dimensional Tensor of shape [batch_size, d1*d2*...*dn].
"""
def forward(self, input):
"""Converts N-dimensional Tensor of shape [batch_size, d1, d2, ..., dn] to 2-dimensional Tensor
of shape [batch_size, d1*d2*...*dn].
# Arguments
input: Input tensor
# Returns
output:
"""
return input.view(input.size(0), -1)
class GlobalMaxPool1d(nn.Module):
"""Performs global max pooling over the entire length of a batched 1D tensor
# Arguments
input: Input tensor
"""
def forward(self, input):
return F.max_pool1d(input, kernel_size=input.size()[2:]).view(-1, input.size(1))
class GlobalAvgPool1d(nn.Module):
"""Performs global average pooling over the entire length of a batched 1D tensor
# Arguments
input: Input tensor
"""
    def forward(self, input):
return F.avg_pool1d(input, kernel_size=input.size()[2:]).view(-1, input.size(1))
class GlobalMaxPool2d(nn.Module):
"""Performs global max pooling over the entire height and width of a batched 2D tensor
# Arguments
input: Input tensor
"""
def forward(self, input):
return F.max_pool2d(input, kernel_size=input.size()[2:]).view(-1, input.size(1))
class GlobalAvgPool2d(nn.Module):
"""Performs global average pooling over the entire height and width of a batched 2D tensor
# Arguments
input: Input tensor
"""
def forward(self, input):
return F.avg_pool2d(input, kernel_size=input.size()[2:]).view(-1, input.size(1))
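The torch modules above only reshape or reduce; this is a minimal NumPy sketch of the same shape arithmetic (NumPy stands in for torch here, purely for illustration):

```python
import numpy as np

# Flatten maps [batch, d1, ..., dn] -> [batch, d1*...*dn]; the global
# pooling modules collapse every spatial axis, leaving [batch, channels].
x = np.arange(2 * 3 * 4 * 4, dtype=float).reshape(2, 3, 4, 4)

flat = x.reshape(x.shape[0], -1)   # like Flatten.forward
gmp = x.max(axis=(2, 3))           # like GlobalMaxPool2d
gap = x.mean(axis=(2, 3))          # like GlobalAvgPool2d

print(flat.shape, gmp.shape, gap.shape)   # (2, 48) (2, 3) (2, 3)
```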
# dice_roll.py (SimeonRaykov/Algorithms, MIT)
import random
def dice_roll():
return random.randint(1,6)
print(dice_roll())
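A quick reproducibility sketch for dice_roll (the seed value is arbitrary; the definition is repeated so the snippet is self-contained):

```python
import random

def dice_roll():
    # As above: a fair six-sided die.
    return random.randint(1, 6)

# Seeding makes a run reproducible; every roll lies in 1..6.
random.seed(42)
rolls = [dice_roll() for _ in range(5)]
assert all(1 <= r <= 6 for r in rolls)
print(rolls)
```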
# tabtosql/__main__.py (levikanwischer/tabtosql, MIT)
# -*- coding: utf-8 -*-
"""
Tableau Workbook SQL Extract Tool
tabtosql is a command line tool for parsing SQL queries & related
information out of Tableau workbooks (.twb & .twbx files). It works by
taking a Tableau workbook, parsing the XML, and formatting information
about worksheets, connections to those worksheets, their connection (db)
details, and the corresponding custom SQL (assuming it exists) in a
valid SQL & human-readable format.
USAGE:
$ tabtosql input.twb(x) > output.sql
See the README for further details.
"""
import sys
import click
import tabtosql
@click.command()
@click.argument('filename')
def cli(filename):
"""Tableau Workbook SQL Extract Tool
    tabtosql is a command line tool for parsing SQL queries & related
    information out of Tableau workbooks (.twb & .twbx files). It works by
    taking a Tableau workbook, parsing the XML, and formatting information
    about worksheets, connections to those worksheets, their connection (db)
    details, and the corresponding custom SQL (assuming it exists) in a
    valid SQL & human-readable format.
USAGE:
$ tabtosql input.twb(x) > output.sql
See the README for further details.
"""
sys.stdout.write(tabtosql.convert(filename))
# src-v0.7/cheap_image.py (aeb/CheapImagingApp, MIT)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mi
# import scipy.interpolate as si
import matplotlib.tri as tri
import copy
from os import path
from kivy.uix.floatlayout import FloatLayout
from kivy.graphics import Ellipse, Color, Rectangle, Line, Point
from kivy.metrics import dp, sp
from kivy.uix.label import Label
from kivy.clock import Clock
from kivy.core.window import Window
from mpl_texture import InteractiveWorldMapOverlayWidget, InteractivePlotWidget, InteractiveWorldMapWidget
# from mpl_texture import InteractivePlotWidget, InteractiveWorldMapWidget
import hashlib
#from PIL import Image
# Data dictionary: datadict has form {'u':u, 'v':v, 'V':V}
# Station dictionary: statdict has form {<station code>:{'on':<True/False>,'name':<name>,'loc':(x,y,z)}}
__mydebug__ = True
#########
# Read in visibility data to build the data dictionary
def read_data(v_file_name) :
# Read in Themis-style data, which is simple and compact
data = np.loadtxt(v_file_name,usecols=[5,6,7,8,9,10,3])
baselines = np.loadtxt(v_file_name,usecols=[4],dtype=str)
s1 = np.array([x[:2] for x in baselines])
s2 = np.array([x[2:] for x in baselines])
u = data[:,0]/1e3
v = data[:,1]/1e3
V = data[:,2] + 1.0j*data[:,4]
err = data[:,3] + 1.0j*data[:,5]
t = data[:,6]
# Make conjugate points
u = np.append(u,-u)
v = np.append(v,-v)
V = np.append(V,np.conj(V))
err = np.append(err,err)
t = np.append(t,t)
s1d = np.append(s1,s2)
s2d = np.append(s2,s1)
return {'u':u,'v':v,'V':V,'s1':s1d,'s2':s2d,'t':t,'err':err}
def read_array(array_file_name,existing_station_list=None) :
if (existing_station_list is None) :
existing_station_list = ['PV','AZ','SM','LM','AA','AP','SP','JC','GL','PB','KP','HA']
stations = np.loadtxt(array_file_name,usecols=[0],dtype=str)
locs = np.loadtxt(array_file_name,usecols=[1,2,3])
statdict = {}
for j in range(len(stations)) :
if (stations[j] in existing_station_list) :
statdict[stations[j]] = {'on':True,'loc':locs[j],'name':stations[j], 'exists':True, 'diameter':None}
else :
statdict[stations[j]] = {'on':True,'loc':locs[j],'name':stations[j], 'exists':False, 'diameter':6}
return statdict
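read_data mirrors every sample through the (u,v) origin so the visibilities are Hermitian; a minimal sketch of that doubling with made-up numbers:

```python
import numpy as np

# Every visibility V(u,v) implies V(-u,-v) = conj(V), so each baseline
# sample is mirrored through the origin of the (u,v) plane.
u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
V = np.array([1 + 2j, 3 - 1j])

u2 = np.append(u, -u)
v2 = np.append(v, -v)
V2 = np.append(V, np.conj(V))

assert len(u2) == 2 * len(u)
assert np.allclose(V2[2:], np.conj(V2[:2]))
```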
class BaselinePlots :
def __init__(self) :
self.ddict = None
self.ddnew = None
def plot_baselines(self,axs,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,make_hermitian=False,limits=None) :
# Set this to current axes for convenience, might be a minor performance hit?
plt.sca(axs)
# Exclude stations not in array
# stations = list(np.unique(np.array(list(statdict.keys()))))
stations = list(statdict.keys())
keep = np.array([ (datadict['s1'][j] in stations) and (datadict['s2'][j] in stations) for j in range(len(datadict['s1'])) ])
ddtmp = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddtmp[key] = datadict[key][keep]
self.ddict = ddtmp
# Exclude stations that are "off"
if (len(self.ddict['u'])>0) :
keep = np.array([ statdict[ddtmp['s1'][j]]['on'] and statdict[ddtmp['s2'][j]]['on'] for j in range(len(ddtmp['s1'])) ])
self.ddnew = {}
for key in ['u','v','V','s1','s2','t','err'] :
self.ddnew[key] = ddtmp[key][keep]
else :
self.ddnew = copy.deepcopy(self.ddict)
# Exclude data points outside the specified time range
if (len(self.ddnew['u'])>0) :
if (not time_range is None) :
keep = (self.ddnew['t']>=time_range[0])*(self.ddnew['t']<time_range[1])
for key in ['u','v','V','s1','s2','t','err'] :
self.ddnew[key] = self.ddnew[key][keep]
# Cut points with S/N less than the specified minimum value
if (not snr_cut is None) and (snr_cut>0):
if (len(self.ddnew['u'])>0) :
# Get a list of error adjustments based on stations
diameter_correction_factor = {}
for s in stations :
if (statdict[s]['exists']) :
diameter_correction_factor[s] = 1.0
else :
diameter_correction_factor[s] = statdict[s]['diameter']/ngeht_diameter
keep = np.array([ np.abs(self.ddnew['V'][j])/(self.ddnew['err'][j].real * diameter_correction_factor[self.ddnew['s1'][j]] * diameter_correction_factor[self.ddnew['s2'][j]]) > snr_cut for j in range(len(self.ddnew['s1'])) ])
for key in ['u','v','V','s1','s2','t','err'] :
self.ddnew[key] = self.ddnew[key][keep]
        # Double up data to make V Hermitian
if (make_hermitian) :
self.ddnew['u'] = np.append(self.ddnew['u'],-self.ddnew['u'])
self.ddnew['v'] = np.append(self.ddnew['v'],-self.ddnew['v'])
self.ddnew['V'] = np.append(self.ddnew['V'],np.conj(self.ddnew['V']))
plt.plot(self.ddict['u'],self.ddict['v'],'.',color=[0.14,0.14,0.14])
plt.plot(self.ddnew['u'],self.ddnew['v'],'.',color='cornflowerblue')
uvmax = np.max(np.sqrt( self.ddict['u']**2 + self.ddict['v']**2 ))
plt.gca().spines['left'].set_position('zero')
plt.gca().spines['left'].set_color('w')
plt.gca().spines['bottom'].set_position('zero')
plt.gca().spines['bottom'].set_color('w')
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().xaxis.set_tick_params(bottom='on',top='off',direction='inout',colors='w')
plt.gca().yaxis.set_tick_params(left='on',right='off',direction='inout',colors='w')
if (limits is None) :
limits = [1.1*uvmax,-1.1*uvmax,-1.1*uvmax,1.1*uvmax]
plt.xlim(limits[:2])
plt.ylim(limits[2:])
plt.grid(True,alpha=0.25,linewidth=2)
xc = 0.5*(limits[0]+limits[1])
dx = 0.5*(limits[1]-limits[0])
yc = 0.5*(limits[2]+limits[3])
dy = 0.5*(limits[3]-limits[2])
        plt.text( xc+0.85*dx, 0.05*dy, r'u (G$\lambda$)',color='w',ha='center',fontsize=14)
        plt.text( 0.05*dx, yc+0.85*dy, r'v (G$\lambda$)',color='w',va='center',fontsize=14)
plt.gca().set_facecolor('k')
def replot(self,axs,limits=None) :
plt.sca(axs)
plt.plot(self.ddict['u'],self.ddict['v'],'.',color=[0.14,0.14,0.14])
plt.plot(self.ddnew['u'],self.ddnew['v'],'.',color='cornflowerblue')
uvmax = np.max(np.sqrt( self.ddict['u']**2 + self.ddict['v']**2 ))
plt.gca().spines['left'].set_position('zero')
plt.gca().spines['left'].set_color('w')
plt.gca().spines['bottom'].set_position('zero')
plt.gca().spines['bottom'].set_color('w')
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().xaxis.set_tick_params(bottom='on',top='off',direction='inout',colors='w')
plt.gca().yaxis.set_tick_params(left='on',right='off',direction='inout',colors='w')
if (limits is None) :
limits = [1.1*uvmax,-1.1*uvmax,-1.1*uvmax,1.1*uvmax]
plt.xlim(limits[:2])
plt.ylim(limits[2:])
plt.grid(True,alpha=0.25,linewidth=2)
xc = 0.5*(limits[0]+limits[1])
dx = 0.5*(limits[1]-limits[0])
yc = 0.5*(limits[2]+limits[3])
dy = 0.5*(limits[3]-limits[2])
        plt.text( xc+0.85*dx, 0.05*dy, r'u (G$\lambda$)',color='w',ha='center',fontsize=14)
        plt.text( 0.05*dx, yc+0.85*dy, r'v (G$\lambda$)',color='w',va='center',fontsize=14)
plt.gca().set_facecolor('k')
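The S/N cut in plot_baselines keeps points where |V| over the (diameter-corrected) error exceeds snr_cut; a sketch of that mask with made-up values:

```python
import numpy as np

# Illustration only: err is inflated per point by the combined dish
# correction factor (diameter/ngeht_diameter for hypothetical stations).
V = np.array([10.0, 1.0, 5.0])
err = np.array([1.0, 1.0, 1.0])
correction = np.array([1.0, 1.0, 2.0])   # per-point combined factor
snr_cut = 3.0

keep = np.abs(V) / (err * correction) > snr_cut
assert keep.tolist() == [True, False, False]
```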
class InteractiveBaselinePlot(InteractivePlotWidget) :
def __init__(self,**kwargs) :
self.ddict = {}
self.ddnew = {}
self.sdict = {}
super().__init__(**kwargs)
def generate_mpl_plot(self,fig,ax,**kwargs) :
# This is where we insert a Matplotlib figure. Must use ax. and fig. child commands.
# You probably want, but do not require, the following in your over-lay
self.plot_baselines(ax,self.ddict,self.sdict,**kwargs)
ax.set_facecolor((0,0,0,0))
fig.set_facecolor((0,0,0,0))
def update(self,datadict,statdict,**kwargs) :
self.sdict = statdict
self.ddict = datadict
self.update_mpl(**kwargs)
def replot(self,datadict,statdict,**kwargs) :
self.sdict = statdict
self.ddict = datadict
self.update_mpl(**kwargs)
def plot_baselines(self,axs,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,make_hermitian=False,limits=None) :
if (len(statdict.keys())==0) :
return
# Exclude stations not in array
# stations = list(np.unique(np.array(list(statdict.keys()))))
stations = list(statdict.keys())
keep = np.array([ (datadict['s1'][j] in stations) and (datadict['s2'][j] in stations) for j in range(len(datadict['s1'])) ])
ddtmp = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddtmp[key] = datadict[key][keep]
self.ddict = ddtmp
# Exclude stations that are "off"
if (len(self.ddict['u'])>0) :
keep = np.array([ statdict[ddtmp['s1'][j]]['on'] and statdict[ddtmp['s2'][j]]['on'] for j in range(len(ddtmp['s1'])) ])
self.ddnew = {}
for key in ['u','v','V','s1','s2','t','err'] :
self.ddnew[key] = ddtmp[key][keep]
else :
self.ddnew = copy.deepcopy(self.ddict)
# Exclude data points outside the specified time range
if (len(self.ddnew['u'])>0) :
if (not time_range is None) :
keep = (self.ddnew['t']>=time_range[0])*(self.ddnew['t']<time_range[1])
for key in ['u','v','V','s1','s2','t','err'] :
self.ddnew[key] = self.ddnew[key][keep]
# Cut points with S/N less than the specified minimum value
if (not snr_cut is None) and (snr_cut>0):
if (len(self.ddnew['u'])>0) :
# Get a list of error adjustments based on stations
diameter_correction_factor = {}
for s in stations :
if (statdict[s]['exists']) :
diameter_correction_factor[s] = 1.0
else :
diameter_correction_factor[s] = statdict[s]['diameter']/ngeht_diameter
keep = np.array([ np.abs(self.ddnew['V'][j])/(self.ddnew['err'][j].real * diameter_correction_factor[self.ddnew['s1'][j]] * diameter_correction_factor[self.ddnew['s2'][j]]) > snr_cut for j in range(len(self.ddnew['s1'])) ])
for key in ['u','v','V','s1','s2','t','err'] :
self.ddnew[key] = self.ddnew[key][keep]
        # Double up data to make V Hermitian
if (make_hermitian) :
self.ddnew['u'] = np.append(self.ddnew['u'],-self.ddnew['u'])
self.ddnew['v'] = np.append(self.ddnew['v'],-self.ddnew['v'])
self.ddnew['V'] = np.append(self.ddnew['V'],np.conj(self.ddnew['V']))
axs.plot(self.ddict['u'],self.ddict['v'],'.',color=[0.14,0.14,0.14])
axs.plot(self.ddnew['u'],self.ddnew['v'],'.',color='cornflowerblue')
uvmax = np.max(np.sqrt( self.ddict['u']**2 + self.ddict['v']**2 ))
axs.spines['left'].set_position('zero')
axs.spines['left'].set_color('w')
axs.spines['bottom'].set_position('zero')
axs.spines['bottom'].set_color('w')
axs.spines['right'].set_visible(False)
axs.spines['top'].set_visible(False)
axs.xaxis.set_tick_params(bottom='on',top='off',direction='inout',colors='w')
axs.yaxis.set_tick_params(left='on',right='off',direction='inout',colors='w')
if (limits is None) :
limits = [1.1*uvmax,-1.1*uvmax,-1.1*uvmax,1.1*uvmax]
axs.set_xlim(limits[:2])
axs.set_ylim(limits[2:])
axs.grid(color='k',alpha=0.25,linewidth=1)
xc = 0.5*(limits[0]+limits[1])
dx = 0.5*(limits[1]-limits[0])
yc = 0.5*(limits[2]+limits[3])
dy = 0.5*(limits[3]-limits[2])
        axs.text( xc+0.25*dx, 0.05*dy, r'u (G$\lambda$)',color='w',ha='center',fontsize=14)
        axs.text( 0.05*dx, yc+0.25*dy, r'v (G$\lambda$)',color='w',va='center',fontsize=14)
###################################################################
class InteractiveBaselinePlot_kivygraph(FloatLayout) :
def __init__(self,**kwargs) :
self.ddict = {}
self.ddnew = {}
self.sdict = {}
super().__init__(**kwargs)
self.xp = np.array([])
self.yp = np.array([])
self.on = np.array([])
self.offset = [0,0]
self.N = 0
# Axis label list
self.labels = []
# [xmin,ymin,(xmax-xmin),(ymax-ymin)]
self.plot_location = [15,-15,-30,30]
self.grid_spacing = 2
self.point_size = dp(2)
# self.point_size = dp(10)
        self.rescale = 1.0
        self.plot_frozen = False
self.bind(width=self.resize)
self.bind(height=self.resize)
def update(self,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,make_hermitian=False,limits=None) :
self.sdict = statdict
self.ddict = datadict
if (len(statdict.keys())==0 or len(datadict.keys())==0) :
return
# Exclude stations not in array
# stations = list(np.unique(np.array(list(statdict.keys()))))
stations = list(statdict.keys())
keep = np.array([ (datadict['s1'][j] in stations) and (datadict['s2'][j] in stations) for j in range(len(datadict['s1'])) ])
ddtmp = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddtmp[key] = datadict[key][keep]
self.ddict = ddtmp
self.xp = self.ddict['u']
self.yp = self.ddict['v']
self.on = (self.xp>-np.inf)
# Exclude stations that are "off"
if (len(self.ddict['u'])>0) :
keep = np.array([ statdict[ddtmp['s1'][j]]['on'] and statdict[ddtmp['s2'][j]]['on'] for j in range(len(ddtmp['s1'])) ])
self.on = self.on*keep
# Exclude data points outside the specified time range
if (len(self.ddict['u'])>0) :
if (not time_range is None) :
keep = (self.ddict['t']>=time_range[0])*(self.ddict['t']<time_range[1])
self.on = self.on*keep
# Cut points with S/N less than the specified minimum value
if (not snr_cut is None) and (snr_cut>0):
if (len(self.ddict['u'])>0) :
# Get a list of error adjustments based on stations
diameter_correction_factor = {}
for s in stations :
if (statdict[s]['exists']) :
diameter_correction_factor[s] = 1.0
else :
diameter_correction_factor[s] = statdict[s]['diameter']/ngeht_diameter
keep = np.array([ np.abs(self.ddict['V'][j])/(self.ddict['err'][j].real * diameter_correction_factor[self.ddict['s1'][j]] * diameter_correction_factor[self.ddict['s2'][j]]) > snr_cut for j in range(len(self.ddict['s1'])) ])
self.on = self.on*keep
umax = max(np.max(self.ddict['u']),np.max(self.ddict['v']))
umax = ((2*umax)//5)*5
self.plot_location = [0.5*umax,-0.5*umax,-umax,umax]
self.redraw()
def replot(self,datadict,statdict,**kwargs) :
self.update(datadict,statdict,**kwargs)
if __mydebug__ :
print("InteractiveBaselinePlot_kivygraph.replot")
def x_to_screen(self,x) :
return ((x-self.plot_location[0])*self.width/self.plot_location[2])*self.rescale + self.offset[0]
def y_to_screen(self,y) :
return ((y-self.plot_location[1])*self.width/self.plot_location[3])*self.rescale + self.offset[1]
def screen_to_x(self,xpx) :
return self.plot_location[0] + self.plot_location[2]*(xpx-self.offset[0])/(self.width*self.rescale)
def screen_to_y(self,ypx) :
return self.plot_location[1] + self.plot_location[3]*(ypx-self.offset[1])/(self.width*self.rescale)
def redraw(self) :
self.redraw_points()
self.redraw_axes()
def redraw_axes(self) :
# Background grid
grid_width = 2
self.grid_spacing = 2**(np.ceil(np.log2(abs(self.screen_to_x(self.width)-self.screen_to_x(0.0))/12.0)))
unit = min(max(-9,(((np.log10(self.grid_spacing)+1)//3)*3)),3)
if (unit==-9) :
unit_lbl = ' ('+chr(955)+')'
elif (unit==-6) :
unit_lbl = ' (k'+chr(955)+')'
elif (unit==-3) :
unit_lbl = ' (M'+chr(955)+')'
elif (unit==0) :
unit_lbl = ' (G'+chr(955)+')'
elif (unit==3) :
unit_lbl = ' (T'+chr(955)+')'
unit_factor = 10**(-unit)
with self.canvas :
Color(0.5,0.5,0.5,0.25)
for xgrid in np.arange(np.sign(self.plot_location[2])*self.grid_spacing,self.screen_to_x(self.width),np.sign(self.plot_location[2])*self.grid_spacing) :
xpx = self.x_to_screen(xgrid)
points = [xpx,0,xpx,self.height]
Line(points=points,width=grid_width)
for xgrid in np.arange(-np.sign(self.plot_location[2])*self.grid_spacing,self.screen_to_x(0),-np.sign(self.plot_location[2])*self.grid_spacing) :
xpx = self.x_to_screen(xgrid)
points = [xpx,0,xpx,self.height]
Line(points=points,width=grid_width)
for ygrid in np.arange(np.sign(self.plot_location[3])*self.grid_spacing,self.screen_to_y(self.height),np.sign(self.plot_location[3])*self.grid_spacing) :
ypx = self.y_to_screen(ygrid)
points = [0,ypx,self.width,ypx]
Line(points=points,width=grid_width)
for ygrid in np.arange(-np.sign(self.plot_location[3])*self.grid_spacing,self.screen_to_y(0),-np.sign(self.plot_location[3])*self.grid_spacing) :
ypx = self.y_to_screen(ygrid)
points = [0,ypx,self.width,ypx]
Line(points=points,width=grid_width)
# Axis splines
Color(1,1,1)
xpx = self.x_to_screen(0.0)
points = [xpx,0,xpx,self.height]
if (xpx>0 and xpx<self.width) :
Line(points=points,width=2)
ypx = self.y_to_screen(0.0)
points = [0,ypx,self.width,ypx]
if (ypx>0 and ypx<self.height) :
Line(points=points,width=2)
# Axis labels
# First release all labels
for lbl in self.labels :
lbl.parent.remove_widget(lbl)
self.labels = []
label_spacing_min = sp(40)
x_label_spacing = np.ceil(abs( (label_spacing_min/self.width) * (self.screen_to_x(self.width)-self.screen_to_x(0)) )/self.grid_spacing) * self.grid_spacing
# Second tick labels
ypx = int(self.y_to_screen(0) - 0.5*self.height+0.5) - 0.75*sp(15)
if (ypx>-0.5*self.height and ypx<0.5*self.height) :
for xgrid in np.arange(np.sign(self.plot_location[2])*x_label_spacing,self.screen_to_x(self.width),np.sign(self.plot_location[2])*x_label_spacing) :
xpx = int(self.x_to_screen(xgrid) - 0.5*self.width+0.5)
lbl = Label(text='%4.2f'%(xgrid*unit_factor),pos=(xpx,ypx),font_size=sp(15))
self.labels.append(lbl)
self.add_widget(lbl)
for xgrid in np.arange(-np.sign(self.plot_location[2])*x_label_spacing,self.screen_to_x(0),-np.sign(self.plot_location[2])*x_label_spacing) :
xpx = int(self.x_to_screen(xgrid) - 0.5*self.width+0.5)
lbl = Label(text='%4.2f'%(xgrid*unit_factor),pos=(xpx,ypx),font_size=sp(15))
self.labels.append(lbl)
self.add_widget(lbl)
xpx = int(self.x_to_screen(0) - 0.5*self.width+0.5) - 1.75*sp(15)
if (xpx>-0.5*self.width and xpx<0.5*self.width) :
for ygrid in np.arange(np.sign(self.plot_location[3])*x_label_spacing,self.screen_to_y(self.height),np.sign(self.plot_location[3])*x_label_spacing) :
ypx = int(self.y_to_screen(ygrid) - 0.5*self.height+0.5)
lbl = Label(text='%4.2f'%(ygrid*unit_factor),pos=(xpx,ypx),font_size=sp(15),halign='right')
self.labels.append(lbl)
self.add_widget(lbl)
for ygrid in np.arange(-np.sign(self.plot_location[3])*x_label_spacing,self.screen_to_y(0),-np.sign(self.plot_location[3])*x_label_spacing) :
ypx = int(self.y_to_screen(ygrid) - 0.5*self.height+0.5)
lbl = Label(text='%4.2f'%(ygrid*unit_factor),pos=(xpx,ypx),font_size=sp(15),halign='right')
self.labels.append(lbl)
self.add_widget(lbl)
# Third plot axis labels
ypx = int(self.y_to_screen(0.0) - 0.5*self.height + 0.5) + 0.75*sp(20)
if (ypx>-0.5*self.height and ypx<0.5*self.height) :
xpx = int( 0.375*self.width + 0.5)
points = [xpx,0,xpx,self.height]
xlbl = Label(text='[i]u[/i]'+unit_lbl,pos=(xpx,ypx),font_size=sp(20),halign='right', markup=True)
self.labels.append(xlbl)
self.add_widget(xlbl)
xpx = int(self.x_to_screen(0.0) - 0.5*self.width + 0.5) + 1.5*sp(20)
if (xpx>-0.5*self.width and xpx<0.5*self.width) :
ypx = int( 0.375*self.height + 0.5)
points = [xpx,0,xpx,self.height]
ylbl = Label(text='[i]v[/i]'+unit_lbl,pos=(xpx,ypx),font_size=sp(20),halign='right', markup=True)
self.labels.append(ylbl)
self.add_widget(ylbl)
def redraw_points(self) :
self.canvas.clear()
with self.canvas :
xpx = self.x_to_screen(self.xp)
ypx = self.y_to_screen(self.yp)
#Color(0.14,0.14,0.14)
Color(0.5,0,0)
p=Point(pointsize=self.point_size)
for j in range(len(xpx)) :
if (self.on[j]==False) :
if (ypx[j]<self.height) :
p.add_point(xpx[j],ypx[j])
Color(1,0.75,0.25)
p=Point(pointsize=self.point_size)
for j in range(len(xpx)) :
if (self.on[j]==True) :
if (ypx[j]<self.height) :
p.add_point(xpx[j],ypx[j])
def on_touch_move(self,touch) :
if (self.plot_frozen==False) :
self.offset = (self.offset[0] + touch.dpos[0],self.offset[1] + touch.dpos[1])
self.redraw()
def resize(self,widget,newsize) :
self.rescale = 1.0
self.offset = [0,0.5*(self.height-self.width)]
self.redraw()
def on_touch_down(self,touch) :
if (touch.is_double_tap) :
self.resize(self,self.width)
def zoom_in(self) :
self.rescale = self.rescale * 1.414
self.offset = [ self.offset[0]*1.414 + 0.5*(1.0-1.414)*self.width, self.offset[1]*1.414 + 0.5*(1.0-1.414)*self.width ]
self.redraw()
def zoom_out(self) :
self.rescale = self.rescale * 0.707
self.offset = [ self.offset[0]*0.707 + 0.5*(1.0-0.707)*self.width, self.offset[1]*0.707 + 0.5*(1.0-0.707)*self.width ]
self.redraw()
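The data-to-screen transforms in the class above are affine and mutually inverse, so a round trip returns the input; a standalone sketch (the constants here are made up):

```python
# Same mapping as x_to_screen / screen_to_x above, with fixed parameters.
plot_location = [15.0, -15.0, -30.0, 30.0]   # [xmin, ymin, dx, dy]
width, rescale = 400.0, 1.0
offset = (0.0, 0.0)

def x_to_screen(x):
    return (x - plot_location[0]) * width / plot_location[2] * rescale + offset[0]

def screen_to_x(xpx):
    return plot_location[0] + plot_location[2] * (xpx - offset[0]) / (width * rescale)

x = 7.5
assert abs(screen_to_x(x_to_screen(x)) - x) < 1e-12
```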
###############################################################
class CheapImageReconstruction :
def __init__(self) :
self.argument_hash = None
        self.x = None
        self.y = None
        self.I = None
##########
# Low-level image reconstruction function
def reconstruct_image(self,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,f=2,method='cubic',make_hermitian=False) :
# Useful constant
uas2rad = np.pi/180.0/3600e6
# Exclude stations not in array
stations = list(np.unique(np.array(list(statdict.keys()))))
keep = np.array([ (datadict['s1'][j] in stations) and (datadict['s2'][j] in stations) for j in range(len(datadict['s1'])) ])
ddtmp = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddtmp[key] = datadict[key][keep]
if (len(ddtmp['u'])==0) :
return None,None,None
# Exclude stations that are "off"
keep = np.array([ statdict[ddtmp['s1'][j]]['on'] and statdict[ddtmp['s2'][j]]['on'] for j in range(len(ddtmp['s1'])) ])
ddnew = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddnew[key] = ddtmp[key][keep]
if (len(ddnew['u'])==0) :
return None,None,None
# Exclude data points outside the specified time range
if (not time_range is None) :
keep = (ddnew['t']>=time_range[0])*(ddnew['t']<time_range[1])
for key in ['u','v','V','s1','s2','t','err'] :
ddnew[key] = ddnew[key][keep]
if (len(ddnew['u'])==0) :
return None,None,None
# Cut points with S/N less than the specified minimum value
if (not snr_cut is None) and snr_cut>0:
# Get a list of error adjustments based on stations
diameter_correction_factor = {}
for s in stations :
if (statdict[s]['exists']) :
diameter_correction_factor[s] = 1.0
else :
diameter_correction_factor[s] = statdict[s]['diameter']/ngeht_diameter
keep = np.array([ np.abs(ddnew['V'][j])/(ddnew['err'][j].real * diameter_correction_factor[ddnew['s1'][j]] * diameter_correction_factor[ddnew['s2'][j]]) > snr_cut for j in range(len(ddnew['s1'])) ])
for key in ['u','v','V','s1','s2','t','err'] :
ddnew[key] = ddnew[key][keep]
if (len(ddnew['u'])==0) :
return None,None,None
        # Double up data to make V Hermitian
if (make_hermitian) :
ddnew['u'] = np.append(ddnew['u'],-ddnew['u'])
ddnew['v'] = np.append(ddnew['v'],-ddnew['v'])
ddnew['V'] = np.append(ddnew['V'],np.conj(ddnew['V']))
if (len(ddnew['u'])<=2) :
return None,None,None
# Get the region on which to compute gridded visibilities
umax = np.max(ddnew['u'])
vmax = np.max(ddnew['v'])
u2,v2 = np.meshgrid(np.linspace(-f*umax,f*umax,256),np.linspace(-f*vmax,f*vmax,256))
# SciPy
# pts = np.array([ddnew['u'],ddnew['v']]).T
# V2r = si.griddata(pts,np.real(ddnew['V']),(u2,v2),method=method,fill_value=0.0)
# V2i = si.griddata(pts,np.imag(ddnew['V']),(u2,v2),method=method,fill_value=0.0)
        # Matplotlib
triang = tri.Triangulation(ddnew['u'], ddnew['v'])
if (method=='linear') :
V2r = np.array(np.ma.fix_invalid(tri.LinearTriInterpolator(triang, np.real(ddnew['V']))(u2,v2),fill_value=0.0))
V2i = np.array(np.ma.fix_invalid(tri.LinearTriInterpolator(triang, np.imag(ddnew['V']))(u2,v2),fill_value=0.0))
elif (method=='cubic') :
V2r = np.array(np.ma.fix_invalid(tri.CubicTriInterpolator(triang, np.real(ddnew['V']),kind='geom')(u2,v2),fill_value=0.0))
V2i = np.array(np.ma.fix_invalid(tri.CubicTriInterpolator(triang, np.imag(ddnew['V']),kind='geom')(u2,v2),fill_value=0.0))
        else :
            raise ValueError("method %s not implemented"%(method))
V2 = V2r + 1.0j*V2i
# Filter to smooth at edges
V2 = V2 * np.cos(u2/umax*0.5*np.pi) * np.cos(v2/vmax*0.5*np.pi)
# Generate the x,y grid on which to image
x1d = np.fft.fftshift(np.fft.fftfreq(u2.shape[0],d=(u2[1,1]-u2[0,0])*1e9)/uas2rad)
y1d = np.fft.fftshift(np.fft.fftfreq(v2.shape[1],d=(v2[1,1]-v2[0,0])*1e9)/uas2rad)
x,y = np.meshgrid(-x1d,-y1d)
# Compute image estimate via FFT
I = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(V2))))
# Return
return x,y,I
############
# High-level plot generation
def plot_image_reconstruction(self,axs,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,limits=None,show_map=True,show_contours=True) :
# new_argument_hash = hashlib.md5(bytes(str(datadict)+str(statdict)+str(time_range)+str(snr_cut)+str(ngeht_diameter)+str(limits)+str(show_map)+str(show_contours)),'utf-8')
# if ( new_argument_hash == self.argument_hash ) :
# return
# print("FOO:",new_argument_hash)
# print("BAR:",self.argument_hash)
# self.argument_hash = new_argument_hash
# Reconstruct image
self.x,self.y,self.I=self.reconstruct_image(datadict,statdict,time_range=time_range,snr_cut=snr_cut,ngeht_diameter=ngeht_diameter)
self.replot_image_reconstruction(axs,time_range=time_range,limits=limits,show_map=show_map,show_contours=show_contours)
############
# High-level plot generation
def replot_image_reconstruction(self,axs,time_range=None,limits=None,show_map=True,show_contours=True) :
plt.sca(axs)
if (self.I is None) :
plt.text(0.5,0.5,"Insufficient Data!",color='w',fontsize=24,ha='center',va='center')
plt.gca().set_facecolor('k')
plt.gcf().set_facecolor('k')
return
# Plot linear image
if (show_map) :
#plt.pcolormesh(self.x,self.y,self.I,cmap='afmhot')
plt.imshow(self.I,origin='lower',extent=[self.x[0,0],self.x[0,-1],self.y[0,0],self.y[-1,0]],cmap='afmhot',vmin=0,interpolation='spline16')
# Plot the log contours
if (show_contours) :
# lI = np.log10(self.I/np.max(self.I)+1e-20)
# lmI = np.log10(-self.I/np.max(self.I)+1e-20)
lI = np.log10(np.maximum(0.0,self.I)/np.max(self.I)+1e-20)
lmI = np.log10(np.maximum(0.0,-self.I)/np.max(self.I)+1e-20)
lev10lo = max(np.min(lI[self.I>0]),-4)
lev10 = np.sort( -np.arange(0,lev10lo,-1) )
plt.contour(self.x,self.y,-lI,levels=lev10,colors='cornflowerblue',alpha=0.5)
#plt.contour(self.x,self.y,-lmI,levels=lev10,colors='green',alpha=0.5)
lev1 = []
for l10 in -lev10[1:] :
lev1.extend( np.log10(np.array([2,3,4,5,6,7,8,9])) + l10 )
lev1 = np.sort(-np.array(lev1))
plt.contour(self.x,self.y,-lI,levels=lev1,colors='cornflowerblue',alpha=0.5,linewidths=0.5)
plt.contour(self.x,self.y,-lmI,levels=lev1[-10:],colors='green',alpha=0.5,linewidths=0.5)
# Fix the limits
if (not limits is None) :
plt.xlim((limits[0],limits[1]))
plt.ylim((limits[2],limits[3]))
else :
            lI = np.log10(np.maximum(0.0,self.I)/np.max(self.I)+1e-20)
            xmin = min(np.min(self.x[lI>-2]),np.min(self.y[lI>-2]))
            xmax = max(np.max(self.x[lI>-2]),np.max(self.y[lI>-2]))
plt.xlim((xmax,xmin))
plt.ylim((xmin,xmax))
plt.gca().set_facecolor('k')
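reconstruct_image gets its image pixel scale from np.fft.fftfreq applied to the gridded (u,v) axes; a sketch of that relation with assumed grid parameters:

```python
import numpy as np

# With N gridded visibilities spaced du (in wavelengths), fftfreq gives
# image coordinates with pixel scale 1/(N*du) radians, converted here to
# microarcseconds as in reconstruct_image. N and du are illustrative.
uas2rad = np.pi / 180.0 / 3600e6
N, du = 256, 0.05e9            # 0.05 Glambda spacing, in wavelengths

x1d = np.fft.fftshift(np.fft.fftfreq(N, d=du) / uas2rad)   # microarcsec
pixel = x1d[1] - x1d[0]

assert np.isclose(pixel, 1.0 / (N * du) / uas2rad)
```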
class InteractiveImageReconstructionPlot(InteractivePlotWidget) :
def __init__(self,**kwargs) :
self.xarr = 0
self.yarr = 0
self.Iarr = 1
self.ddict = {}
self.sdict = {}
self.argument_hash = None
super().__init__(**kwargs)
##########
# Low-level image reconstruction function
def reconstruct_image(self,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,f=2,method='cubic',make_hermitian=False) :
# Useful constant
uas2rad = np.pi/180.0/3600e6
# Exclude stations not in array
stations = list(np.unique(np.array(list(statdict.keys()))))
keep = np.array([ (datadict['s1'][j] in stations) and (datadict['s2'][j] in stations) for j in range(len(datadict['s1'])) ])
ddtmp = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddtmp[key] = datadict[key][keep]
if (len(ddtmp['u'])==0) :
return None,None,None
# Exclude stations that are "off"
keep = np.array([ statdict[ddtmp['s1'][j]]['on'] and statdict[ddtmp['s2'][j]]['on'] for j in range(len(ddtmp['s1'])) ])
ddnew = {}
for key in ['u','v','V','s1','s2','t','err'] :
ddnew[key] = ddtmp[key][keep]
if (len(ddnew['u'])==0) :
return None,None,None
# Exclude data points outside the specified time range
if (not time_range is None) :
keep = (ddnew['t']>=time_range[0])*(ddnew['t']<time_range[1])
for key in ['u','v','V','s1','s2','t','err'] :
ddnew[key] = ddnew[key][keep]
if (len(ddnew['u'])==0) :
return None,None,None
# Cut points with S/N less than the specified minimum value
if (not snr_cut is None) and snr_cut>0:
# Get a list of error adjustments based on stations
diameter_correction_factor = {}
for s in stations :
if (statdict[s]['exists']) :
diameter_correction_factor[s] = 1.0
else :
diameter_correction_factor[s] = statdict[s]['diameter']/ngeht_diameter
keep = np.array([ np.abs(ddnew['V'][j])/(ddnew['err'][j].real * diameter_correction_factor[ddnew['s1'][j]] * diameter_correction_factor[ddnew['s2'][j]]) > snr_cut for j in range(len(ddnew['s1'])) ])
for key in ['u','v','V','s1','s2','t','err'] :
ddnew[key] = ddnew[key][keep]
if (len(ddnew['u'])==0) :
return None,None,None
# Double up the data to make V Hermitian
if (make_hermitian) :
ddnew['u'] = np.append(ddnew['u'],-ddnew['u'])
ddnew['v'] = np.append(ddnew['v'],-ddnew['v'])
ddnew['V'] = np.append(ddnew['V'],np.conj(ddnew['V']))
if (len(ddnew['u'])<=2) :
return None,None,None
# Get the region on which to compute gridded visibilities
umax = np.max(ddnew['u'])
vmax = np.max(ddnew['v'])
u2,v2 = np.meshgrid(np.linspace(-f*umax,f*umax,256),np.linspace(-f*vmax,f*vmax,256))
# SciPy
# pts = np.array([ddnew['u'],ddnew['v']]).T
# V2r = si.griddata(pts,np.real(ddnew['V']),(u2,v2),method=method,fill_value=0.0)
# V2i = si.griddata(pts,np.imag(ddnew['V']),(u2,v2),method=method,fill_value=0.0)
# Matplotlib
triang = tri.Triangulation(ddnew['u'], ddnew['v'])
if (method=='linear') :
V2r = np.array(np.ma.fix_invalid(tri.LinearTriInterpolator(triang, np.real(ddnew['V']))(u2,v2),fill_value=0.0))
V2i = np.array(np.ma.fix_invalid(tri.LinearTriInterpolator(triang, np.imag(ddnew['V']))(u2,v2),fill_value=0.0))
elif (method=='cubic') :
V2r = np.array(np.ma.fix_invalid(tri.CubicTriInterpolator(triang, np.real(ddnew['V']),kind='geom')(u2,v2),fill_value=0.0))
V2i = np.array(np.ma.fix_invalid(tri.CubicTriInterpolator(triang, np.imag(ddnew['V']),kind='geom')(u2,v2),fill_value=0.0))
else :
raise ValueError("method %s not implemented"%(method))
V2 = V2r + 1.0j*V2i
# Filter to smooth at edges
V2 = V2 * np.cos(u2/umax*0.5*np.pi) * np.cos(v2/vmax*0.5*np.pi)
# Generate the x,y grid on which to image
x1d = np.fft.fftshift(np.fft.fftfreq(u2.shape[0],d=(u2[1,1]-u2[0,0])*1e9)/uas2rad)
y1d = np.fft.fftshift(np.fft.fftfreq(v2.shape[1],d=(v2[1,1]-v2[0,0])*1e9)/uas2rad)
xarr,yarr = np.meshgrid(-x1d,-y1d)
# Compute image estimate via FFT
Iarr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(V2))))
# Return
return xarr,yarr,Iarr
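The last two steps of `reconstruct_image` (grid the visibilities, then inverse-FFT) can be exercised on their own with toy data. This sketch skips the triangulated interpolation and feeds a flat visibility grid, whose dirty image should be a point source at the image center:

```python
import numpy as np

# A point source at the phase center has a constant visibility function,
# so the fftshift(ifft2(ifftshift(V))) step should recover a single
# bright pixel at the center of the image grid.
n = 64
V = np.ones((n, n), dtype=complex)                 # toy gridded visibilities
I = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(V))))
peak = np.unravel_index(np.argmax(I), I.shape)     # lands at (n//2, n//2)
```

Because `ifft2` normalizes by the grid size, the recovered peak has unit amplitude here.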
# def report_clock(self,dt) :
# print("Reporting after",dt)
def generate_mpl_plot(self,fig,ax,**kwargs) :
if (__mydebug__) :
print("InteractiveImageReconstructionPlot.generate_mpl_plot: start")
# print("---- Starting -----")
# self.report_clock(0.0)
# for dt in np.linspace(0,5,32) :
# Clock.schedule_once(self.report_clock,dt)
# print("---- Continuing -----")
# This is where we insert a Matplotlib figure. Must use ax. and fig. child commands.
# You probably want, but do not require, the following in your overlay
self.plot_image_reconstruction(ax,self.ddict,self.sdict,**kwargs)
ax.set_facecolor((0,0,0,1))
fig.set_facecolor((0,0,0,1))
def update(self,datadict,statdict,**kwargs) :
self.sdict = statdict
self.ddict = datadict
self.update_mpl(**kwargs)
def replot(self,datadict,statdict,**kwargs) :
self.sdict = statdict
self.ddict = datadict
self.update_mpl(**kwargs)
def check_boundaries(self,tex_coords) :
return tex_coords
def check_size(self,size) :
if (size[0]<self.width) :
size = (self.width, size[1]/size[0] * self.width)
elif (size[1]<self.height) :
size = (size[0]/size[1] * self.height, self.height)
return size
############
# High-level plot generation
def plot_image_reconstruction(self,axs,datadict,statdict,time_range=None,snr_cut=None,ngeht_diameter=6,limits=None,show_map=True,show_contours=True) :
if (len(statdict.keys())==0) :
return
# Reconstruct image
self.xarr,self.yarr,self.Iarr=self.reconstruct_image(datadict,statdict,time_range=time_range,snr_cut=snr_cut,ngeht_diameter=ngeht_diameter)
self.replot_image_reconstruction(axs,time_range=time_range,limits=limits,show_map=show_map,show_contours=show_contours)
############
# High-level plot generation
def replot_image_reconstruction(self,axs,time_range=None,limits=None,show_map=True,show_contours=True) :
if (self.Iarr is None) :
axs.text(0.5,0.5,"Insufficient Data!",color='w',fontsize=24,ha='center',va='center')
return
# Plot linear image
if (show_map) :
axs.imshow(self.Iarr,origin='lower',extent=[self.xarr[0,0],self.xarr[0,-1],self.yarr[0,0],self.yarr[-1,0]],cmap='afmhot',vmin=0,interpolation='spline16')
# Log-scaled intensity (needed by the contours and by the default limits below)
lI = np.log10(np.maximum(0.0,self.Iarr)/np.max(self.Iarr)+1e-20)
lmI = np.log10(np.maximum(0.0,-self.Iarr)/np.max(self.Iarr)+1e-20)
# Plot the log contours
if (show_contours) :
lev10lo = max(np.min(lI[self.Iarr>0]),-4)
lev10 = np.sort( -np.arange(0,lev10lo,-1) )
axs.contour(self.xarr,self.yarr,-lI,levels=lev10,colors='cornflowerblue',alpha=0.5)
#plt.contour(self.x,self.y,-lmI,levels=lev10,colors='green',alpha=0.5)
lev1 = []
for l10 in -lev10[1:] :
lev1.extend( np.log10(np.array([2,3,4,5,6,7,8,9])) + l10 )
lev1 = np.sort(-np.array(lev1))
axs.contour(self.xarr,self.yarr,-lI,levels=lev1,colors='cornflowerblue',alpha=0.5,linewidths=0.5)
axs.contour(self.xarr,self.yarr,-lmI,levels=lev1[-10:],colors='green',alpha=0.5,linewidths=0.5)
# Fix the limits
if (limits is not None) :
axs.set_xlim((limits[0],limits[1]))
axs.set_ylim((limits[2],limits[3]))
else :
xmin = min(np.min(self.xarr[lI>-2]),np.min(self.yarr[lI>-2]))
xmax = max(np.max(self.xarr[lI>-2]),np.max(self.yarr[lI>-2]))
axs.set_xlim((xmax,xmin))
axs.set_ylim((xmin,xmax))
class InteractiveBaselineMapPlot(InteractiveWorldMapOverlayWidget):
statdict = {}
gcdict = {}
lldict = {}
def __init__(self,**kwargs) :
self.statdict = {}
super().__init__(**kwargs)
self.off_color = (0.5,0,0)
self.on_color = (1,0.75,0.25)
self.gcdict = {}
self.lldict = {}
def generate_mpl_plot(self,fig,ax,**kwargs) :
# This is where we insert a Matplotlib figure. Must use ax. and fig. child commands.
# You probably want, but do not require, the following in your overlay
self.plot_map(ax,self.statdict)
ax.set_facecolor((0,0,0,0))
fig.set_facecolor((0,0,0,0))
def update(self,datadict,statdict,**kwargs) :
self.statdict = statdict
if (__mydebug__):
print("InteractiveBaselineMapPlot.update:",self.statdict.keys())
if (list(self.lldict.keys()) != list(self.statdict.keys())) :
if (__mydebug__):
print("InteractiveBaselineMapPlot.update: remaking circles")
lims=[-180,180,-90,90]
self.generate_all_station_latlon(statdict)
if ('SP' in self.statdict.keys()) :
self.lldict['SP']=[-89.0, 0.5*(lims[0]+lims[1])]
self.generate_all_great_circles(self.lldict, lims)
self.update_mpl(**kwargs)
def replot(self,datadict,statdict,**kwargs) :
if (__mydebug__):
print("InteractiveBaselineMapPlot.replot:",self.statdict.keys())
self.update(datadict,statdict,**kwargs)
# limits is a list that has in degrees the min longitude, max longitude, min latitude, max latitude to be plotted.
def plot_map(self,axs,statdict) :
if (__mydebug__):
print("InteractiveBaselineMapPlot.plot_map:",statdict.keys())
lims=[-180,180,-90,90]
for i in self.gcdict.keys() :
if (self.statdict[self.gcdict[i]['s1']]['on']==False or self.statdict[self.gcdict[i]['s2']]['on']==False) :
axs.plot(self.gcdict[i]['x'],self.gcdict[i]['y'],'-',color=self.off_color,alpha=0.5)
axs.plot(self.gcdict[i]['x']-360,self.gcdict[i]['y'],'-',color=self.off_color,alpha=0.5)
for i in self.gcdict.keys() :
if (self.statdict[self.gcdict[i]['s1']]['on']==True and self.statdict[self.gcdict[i]['s2']]['on']==True) :
axs.plot(self.gcdict[i]['x'],self.gcdict[i]['y'],'-',color=self.on_color,alpha=0.5)
axs.plot(self.gcdict[i]['x']-360,self.gcdict[i]['y'],'-',color=self.on_color,alpha=0.5)
for s in self.statdict.keys() :
if (self.statdict[s]['on']==False) :
axs.plot(self.lldict[s][1], self.lldict[s][0], 'o', color = self.off_color)
for s in self.statdict.keys() :
if (self.statdict[s]['on']==True) :
axs.plot(self.lldict[s][1], self.lldict[s][0], 'o', color = self.on_color)
# Set limits
axs.set_xlim((lims[:2]))
axs.set_ylim((lims[2:]))
# Eliminate axes
for sdir in ['left','right','top','bottom'] :
axs.spines[sdir].set_visible(False)
axs.xaxis.set_tick_params(bottom=False,top=False)
axs.yaxis.set_tick_params(left=False,right=False)
def generate_all_station_latlon(self, statdict) :
self.lldict = {}
for s in statdict.keys():
self.lldict[s] = self.xyz_to_latlon(statdict[s]['loc'])
return statdict
def generate_all_great_circles(self,lldict,limits,N=64) :
self.gcdict = {}
i = 0
for k,s1 in enumerate(list(lldict.keys())) :
for s2 in list(lldict.keys())[(k+1):] :
ll1 = lldict[s1]
ll2 = lldict[s2]
llgA = self.great_circle(ll1,ll2,N=N)
lonc = 0.5*(limits[0]+limits[1])
y = llgA[0]
x = llgA[1] - (llgA[1][0]-lonc) + (llgA[1][0]-lonc)%360
x,y = self.resample_by_length(x,y,N=32)
self.gcdict[i] = {'s1':s1,'s2':s2,'x':x,'y':y}
i += 1
def resample_by_length(self,x0,y0,N=None) :
ds = np.sqrt( (x0[1:]-x0[:-1])**2 + (y0[1:]-y0[:-1])**2 )
s = np.cumsum(ds)
s = np.append([0],s/s[-1])
if (N is None) :
N = len(x0)
t = np.linspace(0,1,N)
x = np.interp(t,s,x0)
y = np.interp(t,s,y0)
return x,y
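`resample_by_length` parameterizes the polyline by normalized cumulative arc length and then interpolates onto uniform samples, so densely clustered great-circle points get spread out evenly. A self-contained sketch of the same idea:

```python
import numpy as np

# Parameterize the curve by normalized cumulative arc length s in [0, 1],
# then interpolate x and y onto N uniformly spaced values of s.
def resample_by_length(x0, y0, N=None):
    ds = np.sqrt(np.diff(x0) ** 2 + np.diff(y0) ** 2)
    s = np.append([0], np.cumsum(ds) / np.sum(ds))
    if N is None:
        N = len(x0)
    t = np.linspace(0, 1, N)
    return np.interp(t, s, x0), np.interp(t, s, y0)

# three unevenly spaced points on the x-axis become five evenly spaced ones
x, y = resample_by_length(np.array([0.0, 0.1, 1.0]), np.zeros(3), N=5)
```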
def great_circle(self,pos1,pos2,N=32) :
lat1, lon1 = pos1
lat2, lon2 = pos2
# First, rotate about z so that latlon1 is in the x-z plane
ll1 = [lat1, 0]
ll2 = [lat2, lon2-lon1]
# Second, rotate about y so that ll1 is at the north pole
ll1 = self.xyz_to_latlon(self.RotateY(self.latlon_to_xyz(ll1),angle=-(90-lat1)))
ll2 = self.xyz_to_latlon(self.RotateY(self.latlon_to_xyz(ll2),angle=-(90-lat1)))
# Third, generate a great circle that goes through the pole (easy) and ll2 (not hard)
latA = np.linspace(ll2[0],90.0,N)
lonA = 0*latA + ll2[1]
llgA = np.array([latA,lonA])
# Fourth, unrotate about y
llgA = self.xyz_to_latlon(self.RotateY(self.latlon_to_xyz(llgA),angle=(90-lat1)))
llgA[1] = llgA[1] + lon1
return llgA
def latlon_to_xyz(self,latlon,radius=1) :
lat_rad = latlon[0]*np.pi/180.
lon_rad = latlon[1]*np.pi/180.
x = radius*np.cos(lat_rad)*np.cos(lon_rad)
y = radius*np.cos(lat_rad)*np.sin(lon_rad)
z = radius*np.sin(lat_rad)
return np.array([x,y,z])
def xyz_to_latlon(self,xyz) :
lat = np.arcsin( xyz[2]/np.sqrt(xyz[0]**2+xyz[1]**2+xyz[2]**2) ) * 180./np.pi
lon = np.arctan2( xyz[1], xyz[0] ) * 180.0/np.pi
return np.array([lat,lon])
def RotateY(self,xyz,angle=0) :
angle = angle*np.pi/180.0
xyz2 = 0*xyz
xyz2[0] = xyz[0]*np.cos(angle) + xyz[2]*np.sin(angle)
xyz2[1] = xyz[1]
xyz2[2] = xyz[2]*np.cos(angle) - xyz[0]*np.sin(angle)
return xyz2
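The great-circle construction above works by rotating one endpoint to the north pole, drawing a meridian, and rotating back. Hypothetical re-implementations of the two coordinate helpers plus the y-rotation, with a round-trip check of the key step:

```python
import numpy as np

def latlon_to_xyz(latlon, radius=1):
    lat, lon = np.radians(latlon[0]), np.radians(latlon[1])
    return np.array([radius * np.cos(lat) * np.cos(lon),
                     radius * np.cos(lat) * np.sin(lon),
                     radius * np.sin(lat)])

def xyz_to_latlon(xyz):
    r = np.sqrt(xyz[0] ** 2 + xyz[1] ** 2 + xyz[2] ** 2)
    return np.array([np.degrees(np.arcsin(xyz[2] / r)),
                     np.degrees(np.arctan2(xyz[1], xyz[0]))])

def rotate_y(xyz, angle):
    a = np.radians(angle)
    return np.array([xyz[0] * np.cos(a) + xyz[2] * np.sin(a),
                     xyz[1],
                     xyz[2] * np.cos(a) - xyz[0] * np.sin(a)])

# Rotating a point at (lat, 0) by -(90 - lat) about y sends it to the pole,
# which is exactly the trick great_circle relies on.
pole = xyz_to_latlon(rotate_y(latlon_to_xyz([40.0, 0.0]), -(90 - 40.0)))
```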
def check_size(self,size) :
if (size[0]==0 or size[1]==0) :
return size
if (size[0]<self.width and size[1]<self.height) :
if (self.width/size[0] < self.height/size[1]) :
size = (self.width, size[1]/size[0] * self.width)
else :
size = (size[0]/size[1] * self.height, self.height)
return size
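The `check_size` logic above, re-expressed as a hypothetical standalone helper (`fit_size` is an illustrative name): scale `(w, h)` up so the texture fills a `(W, H)` widget while preserving aspect ratio; already-large or degenerate sizes pass through unchanged.

```python
# Whichever dimension would have to stretch the least determines the scale.
def fit_size(size, W, H):
    if size[0] == 0 or size[1] == 0:
        return size
    if size[0] < W and size[1] < H:
        if W / size[0] < H / size[1]:
            return (W, size[1] / size[0] * W)       # width-limited
        return (size[0] / size[1] * H, H)           # height-limited
    return size

wide = fit_size((100, 50), 400, 400)    # -> (400, 200.0)
tall = fit_size((50, 100), 400, 400)    # -> (200.0, 400)
```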
class InteractiveBaselineMapPlot_kivygraph(InteractiveWorldMapWidget):
statdict = {}
gcdict = {}
lldict = {}
def __init__(self,**kwargs) :
self.statdict = {}
super().__init__(**kwargs)
self.off_color = (0.5,0,0)
self.on_color = (1,0.75,0.25)
self.gcdict = {}
self.lldict = {}
def generate_mpl_plot(self,fig,ax,**kwargs) :
# This is where we insert a Matplotlib figure. Must use ax. and fig. child commands.
# You probably want, but do not require, the following in your overlay
self.plot_map(ax,self.statdict)
ax.set_facecolor((0,0,0,0))
fig.set_facecolor((0,0,0,0))
def update(self,datadict,statdict,**kwargs) :
self.statdict = statdict
if (__mydebug__):
print("InteractiveBaselineMapPlot.update:",self.statdict.keys())
if (list(self.lldict.keys()) != list(self.statdict.keys())) :
if (__mydebug__):
print("InteractiveBaselineMapPlot.update: remaking circles")
lims=[-180,180,-90,90]
self.generate_all_station_latlon(statdict)
if ('SP' in self.statdict.keys()) :
self.lldict['SP']=[-85.0, 0.5*(lims[0]+lims[1])]
self.generate_all_great_circles(self.lldict, lims)
# self.bmc.plot_stations(self.statdict,self.lldict,self.gcdict,self.rect.size)
# self.update_mpl(**kwargs)
def replot(self,datadict,statdict,**kwargs) :
if (__mydebug__):
print("InteractiveBaselineMapPlot.replot:",self.statdict.keys())
self.update(datadict,statdict,**kwargs)
# limits is a list that has in degrees the min longitude, max longitude, min latitude, max latitude to be plotted.
def plot_map(self,axs,statdict) :
if (__mydebug__):
print("InteractiveBaselineMapPlot.plot_map:",statdict.keys())
lims=[-180,180,-90,90]
for i in self.gcdict.keys() :
if (self.statdict[self.gcdict[i]['s1']]['on']==False or self.statdict[self.gcdict[i]['s2']]['on']==False) :
axs.plot(self.gcdict[i]['x'],self.gcdict[i]['y'],'-',color=self.off_color,alpha=0.5)
axs.plot(self.gcdict[i]['x']-360,self.gcdict[i]['y'],'-',color=self.off_color,alpha=0.5)
for i in self.gcdict.keys() :
if (self.statdict[self.gcdict[i]['s1']]['on']==True and self.statdict[self.gcdict[i]['s2']]['on']==True) :
axs.plot(self.gcdict[i]['x'],self.gcdict[i]['y'],'-',color=self.on_color,alpha=0.5)
axs.plot(self.gcdict[i]['x']-360,self.gcdict[i]['y'],'-',color=self.on_color,alpha=0.5)
for s in self.statdict.keys() :
if (self.statdict[s]['on']==False) :
axs.plot(self.lldict[s][1], self.lldict[s][0], 'o', color = self.off_color)
for s in self.statdict.keys() :
if (self.statdict[s]['on']==True) :
axs.plot(self.lldict[s][1], self.lldict[s][0], 'o', color = self.on_color)
# Set limits
axs.set_xlim((lims[:2]))
axs.set_ylim((lims[2:]))
# Eliminate axes
for sdir in ['left','right','top','bottom'] :
axs.spines[sdir].set_visible(False)
axs.xaxis.set_tick_params(bottom=False,top=False)
axs.yaxis.set_tick_params(left=False,right=False)
def generate_all_station_latlon(self, statdict) :
self.lldict = {}
for s in statdict.keys():
self.lldict[s] = self.xyz_to_latlon(statdict[s]['loc'])
return statdict
def generate_all_great_circles(self,lldict,limits,N=128) :
self.gcdict = {}
i = 0
for k,s1 in enumerate(list(lldict.keys())) :
for s2 in list(lldict.keys())[(k+1):] :
ll1 = lldict[s1]
ll2 = lldict[s2]
llgA = self.great_circle(ll1,ll2,N=N)
lonc = 0.5*(limits[0]+limits[1])
y = llgA[0]
x = llgA[1] - (llgA[1][0]-lonc) + (llgA[1][0]-lonc)%360
x,y = self.resample_by_length(x,y,N=32)
self.gcdict[i] = {'s1':s1,'s2':s2,'x':x,'y':y}
i += 1
def resample_by_length(self,x0,y0,N=None) :
ds = np.sqrt( (x0[1:]-x0[:-1])**2 + (y0[1:]-y0[:-1])**2 )
s = np.cumsum(ds)
s = np.append([0],s/s[-1])
if (N is None) :
N = len(x0)
t = np.linspace(0,1,N)
x = np.interp(t,s,x0)
y = np.interp(t,s,y0)
return x,y
def great_circle(self,pos1,pos2,N=32) :
lat1, lon1 = pos1
lat2, lon2 = pos2
# First, rotate about z so that latlon1 is in the x-z plane
ll1 = [lat1, 0]
ll2 = [lat2, lon2-lon1]
# Second, rotate about y so that ll1 is at the north pole
ll1 = self.xyz_to_latlon(self.RotateY(self.latlon_to_xyz(ll1),angle=-(90-lat1)))
ll2 = self.xyz_to_latlon(self.RotateY(self.latlon_to_xyz(ll2),angle=-(90-lat1)))
# Third, generate a great circle that goes through the pole (easy) and ll2 (not hard)
latA = np.linspace(ll2[0],90.0,N)
lonA = 0*latA + ll2[1]
llgA = np.array([latA,lonA])
# Fourth, unrotate about y
llgA = self.xyz_to_latlon(self.RotateY(self.latlon_to_xyz(llgA),angle=(90-lat1)))
llgA[1] = llgA[1] + lon1
return llgA
def latlon_to_xyz(self,latlon,radius=1) :
lat_rad = latlon[0]*np.pi/180.
lon_rad = latlon[1]*np.pi/180.
x = radius*np.cos(lat_rad)*np.cos(lon_rad)
y = radius*np.cos(lat_rad)*np.sin(lon_rad)
z = radius*np.sin(lat_rad)
return np.array([x,y,z])
def xyz_to_latlon(self,xyz) :
lat = np.arcsin( xyz[2]/np.sqrt(xyz[0]**2+xyz[1]**2+xyz[2]**2) ) * 180./np.pi
lon = np.arctan2( xyz[1], xyz[0] ) * 180.0/np.pi
return np.array([lat,lon])
def RotateY(self,xyz,angle=0) :
angle = angle*np.pi/180.0
xyz2 = 0*xyz
xyz2[0] = xyz[0]*np.cos(angle) + xyz[2]*np.sin(angle)
xyz2[1] = xyz[1]
xyz2[2] = xyz[2]*np.cos(angle) - xyz[0]*np.sin(angle)
return xyz2
def check_size(self,size) :
if (size[0]==0 or size[1]==0) :
return size
# if (size[0]<self.width and size[1]<self.height) :
# if (self.width/size[0] < self.height/size[1]) :
# size = (self.width, size[1]/size[0] * self.width)
# else :
# size = (size[0]/size[1] * self.height, self.height)
if (size[0]<Window.width and size[1]<Window.height) :
if (Window.width/size[0] < Window.height/size[1]) :
size = (Window.width, size[1]/size[0] * Window.width)
else :
size = (size[0]/size[1] * Window.height, Window.height)
if (__mydebug__) :
print("InteractiveBaselineMapPlot_kivygraph.check_size",self.width,self.height,size,Window.width,Window.height)
return size
class BaselineMapCanvas(FloatLayout) :
def __init__(self,**kwargs) :
super().__init__(**kwargs)
self.off_color = (0.5,0,0)
self.on_color = (1,0.75,0.25)
def plot_stations(self,statdict,lldict,gcdict,rect) :
if (__mydebug__):
print("BaselineMapCanvas.plot_stations:",statdict.keys())
if (rect.size[0]==0 or rect.size[1]==0) :
return
lims=[-180,180,-90,90]
lon_to_xpx_scale = rect.size[0]/(lims[1]-lims[0])
lon_to_xpx_offset = lon_to_xpx_scale*(-lims[0]) + rect.pos[0] - rect.tex_coords[0]*rect.size[0]/(rect.tex_coords[2]-rect.tex_coords[0])
lat_to_ypx_scale = rect.size[1]/(lims[3]-lims[2])
lat_to_ypx_offset = lat_to_ypx_scale*(-lims[2]) + rect.pos[1] - rect.tex_coords[5]*rect.size[1]/(rect.tex_coords[5]-rect.tex_coords[1])
# Index manipulation stuff
j = np.arange(len(gcdict[0]['x']))
j2 = 2*j
j2p1 = 2*j+1
points = np.zeros(2*len(j))  # float buffer; an integer arange would truncate the pixel coordinates
linewidth = 2
# Get the current limits.
self.canvas.clear()
with self.canvas :
igc = 0
Color(self.off_color[0],self.off_color[1],self.off_color[2],0.5)
for k,s1 in enumerate(list(statdict.keys())) :
for s2 in list(statdict.keys())[(k+1):] :
if (statdict[s1]['on']==False or statdict[s2]['on']==False) :
total_x_shift = ( (lon_to_xpx_scale*gcdict[igc]['x'][0] + lon_to_xpx_offset)//rect.size[0] )*rect.size[0] - 0.5*linewidth
points[j2] = lon_to_xpx_scale*gcdict[igc]['x'] + lon_to_xpx_offset - total_x_shift
points[j2p1] = lat_to_ypx_scale*gcdict[igc]['y'] + lat_to_ypx_offset
Line(points=list(points),width=linewidth)
if (points[0]<0 or points[-2]<0) :
points[j2] = points[j2]+lon_to_xpx_scale*360
Line(points=list(points),width=linewidth)
points[j2] = points[j2]-lon_to_xpx_scale*360
if (points[0]>self.width or points[-2]>self.width) :
points[j2] = points[j2]-lon_to_xpx_scale*360
Line(points=list(points),width=linewidth)
igc += 1
igc = 0
Color(self.on_color[0],self.on_color[1],self.on_color[2],0.5)
for k,s1 in enumerate(list(statdict.keys())) :
for s2 in list(statdict.keys())[(k+1):] :
if (statdict[s1]['on']==True and statdict[s2]['on']==True) :
total_x_shift = ( (lon_to_xpx_scale*gcdict[igc]['x'][0] + lon_to_xpx_offset)//rect.size[0] )*rect.size[0] - 0.5*linewidth
points[j2] = lon_to_xpx_scale*gcdict[igc]['x'] + lon_to_xpx_offset - total_x_shift
points[j2p1] = lat_to_ypx_scale*gcdict[igc]['y'] + lat_to_ypx_offset
Line(points=list(points),width=linewidth)
if (points[0]<0 or points[-2]<0) :
points[j2] = points[j2]+lon_to_xpx_scale*360
Line(points=list(points),width=linewidth)
points[j2] = points[j2]-lon_to_xpx_scale*360
if (points[0]>self.width or points[-2]>self.width) :
points[j2] = points[j2]-lon_to_xpx_scale*360
Line(points=list(points),width=linewidth)
igc += 1
for s in statdict.keys() :
if (statdict[s]['on']==False) :
xpx = (lon_to_xpx_scale*lldict[s][1] + lon_to_xpx_offset)%(rect.size[0])
ypx = lat_to_ypx_scale*lldict[s][0] + lat_to_ypx_offset
Color(0,0,0,0.1)
Ellipse(pos=(xpx-dp(7),ypx-dp(7)),size=(dp(14),dp(14)))
Ellipse(pos=(xpx-dp(6),ypx-dp(6)),size=(dp(12),dp(12)))
Color(self.off_color[0],self.off_color[1],self.off_color[2])
Ellipse(pos=(xpx-dp(5),ypx-dp(5)),size=(dp(10),dp(10)))
# if (__mydebug__) :
# print("Adding OFF circle for",s,xpx,ypx,self.on_color,self.height,rect.size,rect.pos)
for s in statdict.keys() :
if (statdict[s]['on']==True) :
xpx = (lon_to_xpx_scale*lldict[s][1] + lon_to_xpx_offset)%(rect.size[0])
ypx = lat_to_ypx_scale*lldict[s][0] + lat_to_ypx_offset
Color(0,0,0,0.1)
Ellipse(pos=(xpx-dp(7),ypx-dp(7)),size=(dp(14),dp(14)))
Ellipse(pos=(xpx-dp(6),ypx-dp(6)),size=(dp(12),dp(12)))
Color(self.on_color[0],self.on_color[1],self.on_color[2])
Ellipse(pos=(xpx-dp(5),ypx-dp(5)),size=(dp(10),dp(10)))
# if (__mydebug__) :
# print("Adding ON circle for",s,xpx,ypx,self.on_color,self.height,rect.size,rect.pos)
# dd = read_data('./data/V_M87_ngeht_ref1_230_thnoise_scanavg_tygtd.dat')
# sd = read_array('./arrays/ngeht_ref1_230_ehtim.txt')
# plt.figure()
# rp = CheapImageReconstruction()
# x,y,I = reconstruct_image(dd,sd)
# bp = BaselinePlots()
# bp.plot_baselines(dd,sd)
# plt.show()
# mp = MapPlots()
# plt.show()
| 40.471351 | 239 | 0.552005 | 8,491 | 59,331 | 3.754564 | 0.064774 | 0.005583 | 0.009661 | 0.006211 | 0.837077 | 0.806054 | 0.782811 | 0.776035 | 0.764649 | 0.756775 | 0 | 0.039996 | 0.273078 | 59,331 | 1,465 | 240 | 40.498976 | 0.699182 | 0.106386 | 0 | 0.731629 | 0 | 0 | 0.036934 | 0.008507 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070288 | false | 0 | 0.014909 | 0.005325 | 0.14803 | 0.014909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
34d0c8ffb3a5dcae0c309d6804572db6ac2a2b18 | 5,436 | py | Python | owl/test_strategies.py | cmrudolph/simulation | e8507524fc32efc9e84b0f6a487d725ed4ec3a6b | [
"MIT"
] | null | null | null | owl/test_strategies.py | cmrudolph/simulation | e8507524fc32efc9e84b0f6a487d725ed4ec3a6b | [
"MIT"
] | null | null | null | owl/test_strategies.py | cmrudolph/simulation | e8507524fc32efc9e84b0f6a487d725ed4ec3a6b | [
"MIT"
] | null | null | null | from owl import Game, Hand, Card, Color
import strategies
class Game(Game):
@property
def front(self):
return self.occupied[2]
@property
def middle(self):
return self.occupied[1]
@property
def back(self):
return self.occupied[0]
def test_back_owl_random_card():
def fixed_random(start, end):
return 1
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32|1 | | | | | |
# v---^
g = Game(3)
h = Hand()
blue = Card.create_colored(Color.blue)
purple = Card.create_colored(Color.purple)
red = Card.create_colored(Color.red)
h.add(blue)
h.add(purple)
h.add(red)
choice = strategies.back_owl_random_card(g, [h], 0, fixed_random)
assert choice == (g.back, h.cards[1])
def test_random_owl_random_card():
def fixed_random(start, end):
return 1
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32|1 | | | | | |
# v-----^
g = Game(3)
h = Hand()
blue = Card.create_colored(Color.blue)
purple = Card.create_colored(Color.purple)
red = Card.create_colored(Color.red)
h.add(blue)
h.add(purple)
h.add(red)
choice = strategies.random_owl_random_card(g, [h], 0, fixed_random)
assert choice == (g.middle, h.cards[1])
def test_front_owl_random_card():
def fixed_random(start, end):
return 1
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32|1 | | | | | |
# v-^
g = Game(3)
h = Hand()
blue = Card.create_colored(Color.blue)
purple = Card.create_colored(Color.purple)
red = Card.create_colored(Color.red)
h.add(blue)
h.add(purple)
h.add(red)
choice = strategies.front_owl_random_card(g, [h], 0, fixed_random)
assert choice == (g.front, h.cards[1])
def test_back_owl_smallest_gain():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v--^
g = Game(3)
g.move_owl(5, 7)
h = Hand()
red = Card.create_colored(Color.red)
blue = Card.create_colored(Color.blue)
h.add(red)
h.add(blue)
choice = strategies.back_owl_smallest_gain(g, [h], 0, None)
assert choice == (g.back, red)
def test_any_owl_smallest_gain():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v-^
g = Game(3)
g.move_owl(5, 8)
h = Hand()
red = Card.create_colored(Color.red)
blue = Card.create_colored(Color.blue)
h.add(red)
h.add(blue)
choice = strategies.any_owl_smallest_gain(g, [h], 0, None)
assert choice == (g.middle, red)
def test_front_owl_smallest_gain():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v^
g = Game(3)
g.move_owl(5, 8)
h = Hand()
yellow = Card.create_colored(Color.yellow)
green = Card.create_colored(Color.green)
h.add(yellow)
h.add(green)
choice = strategies.front_owl_smallest_gain(g, [h], 0, None)
assert choice == (g.front, yellow)
def test_back_owl_biggest_gain():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v---^
g = Game(3)
g.move_owl(5, 7)
h = Hand()
red = Card.create_colored(Color.red)
blue = Card.create_colored(Color.blue)
h.add(red)
h.add(blue)
choice = strategies.back_owl_biggest_gain(g, [h], 0, None)
assert choice == (g.back, blue)
def test_any_owl_biggest_gain():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v----^
g = Game(3)
g.move_owl(5, 7)
h = Hand()
red = Card.create_colored(Color.red)
blue = Card.create_colored(Color.blue)
h.add(red)
h.add(blue)
choice = strategies.any_owl_biggest_gain(g, [h], 0, None)
assert choice == (g.front, blue)
def test_front_owl_biggest_gain():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v----^
g = Game(3)
g.move_owl(5, 7)
h = Hand()
red = Card.create_colored(Color.red)
yellow = Card.create_colored(Color.yellow)
h.add(red)
h.add(yellow)
choice = strategies.front_owl_biggest_gain(g, [h], 0, None)
assert choice == (g.front, yellow)
def test_back_owl_color_priority():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1 | | | | | |
# v----^
g = Game(3)
g.move_owl(5, 8)
h = Hand()
red = Card.create_colored(Color.red)
blue = Card.create_colored(Color.blue)
purple = Card.create_colored(Color.purple)
h.add(red)
h.add(blue)
h.add(purple)
choice = strategies.back_owl_color_priority(g, [h], 0, None)
assert choice == (g.back, blue)
def test_front_owl_color_priority():
# YGOBP|RBPRY|GBORP|YGOBP|RGYOB|PRYGB|ORPYG|BORP
# 32| 1| | | | | |
# v--^
g = Game(3)
g.move_owl(5, 9)
h = Hand()
green = Card.create_colored(Color.green)
blue = Card.create_colored(Color.blue)
orange = Card.create_colored(Color.orange)
h.add(green)
h.add(blue)
h.add(orange)
choice = strategies.front_owl_color_priority(g, [h], 0, None)
assert choice == (g.front, blue)
| 26.009569 | 71 | 0.573216 | 756 | 5,436 | 3.965608 | 0.079365 | 0.09006 | 0.153102 | 0.198132 | 0.855904 | 0.828552 | 0.773182 | 0.766177 | 0.766177 | 0.760507 | 0 | 0.020497 | 0.282009 | 5,436 | 208 | 72 | 26.134615 | 0.74763 | 0.204562 | 0 | 0.711111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081481 | 1 | 0.125926 | false | 0 | 0.014815 | 0.044444 | 0.192593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fd1570731711f9849104cf4791422b46f9b82f2b | 920 | py | Python | reactions/song.py | stamby/easy-pete-bot | 2a611758042496c831eb1bf3a27e2c8100fd425c | [
"MIT"
] | null | null | null | reactions/song.py | stamby/easy-pete-bot | 2a611758042496c831eb1bf3a27e2c8100fd425c | [
"MIT"
] | null | null | null | reactions/song.py | stamby/easy-pete-bot | 2a611758042496c831eb1bf3a27e2c8100fd425c | [
"MIT"
] | null | null | null | async def add(message, payload, db):
c = db.cursor()
if payload.emoji.name == '👍':
c.execute(
'''
update songs set yes = yes + 1 where url = %s
''',
(message.content,))
elif payload.emoji.name == '👎':
c.execute(
'''
update songs set no = no + 1 where url = %s
''',
(message.content,))
else:
return
db.commit()
async def remove(message, payload, db):
c = db.cursor()
if payload.emoji.name == '👍':
c.execute(
'''
update songs set yes = yes - 1 where url = %s
''',
(message.content,))
elif payload.emoji.name == '👎':
c.execute(
'''
update songs set no = no - 1 where url = %s
''',
(message.content,))
else:
return
db.commit()
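The two handlers above only differ in the sign of the counter update, and both rely on the driver's `%s` parameter substitution for the URL. The same bookkeeping can be sketched with the stdlib `sqlite3` module (which uses `?` placeholders instead of `%s`; the schema and URLs here are illustrative, not the bot's real ones):

```python
import sqlite3

db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("create table songs (url text primary key, yes integer, no integer)")
c.execute("insert into songs values (?, 0, 0)", ("https://example.com/a",))

def react(url, emoji, delta):
    # col comes from a fixed two-value whitelist, so interpolating the
    # column name is safe; the user-supplied url stays parameterized.
    col = "yes" if emoji == "👍" else "no"
    c.execute(f"update songs set {col} = {col} + ? where url = ?", (delta, url))
    db.commit()

react("https://example.com/a", "👍", +1)   # reaction added
react("https://example.com/a", "👎", +1)
react("https://example.com/a", "👎", -1)   # reaction removed
counts = c.execute("select yes, no from songs").fetchone()
```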
| 20.909091 | 45 | 0.425 | 98 | 920 | 4.030612 | 0.295918 | 0.121519 | 0.162025 | 0.192405 | 0.936709 | 0.936709 | 0.936709 | 0.936709 | 0.936709 | 0.936709 | 0 | 0.007678 | 0.433696 | 920 | 43 | 46 | 21.395349 | 0.742802 | 0 | 0 | 0.769231 | 0 | 0 | 0.006173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b5c8eaef265ee6ccb5fc64c58bdee8db68a334c1 | 21,123 | py | Python | tablero.py | rojasdaniel/EM2015 | 9baaa455487c482072b5a768507c99cb3bebfabd | [
"MIT"
] | null | null | null | tablero.py | rojasdaniel/EM2015 | 9baaa455487c482072b5a768507c99cb3bebfabd | [
"MIT"
] | null | null | null | tablero.py | rojasdaniel/EM2015 | 9baaa455487c482072b5a768507c99cb3bebfabd | [
"MIT"
] | null | null | null | import pandas as pd
import os
os.chdir("C:/Users/danielrojas/Desktop/Dash")
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
from Funciones import *
import plotly.graph_objs as go
import dash_table_experiments as dt
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div([ html.Div([
html.Div([
html.Img(src="assets/logo.png", style={'height': '100px', 'float':'center'})
], className="six columns", style = {'display': 'inline-block', 'width': '0.4%', 'align':'left'}),
html.Div([
html.H1('Encuesta de Movilidad de Bogotá 2015', style={'textAlign': 'center'})
], className="six columns", style = {'display': 'inline-block', 'width': '92%', 'align':'center'})
], className="row"),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Medio de transporte:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
html.Div([
dcc.Dropdown(
id='filtro',
options=[{'label': i, 'value': i} for i in filtros],
value=filtro
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'})
], style = {'display': 'inline-block', 'width': '20%', 'align':'center'}),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Motivo de viaje:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
html.Div([
dcc.Dropdown(
id='motivo',
options=[{'label': i, 'value': i} for i in motivos],
multi=True,
value=motivo
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'})
], style = {'display': 'inline-block', 'width': '20%', 'align':'center'}),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Estrato:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'}),
html.Div([
dcc.Dropdown(
id='estrato',
options=[{'label': i, 'value': i} for i in estratos],
multi=True,
value=estrato
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'})
], style = {'display': 'inline-block', 'width': '20%', 'align':'center'}),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Hora Valle:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
html.Div([
dcc.Dropdown(
id='habil',
options=[{'label': i, 'value': i} for i in ["S", "N"]],
value="N"
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'left'})
], style = {'display': 'inline-block', 'width': '20%', 'align':'center'}),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Día habil:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
html.Div([
dcc.Dropdown(
id='dia',
options=[{'label': i, 'value': i} for i in ["S", "N"]],
value="S"
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'left'})
], style = {'display': 'inline-block', 'width': '20%', 'align':'center'}),
#_________________________________________________________________________________________
html.Div([
html.H5('Edad:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
html.Div([
dcc.RangeSlider(
id='edad',
marks={i*5: i*5 for i in range(20)},
min=min(edades),
max=max(edades)+1,
value=edad
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
#_________________________________________________________________________________________
html.Hr(),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Franja de inicio de viaje:', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'}),
html.Div([
dcc.Dropdown(
id='horita',
options=[{'label': i, 'value': i} for i in hora],
multi=True,
value=horita
)
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'}),
#_________________________________________________________________________________________
html.Div([
html.Div([
html.H5('Radio de frecuencia de viajes Origen-Destino', style={'textAlign': 'center'})
], style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
html.Div([
html.Iframe(
id= 'mapita', srcDoc = abrir_jugarmapas(), width='100%', height='500')],
style = {'display': 'inline-block', 'width': '100%', 'align':'center'})],
style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
#_____________________________________________________________________________________________
html.Hr(),
#_____________________________________________________________________________________________
html.Div([
html.Div([
dcc.Graph(
id= 'power_law1', figure = abrir_entrada()
)
], style = {'display': 'inline-block', 'width': '50%', 'align':'center'} ),
#_________________________________________________________________________________________
html.Div([
dcc.Graph(
id= 'power_law2', figure = abrir_salida()
)
], style = {'display': 'inline-block', 'width': '50%', 'align':'center'} )
],style = {'display': 'inline-block', 'width': '100%', 'align':'center'} ),
#_____________________________________________________________________________________________
html.Hr(),
#_____________________________________________________________________________________________
html.Div([
html.Div([
dcc.Graph(id="dens", figure=abrirdens())
],style={'display': 'inline-block', 'width': '50%', 'align':'center'}),
#_____________________________________________________________________________________________
html.Div([
dcc.Graph(id="histogram", figure=abrirhist())
],style={'display': 'inline-block', 'width': '50%', 'align':'center'}),
], style={'display': 'inline-block', 'width': '100%', 'align':'center'}),
#_____________________________________________________________________________________________
html.Hr(),
#_____________________________________________________________________________________________
html.Div([
html.Div([
html.Iframe(
id= 'mapa2', srcDoc = abrir_mapa(), width='100%', height='475')],
style = {'display': 'inline-block', 'width': '45%', 'align':'center'}),
#_____________________________________________________________________________________________
html.Div([html.Hr()],style = {'display': 'inline-block', 'width': '5%', 'align':'center'}),
#_____________________________________________________________________________________________
html.Div(
dt.DataTable(rows=top_viajes.to_dict('records'),
columns=top_viajes.columns,
row_selectable=True,
filterable=True,
sortable=True,
selected_row_indices=[],
id='horitas'),
style={'display': 'inline-block', 'width': '50%', 'align':'center'})
], style={'display': 'inline-block', 'width': '100%', 'align':'center'}),
#_____________________________________________________________________________________________
html.Div(
dt.DataTable(rows=trip_counts.to_dict('records'),
columns=trip_counts.columns,
row_selectable=False,
filterable=True,
sortable=True,
selected_row_indices=[],
id='datatable'),
style={'display': 'inline-block', 'width': '100%', 'align':'center'}),
#_____________________________________________________________________________________________
html.Hr(),
#_____________________________________________________________________________________________
html.Div( [ html.H6('Desarrollado por: Ana Milena Rodriguez - Nicolas Hernandez - Luis Francisco Ortiz - Daniel Rojas', style={'textAlign': 'center'})])
#_________________________________________________________________________________________
], style = {'width': '99%', 'align':'center'} )
@app.callback(
    dash.dependencies.Output(component_id='mapa2', component_property='srcDoc'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value'),
     dash.dependencies.Input(component_id='horitas', component_property='rows'),
     dash.dependencies.Input(component_id='horitas', component_property='selected_row_indices')])
def update_mapa2(filtro, edad, motivo, estrato, dia, habil, horita, rows, selected_row_indices):
    count = pd.DataFrame(rows)
    if selected_row_indices:
        count = count.loc[selected_row_indices]
    else:
        count = pd.DataFrame()
    return update_mapa(filtro, viajes, edad, motivo, estrato, dia, habil, horita, count)


@app.callback(
    dash.dependencies.Output(component_id='horitas', component_property='rows'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_top_viajes(filtro, edad, motivo, estrato, dia, habil, horita):
    top_viajes = tviajes(filtro, viajes, edad, motivo, estrato, dia, habil, horita)
    rows = top_viajes.to_dict('records')
    return rows


@app.callback(
    dash.dependencies.Output(component_id='datatable', component_property='rows'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_trip_counts(filtro, edad, motivo, estrato, dia, habil, horita):
    trip_counts = tablero_final(filtro, viajes, edad, motivo, estrato, dia, habil, horita)
    rows = trip_counts.to_dict('records')
    return rows


@app.callback(
    dash.dependencies.Output(component_id='histogram', component_property='figure'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_histogram(filtro, edad, motivo, estrato, dia, habil, horita):
    return hist_hora(filtro, viajes, edad, motivo, estrato, dia, habil, horita)


@app.callback(
    dash.dependencies.Output(component_id='dens', component_property='figure'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_dens(filtro, edad, motivo, estrato, dia, habil, horita):
    return dens_hora(filtro, viajes, edad, motivo, estrato, dia, habil, horita)


@app.callback(
    dash.dependencies.Output(component_id='power_law1', component_property='figure'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_power_law_entradas(filtro, edad, motivo, estrato, dia, habil, horita):
    direccion = "Entradas"
    fig = powerlaws(filtro, viajes, edad, direccion, motivo, estrato, dia, habil, horita)
    return fig


@app.callback(
    dash.dependencies.Output(component_id='power_law2', component_property='figure'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_power_law_salidas(filtro, edad, motivo, estrato, dia, habil, horita):
    direccion = "Salidas"
    fig = powerlaws(filtro, viajes, edad, direccion, motivo, estrato, dia, habil, horita)
    return fig


@app.callback(
    dash.dependencies.Output(component_id='mapita', component_property='srcDoc'),
    [dash.dependencies.Input(component_id='filtro', component_property='value'),
     dash.dependencies.Input(component_id='edad', component_property='value'),
     dash.dependencies.Input(component_id='motivo', component_property='value'),
     dash.dependencies.Input(component_id='estrato', component_property='value'),
     dash.dependencies.Input(component_id='dia', component_property='value'),
     dash.dependencies.Input(component_id='habil', component_property='value'),
     dash.dependencies.Input(component_id='horita', component_property='value')])
def update_distribution2(filtro, edad, motivo, estrato, dia, habil, horita):
    fig = update_mapas(viajes, filtro, edad, motivo, estrato, dia, habil, horita)
    return fig


app.css.append_css({
    'external_url': 'https://codepen.io/chriddyp/pen/bWLwgP.css'
})

if __name__ == '__main__':
    app.run_server(debug=True)
| 69.712871 | 176 | 0.533778 | 1,629 | 21,123 | 5.513198 | 0.121547 | 0.119363 | 0.13562 | 0.193742 | 0.833649 | 0.820399 | 0.809709 | 0.783654 | 0.744906 | 0.717181 | 0 | 0.009744 | 0.339204 | 21,123 | 302 | 177 | 69.943709 | 0.633687 | 0.099181 | 0 | 0.599278 | 0 | 0 | 0.158317 | 0.001764 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028881 | false | 0 | 0.032491 | 0.00722 | 0.090253 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1faf99cd4ee8aa2c00f6b68092c5deb8f3b21e84 | 6,964 | py | Python | racecar_gym/tasks/progress_based.py | mkolodziejczyk-piap/racecar_gym | e117432572bb5744884ad6059e55282b03aea929 | [
"MIT"
] | 16 | 2020-11-27T02:55:24.000Z | 2022-03-24T01:27:29.000Z | racecar_gym/tasks/progress_based.py | mkolodziejczyk-piap/racecar_gym | e117432572bb5744884ad6059e55282b03aea929 | [
"MIT"
] | 5 | 2020-08-24T15:59:39.000Z | 2020-10-20T19:45:46.000Z | racecar_gym/tasks/progress_based.py | mkolodziejczyk-piap/racecar_gym | e117432572bb5744884ad6059e55282b03aea929 | [
"MIT"
] | 4 | 2020-10-08T16:14:19.000Z | 2021-12-26T18:19:53.000Z | from .task import Task
import numpy as np
class MaximizeProgressTask(Task):
    def __init__(self, laps: int, time_limit: float, terminate_on_collision: bool,
                 delta_progress: float = 0.0, collision_reward: float = 0.0,
                 frame_reward: float = 0.0, progress_reward: float = 100.0, n_min_rays_termination=1080):
        self._time_limit = time_limit
        self._laps = laps
        self._terminate_on_collision = terminate_on_collision
        self._n_min_rays_termination = n_min_rays_termination
        self._last_stored_progress = None
        # reward params
        self._delta_progress = delta_progress
        self._progress_reward = progress_reward
        self._collision_reward = collision_reward
        self._frame_reward = frame_reward

    def reward(self, agent_id, state, action) -> float:
        agent_state = state[agent_id]
        progress = agent_state['lap'] + agent_state['progress']
        if self._last_stored_progress is None:
            self._last_stored_progress = progress
        delta = abs(progress - self._last_stored_progress)
        if delta > .5:  # the agent is crossing the starting line in the wrong direction
            delta = (1 - progress) + self._last_stored_progress
        reward = self._frame_reward
        if self._check_collision(agent_state):
            reward += self._collision_reward
        reward += delta * self._progress_reward
        self._last_stored_progress = progress
        return reward

    def done(self, agent_id, state) -> bool:
        agent_state = state[agent_id]
        if self._terminate_on_collision and self._check_collision(agent_state):
            return True
        return agent_state['lap'] > self._laps or self._time_limit < agent_state['time']

    def _check_collision(self, agent_state):
        safe_margin = 0.25
        collision = agent_state['wall_collision'] or len(agent_state['opponent_collisions']) > 0
        if 'observations' in agent_state and 'lidar' in agent_state['observations']:
            n_min_rays = sum(np.where(agent_state['observations']['lidar'] <= safe_margin, 1, 0))
            return n_min_rays > self._n_min_rays_termination or collision
        return collision

    def reset(self):
        self._last_stored_progress = None


class MaximizeProgressMaskObstacleTask(MaximizeProgressTask):
    def __init__(self, laps: int, time_limit: float, terminate_on_collision: bool, delta_progress=0.0,
                 collision_reward=0, frame_reward=0, progress_reward=100):
        super().__init__(laps, time_limit, terminate_on_collision, delta_progress, collision_reward, frame_reward,
                         progress_reward)

    def reward(self, agent_id, state, action) -> float:
        progress_reward = super().reward(agent_id, state, action)
        distance_to_obstacle = state[agent_id]['obstacle']
        if distance_to_obstacle < .3:  # max distance = 1, meaning perfectly centered in the widest point of the track
            return 0.0
        else:
            return progress_reward


class MaximizeProgressRegularizeAction(MaximizeProgressTask):
    def __init__(self, laps: int, time_limit: float, terminate_on_collision: bool, delta_progress=0.0,
                 collision_reward=0, frame_reward=0, progress_reward=100, action_reg=0.25):
        super().__init__(laps, time_limit, terminate_on_collision, delta_progress, collision_reward, frame_reward,
                         progress_reward)
        self._action_reg = action_reg
        self._last_action = None

    def reset(self):
        super(MaximizeProgressRegularizeAction, self).reset()
        self._last_action = None

    def reward(self, agent_id, state, action) -> float:
        """ Progress-based with action regularization: penalize sharp change in control"""
        reward = super().reward(agent_id, state, action)
        action = np.array(list(action.values()))
        if self._last_action is not None:
            reward -= self._action_reg * np.linalg.norm(action - self._last_action)
        self._last_action = action
        return reward


class RankDiscountedMaximizeProgressTask(MaximizeProgressTask):
    def __init__(self, laps: int, time_limit: float, terminate_on_collision: bool, delta_progress=0.001,
                 collision_reward=-100, frame_reward=-0.1, progress_reward=1):
        super().__init__(laps, time_limit, terminate_on_collision, delta_progress, collision_reward, frame_reward,
                         progress_reward)

    def reward(self, agent_id, state, action) -> float:
        rank = state[agent_id]['rank']
        reward = super().reward(agent_id, state, action)
        reward = reward / float(rank)
        return reward
| 47.054054 | 118 | 0.679351 | 860 | 6,964 | 5.131395 | 0.119767 | 0.063449 | 0.063449 | 0.069794 | 0.845003 | 0.836166 | 0.836166 | 0.812373 | 0.812373 | 0.800589 | 0 | 0.014098 | 0.236071 | 6,964 | 148 | 119 | 47.054054 | 0.815414 | 0.044227 | 0 | 0.813008 | 0 | 0 | 0.030996 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138211 | false | 0 | 0.01626 | 0 | 0.308943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1fe5efee6941c2f732e32fb29efe1963a6a66e77 | 146 | py | Python | openpharmacophore/tests/test_ligand_based.py | dprada/OpenPharmacophore | bfcf4bdafd586b27a48fd5d1f13614707b5e55a8 | [
"MIT"
] | 2 | 2021-07-10T05:56:04.000Z | 2021-08-04T14:56:47.000Z | openpharmacophore/tests/test_ligand_based.py | dprada/OpenPharmacophore | bfcf4bdafd586b27a48fd5d1f13614707b5e55a8 | [
"MIT"
] | 21 | 2021-04-27T06:05:05.000Z | 2021-11-01T23:19:36.000Z | openpharmacophore/tests/test_ligand_based.py | dprada/OpenPharmacophore | bfcf4bdafd586b27a48fd5d1f13614707b5e55a8 | [
"MIT"
] | 3 | 2021-06-21T19:09:47.000Z | 2021-07-16T01:16:27.000Z | from openpharmacophore.ligand_based import LigandBasedPharmacophore
def test_from_ligand_list():
    pass


def test_from_ligand_file():
    pass
| 18.25 | 67 | 0.815068 | 18 | 146 | 6.222222 | 0.611111 | 0.125 | 0.196429 | 0.303571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136986 | 146 | 7 | 68 | 20.857143 | 0.888889 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
1ff68581e5c97611cc65ac430f2bb47c23dfb68f | 209 | py | Python | jackpot/admin.py | clonetech/jackpotsone | 512d018f431eef7649712ad9c9e8e40d99ddd00c | [
"BSD-3-Clause"
] | null | null | null | jackpot/admin.py | clonetech/jackpotsone | 512d018f431eef7649712ad9c9e8e40d99ddd00c | [
"BSD-3-Clause"
] | 3 | 2020-06-05T18:28:06.000Z | 2021-06-10T20:33:26.000Z | jackpot/admin.py | clonetech/jackpotsone | 512d018f431eef7649712ad9c9e8e40d99ddd00c | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from .models import Punter, Hexabet, Jackpot, Singlebet
admin.site.register(Hexabet)
admin.site.register(Singlebet)
admin.site.register(Punter)
admin.site.register(Jackpot)
| 23.222222 | 56 | 0.813397 | 28 | 209 | 6.071429 | 0.428571 | 0.211765 | 0.4 | 0.305882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086124 | 209 | 8 | 57 | 26.125 | 0.890052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
1f33ccfb605a8632cb2d9f949d8978d1d2a7b5a6 | 13,112 | py | Python | tests/impls/perturb_image/test_sliding_window.py | nitesh201/xaitk-saliency | ee86cd3c63bd3b25cad3dc06e5e124ba825190d4 | [
"BSD-3-Clause"
] | null | null | null | tests/impls/perturb_image/test_sliding_window.py | nitesh201/xaitk-saliency | ee86cd3c63bd3b25cad3dc06e5e124ba825190d4 | [
"BSD-3-Clause"
] | null | null | null | tests/impls/perturb_image/test_sliding_window.py | nitesh201/xaitk-saliency | ee86cd3c63bd3b25cad3dc06e5e124ba825190d4 | [
"BSD-3-Clause"
] | null | null | null | from unittest import TestCase
import PIL.Image
import numpy as np
from smqtk_core.configuration import configuration_test_helper
from xaitk_saliency.impls.perturb_image.sliding_window import SlidingWindowPerturb
class TestOcclusionBasedPerturb(TestCase):

    def test_init_default(self) -> None:
        """
        Test empty construction since we provide defaults.
        """
        impl = SlidingWindowPerturb()
        assert impl.window_size == (50, 50)
        assert impl.stride == (20, 20)

    def test_init_valued(self) -> None:
        """
        Test that constructor values pass.
        """
        ex_w = (777, 776)
        ex_s = (444, 445)
        impl = SlidingWindowPerturb(window_size=ex_w, stride=ex_s)
        assert impl.window_size == ex_w
        assert impl.stride == ex_s

    def test_standard_config(self) -> None:
        ex_w = (777, 776)
        ex_s = (444, 445)
        impl = SlidingWindowPerturb(window_size=ex_w, stride=ex_s)
        for inst in configuration_test_helper(impl):
            assert inst.window_size == ex_w
            assert inst.stride == ex_s

    def test_perturb_1channel(self) -> None:
        """
        Test basic perturbation on a known image with even windowing + stride.
        """
        impl = SlidingWindowPerturb(window_size=(2, 2), stride=(2, 2))
        # Image is slightly wide, should be occluded 6-ways.
        white_image = PIL.Image.fromarray(
            np.full((4, 6), fill_value=255, dtype=np.uint8)
        )
        assert white_image.mode == "L"
        pert_imgs, pert_masks = impl.perturb(white_image)
        assert len(pert_imgs) == 6
        assert len(pert_masks) == 6
        assert np.allclose(pert_masks, EXPECTED_MASKS_4x6)
        # Output image modes should match input
        for i, img in enumerate(pert_imgs):
            assert img.mode == "L"
        # Test for expected output perturbed image content
        assert np.allclose(
            np.asarray(pert_imgs[0]),
            np.vstack([
                np.hstack([BLACK_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[1]),
            np.vstack([
                np.hstack([WHITE_2x2_L, BLACK_2x2_L, WHITE_2x2_L]),
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[2]),
            np.vstack([
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, BLACK_2x2_L]),
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[3]),
            np.vstack([
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
                np.hstack([BLACK_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[4]),
            np.vstack([
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
                np.hstack([WHITE_2x2_L, BLACK_2x2_L, WHITE_2x2_L]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[5]),
            np.vstack([
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, WHITE_2x2_L]),
                np.hstack([WHITE_2x2_L, WHITE_2x2_L, BLACK_2x2_L]),
            ])
        )

    def test_perturb_3channel(self) -> None:
        """
        Test basic perturbation on a known image with even windowing + stride.
        """
        impl = SlidingWindowPerturb(window_size=(2, 2), stride=(2, 2))
        # Image is slightly wide, should be occluded 6-ways.
        white_image = PIL.Image.fromarray(
            np.full((4, 6, 3), fill_value=255, dtype=np.uint8)
        )
        assert white_image.mode == "RGB"
        pert_imgs, pert_masks = impl.perturb(white_image)
        assert len(pert_imgs) == 6
        assert len(pert_masks) == 6
        assert np.allclose(pert_masks, EXPECTED_MASKS_4x6)
        # Output image modes should match input
        for i, img in enumerate(pert_imgs):
            assert img.mode == "RGB"
        # Test for expected output perturbed image content
        assert np.allclose(
            np.asarray(pert_imgs[0]),
            np.vstack([
                np.hstack([BLACK_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[1]),
            np.vstack([
                np.hstack([WHITE_2x2_RGB, BLACK_2x2_RGB, WHITE_2x2_RGB]),
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[2]),
            np.vstack([
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, BLACK_2x2_RGB]),
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[3]),
            np.vstack([
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
                np.hstack([BLACK_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[4]),
            np.vstack([
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
                np.hstack([WHITE_2x2_RGB, BLACK_2x2_RGB, WHITE_2x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[5]),
            np.vstack([
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, WHITE_2x2_RGB]),
                np.hstack([WHITE_2x2_RGB, WHITE_2x2_RGB, BLACK_2x2_RGB]),
            ])
        )

    def test_perturb_3channel_nonsquare(self) -> None:
        """
        Test basic perturbation on a known image with non-square window +
        stride.
        """
        impl = SlidingWindowPerturb(window_size=(3, 2), stride=(3, 2))
        # Image is square; with a 3x2 window it should be occluded 6-ways.
        white_image = PIL.Image.fromarray(
            np.full((6, 6, 3), fill_value=255, dtype=np.uint8)
        )
        assert white_image.mode == "RGB"
        pert_imgs, pert_masks = impl.perturb(white_image)
        assert len(pert_imgs) == 6
        assert len(pert_masks) == 6
        assert np.allclose(pert_masks, EXPECTED_MASKS_6x6_rect)
        # Output image modes should match input
        for i, img in enumerate(pert_imgs):
            assert img.mode == "RGB"
        # Test for expected output perturbed image content
        assert np.allclose(
            np.asarray(pert_imgs[0]),
            np.vstack([
                np.hstack([BLACK_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[1]),
            np.vstack([
                np.hstack([WHITE_3x2_RGB, BLACK_3x2_RGB, WHITE_3x2_RGB]),
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[2]),
            np.vstack([
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, BLACK_3x2_RGB]),
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[3]),
            np.vstack([
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
                np.hstack([BLACK_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[4]),
            np.vstack([
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
                np.hstack([WHITE_3x2_RGB, BLACK_3x2_RGB, WHITE_3x2_RGB]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[5]),
            np.vstack([
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, WHITE_3x2_RGB]),
                np.hstack([WHITE_3x2_RGB, WHITE_3x2_RGB, BLACK_3x2_RGB]),
            ])
        )

    def test_perturb_4channel(self) -> None:
        """
        Test basic perturbation on a known image with even windowing + stride.
        """
        impl = SlidingWindowPerturb(window_size=(2, 2), stride=(2, 2))
        # Image is slightly wide, should be occluded 6-ways.
        white_image = PIL.Image.fromarray(
            np.full((4, 6, 4), fill_value=255, dtype=np.uint8)
        )
        assert white_image.mode == "RGBA"
        pert_imgs, pert_masks = impl.perturb(white_image)
        assert len(pert_imgs) == 6
        assert len(pert_masks) == 6
        assert np.allclose(pert_masks, EXPECTED_MASKS_4x6)
        # Output image modes should match input
        for i, img in enumerate(pert_imgs):
            assert img.mode == "RGBA"
        # Test for expected output perturbed image content
        assert np.allclose(
            np.asarray(pert_imgs[0]),
            np.vstack([
                np.hstack([BLACK_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[1]),
            np.vstack([
                np.hstack([WHITE_2x2_RGBA, BLACK_2x2_RGBA, WHITE_2x2_RGBA]),
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[2]),
            np.vstack([
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, BLACK_2x2_RGBA]),
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[3]),
            np.vstack([
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
                np.hstack([BLACK_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[4]),
            np.vstack([
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
                np.hstack([WHITE_2x2_RGBA, BLACK_2x2_RGBA, WHITE_2x2_RGBA]),
            ])
        )
        assert np.allclose(
            np.asarray(pert_imgs[5]),
            np.vstack([
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, WHITE_2x2_RGBA]),
                np.hstack([WHITE_2x2_RGBA, WHITE_2x2_RGBA, BLACK_2x2_RGBA]),
            ])
        )
# Common expected masks for 4x6 tests
EXPECTED_MASKS_4x6 = np.array([
[[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]],
[[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]],
[[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]],
[[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1]],
[[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]],
[[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0]],
], dtype=bool)
EXPECTED_MASKS_6x6_rect = np.array([
[[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]],
[[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]],
[[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1]],
[[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 1, 1]],
[[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1],
[1, 1, 0, 0, 1, 1]],
[[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 0, 0]],
], dtype=bool)
WHITE_2x2_L = np.array([
[255, 255],
[255, 255],
])
WHITE_2x2_RGB = np.array([
[[255, 255, 255], [255, 255, 255]],
[[255, 255, 255], [255, 255, 255]],
])
WHITE_3x2_RGB = np.array([
[[255, 255, 255], [255, 255, 255]],
[[255, 255, 255], [255, 255, 255]],
[[255, 255, 255], [255, 255, 255]],
])
WHITE_2x2_RGBA = np.array([
[[255, 255, 255, 255], [255, 255, 255, 255]],
[[255, 255, 255, 255], [255, 255, 255, 255]],
])
BLACK_2x2_L = np.array([
[0, 0],
[0, 0]
])
BLACK_2x2_RGB = np.array([
[[0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0]]
])
BLACK_3x2_RGB = np.array([
[[0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0]]
])
BLACK_2x2_RGBA = np.array([
[[0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0]]
])
| 34.057143 | 82 | 0.519448 | 1,853 | 13,112 | 3.445764 | 0.067458 | 0.0852 | 0.114644 | 0.135317 | 0.885826 | 0.868598 | 0.860454 | 0.860454 | 0.860454 | 0.860454 | 0 | 0.114151 | 0.336562 | 13,112 | 384 | 83 | 34.145833 | 0.619841 | 0.073292 | 0 | 0.783383 | 0 | 0 | 0.001835 | 0 | 0 | 0 | 0 | 0 | 0.148368 | 1 | 0.020772 | false | 0 | 0.014837 | 0 | 0.038576 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# File: src/register/test_views.py (repo: juliannovoa/SmartScribble, license: Apache-2.0)
# Copyright 2020 Julián Novoa Martín
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from django.contrib.auth.forms import PasswordChangeForm
from django.contrib.auth.models import User
from django.test import TestCase
from django.urls import reverse
from common.util import get_logged_users
from register.forms import RegisterForm, PredictionModelForm, LoginForm
from register.models import PredictionModels


class RegisterViewTest(TestCase):
def test_view_url_exists_at_desired_location(self):
response = self.client.get('/register/')
self.assertEqual(response.status_code, 200)
def test_view_url_accessible_by_name(self):
response = self.client.get(reverse('register'))
self.assertEqual(response.status_code, 200)
def test_view_uses_correct_template_user_not_logged(self):
response = self.client.get(reverse('register'))
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/register.html')
def test_view_redirects_user_logged(self):
user = User.objects.create(username='test_user')
self.client.force_login(user)
response = self.client.get(reverse('register'))
self.assertEqual(response.status_code, 302)
self.assertRedirects(response, reverse('profile'))
def test_form_create_user_is_correct(self):
response = self.client.get(reverse('register'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertTrue(isinstance(form, RegisterForm))
def test_initial_form_create_user_is_empty(self):
response = self.client.get(reverse('register'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertFalse(form.is_bound)
def test_view_creates_user(self):
data = {'username': 'test_user',
'email': 'example@email.com',
'password1': 'qawsedrftgyh',
'password2': 'qawsedrftgyh'}
response = self.client.post(reverse('register'), data)
self.assertEqual(response.status_code, 302)
try:
User.objects.get(username='test_user')
except User.DoesNotExist:
            self.fail('User does not exist.')
def test_view_response_form_invalid_data(self):
data = {'username': 'test_user',
'email': 'example@email.com',
'password1': 'qawsedrftgyh',
'password2': 'qawsedgyh'}
response = self.client.post(reverse('register'), data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/register.html')
def test_view_logs_user_after_create_it(self):
data = {'username': 'test_user',
'email': 'example@email.com',
'password1': 'qawsedrftgyh',
'password2': 'qawsedrftgyh'}
response = self.client.post(reverse('register'), data)
self.assertEqual(response.status_code, 302)
try:
user = User.objects.get(username='test_user')
except User.DoesNotExist:
            self.fail('User does not exist.')
logged_users = get_logged_users()
self.assertTrue(user in logged_users)
def test_redirection_after_user_creation(self):
data = {'username': 'test_user',
'email': 'example@email.com',
'password1': 'qawsedrftgyh',
'password2': 'qawsedrftgyh'}
response = self.client.post(reverse('register'), data)
self.assertEqual(response.status_code, 302)
self.assertTrue(response.url.startswith('/profile/'))


class ChangePasswordViewTest(TestCase):
def setUp(self):
self.test_user = User.objects.create(username='test_user')
self.test_user.set_password('qawsedrftgyh')
self.test_user.save()
def test_redirect_if_not_logged(self):
response = self.client.get('/changepswd/')
self.assertEqual(response.status_code, 302)
self.assertTrue(response.url.startswith('/login/'))
def test_view_url_exists_at_desired_location(self):
self.client.force_login(self.test_user)
response = self.client.get('/changepswd/')
self.assertEqual(response.status_code, 200)
def test_view_url_accessible_by_name(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepswd'))
self.assertEqual(response.status_code, 200)
def test_view_uses_correct_template(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepswd'))
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/changepswd.html')
def test_form_change_password_is_correct(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepswd'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertTrue(isinstance(form, PasswordChangeForm))
def test_initial_form_change_password_is_empty(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepswd'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertFalse(form.is_bound)
def test_view_change_password(self):
self.client.force_login(self.test_user)
new_password = 'newpassw_1!'
data = {'old_password': 'qawsedrftgyh',
'new_password1': new_password,
'new_password2': new_password}
response = self.client.post(reverse('changepswd'), data)
self.assertEqual(response.status_code, 302)
self.assertRedirects(response, reverse('changedata'))
self.test_user.refresh_from_db()
self.assertTrue(self.test_user.check_password(new_password))
def test_view_response_form_invalid_data(self):
self.client.force_login(self.test_user)
new_password = 'newpassw_1!'
data = {'old_password': 'invalid',
'new_password1': new_password,
'new_password2': new_password}
response = self.client.post(reverse('changepswd'), data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/changepswd.html')


class ChangePredictionModelViewTest(TestCase):
def setUp(self):
self.test_user = User.objects.create(username='test_user')
self.test_user.save()
def test_redirect_if_not_logged(self):
response = self.client.get('/changepm/')
self.assertEqual(response.status_code, 302)
self.assertTrue(response.url.startswith('/login/'))
def test_view_url_exists_at_desired_location(self):
self.client.force_login(self.test_user)
response = self.client.get('/changepm/')
self.assertEqual(response.status_code, 200)
def test_view_url_accessible_by_name(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepm'))
self.assertEqual(response.status_code, 200)
def test_view_uses_correct_template(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepm'))
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/changepm.html')
def test_form_change_prediction_model_is_correct(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepm'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertTrue(isinstance(form, PredictionModelForm))
def test_initial_form_change_prediction_model_is_empty(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('changepm'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertFalse(form.is_bound)
def test_view_change_prediction_model(self):
self.client.force_login(self.test_user)
for model in PredictionModels:
data = {'selected_prediction_model': model.name}
self.assertTrue(PredictionModelForm(data).is_valid())
response = self.client.post(reverse('changepm'), data)
self.assertEqual(response.status_code, 302)
self.test_user.refresh_from_db()
self.assertEqual(self.test_user.settings.prediction_model, model.name)


class CustomLogInViewTest(TestCase):
def setUp(self):
self.test_user = User.objects.create(username='test_user')
self.test_user.set_password('qawsedrftgyh')
self.test_user.save()
def test_view_url_exists_at_desired_location(self):
response = self.client.get('/login/')
self.assertEqual(response.status_code, 200)
def test_view_url_accessible_by_name(self):
response = self.client.get(reverse('login'))
self.assertEqual(response.status_code, 200)
def test_view_uses_correct_template_user_not_logged(self):
response = self.client.get(reverse('login'))
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/login.html')
def test_view_redirects_user_logged(self):
self.client.force_login(self.test_user)
response = self.client.get(reverse('login'))
self.assertEqual(response.status_code, 302)
self.assertRedirects(response, reverse('profile'))
def test_form_login_user_is_correct(self):
response = self.client.get(reverse('login'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertTrue(isinstance(form, LoginForm))
    def test_initial_form_login_user_is_empty(self):
response = self.client.get(reverse('login'))
self.assertEqual(response.status_code, 200)
form = response.context['form']
self.assertFalse(form.is_bound)
def test_view_logs_user(self):
user_data = {'username': 'test_user',
'password': 'qawsedrftgyh'}
logged_users_list = get_logged_users()
self.assertFalse(self.test_user in logged_users_list)
response = self.client.post(reverse('login'), user_data)
self.assertEqual(response.status_code, 302)
self.assertRedirects(response, reverse('profile'))
logged_users_list = get_logged_users()
self.assertTrue(self.test_user in logged_users_list)
def test_view_response_form_invalid_data(self):
user_data = {'username': 'test_user',
'password': 'invalid'}
logged_users_list = get_logged_users()
self.assertFalse(self.test_user in logged_users_list)
response = self.client.post(reverse('login'), user_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'register/login.html')
logged_users_list = get_logged_users()
self.assertFalse(self.test_user in logged_users_list)
# File: workflows/signals.py (repo: blueicepl/django-workflows, license: BSD-3-Clause)
import django.dispatch

before_state_change = django.dispatch.Signal(providing_args=["from_state", "to_state"])
after_state_change = django.dispatch.Signal(providing_args=["from_state", "to_state"])
before_transition = django.dispatch.Signal(providing_args=["from_state", "transition", "user"])
after_transition = django.dispatch.Signal(providing_args=["from_state", "transition", "user"])
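
The four signals above are plain `django.dispatch.Signal` instances: workflow code `send`s them, and interested receivers `connect` to them. Below is a toy stand-in (not Django's real `Signal` class; the names are illustrative) sketching only that connect/send flow. Note that `providing_args` is purely documentational and was removed in Django 4.0.

```python
# Toy stand-in for django.dispatch.Signal, illustrating only the
# connect/send flow these workflow signals are used for.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **named):
        # Django returns a list of (receiver, response) pairs.
        return [(r, r(sender=sender, **named)) for r in self._receivers]

before_transition = Signal()

def audit(sender, from_state=None, transition=None, user=None, **kwargs):
    return f"{user}: {from_state} -> {transition}"

before_transition.connect(audit)
responses = before_transition.send(sender=None, from_state="draft",
                                   transition="publish", user="alice")
print(responses[0][1])  # alice: draft -> publish
```

A real receiver would be connected the same way, just against the signal imported from `workflows.signals`.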
# File: mnsim_noc/Strategy/__init__.py (repo: godfather991/MNSIM_NoC, license: MIT)
# -*- coding: utf-8 -*-
from mnsim_noc.Strategy.mapping import Mapping
from mnsim_noc.Strategy.mapping import NaiveMapping
from mnsim_noc.Strategy.schedule import Schedule
from mnsim_noc.Strategy.schedule import NaiveSchedule
# File: state/__init__.py (repo: JacobChen258/AI-Markov-Probability, license: MIT)
from .game_state import GameState
from .game_state_handler import GameStateHandler
# File: day17/python/dfriedenberger/src/util.py (repo: jamhocken/aoc-2020, license: MIT)
import re

# tag::Cube[]
class Cube():
def __init__(self):
self.cubes = dict()
def loadFromFile(self, filename):
"""Read file to map"""
file = open(filename, "r")
z = 0
y = 0
for line in file:
for x in range(len(line.strip())):
if line[x] == '#':
self.set(x,y,z)
y += 1
file.close()
def set(self,x,y,z):
key = "{0}#{1}#{2}".format(x,y,z)
self.cubes[key] = "#"
def get(self,x,y,z):
key = "{0}#{1}#{2}".format(x,y,z)
if key not in self.cubes: return '.'
return self.cubes[key]
def getLoopRange(self):
x1,x2,y1,y2,z1,z2 = 0,1,0,1,0,1
for key in self.cubes:
m = re.search(r'^([0-9-]+)#([0-9-]+)#([0-9-]+)$', key)
if not m: raise Exception(key)
x = int(m.group(1))
x1,x2 = min(x1,x) , max(x2,x + 1)
y = int(m.group(2))
y1,y2 = min(y1,y) , max(y2,y + 1)
z = int(m.group(3))
z1,z2 = min(z1,z) , max(z2,z + 1)
return x1,x2,y1,y2,z1,z2
def getActiveNeighbors(self,x,y,z):
o = 0
for z0 in range(z-1,z+2):
for y0 in range(y-1,y+2):
for x0 in range(x-1,x+2):
if x0 == x and y0 == y and z0 == z: continue
if self.get(x0,y0,z0) == "#": o += 1
return o
def next(self):
next = Cube()
x1,x2,y1,y2,z1,z2 = self.getLoopRange()
for z in range(z1-1,z2+1):
for y in range(y1-1,y2+1):
for x in range(x1-1,x2+1):
o = self.getActiveNeighbors(x,y,z)
if self.get(x,y,z) == "#":
#active and exactly 2 or 3 of its neighbors are also active, the cube remains active
if o == 2 or o == 3:
next.set(x,y,z)
else:
#If a cube is inactive but exactly 3 of its neighbors are active, the cube becomes active
if o == 3:
next.set(x,y,z)
return next
def count(self):
return len(self.cubes)
def dump(self):
x1,x2,y1,y2,z1,z2 = self.getLoopRange()
print("---",x1,x2,y1,y2,z1,z2)
for z in range(z1,z2):
            print("z =", z)
for y in range(y1,y2):
                row = ""
for x in range(x1,x2):
row += self.get(x,y,z)
print(row)
return
# end::Cube[]

# tag::Cube4D[]
class Cube4D():
def __init__(self):
self.cubes = dict()
def loadFromFile(self, filename):
"""Read file to map"""
file = open(filename, "r")
z = 0
w = 0
y = 0
for line in file:
for x in range(len(line.strip())):
if line[x] == '#':
self.set(x,y,z,w)
y += 1
file.close()
def set(self,x,y,z,w):
key = "{0}#{1}#{2}#{3}".format(x,y,z,w)
self.cubes[key] = "#"
def get(self,x,y,z,w):
key = "{0}#{1}#{2}#{3}".format(x,y,z,w)
if key not in self.cubes: return '.'
return self.cubes[key]
def getLoopRange(self):
x1,x2,y1,y2,z1,z2,w1,w2 = 0,1,0,1,0,1,0,1
for key in self.cubes:
m = re.search(r'^([0-9-]+)#([0-9-]+)#([0-9-]+)#([0-9-]+)$', key)
if not m: raise Exception(key)
x = int(m.group(1))
x1,x2 = min(x1,x) , max(x2,x + 1)
y = int(m.group(2))
y1,y2 = min(y1,y) , max(y2,y + 1)
z = int(m.group(3))
z1,z2 = min(z1,z) , max(z2,z + 1)
w = int(m.group(4))
w1,w2 = min(w1,w) , max(w2,w + 1)
return x1,x2,y1,y2,z1,z2,w1,w2
def getActiveNeighbors(self,x,y,z,w):
o = 0
for w0 in range(w-1,w+2):
for z0 in range(z-1,z+2):
for y0 in range(y-1,y+2):
for x0 in range(x-1,x+2):
if x0 == x and y0 == y and z0 == z and w0 == w: continue
if self.get(x0,y0,z0,w0) == "#": o += 1
return o
def next(self):
next = Cube4D()
x1,x2,y1,y2,z1,z2,w1,w2 = self.getLoopRange()
for w in range(w1-1,w2+1):
for z in range(z1-1,z2+1):
for y in range(y1-1,y2+1):
for x in range(x1-1,x2+1):
o = self.getActiveNeighbors(x,y,z,w)
if self.get(x,y,z,w) == "#":
#active and exactly 2 or 3 of its neighbors are also active, the cube remains active
if o == 2 or o == 3:
next.set(x,y,z,w)
else:
#If a cube is inactive but exactly 3 of its neighbors are active, the cube becomes active
if o == 3:
next.set(x,y,z,w)
return next
def count(self):
return len(self.cubes)
def dump(self):
x1,x2,y1,y2,z1,z2,w1,w2 = self.getLoopRange()
for w in range(w1,w2):
for z in range(z1,z2):
                print("z =", z, "w =", w)
for y in range(y1,y2):
                    row = ""
for x in range(x1,x2):
row += self.get(x,y,z,w)
print(row)
return
# end::Cube4D[]
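
`Cube` and `Cube4D` differ only in the number of axes; the rule itself (an active cube with exactly 2 or 3 active neighbors stays active, an inactive cube with exactly 3 becomes active) can be written once over coordinate tuples. A self-contained, dimension-generic sketch, seeded with the AoC 2020 day-17 example pattern and checked against its published answers (112 active cubes in 3-D, 848 in 4-D):

```python
from itertools import product

# Dimension-generic step: coordinate tuples in a set, instead of the
# "#"-joined string keys used by Cube/Cube4D above.
def step(active):
    dims = len(next(iter(active)))
    counts = {}
    for cell in active:
        for delta in product((-1, 0, 1), repeat=dims):
            if any(delta):                      # skip the cell itself
                n = tuple(c + d for c, d in zip(cell, delta))
                counts[n] = counts.get(n, 0) + 1
    # 3 neighbors -> active either way; 2 -> stays active only if already active
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in active)}

# Example seed ".#. / ..# / ###", padded with zeros for the extra axes.
seed = [(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)]
for extra in (1, 2):                            # 3-D, then 4-D
    active = {xy + (0,) * extra for xy in seed}
    for _ in range(6):                          # six boot cycles
        active = step(active)
    print(len(active))                          # 112, then 848
```

Storing tuples in a set also makes the "loop range" bookkeeping of `getLoopRange` unnecessary, since only cells with at least one active neighbor are ever considered.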
# File: superpoint/settings.py (repo: SwagJ/SuperPoint, license: MIT)
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
DATA_PATH = '/disk_hdd/superpoint'
EXPER_PATH = '/disk_ssd/SuperPoint'
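The two constants above form a SuperPoint-style settings module. As a minimal usage sketch (the helper names below are hypothetical, not from the source), downstream code typically joins these roots with dataset and experiment names:

```python
import os

# Hypothetical sketch: settings constants as defined above.
DATA_PATH = '/disk_hdd/superpoint'   # raw/processed datasets live here
EXPER_PATH = '/disk_ssd/SuperPoint'  # experiment outputs (checkpoints, logs)

def dataset_dir(dataset):
    # Datasets are grouped under DATA_PATH by dataset name.
    return os.path.join(DATA_PATH, dataset)

def experiment_dir(name):
    # Each experiment gets its own directory under EXPER_PATH.
    return os.path.join(EXPER_PATH, name)
```

Keeping experiments on the SSD while large datasets sit on the HDD is a common split: checkpoint writes benefit from fast storage, while bulk image data only needs sequential reads.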
| 35.436945 | 35 | 0.774648 | 2,810 | 19,951 | 5.1 | 0.002491 | 0.313725 | 0.235294 | 0.294118 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0.084507 | 19,951 | 562 | 36 | 35.5 | 0.784615 | 0 | 0 | 1 | 0 | 0 | 0.56338 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
f1997a2e3f556a040c7a1827ec77ba247bf390ad | 137 | py | Python | run.py | CokkocZateki/eve-freight | d61260efd2d1a0104ea860768e8c87ebc594a3bd | [
"MIT"
] | 1 | 2018-08-03T20:51:04.000Z | 2018-08-03T20:51:04.000Z | run.py | CokkocZateki/eve-freight | d61260efd2d1a0104ea860768e8c87ebc594a3bd | [
"MIT"
] | null | null | null | run.py | CokkocZateki/eve-freight | d61260efd2d1a0104ea860768e8c87ebc594a3bd | [
"MIT"
] | 2 | 2021-02-14T18:33:17.000Z | 2021-11-26T20:50:43.000Z | from flask_sqlalchemy import SQLAlchemy
from app import app

# app.run(debug=True, host='0.0.0.0')  # uncomment to listen on all interfaces
app.run(debug=True, host='127.0.0.1')  # bind to localhost only
| 19.571429 | 39 | 0.729927 | 27 | 137 | 3.666667 | 0.444444 | 0.080808 | 0.222222 | 0.30303 | 0.383838 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081301 | 0.10219 | 137 | 6 | 40 | 22.833333 | 0.723577 | 0.255474 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
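The `restconf_map` that follows keys generated CLI handler names to RESTCONF URI templates. The `{param}` placeholders contain hyphens (e.g. `{ip-prefix}`, `{masklength-range}`), which are not valid `str.format()` field names, so a caller has to substitute them some other way. A minimal sketch of one approach (the `fill_uri` helper is hypothetical, not part of the source file) using a regex substitution:

```python
import re

def fill_uri(template, params):
    # Replace every {placeholder} token with its value from params.
    # A regex is used because placeholder names like 'ip-prefix' contain
    # hyphens and therefore cannot be filled with str.format().
    def sub(match):
        return params[match.group(1)]
    return re.sub(r'\{([^{}]+)\}', sub, template)

# Example template taken from the map below; per RFC 8040, list-key values
# must be percent-encoded, so '/' inside an IP prefix becomes %2F.
template = ('/restconf/data/openconfig-routing-policy:routing-policy'
            '/defined-sets/prefix-sets/prefix-set={name}'
            '/prefixes/prefix={ip-prefix},{masklength-range}')
uri = fill_uri(template, {'name': 'PS1',
                          'ip-prefix': '10.0.0.0%2F8',
                          'masklength-range': '8..32'})
```

Comma-separated values in the path segment (`{ip-prefix},{masklength-range}`) follow the RESTCONF convention for lists with multiple keys.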
f1cfb50aebcf8762f3fbce313b621f9a703b50bb | 28,717 | py | Python | src/CLI/actioner/routemap_openconfig_to_restconf.py | project-arlo/sonic-mgmt-framework | 562cd84ff3fec9ca705c7df621742f2daa61ce71 | [
"Apache-2.0"
] | 7 | 2019-10-17T06:12:02.000Z | 2021-09-08T11:16:19.000Z | src/CLI/actioner/routemap_openconfig_to_restconf.py | noolex/sonic-mgmt-framework | 5493889adc47fc584b04dca1a0cc0a2007211df4 | [
"Apache-2.0"
] | 207 | 2019-06-24T04:48:11.000Z | 2020-05-06T05:51:37.000Z | src/CLI/actioner/routemap_openconfig_to_restconf.py | noolex/sonic-mgmt-framework | 5493889adc47fc584b04dca1a0cc0a2007211df4 | [
"Apache-2.0"
] | 20 | 2019-06-27T19:24:45.000Z | 2021-07-15T21:12:30.000Z | restconf_map = {
'openconfig_routing_policy_routing_policy' :
'/restconf/data/openconfig-routing-policy:routing-policy:',
'openconfig_routing_policy_routing_policy_defined_sets' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets:',
'openconfig_routing_policy_routing_policy_defined_sets_prefix_sets' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets:',
'openconfig_routing_policy_routing_policy_defined_sets_prefix_sets_prefix_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set={name}:',
'list_openconfig_routing_policy_routing_policy_defined_sets_prefix_sets_prefix_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set:',
'openconfig_routing_policy_routing_policy_defined_sets_prefix_sets_prefix_set_config_mode' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set={name}/config/mode:',
'openconfig_routing_policy_routing_policy_defined_sets_prefix_sets_prefix_set_prefixes' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set={name}/prefixes:',
'openconfig_routing_policy_routing_policy_defined_sets_prefix_sets_prefix_set_prefixes_prefix' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set={name}/prefixes/prefix={ip-prefix},{masklength-range}',
'list_openconfig_routing_policy_routing_policy_defined_sets_prefix_sets_prefix_set_prefixes_prefix' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set={name}/prefixes/prefix:',
'openconfig_routing_policy_ext_routing_policy_defined_sets_prefix_sets_prefix_set_prefixes_prefix_config_action' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/prefix-sets/prefix-set={name}/prefixes/prefix={ip-prefix},{masklength-range}/config/openconfig-routing-policy-ext:action',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets:',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_community_sets' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets:',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_community_sets_community_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set={community-set-name}',
'list_openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_community_sets_community_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_community_sets_community_set_config_community_member' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set={community-set-name}/config/community-member',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_community_sets_community_set_config_match_set_options' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/community-sets/community-set={community-set-name}/config/match-set-options',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_ext_community_sets' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/ext-community-sets',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_ext_community_sets_ext_community_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/ext-community-sets/ext-community-set={ext-community-set-name}',
'list_openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_ext_community_sets_ext_community_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/ext-community-sets/ext-community-set',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_ext_community_sets_ext_community_set_config_ext_community_member' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/ext-community-sets/ext-community-set={ext-community-set-name}/config/ext-community-member',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_ext_community_sets_ext_community_set_config_match_set_options' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/ext-community-sets/ext-community-set={ext-community-set-name}/config/match-set-options',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_as_path_sets' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets:',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_as_path_sets_as_path_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets/as-path-set={as-path-set-name}',
'list_openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_as_path_sets_as_path_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets/as-path-set',
'openconfig_bgp_policy_routing_policy_defined_sets_bgp_defined_sets_as_path_sets_as_path_set_config_as_path_set_member' :
'/restconf/data/openconfig-routing-policy:routing-policy/defined-sets/openconfig-bgp-policy:bgp-defined-sets/as-path-sets/as-path-set={as-path-set-name}/config/as-path-set-member',
'openconfig_routing_policy_routing_policy_policy_definitions' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions:',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}:',
'list_openconfig_routing_policy_routing_policy_policy_definitions_policy_definition' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition:',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements:',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}',
'list_openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement:',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/config',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_config_call_policy' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/config/call-policy',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_config_install_protocol_eq' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/config/install-protocol-eq',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_interface' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-interface',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_interface_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-interface/config',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_interface_config_interface' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-interface/config/interface',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_interface_config_subinterface' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-interface/config/subinterface',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_prefix_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-prefix-set',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_prefix_set_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-prefix-set/config',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_prefix_set_config_prefix_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-prefix-set/config/prefix-set',
'openconfig_routing_policy917596535' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-prefix-set/config/match-set-options',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_neighbor_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-neighbor-set',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_neighbor_set_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-neighbor-set/config',
'openconfig_routing_policy_ext_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_neighbor_set_config_address' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-neighbor-set/config/openconfig-routing-policy-ext:address',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_tag_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-tag-set',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_tag_set_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-tag-set/config',
'openconfig_routing_policy_ext_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_match_tag_set_config_tag_value' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/match-tag-set/config/openconfig-routing-policy-ext:tag-value',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config_med_eq' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config/med-eq',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config_origin_eq' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config/origin-eq',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config_local_pref_eq' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config/local-pref-eq',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config_community_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config/community-set',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config_ext_community_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config/ext-community-set',
'openconfig_bgp_policy_ext_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_config_next_hop_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/config/openconfig-bgp-policy-ext:next-hop-set',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_match_as_path_set' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/match-as-path-set',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_conditions_bgp_conditions_match_as_path_set_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/match-as-path-set/config',
'openconfig_bgp_policy4215218961' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/match-as-path-set/config/as-path-set',
'openconfig_bgp_policy479708225' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/conditions/openconfig-bgp-policy:bgp-conditions/match-as-path-set/config/match-set-options',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/config',
'openconfig_routing_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_config_policy_result' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/config/policy-result',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/config',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_config_set_route_origin' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/config/set-route-origin',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_config_set_local_pref' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/config/set-local-pref',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_config_set_next_hop' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/config/set-next-hop',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_config_set_med' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/config/set-med',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_as_path_prepend' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-as-path-prepend',
'openconfig_routing_policy_ext_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_as_path_prepend_config_asn_list':
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-as-path-prepend/config/openconfig-routing-policy-ext:asn_list',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_as_path_prepend_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-as-path-prepend/config',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_as_path_prepend_config_repeat_n' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-as-path-prepend/config/repeat-n',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_as_path_prepend_config_asn' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-as-path-prepend/config/asn',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/config',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_config_method' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/config/method',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_config_options' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/config/options',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_inline' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/inline',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_inline_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/inline/config',
'openconfig_bgp_policy3674057445' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/inline/config/communities',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_reference' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/reference',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_community_reference_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/reference/config',
'openconfig_bgp_policy717106109' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-community/reference/config/community-set-ref',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/config',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_config_method' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/config/method',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_config_options' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/config/options',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_inline' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/inline',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_inline_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/inline/config',
'openconfig_bgp_policy2318914281' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/inline/config/communities',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_reference' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/reference',
'openconfig_bgp_policy_routing_policy_policy_definitions_policy_definition_statements_statement_actions_bgp_actions_set_ext_community_reference_config' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/reference/config',
'openconfig_bgp_policy197769463' :
'/restconf/data/openconfig-routing-policy:routing-policy/policy-definitions/policy-definition={name}/statements/statement={name1}/actions/openconfig-bgp-policy:bgp-actions/set-ext-community/reference/config/ext-community-set-ref',
}
| 148.025773 | 248 | 0.846815 | 3,463 | 28,717 | 6.646549 | 0.019636 | 0.180171 | 0.146935 | 0.17335 | 0.986445 | 0.975844 | 0.969935 | 0.964244 | 0.961507 | 0.958074 | 0 | 0.004803 | 0.050214 | 28,717 | 193 | 249 | 148.792746 | 0.839083 | 0 | 0 | 0 | 0 | 0.479167 | 0.929415 | 0.929415 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
9e3391dd0099d903d160dc707842dcc7622ffc57 | 35 | py | Python | 4_src/3_other/1_surasura-python/q2-3/q2-3.py | hirobel/todoapp | 834e6dcdd3e6c227a79004c89430c6853935b23c | [
"Apache-2.0"
] | null | null | null | 4_src/3_other/1_surasura-python/q2-3/q2-3.py | hirobel/todoapp | 834e6dcdd3e6c227a79004c89430c6853935b23c | [
"Apache-2.0"
] | null | null | null | 4_src/3_other/1_surasura-python/q2-3/q2-3.py | hirobel/todoapp | 834e6dcdd3e6c227a79004c89430c6853935b23c | [
"Apache-2.0"
] | null | null | null | print('10 + 3 = {}'.format(10 + 3)) | 35 | 35 | 0.485714 | 6 | 35 | 2.833333 | 0.666667 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 0.171429 | 35 | 1 | 35 | 35 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
9e49b62b69c23ec142a838b362f6e64d6e3f9d67 | 49 | py | Python | pan_tilt/__init__.py | Richard-Kirby/solar_oven | 00eef5909a27f5b09b2da5baca3049b8e58021d6 | [
"MIT"
] | null | null | null | pan_tilt/__init__.py | Richard-Kirby/solar_oven | 00eef5909a27f5b09b2da5baca3049b8e58021d6 | [
"MIT"
] | null | null | null | pan_tilt/__init__.py | Richard-Kirby/solar_oven | 00eef5909a27f5b09b2da5baca3049b8e58021d6 | [
"MIT"
] | null | null | null | from pan_tilt.pan_tilt import PanTiltController
| 24.5 | 48 | 0.877551 | 7 | 49 | 5.857143 | 0.714286 | 0.341463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 1 | 49 | 49 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9e976da942986c3e0e676d230719c98a7d75c0f0 | 246,761 | py | Python | sdk/python/pulumi_aws/securityhub/_inputs.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 260 | 2018-06-18T14:57:00.000Z | 2022-03-29T11:41:03.000Z | sdk/python/pulumi_aws/securityhub/_inputs.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 1,154 | 2018-06-19T20:38:20.000Z | 2022-03-31T19:48:16.000Z | sdk/python/pulumi_aws/securityhub/_inputs.py | alexbowers/pulumi-aws | 7dbdb03b1e4f7c0d51d5b5d17233ff4465c3eff5 | [
"ECL-2.0",
"Apache-2.0"
] | 115 | 2018-06-28T03:20:27.000Z | 2022-03-29T11:41:06.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'InsightFiltersArgs',
'InsightFiltersAwsAccountIdArgs',
'InsightFiltersCompanyNameArgs',
'InsightFiltersComplianceStatusArgs',
'InsightFiltersConfidenceArgs',
'InsightFiltersCreatedAtArgs',
'InsightFiltersCreatedAtDateRangeArgs',
'InsightFiltersCriticalityArgs',
'InsightFiltersDescriptionArgs',
'InsightFiltersFindingProviderFieldsConfidenceArgs',
'InsightFiltersFindingProviderFieldsCriticalityArgs',
'InsightFiltersFindingProviderFieldsRelatedFindingsIdArgs',
'InsightFiltersFindingProviderFieldsRelatedFindingsProductArnArgs',
'InsightFiltersFindingProviderFieldsSeverityLabelArgs',
'InsightFiltersFindingProviderFieldsSeverityOriginalArgs',
'InsightFiltersFindingProviderFieldsTypeArgs',
'InsightFiltersFirstObservedAtArgs',
'InsightFiltersFirstObservedAtDateRangeArgs',
'InsightFiltersGeneratorIdArgs',
'InsightFiltersIdArgs',
'InsightFiltersKeywordArgs',
'InsightFiltersLastObservedAtArgs',
'InsightFiltersLastObservedAtDateRangeArgs',
'InsightFiltersMalwareNameArgs',
'InsightFiltersMalwarePathArgs',
'InsightFiltersMalwareStateArgs',
'InsightFiltersMalwareTypeArgs',
'InsightFiltersNetworkDestinationDomainArgs',
'InsightFiltersNetworkDestinationIpv4Args',
'InsightFiltersNetworkDestinationIpv6Args',
'InsightFiltersNetworkDestinationPortArgs',
'InsightFiltersNetworkDirectionArgs',
'InsightFiltersNetworkProtocolArgs',
'InsightFiltersNetworkSourceDomainArgs',
'InsightFiltersNetworkSourceIpv4Args',
'InsightFiltersNetworkSourceIpv6Args',
'InsightFiltersNetworkSourceMacArgs',
'InsightFiltersNetworkSourcePortArgs',
'InsightFiltersNoteTextArgs',
'InsightFiltersNoteUpdatedAtArgs',
'InsightFiltersNoteUpdatedAtDateRangeArgs',
'InsightFiltersNoteUpdatedByArgs',
'InsightFiltersProcessLaunchedAtArgs',
'InsightFiltersProcessLaunchedAtDateRangeArgs',
'InsightFiltersProcessNameArgs',
'InsightFiltersProcessParentPidArgs',
'InsightFiltersProcessPathArgs',
'InsightFiltersProcessPidArgs',
'InsightFiltersProcessTerminatedAtArgs',
'InsightFiltersProcessTerminatedAtDateRangeArgs',
'InsightFiltersProductArnArgs',
'InsightFiltersProductFieldArgs',
'InsightFiltersProductNameArgs',
'InsightFiltersRecommendationTextArgs',
'InsightFiltersRecordStateArgs',
'InsightFiltersRelatedFindingsIdArgs',
'InsightFiltersRelatedFindingsProductArnArgs',
'InsightFiltersResourceAwsEc2InstanceIamInstanceProfileArnArgs',
'InsightFiltersResourceAwsEc2InstanceImageIdArgs',
'InsightFiltersResourceAwsEc2InstanceIpv4AddressArgs',
'InsightFiltersResourceAwsEc2InstanceIpv6AddressArgs',
'InsightFiltersResourceAwsEc2InstanceKeyNameArgs',
'InsightFiltersResourceAwsEc2InstanceLaunchedAtArgs',
'InsightFiltersResourceAwsEc2InstanceLaunchedAtDateRangeArgs',
'InsightFiltersResourceAwsEc2InstanceSubnetIdArgs',
'InsightFiltersResourceAwsEc2InstanceTypeArgs',
'InsightFiltersResourceAwsEc2InstanceVpcIdArgs',
'InsightFiltersResourceAwsIamAccessKeyCreatedAtArgs',
'InsightFiltersResourceAwsIamAccessKeyCreatedAtDateRangeArgs',
'InsightFiltersResourceAwsIamAccessKeyStatusArgs',
'InsightFiltersResourceAwsIamAccessKeyUserNameArgs',
'InsightFiltersResourceAwsS3BucketOwnerIdArgs',
'InsightFiltersResourceAwsS3BucketOwnerNameArgs',
'InsightFiltersResourceContainerImageIdArgs',
'InsightFiltersResourceContainerImageNameArgs',
'InsightFiltersResourceContainerLaunchedAtArgs',
'InsightFiltersResourceContainerLaunchedAtDateRangeArgs',
'InsightFiltersResourceContainerNameArgs',
'InsightFiltersResourceDetailsOtherArgs',
'InsightFiltersResourceIdArgs',
'InsightFiltersResourcePartitionArgs',
'InsightFiltersResourceRegionArgs',
'InsightFiltersResourceTagArgs',
'InsightFiltersResourceTypeArgs',
'InsightFiltersSeverityLabelArgs',
'InsightFiltersSourceUrlArgs',
'InsightFiltersThreatIntelIndicatorCategoryArgs',
'InsightFiltersThreatIntelIndicatorLastObservedAtArgs',
'InsightFiltersThreatIntelIndicatorLastObservedAtDateRangeArgs',
'InsightFiltersThreatIntelIndicatorSourceArgs',
'InsightFiltersThreatIntelIndicatorSourceUrlArgs',
'InsightFiltersThreatIntelIndicatorTypeArgs',
'InsightFiltersThreatIntelIndicatorValueArgs',
'InsightFiltersTitleArgs',
'InsightFiltersTypeArgs',
'InsightFiltersUpdatedAtArgs',
'InsightFiltersUpdatedAtDateRangeArgs',
'InsightFiltersUserDefinedValueArgs',
'InsightFiltersVerificationStateArgs',
'InsightFiltersWorkflowStatusArgs',
]
@pulumi.input_type
class InsightFiltersArgs:
def __init__(__self__, *,
aws_account_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersAwsAccountIdArgs']]]] = None,
company_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCompanyNameArgs']]]] = None,
compliance_statuses: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersComplianceStatusArgs']]]] = None,
confidences: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersConfidenceArgs']]]] = None,
created_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCreatedAtArgs']]]] = None,
criticalities: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCriticalityArgs']]]] = None,
descriptions: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersDescriptionArgs']]]] = None,
finding_provider_fields_confidences: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsConfidenceArgs']]]] = None,
finding_provider_fields_criticalities: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsCriticalityArgs']]]] = None,
finding_provider_fields_related_findings_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsIdArgs']]]] = None,
finding_provider_fields_related_findings_product_arns: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsProductArnArgs']]]] = None,
finding_provider_fields_severity_labels: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityLabelArgs']]]] = None,
finding_provider_fields_severity_originals: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityOriginalArgs']]]] = None,
finding_provider_fields_types: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsTypeArgs']]]] = None,
first_observed_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFirstObservedAtArgs']]]] = None,
generator_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersGeneratorIdArgs']]]] = None,
ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersIdArgs']]]] = None,
keywords: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersKeywordArgs']]]] = None,
last_observed_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersLastObservedAtArgs']]]] = None,
malware_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareNameArgs']]]] = None,
malware_paths: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwarePathArgs']]]] = None,
malware_states: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareStateArgs']]]] = None,
malware_types: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareTypeArgs']]]] = None,
network_destination_domains: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationDomainArgs']]]] = None,
network_destination_ipv4s: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv4Args']]]] = None,
network_destination_ipv6s: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv6Args']]]] = None,
network_destination_ports: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationPortArgs']]]] = None,
network_directions: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDirectionArgs']]]] = None,
network_protocols: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkProtocolArgs']]]] = None,
network_source_domains: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceDomainArgs']]]] = None,
network_source_ipv4s: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv4Args']]]] = None,
network_source_ipv6s: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv6Args']]]] = None,
network_source_macs: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceMacArgs']]]] = None,
network_source_ports: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourcePortArgs']]]] = None,
note_texts: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteTextArgs']]]] = None,
note_updated_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedAtArgs']]]] = None,
note_updated_bies: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedByArgs']]]] = None,
process_launched_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessLaunchedAtArgs']]]] = None,
process_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessNameArgs']]]] = None,
process_parent_pids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessParentPidArgs']]]] = None,
process_paths: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPathArgs']]]] = None,
process_pids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPidArgs']]]] = None,
process_terminated_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessTerminatedAtArgs']]]] = None,
product_arns: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductArnArgs']]]] = None,
product_fields: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductFieldArgs']]]] = None,
product_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductNameArgs']]]] = None,
recommendation_texts: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecommendationTextArgs']]]] = None,
record_states: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecordStateArgs']]]] = None,
related_findings_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsIdArgs']]]] = None,
related_findings_product_arns: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsProductArnArgs']]]] = None,
resource_aws_ec2_instance_iam_instance_profile_arns: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIamInstanceProfileArnArgs']]]] = None,
resource_aws_ec2_instance_image_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceImageIdArgs']]]] = None,
resource_aws_ec2_instance_ipv4_addresses: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv4AddressArgs']]]] = None,
resource_aws_ec2_instance_ipv6_addresses: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv6AddressArgs']]]] = None,
resource_aws_ec2_instance_key_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceKeyNameArgs']]]] = None,
resource_aws_ec2_instance_launched_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtArgs']]]] = None,
resource_aws_ec2_instance_subnet_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceSubnetIdArgs']]]] = None,
resource_aws_ec2_instance_types: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceTypeArgs']]]] = None,
resource_aws_ec2_instance_vpc_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceVpcIdArgs']]]] = None,
resource_aws_iam_access_key_created_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtArgs']]]] = None,
resource_aws_iam_access_key_statuses: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyStatusArgs']]]] = None,
resource_aws_iam_access_key_user_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyUserNameArgs']]]] = None,
resource_aws_s3_bucket_owner_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerIdArgs']]]] = None,
resource_aws_s3_bucket_owner_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerNameArgs']]]] = None,
resource_container_image_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageIdArgs']]]] = None,
resource_container_image_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageNameArgs']]]] = None,
resource_container_launched_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerLaunchedAtArgs']]]] = None,
resource_container_names: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerNameArgs']]]] = None,
resource_details_others: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceDetailsOtherArgs']]]] = None,
resource_ids: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceIdArgs']]]] = None,
resource_partitions: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourcePartitionArgs']]]] = None,
resource_regions: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceRegionArgs']]]] = None,
resource_tags: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTagArgs']]]] = None,
resource_types: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTypeArgs']]]] = None,
severity_labels: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersSeverityLabelArgs']]]] = None,
source_urls: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersSourceUrlArgs']]]] = None,
threat_intel_indicator_categories: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorCategoryArgs']]]] = None,
threat_intel_indicator_last_observed_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtArgs']]]] = None,
threat_intel_indicator_source_urls: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceUrlArgs']]]] = None,
threat_intel_indicator_sources: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceArgs']]]] = None,
threat_intel_indicator_types: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorTypeArgs']]]] = None,
threat_intel_indicator_values: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorValueArgs']]]] = None,
titles: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersTitleArgs']]]] = None,
types: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersTypeArgs']]]] = None,
updated_ats: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersUpdatedAtArgs']]]] = None,
user_defined_values: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersUserDefinedValueArgs']]]] = None,
verification_states: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersVerificationStateArgs']]]] = None,
workflow_statuses: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersWorkflowStatusArgs']]]] = None):
"""
        :param pulumi.Input[Sequence[pulumi.Input['InsightFiltersAwsAccountIdArgs']]] aws_account_ids: AWS account ID that a finding is generated in. See String Filter below for more details.
        :param pulumi.Input[Sequence[pulumi.Input['InsightFiltersCompanyNameArgs']]] company_names: The name of the findings provider (company) that owns the solution (product) that generates findings. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersComplianceStatusArgs']]] compliance_statuses: Exclusive to findings that are generated as the result of a check run against a specific rule in a supported standard, such as CIS AWS Foundations. Contains security standard-related finding details. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersConfidenceArgs']]] confidences: A finding's confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. Confidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersCreatedAtArgs']]] created_ats: An ISO8601-formatted timestamp that indicates when the security-findings provider captured the potential security issue that a finding captured. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersCriticalityArgs']]] criticalities: The level of importance assigned to the resources associated with the finding. A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersDescriptionArgs']]] descriptions: A finding's description. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsConfidenceArgs']]] finding_provider_fields_confidences: The finding provider value for the finding confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. Confidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsCriticalityArgs']]] finding_provider_fields_criticalities: The finding provider value for the level of importance assigned to the resources associated with the findings. A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsIdArgs']]] finding_provider_fields_related_findings_ids: The finding identifier of a related finding that is identified by the finding provider. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsProductArnArgs']]] finding_provider_fields_related_findings_product_arns: The ARN of the solution that generated a related finding that is identified by the finding provider. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityLabelArgs']]] finding_provider_fields_severity_labels: The finding provider value for the severity label. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityOriginalArgs']]] finding_provider_fields_severity_originals: The finding provider's original value for the severity. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsTypeArgs']]] finding_provider_fields_types: One or more finding types that the finding provider assigned to the finding. Uses the format of `namespace/category/classifier` that classify a finding. Valid namespace values include: `Software and Configuration Checks`, `TTPs`, `Effects`, `Unusual Behaviors`, and `Sensitive Data Identifications`. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersFirstObservedAtArgs']]] first_observed_ats: An ISO8601-formatted timestamp that indicates when the security-findings provider first observed the potential security issue that a finding captured. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersGeneratorIdArgs']]] generator_ids: The identifier for the solution-specific component (a discrete unit of logic) that generated a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersIdArgs']]] ids: The security findings provider-specific identifier for a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersKeywordArgs']]] keywords: A keyword for a finding. See Keyword Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersLastObservedAtArgs']]] last_observed_ats: An ISO8601-formatted timestamp that indicates when the security-findings provider most recently observed the potential security issue that a finding captured. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareNameArgs']]] malware_names: The name of the malware that was observed. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwarePathArgs']]] malware_paths: The filesystem path of the malware that was observed. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareStateArgs']]] malware_states: The state of the malware that was observed. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareTypeArgs']]] malware_types: The type of the malware that was observed. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationDomainArgs']]] network_destination_domains: The destination domain of network-related information about a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv4Args']]] network_destination_ipv4s: The destination IPv4 address of network-related information about a finding. See Ip Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv6Args']]] network_destination_ipv6s: The destination IPv6 address of network-related information about a finding. See Ip Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationPortArgs']]] network_destination_ports: The destination port of network-related information about a finding. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDirectionArgs']]] network_directions: Indicates the direction of network traffic associated with a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkProtocolArgs']]] network_protocols: The protocol of network-related information about a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceDomainArgs']]] network_source_domains: The source domain of network-related information about a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv4Args']]] network_source_ipv4s: The source IPv4 address of network-related information about a finding. See Ip Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv6Args']]] network_source_ipv6s: The source IPv6 address of network-related information about a finding. See Ip Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceMacArgs']]] network_source_macs: The source media access control (MAC) address of network-related information about a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourcePortArgs']]] network_source_ports: The source port of network-related information about a finding. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteTextArgs']]] note_texts: The text of a note. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedAtArgs']]] note_updated_ats: The timestamp of when the note was updated. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedByArgs']]] note_updated_bies: The principal that created a note. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessLaunchedAtArgs']]] process_launched_ats: The date/time that the process was launched. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessNameArgs']]] process_names: The name of the process. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessParentPidArgs']]] process_parent_pids: The parent process ID. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPathArgs']]] process_paths: The path to the process executable. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPidArgs']]] process_pids: The process ID. See Number Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessTerminatedAtArgs']]] process_terminated_ats: The date/time that the process was terminated. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductArnArgs']]] product_arns: The ARN generated by Security Hub that uniquely identifies a third-party company (security findings provider) after this provider's product (solution that generates findings) is registered with Security Hub. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductFieldArgs']]] product_fields: A data type where security-findings providers can include additional solution-specific details that aren't part of the defined `AwsSecurityFinding` format. See Map Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductNameArgs']]] product_names: The name of the solution (product) that generates findings. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecommendationTextArgs']]] recommendation_texts: The recommendation of what to do about the issue described in a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecordStateArgs']]] record_states: The updated record state for the finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsIdArgs']]] related_findings_ids: The solution-generated identifier for a related finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsProductArnArgs']]] related_findings_product_arns: The ARN of the solution that generated a related finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIamInstanceProfileArnArgs']]] resource_aws_ec2_instance_iam_instance_profile_arns: The IAM profile ARN of the instance. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceImageIdArgs']]] resource_aws_ec2_instance_image_ids: The Amazon Machine Image (AMI) ID of the instance. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv4AddressArgs']]] resource_aws_ec2_instance_ipv4_addresses: The IPv4 addresses associated with the instance. See Ip Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv6AddressArgs']]] resource_aws_ec2_instance_ipv6_addresses: The IPv6 addresses associated with the instance. See Ip Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceKeyNameArgs']]] resource_aws_ec2_instance_key_names: The key name associated with the instance. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtArgs']]] resource_aws_ec2_instance_launched_ats: The date and time the instance was launched. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceSubnetIdArgs']]] resource_aws_ec2_instance_subnet_ids: The identifier of the subnet that the instance was launched in. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceTypeArgs']]] resource_aws_ec2_instance_types: The instance type of the instance. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceVpcIdArgs']]] resource_aws_ec2_instance_vpc_ids: The identifier of the VPC that the instance was launched in. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtArgs']]] resource_aws_iam_access_key_created_ats: The creation date/time of the IAM access key related to a finding. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyStatusArgs']]] resource_aws_iam_access_key_statuses: The status of the IAM access key related to a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyUserNameArgs']]] resource_aws_iam_access_key_user_names: The user associated with the IAM access key related to a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerIdArgs']]] resource_aws_s3_bucket_owner_ids: The canonical user ID of the owner of the S3 bucket. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerNameArgs']]] resource_aws_s3_bucket_owner_names: The display name of the owner of the S3 bucket. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageIdArgs']]] resource_container_image_ids: The identifier of the image related to a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageNameArgs']]] resource_container_image_names: The name of the image related to a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerLaunchedAtArgs']]] resource_container_launched_ats: The date/time that the container was started. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerNameArgs']]] resource_container_names: The name of the container related to a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceDetailsOtherArgs']]] resource_details_others: The details of a resource that doesn't have a specific subfield for the resource type defined. See Map Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceIdArgs']]] resource_ids: The canonical identifier for the given resource type. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourcePartitionArgs']]] resource_partitions: The canonical AWS partition name that the Region is assigned to. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceRegionArgs']]] resource_regions: The canonical AWS external Region name where this resource is located. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTagArgs']]] resource_tags: A list of AWS tags associated with a resource at the time the finding was processed. See Map Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTypeArgs']]] resource_types: Specifies the type of the resource that details are provided for. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersSeverityLabelArgs']]] severity_labels: The label of a finding's severity. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersSourceUrlArgs']]] source_urls: A URL that links to a page about the current finding in the security-findings provider's solution. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorCategoryArgs']]] threat_intel_indicator_categories: The category of a threat intelligence indicator. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtArgs']]] threat_intel_indicator_last_observed_ats: The date/time of the last observation of a threat intelligence indicator. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceUrlArgs']]] threat_intel_indicator_source_urls: The URL for more details from the source of the threat intelligence. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceArgs']]] threat_intel_indicator_sources: The source of the threat intelligence. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorTypeArgs']]] threat_intel_indicator_types: The type of a threat intelligence indicator. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorValueArgs']]] threat_intel_indicator_values: The value of a threat intelligence indicator. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersTitleArgs']]] titles: A finding's title. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersTypeArgs']]] types: A finding type in the format of `namespace/category/classifier` that classifies a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersUpdatedAtArgs']]] updated_ats: An ISO8601-formatted timestamp that indicates when the security-findings provider last updated the finding record. See Date Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersUserDefinedValueArgs']]] user_defined_values: A list of name/value string pairs associated with the finding. These are custom, user-defined fields added to a finding. See Map Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersVerificationStateArgs']]] verification_states: The veracity of a finding. See String Filter below for more details.
:param pulumi.Input[Sequence[pulumi.Input['InsightFiltersWorkflowStatusArgs']]] workflow_statuses: The status of the investigation into a finding. See Workflow Status Filter below for more details.
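As an illustrative sketch (not an exhaustive reference), each filter argument accepts a list of filter objects; the Python SDK also accepts plain dicts matching the args shape. The field names below (`comparison`, `value`, `date_range`, `unit`) follow the provider's String Filter and Date Filter schemas, and the chosen values are hypothetical:

```python
# Hypothetical filter payloads for an aws.securityhub.Insight.
# A String Filter pairs a comparison operator with a value.
severity_filter = {"comparison": "EQUALS", "value": "CRITICAL"}

# A Date Filter may use a relative date_range instead of start/end.
updated_filter = {"date_range": {"unit": "DAYS", "value": 7}}

# These lists would be passed as severity_labels / updated_ats.
filters = {
    "severity_labels": [severity_filter],
    "updated_ats": [updated_filter],
}
```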
"""
        if aws_account_ids is not None:
            pulumi.set(__self__, "aws_account_ids", aws_account_ids)
        if company_names is not None:
            pulumi.set(__self__, "company_names", company_names)
        if compliance_statuses is not None:
            pulumi.set(__self__, "compliance_statuses", compliance_statuses)
        if confidences is not None:
            pulumi.set(__self__, "confidences", confidences)
        if created_ats is not None:
            pulumi.set(__self__, "created_ats", created_ats)
        if criticalities is not None:
            pulumi.set(__self__, "criticalities", criticalities)
        if descriptions is not None:
            pulumi.set(__self__, "descriptions", descriptions)
        if finding_provider_fields_confidences is not None:
            pulumi.set(__self__, "finding_provider_fields_confidences", finding_provider_fields_confidences)
        if finding_provider_fields_criticalities is not None:
            pulumi.set(__self__, "finding_provider_fields_criticalities", finding_provider_fields_criticalities)
        if finding_provider_fields_related_findings_ids is not None:
            pulumi.set(__self__, "finding_provider_fields_related_findings_ids", finding_provider_fields_related_findings_ids)
        if finding_provider_fields_related_findings_product_arns is not None:
            pulumi.set(__self__, "finding_provider_fields_related_findings_product_arns", finding_provider_fields_related_findings_product_arns)
        if finding_provider_fields_severity_labels is not None:
            pulumi.set(__self__, "finding_provider_fields_severity_labels", finding_provider_fields_severity_labels)
        if finding_provider_fields_severity_originals is not None:
            pulumi.set(__self__, "finding_provider_fields_severity_originals", finding_provider_fields_severity_originals)
        if finding_provider_fields_types is not None:
            pulumi.set(__self__, "finding_provider_fields_types", finding_provider_fields_types)
        if first_observed_ats is not None:
            pulumi.set(__self__, "first_observed_ats", first_observed_ats)
        if generator_ids is not None:
            pulumi.set(__self__, "generator_ids", generator_ids)
        if ids is not None:
            pulumi.set(__self__, "ids", ids)
        if keywords is not None:
            pulumi.set(__self__, "keywords", keywords)
        if last_observed_ats is not None:
            pulumi.set(__self__, "last_observed_ats", last_observed_ats)
        if malware_names is not None:
            pulumi.set(__self__, "malware_names", malware_names)
        if malware_paths is not None:
            pulumi.set(__self__, "malware_paths", malware_paths)
        if malware_states is not None:
            pulumi.set(__self__, "malware_states", malware_states)
        if malware_types is not None:
            pulumi.set(__self__, "malware_types", malware_types)
        if network_destination_domains is not None:
            pulumi.set(__self__, "network_destination_domains", network_destination_domains)
        if network_destination_ipv4s is not None:
            pulumi.set(__self__, "network_destination_ipv4s", network_destination_ipv4s)
        if network_destination_ipv6s is not None:
            pulumi.set(__self__, "network_destination_ipv6s", network_destination_ipv6s)
        if network_destination_ports is not None:
            pulumi.set(__self__, "network_destination_ports", network_destination_ports)
        if network_directions is not None:
            pulumi.set(__self__, "network_directions", network_directions)
        if network_protocols is not None:
            pulumi.set(__self__, "network_protocols", network_protocols)
        if network_source_domains is not None:
            pulumi.set(__self__, "network_source_domains", network_source_domains)
        if network_source_ipv4s is not None:
            pulumi.set(__self__, "network_source_ipv4s", network_source_ipv4s)
        if network_source_ipv6s is not None:
            pulumi.set(__self__, "network_source_ipv6s", network_source_ipv6s)
        if network_source_macs is not None:
            pulumi.set(__self__, "network_source_macs", network_source_macs)
        if network_source_ports is not None:
            pulumi.set(__self__, "network_source_ports", network_source_ports)
        if note_texts is not None:
            pulumi.set(__self__, "note_texts", note_texts)
        if note_updated_ats is not None:
            pulumi.set(__self__, "note_updated_ats", note_updated_ats)
        if note_updated_bies is not None:
            pulumi.set(__self__, "note_updated_bies", note_updated_bies)
        if process_launched_ats is not None:
            pulumi.set(__self__, "process_launched_ats", process_launched_ats)
        if process_names is not None:
            pulumi.set(__self__, "process_names", process_names)
        if process_parent_pids is not None:
            pulumi.set(__self__, "process_parent_pids", process_parent_pids)
        if process_paths is not None:
            pulumi.set(__self__, "process_paths", process_paths)
        if process_pids is not None:
            pulumi.set(__self__, "process_pids", process_pids)
        if process_terminated_ats is not None:
            pulumi.set(__self__, "process_terminated_ats", process_terminated_ats)
        if product_arns is not None:
            pulumi.set(__self__, "product_arns", product_arns)
        if product_fields is not None:
            pulumi.set(__self__, "product_fields", product_fields)
        if product_names is not None:
            pulumi.set(__self__, "product_names", product_names)
        if recommendation_texts is not None:
            pulumi.set(__self__, "recommendation_texts", recommendation_texts)
        if record_states is not None:
            pulumi.set(__self__, "record_states", record_states)
        if related_findings_ids is not None:
            pulumi.set(__self__, "related_findings_ids", related_findings_ids)
        if related_findings_product_arns is not None:
            pulumi.set(__self__, "related_findings_product_arns", related_findings_product_arns)
        if resource_aws_ec2_instance_iam_instance_profile_arns is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_iam_instance_profile_arns", resource_aws_ec2_instance_iam_instance_profile_arns)
        if resource_aws_ec2_instance_image_ids is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_image_ids", resource_aws_ec2_instance_image_ids)
        if resource_aws_ec2_instance_ipv4_addresses is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_ipv4_addresses", resource_aws_ec2_instance_ipv4_addresses)
        if resource_aws_ec2_instance_ipv6_addresses is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_ipv6_addresses", resource_aws_ec2_instance_ipv6_addresses)
        if resource_aws_ec2_instance_key_names is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_key_names", resource_aws_ec2_instance_key_names)
        if resource_aws_ec2_instance_launched_ats is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_launched_ats", resource_aws_ec2_instance_launched_ats)
        if resource_aws_ec2_instance_subnet_ids is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_subnet_ids", resource_aws_ec2_instance_subnet_ids)
        if resource_aws_ec2_instance_types is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_types", resource_aws_ec2_instance_types)
        if resource_aws_ec2_instance_vpc_ids is not None:
            pulumi.set(__self__, "resource_aws_ec2_instance_vpc_ids", resource_aws_ec2_instance_vpc_ids)
        if resource_aws_iam_access_key_created_ats is not None:
            pulumi.set(__self__, "resource_aws_iam_access_key_created_ats", resource_aws_iam_access_key_created_ats)
        if resource_aws_iam_access_key_statuses is not None:
            pulumi.set(__self__, "resource_aws_iam_access_key_statuses", resource_aws_iam_access_key_statuses)
        if resource_aws_iam_access_key_user_names is not None:
            pulumi.set(__self__, "resource_aws_iam_access_key_user_names", resource_aws_iam_access_key_user_names)
        if resource_aws_s3_bucket_owner_ids is not None:
            pulumi.set(__self__, "resource_aws_s3_bucket_owner_ids", resource_aws_s3_bucket_owner_ids)
        if resource_aws_s3_bucket_owner_names is not None:
            pulumi.set(__self__, "resource_aws_s3_bucket_owner_names", resource_aws_s3_bucket_owner_names)
        if resource_container_image_ids is not None:
            pulumi.set(__self__, "resource_container_image_ids", resource_container_image_ids)
        if resource_container_image_names is not None:
            pulumi.set(__self__, "resource_container_image_names", resource_container_image_names)
        if resource_container_launched_ats is not None:
            pulumi.set(__self__, "resource_container_launched_ats", resource_container_launched_ats)
        if resource_container_names is not None:
            pulumi.set(__self__, "resource_container_names", resource_container_names)
        if resource_details_others is not None:
            pulumi.set(__self__, "resource_details_others", resource_details_others)
        if resource_ids is not None:
            pulumi.set(__self__, "resource_ids", resource_ids)
        if resource_partitions is not None:
            pulumi.set(__self__, "resource_partitions", resource_partitions)
        if resource_regions is not None:
            pulumi.set(__self__, "resource_regions", resource_regions)
        if resource_tags is not None:
            pulumi.set(__self__, "resource_tags", resource_tags)
        if resource_types is not None:
            pulumi.set(__self__, "resource_types", resource_types)
        if severity_labels is not None:
            pulumi.set(__self__, "severity_labels", severity_labels)
        if source_urls is not None:
            pulumi.set(__self__, "source_urls", source_urls)
        if threat_intel_indicator_categories is not None:
            pulumi.set(__self__, "threat_intel_indicator_categories", threat_intel_indicator_categories)
        if threat_intel_indicator_last_observed_ats is not None:
            pulumi.set(__self__, "threat_intel_indicator_last_observed_ats", threat_intel_indicator_last_observed_ats)
        if threat_intel_indicator_source_urls is not None:
            pulumi.set(__self__, "threat_intel_indicator_source_urls", threat_intel_indicator_source_urls)
        if threat_intel_indicator_sources is not None:
            pulumi.set(__self__, "threat_intel_indicator_sources", threat_intel_indicator_sources)
        if threat_intel_indicator_types is not None:
            pulumi.set(__self__, "threat_intel_indicator_types", threat_intel_indicator_types)
        if threat_intel_indicator_values is not None:
            pulumi.set(__self__, "threat_intel_indicator_values", threat_intel_indicator_values)
        if titles is not None:
            pulumi.set(__self__, "titles", titles)
        if types is not None:
            pulumi.set(__self__, "types", types)
        if updated_ats is not None:
            pulumi.set(__self__, "updated_ats", updated_ats)
        if user_defined_values is not None:
            pulumi.set(__self__, "user_defined_values", user_defined_values)
        if verification_states is not None:
            pulumi.set(__self__, "verification_states", verification_states)
        if workflow_statuses is not None:
            pulumi.set(__self__, "workflow_statuses", workflow_statuses)

    @property
    @pulumi.getter(name="awsAccountIds")
    def aws_account_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersAwsAccountIdArgs']]]]:
        """
        AWS account ID that a finding is generated in. See String Filter below for more details.
        """
        return pulumi.get(self, "aws_account_ids")

    @aws_account_ids.setter
    def aws_account_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersAwsAccountIdArgs']]]]):
        pulumi.set(self, "aws_account_ids", value)

    @property
    @pulumi.getter(name="companyNames")
    def company_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCompanyNameArgs']]]]:
        """
        The name of the findings provider (company) that owns the solution (product) that generates findings. See String Filter below for more details.
        """
        return pulumi.get(self, "company_names")

    @company_names.setter
    def company_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCompanyNameArgs']]]]):
        pulumi.set(self, "company_names", value)

    @property
    @pulumi.getter(name="complianceStatuses")
    def compliance_statuses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersComplianceStatusArgs']]]]:
        """
        Exclusive to findings that are generated as the result of a check run against a specific rule in a supported standard, such as CIS AWS Foundations. Contains security standard-related finding details. See String Filter below for more details.
        """
        return pulumi.get(self, "compliance_statuses")

    @compliance_statuses.setter
    def compliance_statuses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersComplianceStatusArgs']]]]):
        pulumi.set(self, "compliance_statuses", value)

    @property
    @pulumi.getter
    def confidences(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersConfidenceArgs']]]]:
        """
        A finding's confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. Confidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence. See Number Filter below for more details.
        """
        return pulumi.get(self, "confidences")

    @confidences.setter
    def confidences(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersConfidenceArgs']]]]):
        pulumi.set(self, "confidences", value)

    @property
    @pulumi.getter(name="createdAts")
    def created_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCreatedAtArgs']]]]:
        """
        An ISO8601-formatted timestamp that indicates when the security-findings provider captured the potential security issue that a finding captured. See Date Filter below for more details.
        """
        return pulumi.get(self, "created_ats")

    @created_ats.setter
    def created_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCreatedAtArgs']]]]):
        pulumi.set(self, "created_ats", value)

    @property
    @pulumi.getter
    def criticalities(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCriticalityArgs']]]]:
        """
        The level of importance assigned to the resources associated with the finding. A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources. See Number Filter below for more details.
        """
        return pulumi.get(self, "criticalities")

    @criticalities.setter
    def criticalities(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersCriticalityArgs']]]]):
        pulumi.set(self, "criticalities", value)

    @property
    @pulumi.getter
    def descriptions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersDescriptionArgs']]]]:
        """
        A finding's description. See String Filter below for more details.
        """
        return pulumi.get(self, "descriptions")

    @descriptions.setter
    def descriptions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersDescriptionArgs']]]]):
        pulumi.set(self, "descriptions", value)

    @property
    @pulumi.getter(name="findingProviderFieldsConfidences")
    def finding_provider_fields_confidences(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsConfidenceArgs']]]]:
        """
        The finding provider value for the finding confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. Confidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence. See Number Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_confidences")

    @finding_provider_fields_confidences.setter
    def finding_provider_fields_confidences(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsConfidenceArgs']]]]):
        pulumi.set(self, "finding_provider_fields_confidences", value)

    @property
    @pulumi.getter(name="findingProviderFieldsCriticalities")
    def finding_provider_fields_criticalities(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsCriticalityArgs']]]]:
        """
        The finding provider value for the level of importance assigned to the resources associated with the findings. A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources. See Number Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_criticalities")

    @finding_provider_fields_criticalities.setter
    def finding_provider_fields_criticalities(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsCriticalityArgs']]]]):
        pulumi.set(self, "finding_provider_fields_criticalities", value)

    @property
    @pulumi.getter(name="findingProviderFieldsRelatedFindingsIds")
    def finding_provider_fields_related_findings_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsIdArgs']]]]:
        """
        The finding identifier of a related finding that is identified by the finding provider. See String Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_related_findings_ids")

    @finding_provider_fields_related_findings_ids.setter
    def finding_provider_fields_related_findings_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsIdArgs']]]]):
        pulumi.set(self, "finding_provider_fields_related_findings_ids", value)

    @property
    @pulumi.getter(name="findingProviderFieldsRelatedFindingsProductArns")
    def finding_provider_fields_related_findings_product_arns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsProductArnArgs']]]]:
        """
        The ARN of the solution that generated a related finding that is identified by the finding provider. See String Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_related_findings_product_arns")

    @finding_provider_fields_related_findings_product_arns.setter
    def finding_provider_fields_related_findings_product_arns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsRelatedFindingsProductArnArgs']]]]):
        pulumi.set(self, "finding_provider_fields_related_findings_product_arns", value)

    @property
    @pulumi.getter(name="findingProviderFieldsSeverityLabels")
    def finding_provider_fields_severity_labels(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityLabelArgs']]]]:
        """
        The finding provider value for the severity label. See String Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_severity_labels")

    @finding_provider_fields_severity_labels.setter
    def finding_provider_fields_severity_labels(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityLabelArgs']]]]):
        pulumi.set(self, "finding_provider_fields_severity_labels", value)

    @property
    @pulumi.getter(name="findingProviderFieldsSeverityOriginals")
    def finding_provider_fields_severity_originals(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityOriginalArgs']]]]:
        """
        The finding provider's original value for the severity. See String Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_severity_originals")

    @finding_provider_fields_severity_originals.setter
    def finding_provider_fields_severity_originals(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsSeverityOriginalArgs']]]]):
        pulumi.set(self, "finding_provider_fields_severity_originals", value)

    @property
    @pulumi.getter(name="findingProviderFieldsTypes")
    def finding_provider_fields_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsTypeArgs']]]]:
        """
        One or more finding types that the finding provider assigned to the finding. Uses the format of `namespace/category/classifier` that classify a finding. Valid namespace values include: `Software and Configuration Checks`, `TTPs`, `Effects`, `Unusual Behaviors`, and `Sensitive Data Identifications`. See String Filter below for more details.
        """
        return pulumi.get(self, "finding_provider_fields_types")

    @finding_provider_fields_types.setter
    def finding_provider_fields_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFindingProviderFieldsTypeArgs']]]]):
        pulumi.set(self, "finding_provider_fields_types", value)

    @property
    @pulumi.getter(name="firstObservedAts")
    def first_observed_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFirstObservedAtArgs']]]]:
        """
        An ISO8601-formatted timestamp that indicates when the security-findings provider first observed the potential security issue that a finding captured. See Date Filter below for more details.
        """
        return pulumi.get(self, "first_observed_ats")

    @first_observed_ats.setter
    def first_observed_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersFirstObservedAtArgs']]]]):
        pulumi.set(self, "first_observed_ats", value)

    @property
    @pulumi.getter(name="generatorIds")
    def generator_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersGeneratorIdArgs']]]]:
        """
        The identifier for the solution-specific component (a discrete unit of logic) that generated a finding. See String Filter below for more details.
        """
        return pulumi.get(self, "generator_ids")

    @generator_ids.setter
    def generator_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersGeneratorIdArgs']]]]):
        pulumi.set(self, "generator_ids", value)

    @property
    @pulumi.getter
    def ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersIdArgs']]]]:
        """
        The security findings provider-specific identifier for a finding. See String Filter below for more details.
        """
        return pulumi.get(self, "ids")

    @ids.setter
    def ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersIdArgs']]]]):
        pulumi.set(self, "ids", value)

    @property
    @pulumi.getter
    def keywords(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersKeywordArgs']]]]:
        """
        A keyword for a finding. See Keyword Filter below for more details.
        """
        return pulumi.get(self, "keywords")

    @keywords.setter
    def keywords(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersKeywordArgs']]]]):
        pulumi.set(self, "keywords", value)

    @property
    @pulumi.getter(name="lastObservedAts")
    def last_observed_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersLastObservedAtArgs']]]]:
        """
        An ISO8601-formatted timestamp that indicates when the security-findings provider most recently observed the potential security issue that a finding captured. See Date Filter below for more details.
        """
        return pulumi.get(self, "last_observed_ats")

    @last_observed_ats.setter
    def last_observed_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersLastObservedAtArgs']]]]):
        pulumi.set(self, "last_observed_ats", value)

    @property
    @pulumi.getter(name="malwareNames")
    def malware_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareNameArgs']]]]:
        """
        The name of the malware that was observed. See String Filter below for more details.
        """
        return pulumi.get(self, "malware_names")

    @malware_names.setter
    def malware_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareNameArgs']]]]):
        pulumi.set(self, "malware_names", value)

    @property
    @pulumi.getter(name="malwarePaths")
    def malware_paths(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwarePathArgs']]]]:
        """
        The filesystem path of the malware that was observed. See String Filter below for more details.
        """
        return pulumi.get(self, "malware_paths")

    @malware_paths.setter
    def malware_paths(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwarePathArgs']]]]):
pulumi.set(self, "malware_paths", value)
@property
@pulumi.getter(name="malwareStates")
def malware_states(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareStateArgs']]]]:
"""
The state of the malware that was observed. See String Filter below for more details.
"""
return pulumi.get(self, "malware_states")
@malware_states.setter
def malware_states(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareStateArgs']]]]):
pulumi.set(self, "malware_states", value)
@property
@pulumi.getter(name="malwareTypes")
def malware_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareTypeArgs']]]]:
"""
The type of the malware that was observed. See String Filter below for more details.
"""
return pulumi.get(self, "malware_types")
@malware_types.setter
def malware_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersMalwareTypeArgs']]]]):
pulumi.set(self, "malware_types", value)
@property
@pulumi.getter(name="networkDestinationDomains")
def network_destination_domains(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationDomainArgs']]]]:
"""
The destination domain of network-related information about a finding. See String Filter below for more details.
"""
return pulumi.get(self, "network_destination_domains")
@network_destination_domains.setter
def network_destination_domains(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationDomainArgs']]]]):
pulumi.set(self, "network_destination_domains", value)
@property
@pulumi.getter(name="networkDestinationIpv4s")
def network_destination_ipv4s(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv4Args']]]]:
"""
The destination IPv4 address of network-related information about a finding. See Ip Filter below for more details.
"""
return pulumi.get(self, "network_destination_ipv4s")
@network_destination_ipv4s.setter
def network_destination_ipv4s(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv4Args']]]]):
pulumi.set(self, "network_destination_ipv4s", value)
@property
@pulumi.getter(name="networkDestinationIpv6s")
def network_destination_ipv6s(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv6Args']]]]:
"""
The destination IPv6 address of network-related information about a finding. See Ip Filter below for more details.
"""
return pulumi.get(self, "network_destination_ipv6s")
@network_destination_ipv6s.setter
def network_destination_ipv6s(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationIpv6Args']]]]):
pulumi.set(self, "network_destination_ipv6s", value)
@property
@pulumi.getter(name="networkDestinationPorts")
def network_destination_ports(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationPortArgs']]]]:
"""
The destination port of network-related information about a finding. See Number Filter below for more details.
"""
return pulumi.get(self, "network_destination_ports")
@network_destination_ports.setter
def network_destination_ports(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDestinationPortArgs']]]]):
pulumi.set(self, "network_destination_ports", value)
@property
@pulumi.getter(name="networkDirections")
def network_directions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDirectionArgs']]]]:
"""
Indicates the direction of network traffic associated with a finding. See String Filter below for more details.
"""
return pulumi.get(self, "network_directions")
@network_directions.setter
def network_directions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkDirectionArgs']]]]):
pulumi.set(self, "network_directions", value)
@property
@pulumi.getter(name="networkProtocols")
def network_protocols(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkProtocolArgs']]]]:
"""
The protocol of network-related information about a finding. See String Filter below for more details.
"""
return pulumi.get(self, "network_protocols")
@network_protocols.setter
def network_protocols(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkProtocolArgs']]]]):
pulumi.set(self, "network_protocols", value)
@property
@pulumi.getter(name="networkSourceDomains")
def network_source_domains(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceDomainArgs']]]]:
"""
The source domain of network-related information about a finding. See String Filter below for more details.
"""
return pulumi.get(self, "network_source_domains")
@network_source_domains.setter
def network_source_domains(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceDomainArgs']]]]):
pulumi.set(self, "network_source_domains", value)
@property
@pulumi.getter(name="networkSourceIpv4s")
def network_source_ipv4s(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv4Args']]]]:
"""
The source IPv4 address of network-related information about a finding. See Ip Filter below for more details.
"""
return pulumi.get(self, "network_source_ipv4s")
@network_source_ipv4s.setter
def network_source_ipv4s(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv4Args']]]]):
pulumi.set(self, "network_source_ipv4s", value)
@property
@pulumi.getter(name="networkSourceIpv6s")
def network_source_ipv6s(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv6Args']]]]:
"""
The source IPv6 address of network-related information about a finding. See Ip Filter below for more details.
"""
return pulumi.get(self, "network_source_ipv6s")
@network_source_ipv6s.setter
def network_source_ipv6s(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceIpv6Args']]]]):
pulumi.set(self, "network_source_ipv6s", value)
@property
@pulumi.getter(name="networkSourceMacs")
def network_source_macs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceMacArgs']]]]:
"""
The source media access control (MAC) address of network-related information about a finding. See String Filter below for more details.
"""
return pulumi.get(self, "network_source_macs")
@network_source_macs.setter
def network_source_macs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourceMacArgs']]]]):
pulumi.set(self, "network_source_macs", value)
@property
@pulumi.getter(name="networkSourcePorts")
def network_source_ports(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourcePortArgs']]]]:
"""
The source port of network-related information about a finding. See Number Filter below for more details.
"""
return pulumi.get(self, "network_source_ports")
@network_source_ports.setter
def network_source_ports(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNetworkSourcePortArgs']]]]):
pulumi.set(self, "network_source_ports", value)
@property
@pulumi.getter(name="noteTexts")
def note_texts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteTextArgs']]]]:
"""
The text of a note. See String Filter below for more details.
"""
return pulumi.get(self, "note_texts")
@note_texts.setter
def note_texts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteTextArgs']]]]):
pulumi.set(self, "note_texts", value)
@property
@pulumi.getter(name="noteUpdatedAts")
def note_updated_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedAtArgs']]]]:
"""
The timestamp of when the note was updated. See Date Filter below for more details.
"""
return pulumi.get(self, "note_updated_ats")
@note_updated_ats.setter
def note_updated_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedAtArgs']]]]):
pulumi.set(self, "note_updated_ats", value)
@property
@pulumi.getter(name="noteUpdatedBies")
def note_updated_bies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedByArgs']]]]:
"""
The principal that last updated a note. See String Filter below for more details.
"""
return pulumi.get(self, "note_updated_bies")
@note_updated_bies.setter
def note_updated_bies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersNoteUpdatedByArgs']]]]):
pulumi.set(self, "note_updated_bies", value)
@property
@pulumi.getter(name="processLaunchedAts")
def process_launched_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessLaunchedAtArgs']]]]:
"""
The date/time that the process was launched. See Date Filter below for more details.
"""
return pulumi.get(self, "process_launched_ats")
@process_launched_ats.setter
def process_launched_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessLaunchedAtArgs']]]]):
pulumi.set(self, "process_launched_ats", value)
@property
@pulumi.getter(name="processNames")
def process_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessNameArgs']]]]:
"""
The name of the process. See String Filter below for more details.
"""
return pulumi.get(self, "process_names")
@process_names.setter
def process_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessNameArgs']]]]):
pulumi.set(self, "process_names", value)
@property
@pulumi.getter(name="processParentPids")
def process_parent_pids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessParentPidArgs']]]]:
"""
The parent process ID. See Number Filter below for more details.
"""
return pulumi.get(self, "process_parent_pids")
@process_parent_pids.setter
def process_parent_pids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessParentPidArgs']]]]):
pulumi.set(self, "process_parent_pids", value)
@property
@pulumi.getter(name="processPaths")
def process_paths(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPathArgs']]]]:
"""
The path to the process executable. See String Filter below for more details.
"""
return pulumi.get(self, "process_paths")
@process_paths.setter
def process_paths(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPathArgs']]]]):
pulumi.set(self, "process_paths", value)
@property
@pulumi.getter(name="processPids")
def process_pids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPidArgs']]]]:
"""
The process ID. See Number Filter below for more details.
"""
return pulumi.get(self, "process_pids")
@process_pids.setter
def process_pids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessPidArgs']]]]):
pulumi.set(self, "process_pids", value)
@property
@pulumi.getter(name="processTerminatedAts")
def process_terminated_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessTerminatedAtArgs']]]]:
"""
The date/time that the process was terminated. See Date Filter below for more details.
"""
return pulumi.get(self, "process_terminated_ats")
@process_terminated_ats.setter
def process_terminated_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProcessTerminatedAtArgs']]]]):
pulumi.set(self, "process_terminated_ats", value)
@property
@pulumi.getter(name="productArns")
def product_arns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductArnArgs']]]]:
"""
The ARN generated by Security Hub that uniquely identifies a third-party company (security findings provider) after this provider's product (solution that generates findings) is registered with Security Hub. See String Filter below for more details.
"""
return pulumi.get(self, "product_arns")
@product_arns.setter
def product_arns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductArnArgs']]]]):
pulumi.set(self, "product_arns", value)
@property
@pulumi.getter(name="productFields")
def product_fields(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductFieldArgs']]]]:
"""
A data type where security-findings providers can include additional solution-specific details that aren't part of the defined `AwsSecurityFinding` format. See Map Filter below for more details.
"""
return pulumi.get(self, "product_fields")
@product_fields.setter
def product_fields(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductFieldArgs']]]]):
pulumi.set(self, "product_fields", value)
@property
@pulumi.getter(name="productNames")
def product_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductNameArgs']]]]:
"""
The name of the solution (product) that generates findings. See String Filter below for more details.
"""
return pulumi.get(self, "product_names")
@product_names.setter
def product_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersProductNameArgs']]]]):
pulumi.set(self, "product_names", value)
@property
@pulumi.getter(name="recommendationTexts")
def recommendation_texts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecommendationTextArgs']]]]:
"""
The recommendation of what to do about the issue described in a finding. See String Filter below for more details.
"""
return pulumi.get(self, "recommendation_texts")
@recommendation_texts.setter
def recommendation_texts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecommendationTextArgs']]]]):
pulumi.set(self, "recommendation_texts", value)
@property
@pulumi.getter(name="recordStates")
def record_states(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecordStateArgs']]]]:
"""
The updated record state for the finding. See String Filter below for more details.
"""
return pulumi.get(self, "record_states")
@record_states.setter
def record_states(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRecordStateArgs']]]]):
pulumi.set(self, "record_states", value)
@property
@pulumi.getter(name="relatedFindingsIds")
def related_findings_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsIdArgs']]]]:
"""
The solution-generated identifier for a related finding. See String Filter below for more details.
"""
return pulumi.get(self, "related_findings_ids")
@related_findings_ids.setter
def related_findings_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsIdArgs']]]]):
pulumi.set(self, "related_findings_ids", value)
@property
@pulumi.getter(name="relatedFindingsProductArns")
def related_findings_product_arns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsProductArnArgs']]]]:
"""
The ARN of the solution that generated a related finding. See String Filter below for more details.
"""
return pulumi.get(self, "related_findings_product_arns")
@related_findings_product_arns.setter
def related_findings_product_arns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersRelatedFindingsProductArnArgs']]]]):
pulumi.set(self, "related_findings_product_arns", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceIamInstanceProfileArns")
def resource_aws_ec2_instance_iam_instance_profile_arns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIamInstanceProfileArnArgs']]]]:
"""
The IAM profile ARN of the instance. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_iam_instance_profile_arns")
@resource_aws_ec2_instance_iam_instance_profile_arns.setter
def resource_aws_ec2_instance_iam_instance_profile_arns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIamInstanceProfileArnArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_iam_instance_profile_arns", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceImageIds")
def resource_aws_ec2_instance_image_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceImageIdArgs']]]]:
"""
The Amazon Machine Image (AMI) ID of the instance. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_image_ids")
@resource_aws_ec2_instance_image_ids.setter
def resource_aws_ec2_instance_image_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceImageIdArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_image_ids", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceIpv4Addresses")
def resource_aws_ec2_instance_ipv4_addresses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv4AddressArgs']]]]:
"""
The IPv4 addresses associated with the instance. See Ip Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_ipv4_addresses")
@resource_aws_ec2_instance_ipv4_addresses.setter
def resource_aws_ec2_instance_ipv4_addresses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv4AddressArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_ipv4_addresses", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceIpv6Addresses")
def resource_aws_ec2_instance_ipv6_addresses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv6AddressArgs']]]]:
"""
The IPv6 addresses associated with the instance. See Ip Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_ipv6_addresses")
@resource_aws_ec2_instance_ipv6_addresses.setter
def resource_aws_ec2_instance_ipv6_addresses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceIpv6AddressArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_ipv6_addresses", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceKeyNames")
def resource_aws_ec2_instance_key_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceKeyNameArgs']]]]:
"""
The key name associated with the instance. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_key_names")
@resource_aws_ec2_instance_key_names.setter
def resource_aws_ec2_instance_key_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceKeyNameArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_key_names", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceLaunchedAts")
def resource_aws_ec2_instance_launched_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtArgs']]]]:
"""
The date and time the instance was launched. See Date Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_launched_ats")
@resource_aws_ec2_instance_launched_ats.setter
def resource_aws_ec2_instance_launched_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_launched_ats", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceSubnetIds")
def resource_aws_ec2_instance_subnet_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceSubnetIdArgs']]]]:
"""
The identifier of the subnet that the instance was launched in. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_subnet_ids")
@resource_aws_ec2_instance_subnet_ids.setter
def resource_aws_ec2_instance_subnet_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceSubnetIdArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_subnet_ids", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceTypes")
def resource_aws_ec2_instance_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceTypeArgs']]]]:
"""
The instance type of the instance. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_types")
@resource_aws_ec2_instance_types.setter
def resource_aws_ec2_instance_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceTypeArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_types", value)
@property
@pulumi.getter(name="resourceAwsEc2InstanceVpcIds")
def resource_aws_ec2_instance_vpc_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceVpcIdArgs']]]]:
"""
The identifier of the VPC that the instance was launched in. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_ec2_instance_vpc_ids")
@resource_aws_ec2_instance_vpc_ids.setter
def resource_aws_ec2_instance_vpc_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsEc2InstanceVpcIdArgs']]]]):
pulumi.set(self, "resource_aws_ec2_instance_vpc_ids", value)
@property
@pulumi.getter(name="resourceAwsIamAccessKeyCreatedAts")
def resource_aws_iam_access_key_created_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtArgs']]]]:
"""
The creation date/time of the IAM access key related to a finding. See Date Filter below for more details.
"""
return pulumi.get(self, "resource_aws_iam_access_key_created_ats")
@resource_aws_iam_access_key_created_ats.setter
def resource_aws_iam_access_key_created_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtArgs']]]]):
pulumi.set(self, "resource_aws_iam_access_key_created_ats", value)
@property
@pulumi.getter(name="resourceAwsIamAccessKeyStatuses")
def resource_aws_iam_access_key_statuses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyStatusArgs']]]]:
"""
The status of the IAM access key related to a finding. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_iam_access_key_statuses")
@resource_aws_iam_access_key_statuses.setter
def resource_aws_iam_access_key_statuses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyStatusArgs']]]]):
pulumi.set(self, "resource_aws_iam_access_key_statuses", value)
@property
@pulumi.getter(name="resourceAwsIamAccessKeyUserNames")
def resource_aws_iam_access_key_user_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyUserNameArgs']]]]:
"""
The user associated with the IAM access key related to a finding. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_iam_access_key_user_names")
@resource_aws_iam_access_key_user_names.setter
def resource_aws_iam_access_key_user_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyUserNameArgs']]]]):
pulumi.set(self, "resource_aws_iam_access_key_user_names", value)
@property
@pulumi.getter(name="resourceAwsS3BucketOwnerIds")
def resource_aws_s3_bucket_owner_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerIdArgs']]]]:
"""
The canonical user ID of the owner of the S3 bucket. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_s3_bucket_owner_ids")
@resource_aws_s3_bucket_owner_ids.setter
def resource_aws_s3_bucket_owner_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerIdArgs']]]]):
pulumi.set(self, "resource_aws_s3_bucket_owner_ids", value)
@property
@pulumi.getter(name="resourceAwsS3BucketOwnerNames")
def resource_aws_s3_bucket_owner_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerNameArgs']]]]:
"""
The display name of the owner of the S3 bucket. See String Filter below for more details.
"""
return pulumi.get(self, "resource_aws_s3_bucket_owner_names")
@resource_aws_s3_bucket_owner_names.setter
def resource_aws_s3_bucket_owner_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceAwsS3BucketOwnerNameArgs']]]]):
pulumi.set(self, "resource_aws_s3_bucket_owner_names", value)
@property
@pulumi.getter(name="resourceContainerImageIds")
def resource_container_image_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageIdArgs']]]]:
"""
The identifier of the image related to a finding. See String Filter below for more details.
"""
return pulumi.get(self, "resource_container_image_ids")
@resource_container_image_ids.setter
def resource_container_image_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageIdArgs']]]]):
pulumi.set(self, "resource_container_image_ids", value)
@property
@pulumi.getter(name="resourceContainerImageNames")
def resource_container_image_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageNameArgs']]]]:
"""
The name of the image related to a finding. See String Filter below for more details.
"""
return pulumi.get(self, "resource_container_image_names")
@resource_container_image_names.setter
def resource_container_image_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerImageNameArgs']]]]):
pulumi.set(self, "resource_container_image_names", value)
@property
@pulumi.getter(name="resourceContainerLaunchedAts")
def resource_container_launched_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerLaunchedAtArgs']]]]:
"""
The date/time that the container was started. See Date Filter below for more details.
"""
return pulumi.get(self, "resource_container_launched_ats")
@resource_container_launched_ats.setter
def resource_container_launched_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerLaunchedAtArgs']]]]):
pulumi.set(self, "resource_container_launched_ats", value)
@property
@pulumi.getter(name="resourceContainerNames")
def resource_container_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerNameArgs']]]]:
"""
The name of the container related to a finding. See String Filter below for more details.
"""
return pulumi.get(self, "resource_container_names")
@resource_container_names.setter
def resource_container_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceContainerNameArgs']]]]):
pulumi.set(self, "resource_container_names", value)
@property
@pulumi.getter(name="resourceDetailsOthers")
def resource_details_others(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceDetailsOtherArgs']]]]:
"""
The details of a resource that doesn't have a specific subfield for the resource type defined. See Map Filter below for more details.
"""
return pulumi.get(self, "resource_details_others")
@resource_details_others.setter
def resource_details_others(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceDetailsOtherArgs']]]]):
pulumi.set(self, "resource_details_others", value)
@property
@pulumi.getter(name="resourceIds")
def resource_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceIdArgs']]]]:
"""
The canonical identifier for the given resource type. See String Filter below for more details.
"""
return pulumi.get(self, "resource_ids")
@resource_ids.setter
def resource_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceIdArgs']]]]):
pulumi.set(self, "resource_ids", value)
@property
@pulumi.getter(name="resourcePartitions")
def resource_partitions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourcePartitionArgs']]]]:
"""
The canonical AWS partition name that the Region is assigned to. See String Filter below for more details.
"""
return pulumi.get(self, "resource_partitions")
@resource_partitions.setter
def resource_partitions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourcePartitionArgs']]]]):
pulumi.set(self, "resource_partitions", value)
@property
@pulumi.getter(name="resourceRegions")
def resource_regions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceRegionArgs']]]]:
"""
The canonical AWS external Region name where this resource is located. See String Filter below for more details.
"""
return pulumi.get(self, "resource_regions")
@resource_regions.setter
def resource_regions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceRegionArgs']]]]):
pulumi.set(self, "resource_regions", value)
@property
@pulumi.getter(name="resourceTags")
def resource_tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTagArgs']]]]:
"""
A list of AWS tags associated with a resource at the time the finding was processed. See Map Filter below for more details.
"""
return pulumi.get(self, "resource_tags")
@resource_tags.setter
def resource_tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTagArgs']]]]):
pulumi.set(self, "resource_tags", value)
@property
@pulumi.getter(name="resourceTypes")
def resource_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTypeArgs']]]]:
"""
Specifies the type of the resource that details are provided for. See String Filter below for more details.
"""
return pulumi.get(self, "resource_types")
@resource_types.setter
def resource_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersResourceTypeArgs']]]]):
pulumi.set(self, "resource_types", value)
@property
@pulumi.getter(name="severityLabels")
def severity_labels(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersSeverityLabelArgs']]]]:
"""
The label of a finding's severity. See String Filter below for more details.
"""
return pulumi.get(self, "severity_labels")
@severity_labels.setter
def severity_labels(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersSeverityLabelArgs']]]]):
pulumi.set(self, "severity_labels", value)
@property
@pulumi.getter(name="sourceUrls")
def source_urls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersSourceUrlArgs']]]]:
"""
A URL that links to a page about the current finding in the security-findings provider's solution. See String Filter below for more details.
"""
return pulumi.get(self, "source_urls")
@source_urls.setter
def source_urls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersSourceUrlArgs']]]]):
pulumi.set(self, "source_urls", value)
@property
@pulumi.getter(name="threatIntelIndicatorCategories")
def threat_intel_indicator_categories(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorCategoryArgs']]]]:
"""
The category of a threat intelligence indicator. See String Filter below for more details.
"""
return pulumi.get(self, "threat_intel_indicator_categories")
@threat_intel_indicator_categories.setter
def threat_intel_indicator_categories(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorCategoryArgs']]]]):
pulumi.set(self, "threat_intel_indicator_categories", value)
@property
@pulumi.getter(name="threatIntelIndicatorLastObservedAts")
def threat_intel_indicator_last_observed_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtArgs']]]]:
"""
The date/time of the last observation of a threat intelligence indicator. See Date Filter below for more details.
"""
return pulumi.get(self, "threat_intel_indicator_last_observed_ats")
@threat_intel_indicator_last_observed_ats.setter
def threat_intel_indicator_last_observed_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtArgs']]]]):
pulumi.set(self, "threat_intel_indicator_last_observed_ats", value)
@property
@pulumi.getter(name="threatIntelIndicatorSourceUrls")
def threat_intel_indicator_source_urls(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceUrlArgs']]]]:
"""
The URL for more details from the source of the threat intelligence. See String Filter below for more details.
"""
return pulumi.get(self, "threat_intel_indicator_source_urls")
@threat_intel_indicator_source_urls.setter
def threat_intel_indicator_source_urls(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceUrlArgs']]]]):
pulumi.set(self, "threat_intel_indicator_source_urls", value)
@property
@pulumi.getter(name="threatIntelIndicatorSources")
def threat_intel_indicator_sources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceArgs']]]]:
"""
The source of the threat intelligence. See String Filter below for more details.
"""
return pulumi.get(self, "threat_intel_indicator_sources")
@threat_intel_indicator_sources.setter
def threat_intel_indicator_sources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorSourceArgs']]]]):
pulumi.set(self, "threat_intel_indicator_sources", value)
@property
@pulumi.getter(name="threatIntelIndicatorTypes")
def threat_intel_indicator_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorTypeArgs']]]]:
"""
The type of a threat intelligence indicator. See String Filter below for more details.
"""
return pulumi.get(self, "threat_intel_indicator_types")
@threat_intel_indicator_types.setter
def threat_intel_indicator_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorTypeArgs']]]]):
pulumi.set(self, "threat_intel_indicator_types", value)
@property
@pulumi.getter(name="threatIntelIndicatorValues")
def threat_intel_indicator_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorValueArgs']]]]:
"""
The value of a threat intelligence indicator. See String Filter below for more details.
"""
return pulumi.get(self, "threat_intel_indicator_values")
@threat_intel_indicator_values.setter
def threat_intel_indicator_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersThreatIntelIndicatorValueArgs']]]]):
pulumi.set(self, "threat_intel_indicator_values", value)
@property
@pulumi.getter
def titles(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersTitleArgs']]]]:
"""
A finding's title. See String Filter below for more details.
"""
return pulumi.get(self, "titles")
@titles.setter
def titles(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersTitleArgs']]]]):
pulumi.set(self, "titles", value)
@property
@pulumi.getter
def types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersTypeArgs']]]]:
"""
A finding type in the format of `namespace/category/classifier` that classifies a finding. See String Filter below for more details.
"""
return pulumi.get(self, "types")
@types.setter
def types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersTypeArgs']]]]):
pulumi.set(self, "types", value)
@property
@pulumi.getter(name="updatedAts")
def updated_ats(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersUpdatedAtArgs']]]]:
"""
An ISO8601-formatted timestamp that indicates when the security-findings provider last updated the finding record. See Date Filter below for more details.
"""
return pulumi.get(self, "updated_ats")
@updated_ats.setter
def updated_ats(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersUpdatedAtArgs']]]]):
pulumi.set(self, "updated_ats", value)
@property
@pulumi.getter(name="userDefinedValues")
def user_defined_values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersUserDefinedValueArgs']]]]:
"""
A list of name/value string pairs associated with the finding. These are custom, user-defined fields added to a finding. See Map Filter below for more details.
"""
return pulumi.get(self, "user_defined_values")
@user_defined_values.setter
def user_defined_values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersUserDefinedValueArgs']]]]):
pulumi.set(self, "user_defined_values", value)
@property
@pulumi.getter(name="verificationStates")
def verification_states(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersVerificationStateArgs']]]]:
"""
The veracity of a finding. See String Filter below for more details.
"""
return pulumi.get(self, "verification_states")
@verification_states.setter
def verification_states(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersVerificationStateArgs']]]]):
pulumi.set(self, "verification_states", value)
@property
@pulumi.getter(name="workflowStatuses")
def workflow_statuses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersWorkflowStatusArgs']]]]:
"""
The status of the investigation into a finding. See Workflow Status Filter below for more details.
"""
return pulumi.get(self, "workflow_statuses")
@workflow_statuses.setter
def workflow_statuses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InsightFiltersWorkflowStatusArgs']]]]):
pulumi.set(self, "workflow_statuses", value)
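The `@pulumi.getter(name=...)` decorators above map each snake_case Python property to the provider's camelCase key (e.g. `workflow_statuses` to `workflowStatuses`). That mapping can be sketched as a standalone helper; this is an illustration of the naming convention, not part of the generated SDK:

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case property name to the provider's camelCase key."""
    head, *rest = name.split("_")
    return head + "".join(part.title() for part in rest)

# e.g. snake_to_camel("workflow_statuses") yields "workflowStatuses"
```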
@pulumi.input_type
class InsightFiltersAwsAccountIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
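`InsightFiltersAwsAccountIdArgs` and the other string-filter classes below all pair a `comparison` (`EQUALS` or `NOT_EQUALS`) with a string `value`. The matching semantics the service applies can be sketched in plain Python; this is an illustrative stand-in, not SDK code:

```python
def string_filter_matches(comparison: str, filter_value: str, candidate: str) -> bool:
    """Apply a Security Hub string-filter comparison to a candidate field value."""
    if comparison == "EQUALS":
        return candidate == filter_value
    if comparison == "NOT_EQUALS":
        return candidate != filter_value
    raise ValueError(f"unsupported comparison: {comparison}")
```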
@pulumi.input_type
class InsightFiltersCompanyNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersComplianceStatusArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersConfidenceArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
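`InsightFiltersConfidenceArgs` (and the other number-filter classes) carry `eq`/`gte`/`lte` conditions as strings; all supplied conditions must hold at once. A minimal sketch of how such a filter evaluates, assuming the conditions combine conjunctively (an illustration, not SDK code):

```python
from typing import Optional

def number_filter_matches(value: float,
                          eq: Optional[str] = None,
                          gte: Optional[str] = None,
                          lte: Optional[str] = None) -> bool:
    """Evaluate eq/gte/lte conditions; the SDK carries them as strings."""
    if eq is not None and value != float(eq):
        return False
    if gte is not None and value < float(gte):
        return False
    if lte is not None and value > float(lte):
        return False
    return True
```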
@pulumi.input_type
class InsightFiltersCreatedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersCreatedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersCreatedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersCreatedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersCreatedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersCreatedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
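A `date_range` block like `InsightFiltersCreatedAtDateRangeArgs` expresses a rolling window: with `unit = "DAYS"` and an integer `value`, it covers the last `value` days. That resolution can be sketched as follows (an illustrative stand-in for how the service interprets the block, not SDK code):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional, Tuple

def date_range_window(unit: str, value: int,
                      now: Optional[datetime] = None) -> Tuple[datetime, datetime]:
    """Resolve a date_range block to a concrete (start, end) window ending now."""
    if unit != "DAYS":
        raise ValueError("only DAYS is a valid date range unit")
    end = now or datetime.now(timezone.utc)
    return end - timedelta(days=value), end
```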
@pulumi.input_type
class InsightFiltersCriticalityArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
@pulumi.input_type
class InsightFiltersDescriptionArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsConfidenceArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsCriticalityArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsRelatedFindingsIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsRelatedFindingsProductArnArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsSeverityLabelArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsSeverityOriginalArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersFindingProviderFieldsTypeArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersFirstObservedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersFirstObservedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersFirstObservedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersFirstObservedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersFirstObservedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersFirstObservedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersGeneratorIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when filtering findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when filtering findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersKeywordArgs:
def __init__(__self__, *,
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] value: A value for the keyword.
"""
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A value for the keyword.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
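# Usage sketch (illustrative, not part of the generated API surface): the
# string-comparison and keyword filter args above are typically passed to an
# `aws.securityhub.Insight` resource. The resource name, insight name, and
# filter values below are hypothetical placeholders.
#
#     import pulumi_aws as aws
#
#     insight = aws.securityhub.Insight(
#         "example",
#         group_by_attribute="ResourceId",
#         name="example-insight",
#         filters=aws.securityhub.InsightFiltersArgs(
#             # Match findings whose text contains this keyword.
#             keywords=[aws.securityhub.InsightFiltersKeywordArgs(value="sample-keyword")],
#         ),
#     )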
@pulumi.input_type
class InsightFiltersLastObservedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersLastObservedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersLastObservedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersLastObservedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersLastObservedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersLastObservedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
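# Usage sketch (illustrative): a last-observed-at window can be expressed either
# with a relative `date_range` block or with explicit `start`/`end` timestamps,
# but not both. The 7-day window below is a placeholder value.
#
#     import pulumi_aws as aws
#
#     last_observed = aws.securityhub.InsightFiltersLastObservedAtArgs(
#         date_range=aws.securityhub.InsightFiltersLastObservedAtDateRangeArgs(
#             unit="DAYS",  # currently the only documented unit
#             value=7,      # findings observed within the last 7 days
#         ),
#     )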
@pulumi.input_type
class InsightFiltersMalwareNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersMalwarePathArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersMalwareStateArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersMalwareTypeArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNetworkDestinationDomainArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNetworkDestinationIpv4Args:
def __init__(__self__, *,
cidr: pulumi.Input[str]):
"""
:param pulumi.Input[str] cidr: A finding's CIDR value.
"""
pulumi.set(__self__, "cidr", cidr)
@property
@pulumi.getter
def cidr(self) -> pulumi.Input[str]:
"""
A finding's CIDR value.
"""
return pulumi.get(self, "cidr")
@cidr.setter
def cidr(self, value: pulumi.Input[str]):
pulumi.set(self, "cidr", value)
@pulumi.input_type
class InsightFiltersNetworkDestinationIpv6Args:
def __init__(__self__, *,
cidr: pulumi.Input[str]):
"""
:param pulumi.Input[str] cidr: A finding's CIDR value.
"""
pulumi.set(__self__, "cidr", cidr)
@property
@pulumi.getter
def cidr(self) -> pulumi.Input[str]:
"""
A finding's CIDR value.
"""
return pulumi.get(self, "cidr")
@cidr.setter
def cidr(self, value: pulumi.Input[str]):
pulumi.set(self, "cidr", value)
@pulumi.input_type
class InsightFiltersNetworkDestinationPortArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
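# Usage sketch (illustrative): port filters take an exact match via `eq`, or a
# bounded range via `gte`/`lte`, all provided as strings. The ephemeral-port
# range below is a placeholder.
#
#     import pulumi_aws as aws
#
#     dest_port = aws.securityhub.InsightFiltersNetworkDestinationPortArgs(
#         gte="1024",   # lower bound, inclusive
#         lte="65535",  # upper bound, inclusive
#     )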
@pulumi.input_type
class InsightFiltersNetworkDirectionArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNetworkProtocolArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNetworkSourceDomainArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNetworkSourceIpv4Args:
def __init__(__self__, *,
cidr: pulumi.Input[str]):
"""
:param pulumi.Input[str] cidr: A finding's CIDR value.
"""
pulumi.set(__self__, "cidr", cidr)
@property
@pulumi.getter
def cidr(self) -> pulumi.Input[str]:
"""
A finding's CIDR value.
"""
return pulumi.get(self, "cidr")
@cidr.setter
def cidr(self, value: pulumi.Input[str]):
pulumi.set(self, "cidr", value)
@pulumi.input_type
class InsightFiltersNetworkSourceIpv6Args:
def __init__(__self__, *,
cidr: pulumi.Input[str]):
"""
:param pulumi.Input[str] cidr: A finding's CIDR value.
"""
pulumi.set(__self__, "cidr", cidr)
@property
@pulumi.getter
def cidr(self) -> pulumi.Input[str]:
"""
A finding's CIDR value.
"""
return pulumi.get(self, "cidr")
@cidr.setter
def cidr(self, value: pulumi.Input[str]):
pulumi.set(self, "cidr", value)
@pulumi.input_type
class InsightFiltersNetworkSourceMacArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNetworkSourcePortArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
@pulumi.input_type
class InsightFiltersNoteTextArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNoteUpdatedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersNoteUpdatedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersNoteUpdatedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersNoteUpdatedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersNoteUpdatedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersNoteUpdatedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersNoteUpdatedByArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersProcessLaunchedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersProcessLaunchedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersProcessLaunchedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersProcessLaunchedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersProcessLaunchedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersProcessLaunchedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersProcessNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersProcessParentPidArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
@pulumi.input_type
class InsightFiltersProcessPathArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: The string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
The string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersProcessPidArgs:
def __init__(__self__, *,
eq: Optional[pulumi.Input[str]] = None,
gte: Optional[pulumi.Input[str]] = None,
lte: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] eq: The equal-to condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] gte: The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
:param pulumi.Input[str] lte: The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
if eq is not None:
pulumi.set(__self__, "eq", eq)
if gte is not None:
pulumi.set(__self__, "gte", gte)
if lte is not None:
pulumi.set(__self__, "lte", lte)
@property
@pulumi.getter
def eq(self) -> Optional[pulumi.Input[str]]:
"""
The equal-to condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "eq")
@eq.setter
def eq(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eq", value)
@property
@pulumi.getter
def gte(self) -> Optional[pulumi.Input[str]]:
"""
The greater-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "gte")
@gte.setter
def gte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "gte", value)
@property
@pulumi.getter
def lte(self) -> Optional[pulumi.Input[str]]:
"""
The less-than-equal condition to be applied to a single field when querying for findings, provided as a String.
"""
return pulumi.get(self, "lte")
@lte.setter
def lte(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lte", value)
@pulumi.input_type
class InsightFiltersProcessTerminatedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersProcessTerminatedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersProcessTerminatedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersProcessTerminatedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersProcessTerminatedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersProcessTerminatedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
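# The two classes above together describe a nested date filter: either an
# explicit `start`/`end` pair, or a relative `date_range` of `unit`/`value`.
# A minimal, pulumi-free sketch of that nested shape using plain dictionaries
# (key names mirror the properties above; dates are illustrative):
#
#     # Relative form: findings whose process terminated in the last 7 days.
#     relative_filter = {"date_range": {"unit": "DAYS", "value": 7}}
#
#     # Absolute form: `start` and `end` are required together when
#     # `date_range` is absent.
#     absolute_filter = {
#         "start": "2024-01-01T00:00:00Z",
#         "end": "2024-01-31T23:59:59Z",
#     }
#
#     assert relative_filter["date_range"]["unit"] == "DAYS"
#     assert "start" in absolute_filter and "end" in absolute_filter

```python
# Pulumi-free sketch of the ProcessTerminatedAt date-filter shape.
# Relative form: findings whose process terminated in the last 7 days.
relative_filter = {"date_range": {"unit": "DAYS", "value": 7}}

# Absolute form: start and end are required together when date_range is absent.
# Dates are illustrative.
absolute_filter = {
    "start": "2024-01-01T00:00:00Z",
    "end": "2024-01-31T23:59:59Z",
}

assert relative_filter["date_range"]["unit"] == "DAYS"
assert "start" in absolute_filter and "end" in absolute_filter
```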
@pulumi.input_type
class InsightFiltersProductArnArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
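# All of the string-filter classes that follow share the `comparison`/`value`
# pair shown above. A pulumi-free sketch of how such a filter is evaluated
# (the product ARN is illustrative, and the `matches` helper is hypothetical,
# not part of this SDK):

```python
def matches(filter_, candidate):
    """Evaluate a comparison/value string filter against a candidate string."""
    if filter_["comparison"] == "EQUALS":
        return candidate == filter_["value"]
    if filter_["comparison"] == "NOT_EQUALS":
        return candidate != filter_["value"]
    raise ValueError(f"unsupported comparison: {filter_['comparison']}")

# Illustrative product ARN filter.
product_arn_filter = {
    "comparison": "EQUALS",
    "value": "arn:aws:securityhub:us-east-1::product/aws/guardduty",
}

assert matches(product_arn_filter,
               "arn:aws:securityhub:us-east-1::product/aws/guardduty")
assert not matches({"comparison": "NOT_EQUALS", "value": "x"}, "x")
```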
@pulumi.input_type
class InsightFiltersProductFieldArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
key: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] key: The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersProductNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersRecommendationTextArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersRecordStateArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersRelatedFindingsIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersRelatedFindingsProductArnArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceIamInstanceProfileArnArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceImageIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceIpv4AddressArgs:
def __init__(__self__, *,
cidr: pulumi.Input[str]):
"""
:param pulumi.Input[str] cidr: A finding's CIDR value.
"""
pulumi.set(__self__, "cidr", cidr)
@property
@pulumi.getter
def cidr(self) -> pulumi.Input[str]:
"""
A finding's CIDR value.
"""
return pulumi.get(self, "cidr")
@cidr.setter
def cidr(self, value: pulumi.Input[str]):
pulumi.set(self, "cidr", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceIpv6AddressArgs:
def __init__(__self__, *,
cidr: pulumi.Input[str]):
"""
:param pulumi.Input[str] cidr: A finding's CIDR value.
"""
pulumi.set(__self__, "cidr", cidr)
@property
@pulumi.getter
def cidr(self) -> pulumi.Input[str]:
"""
A finding's CIDR value.
"""
return pulumi.get(self, "cidr")
@cidr.setter
def cidr(self, value: pulumi.Input[str]):
pulumi.set(self, "cidr", value)
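# The IPv4/IPv6 address filters above take a single `cidr` block: a finding
# matches when its address falls inside that block. A pulumi-free sketch using
# the stdlib `ipaddress` module (addresses are illustrative):

```python
import ipaddress

# A finding's IP matches when it falls inside the filter's CIDR block.
ipv4_filter = {"cidr": "10.0.0.0/16"}

def ip_matches(filter_, address):
    """Return True when `address` lies inside the filter's CIDR block."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(filter_["cidr"])

assert ip_matches(ipv4_filter, "10.0.42.7")
assert not ip_matches(ipv4_filter, "192.168.1.1")
```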
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceKeyNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceLaunchedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersResourceAwsEc2InstanceLaunchedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceLaunchedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceSubnetIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceTypeArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsEc2InstanceVpcIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsIamAccessKeyCreatedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersResourceAwsIamAccessKeyCreatedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersResourceAwsIamAccessKeyCreatedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsIamAccessKeyStatusArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsIamAccessKeyUserNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsS3BucketOwnerIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceAwsS3BucketOwnerNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceContainerImageIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceContainerImageNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string value to compare against when querying for findings.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string value to compare against when querying for findings.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceContainerLaunchedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersResourceContainerLaunchedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersResourceContainerLaunchedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersResourceContainerLaunchedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersResourceContainerLaunchedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersResourceContainerLaunchedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
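The pair of classes above (a `*LaunchedAtArgs` date filter wrapping a `*DateRangeArgs` relative range) repeats for every date field in this module: callers pass either a relative `date_range` or an absolute `start`/`end` pair. A minimal standalone sketch of that contract (plain Python, no pulumi dependency; the names and the validation are illustrative — the generated classes themselves do not enforce the start/end requirement):

```python
from typing import Optional


class DateFilterArgs:
    """Sketch of the date-filter pattern: a relative date_range,
    or an absolute start/end pair of ISO 8601 timestamps."""

    def __init__(self, *, date_range: Optional[dict] = None,
                 start: Optional[str] = None,
                 end: Optional[str] = None):
        # Mirror the documented rule: start and end are required
        # together when date_range is not specified.
        if date_range is None and (start is None or end is None):
            raise ValueError("provide date_range, or both start and end")
        self.date_range = date_range
        self.start = start
        self.end = end


# Relative window: findings from the last 7 days.
last_week = DateFilterArgs(date_range={"unit": "DAYS", "value": 7})
# Absolute window.
window = DateFilterArgs(start="2021-01-01T00:00:00Z",
                        end="2021-01-31T00:00:00Z")
```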
@pulumi.input_type
class InsightFiltersResourceContainerNameArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
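`InsightFiltersResourceContainerNameArgs` above, like most classes in this section, is a two-field string filter: a `comparison` operator plus a `value`, exposed through property getters and setters backed by `pulumi.set`/`pulumi.get`. A standalone sketch of the same getter/setter pattern (plain Python, no pulumi dependency; the class name and backing store are illustrative):

```python
class StringFilterArgs:
    """Sketch of the comparison/value string-filter pattern."""

    def __init__(self, *, comparison: str, value: str):
        # Backing store, analogous to what pulumi.set/pulumi.get manage.
        self._values = {"comparison": comparison, "value": value}

    @property
    def comparison(self) -> str:
        return self._values["comparison"]

    @comparison.setter
    def comparison(self, value: str):
        self._values["comparison"] = value

    @property
    def value(self) -> str:
        return self._values["value"]

    @value.setter
    def value(self, value: str):
        self._values["value"] = value


name_filter = StringFilterArgs(comparison="EQUALS", value="my-container")
```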
@pulumi.input_type
class InsightFiltersResourceDetailsOtherArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
key: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] key: The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceIdArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourcePartitionArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceRegionArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceTagArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
key: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] key: The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersResourceTypeArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersSeverityLabelArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersSourceUrlArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorCategoryArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorLastObservedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersThreatIntelIndicatorLastObservedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorLastObservedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorSourceArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorSourceUrlArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorTypeArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersThreatIntelIndicatorValueArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersTitleArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersTypeArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersUpdatedAtArgs:
def __init__(__self__, *,
date_range: Optional[pulumi.Input['InsightFiltersUpdatedAtDateRangeArgs']] = None,
end: Optional[pulumi.Input[str]] = None,
start: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input['InsightFiltersUpdatedAtDateRangeArgs'] date_range: A configuration block of the date range for the date filter. See date_range below for more details.
:param pulumi.Input[str] end: An end date for the date filter. Required with `start` if `date_range` is not specified.
:param pulumi.Input[str] start: A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
if date_range is not None:
pulumi.set(__self__, "date_range", date_range)
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter(name="dateRange")
def date_range(self) -> Optional[pulumi.Input['InsightFiltersUpdatedAtDateRangeArgs']]:
"""
A configuration block of the date range for the date filter. See date_range below for more details.
"""
return pulumi.get(self, "date_range")
@date_range.setter
def date_range(self, value: Optional[pulumi.Input['InsightFiltersUpdatedAtDateRangeArgs']]):
pulumi.set(self, "date_range", value)
@property
@pulumi.getter
def end(self) -> Optional[pulumi.Input[str]]:
"""
An end date for the date filter. Required with `start` if `date_range` is not specified.
"""
return pulumi.get(self, "end")
@end.setter
def end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end", value)
@property
@pulumi.getter
def start(self) -> Optional[pulumi.Input[str]]:
"""
A start date for the date filter. Required with `end` if `date_range` is not specified.
"""
return pulumi.get(self, "start")
@start.setter
def start(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "start", value)
@pulumi.input_type
class InsightFiltersUpdatedAtDateRangeArgs:
def __init__(__self__, *,
unit: pulumi.Input[str],
value: pulumi.Input[int]):
"""
:param pulumi.Input[str] unit: A date range unit for the date filter. Valid values: `DAYS`.
:param pulumi.Input[int] value: A date range value for the date filter, provided as an Integer.
"""
pulumi.set(__self__, "unit", unit)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def unit(self) -> pulumi.Input[str]:
"""
A date range unit for the date filter. Valid values: `DAYS`.
"""
return pulumi.get(self, "unit")
@unit.setter
def unit(self, value: pulumi.Input[str]):
pulumi.set(self, "unit", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[int]:
"""
A date range value for the date filter, provided as an Integer.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[int]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersUserDefinedValueArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
key: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] key: The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
The key of the map filter. For example, for `ResourceTags`, `Key` identifies the name of the tag. For `UserDefinedFields`, `Key` is the name of the field.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
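`InsightFiltersUserDefinedValueArgs` above (like `InsightFiltersResourceDetailsOtherArgs` and `InsightFiltersResourceTagArgs` earlier) follows a three-field map-filter shape: `comparison`, `key`, and `value`, where `key` names the tag or user-defined field. A standalone sketch (plain Python, no pulumi dependency; the comparison check mirrors the documented values and is purely illustrative — the generated classes do not validate it):

```python
class MapFilterArgs:
    """Sketch of the comparison/key/value map-filter pattern."""

    VALID_COMPARISONS = ("EQUALS", "NOT_EQUALS")

    def __init__(self, *, comparison: str, key: str, value: str):
        if comparison not in self.VALID_COMPARISONS:
            raise ValueError(
                "comparison must be one of %s" % (self.VALID_COMPARISONS,))
        self.comparison = comparison
        self.key = key      # e.g. the tag name for ResourceTags
        self.value = value  # the tag/field value to match


tag_filter = MapFilterArgs(comparison="EQUALS", key="Environment", value="prod")
```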
@pulumi.input_type
class InsightFiltersVerificationStateArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class InsightFiltersWorkflowStatusArgs:
def __init__(__self__, *,
comparison: pulumi.Input[str],
value: pulumi.Input[str]):
"""
:param pulumi.Input[str] comparison: The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
:param pulumi.Input[str] value: A string filter value.
"""
pulumi.set(__self__, "comparison", comparison)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def comparison(self) -> pulumi.Input[str]:
"""
The condition to apply to a string value when querying for findings. Valid values include: `EQUALS` and `NOT_EQUALS`.
"""
return pulumi.get(self, "comparison")
@comparison.setter
def comparison(self, value: pulumi.Input[str]):
pulumi.set(self, "comparison", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
A string filter value.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
# coding: utf-8
"""
OrderCloud
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
OpenAPI spec version: 1.0
Contact: ordercloud@four51.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class MeApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def create_address(self, buyer_address, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_address(buyer_address, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param BuyerAddress buyer_address: (required)
:return: BuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_address_with_http_info(buyer_address, **kwargs)
else:
data = self.create_address_with_http_info(buyer_address, **kwargs)
return data
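# Every wrapper method above follows the same dispatch: with a `callback` it
# returns the request thread, without one it blocks and returns the data. A
# self-contained sketch of that split (the `call_api_sketch` helper is
# illustrative, not part of the SDK):

```python
import threading

def call_api_sketch(work, callback=None):
    """With a callback, run `work` on a thread and return the thread;
    otherwise run it inline and return its result."""
    if callback:
        def runner():
            callback(work())
        thread = threading.Thread(target=runner)
        thread.start()
        return thread
    return work()

# Asynchronous: the callback receives the response, caller gets the thread.
results = []
t = call_api_sketch(lambda: {"ID": "addr1"}, callback=results.append)
t.join()
assert results == [{"ID": "addr1"}]

# Synchronous: the data comes back directly.
assert call_api_sketch(lambda: 42) == 42
```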
def create_address_with_http_info(self, buyer_address, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_address_with_http_info(buyer_address, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param BuyerAddress buyer_address: (required)
:return: BuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['buyer_address']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_address" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'buyer_address' is set
if ('buyer_address' not in params) or (params['buyer_address'] is None):
raise ValueError("Missing the required parameter `buyer_address` when calling `create_address`")
resource_path = '/me/addresses'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'buyer_address' in params:
body_params = params['buyer_address']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerAddress',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
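# The kwargs loop above is a guard against unknown keyword arguments. Its
# logic, extracted into a standalone sketch (the `validate_kwargs` helper is
# hypothetical; the generated methods inline this check):

```python
def validate_kwargs(method_name, allowed, kwargs):
    """Reject any keyword argument the generated method does not declare."""
    allowed = set(allowed) | {'callback', '_return_http_data_only'}
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name))

# Declared params (plus the two framework kwargs) pass through silently.
validate_kwargs('create_address', ['buyer_address'], {'callback': print})

# Anything else raises TypeError, mirroring the generated code path.
try:
    validate_kwargs('create_address', ['buyer_address'], {'bogus': 1})
except TypeError as exc:
    assert 'bogus' in str(exc)
```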
def create_credit_card(self, buyer_credit_card, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_credit_card(buyer_credit_card, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param BuyerCreditCard buyer_credit_card: (required)
:return: BuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_credit_card_with_http_info(buyer_credit_card, **kwargs)
else:
data = self.create_credit_card_with_http_info(buyer_credit_card, **kwargs)
return data
def create_credit_card_with_http_info(self, buyer_credit_card, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_credit_card_with_http_info(buyer_credit_card, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param BuyerCreditCard buyer_credit_card: (required)
:return: BuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['buyer_credit_card']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_credit_card" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'buyer_credit_card' is set
if ('buyer_credit_card' not in params) or (params['buyer_credit_card'] is None):
raise ValueError("Missing the required parameter `buyer_credit_card` when calling `create_credit_card`")
resource_path = '/me/creditcards'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'buyer_credit_card' in params:
body_params = params['buyer_credit_card']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerCreditCard',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
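# The `Accept`/`Content-Type` handling above negotiates headers from the
# spec's declared media types. A plausible reading of what
# select_header_accept does, as an assumption rather than the verified
# ApiClient implementation: join the candidates, or return nothing when the
# operation declares no produces types (which is why the generated code then
# deletes the empty `Accept` header).

```python
def select_header_accept(accepts):
    """Join declared media types into one Accept value, or None if empty."""
    if not accepts:
        return None
    return ', '.join(accepts)

header_params = {}
header_params['Accept'] = select_header_accept(['application/json'])
if not header_params['Accept']:
    del header_params['Accept']  # never send an empty Accept header
assert header_params == {'Accept': 'application/json'}
```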
def delete_address(self, address_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_address(address_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_address_with_http_info(address_id, **kwargs)
else:
data = self.delete_address_with_http_info(address_id, **kwargs)
return data
def delete_address_with_http_info(self, address_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_address_with_http_info(address_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['address_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_address" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'address_id' is set
if ('address_id' not in params) or (params['address_id'] is None):
raise ValueError("Missing the required parameter `address_id` when calling `delete_address`")
resource_path = '/me/addresses/{addressID}'.replace('{format}', 'json')
path_params = {}
if 'address_id' in params:
path_params['addressID'] = params['address_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def delete_credit_card(self, creditcard_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_credit_card(creditcard_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_credit_card_with_http_info(creditcard_id, **kwargs)
else:
data = self.delete_credit_card_with_http_info(creditcard_id, **kwargs)
return data
def delete_credit_card_with_http_info(self, creditcard_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_credit_card_with_http_info(creditcard_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['creditcard_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_credit_card" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'creditcard_id' is set
if ('creditcard_id' not in params) or (params['creditcard_id'] is None):
raise ValueError("Missing the required parameter `creditcard_id` when calling `delete_credit_card`")
resource_path = '/me/creditcards/{creditcardID}'.replace('{format}', 'json')
path_params = {}
if 'creditcard_id' in params:
path_params['creditcardID'] = params['creditcard_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: MeUser
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_with_http_info(**kwargs)
else:
data = self.get_with_http_info(**kwargs)
return data
def get_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:return: MeUser
If the method is called asynchronously,
returns the request thread.
"""
all_params = []
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MeUser',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_address(self, address_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_address(address_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:return: BuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_address_with_http_info(address_id, **kwargs)
else:
data = self.get_address_with_http_info(address_id, **kwargs)
return data
def get_address_with_http_info(self, address_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_address_with_http_info(address_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:return: BuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['address_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_address" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'address_id' is set
if ('address_id' not in params) or (params['address_id'] is None):
raise ValueError("Missing the required parameter `address_id` when calling `get_address`")
resource_path = '/me/addresses/{addressID}'.replace('{format}', 'json')
path_params = {}
if 'address_id' in params:
path_params['addressID'] = params['address_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerAddress',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_catalog(self, catalog_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_catalog(catalog_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str catalog_id: ID of the catalog. (required)
:return: Catalog
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_catalog_with_http_info(catalog_id, **kwargs)
else:
data = self.get_catalog_with_http_info(catalog_id, **kwargs)
return data
def get_catalog_with_http_info(self, catalog_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_catalog_with_http_info(catalog_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str catalog_id: ID of the catalog. (required)
:return: Catalog
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['catalog_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_catalog" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'catalog_id' is set
if ('catalog_id' not in params) or (params['catalog_id'] is None):
raise ValueError("Missing the required parameter `catalog_id` when calling `get_catalog`")
resource_path = '/me/catalogs/{catalogID}'.replace('{format}', 'json')
path_params = {}
if 'catalog_id' in params:
path_params['catalogID'] = params['catalog_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Catalog',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_category(self, category_id, catalog_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_category(category_id, catalog_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str category_id: ID of the category. (required)
:param str catalog_id: ID of the catalog. (required)
:return: Category
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_category_with_http_info(category_id, catalog_id, **kwargs)
else:
data = self.get_category_with_http_info(category_id, catalog_id, **kwargs)
return data
def get_category_with_http_info(self, category_id, catalog_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_category_with_http_info(category_id, catalog_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str category_id: ID of the category. (required)
:param str catalog_id: ID of the catalog. (required)
:return: Category
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['category_id', 'catalog_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_category" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'category_id' is set
if ('category_id' not in params) or (params['category_id'] is None):
raise ValueError("Missing the required parameter `category_id` when calling `get_category`")
# verify the required parameter 'catalog_id' is set
if ('catalog_id' not in params) or (params['catalog_id'] is None):
raise ValueError("Missing the required parameter `catalog_id` when calling `get_category`")
resource_path = '/me/categories/{categoryID}'.replace('{format}', 'json')
path_params = {}
if 'category_id' in params:
path_params['categoryID'] = params['category_id']
query_params = {}
if 'catalog_id' in params:
query_params['catalogID'] = params['catalog_id']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Category',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
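# get_category is the one method here that splits its required parameters:
# categoryID is substituted into the path while catalogID travels as a query
# parameter. A runnable sketch of the resulting URL construction (the
# template-substitution loop is illustrative; the real work happens inside
# ApiClient.call_api):

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

resource_path = '/me/categories/{categoryID}'
path_params = {'categoryID': 'shoes'}
query_params = {'catalogID': 'main'}

# Fill path placeholders, then append the query string.
url = resource_path
for name, value in path_params.items():
    url = url.replace('{%s}' % name, value)
url = url + '?' + urlencode(query_params)

assert url == '/me/categories/shoes?catalogID=main'
```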
def get_credit_card(self, creditcard_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_credit_card(creditcard_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:return: BuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_credit_card_with_http_info(creditcard_id, **kwargs)
else:
data = self.get_credit_card_with_http_info(creditcard_id, **kwargs)
return data
def get_credit_card_with_http_info(self, creditcard_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_credit_card_with_http_info(creditcard_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:return: BuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['creditcard_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_credit_card" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'creditcard_id' is set
if ('creditcard_id' not in params) or (params['creditcard_id'] is None):
raise ValueError("Missing the required parameter `creditcard_id` when calling `get_credit_card`")
resource_path = '/me/creditcards/{creditcardID}'.replace('{format}', 'json')
path_params = {}
if 'creditcard_id' in params:
path_params['creditcardID'] = params['creditcard_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerCreditCard',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_product(self, product_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_product(product_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str product_id: ID of the product. (required)
:return: BuyerProduct
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_product_with_http_info(product_id, **kwargs)
else:
data = self.get_product_with_http_info(product_id, **kwargs)
return data
def get_product_with_http_info(self, product_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_product_with_http_info(product_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str product_id: ID of the product. (required)
:return: BuyerProduct
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['product_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_product" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'product_id' is set
if ('product_id' not in params) or (params['product_id'] is None):
raise ValueError("Missing the required parameter `product_id` when calling `get_product`")
resource_path = '/me/products/{productID}'.replace('{format}', 'json')
path_params = {}
if 'product_id' in params:
path_params['productID'] = params['product_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerProduct',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_promotion(self, promotion_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_promotion(promotion_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str promotion_id: ID of the promotion. (required)
:return: Promotion
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_promotion_with_http_info(promotion_id, **kwargs)
else:
data = self.get_promotion_with_http_info(promotion_id, **kwargs)
return data
def get_promotion_with_http_info(self, promotion_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_promotion_with_http_info(promotion_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str promotion_id: ID of the promotion. (required)
:return: Promotion
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['promotion_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_promotion" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'promotion_id' is set
if ('promotion_id' not in params) or (params['promotion_id'] is None):
raise ValueError("Missing the required parameter `promotion_id` when calling `get_promotion`")
resource_path = '/me/promotions/{promotionID}'.replace('{format}', 'json')
path_params = {}
if 'promotion_id' in params:
path_params['promotionID'] = params['promotion_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Promotion',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_shipment(self, shipment_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_shipment(shipment_id, callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str shipment_id: ID of the shipment. (required)
:return: Shipment
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_shipment_with_http_info(shipment_id, **kwargs)
else:
(data) = self.get_shipment_with_http_info(shipment_id, **kwargs)
return data
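Each public method like `get_shipment` dispatches on the presence of a `callback` keyword: with a callback it returns the request thread, without one it blocks and returns the deserialized data. A self-contained sketch of that dispatch pattern, with a stubbed HTTP call (the `fetch` and `get_resource` names are illustrative, not part of this client):

```python
import threading

def fetch(url):
    # stand-in for the real HTTP request + deserialization
    return {'ID': url.rsplit('/', 1)[-1]}

def get_resource(resource_id, callback=None):
    url = '/me/shipments/' + resource_id
    if callback:
        # asynchronous: run the request on a worker thread and hand the
        # response to the caller's callback; return the thread handle
        thread = threading.Thread(target=lambda: callback(fetch(url)))
        thread.start()
        return thread
    # synchronous: block until the response is available
    return fetch(url)

print(get_resource('ship123'))  # {'ID': 'ship123'}
```

Callers of the asynchronous form can `join()` the returned thread when they need to wait for completion.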
def get_shipment_with_http_info(self, shipment_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_shipment_with_http_info(shipment_id, callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str shipment_id: ID of the shipment. (required)
:return: Shipment
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['shipment_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_shipment" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'shipment_id' is set
if ('shipment_id' not in params) or (params['shipment_id'] is None):
raise ValueError("Missing the required parameter `shipment_id` when calling `get_shipment`")
resource_path = '/me/shipments/{shipmentID}'.replace('{format}', 'json')
path_params = {}
if 'shipment_id' in params:
path_params['shipmentID'] = params['shipment_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Shipment',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
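Every endpoint above sets the `Accept` header via `select_header_accept` and deletes the header when nothing is selected. A sketch of that selection logic, under the assumption that it simply joins the endpoint's acceptable media types (the real `api_client` implementation may differ):

```python
def select_header_accept(accepts):
    # no acceptable media types declared: signal "omit the header"
    if not accepts:
        return None
    # otherwise join them into a single comma-separated Accept value
    return ', '.join(a.lower() for a in accepts)

header_params = {}
accept = select_header_accept(['application/json'])
if accept:
    header_params['Accept'] = accept

print(header_params)  # {'Accept': 'application/json'}
```

This is why the generated code guards with `if not header_params['Accept']: del header_params['Accept']` — an empty selection must not produce a blank header.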
def get_spec(self, product_id, spec_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_spec(product_id, spec_id, callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str product_id: ID of the product. (required)
:param str spec_id: ID of the spec. (required)
:param str catalog_id: ID of the catalog.
:return: BuyerSpec
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_spec_with_http_info(product_id, spec_id, **kwargs)
else:
(data) = self.get_spec_with_http_info(product_id, spec_id, **kwargs)
return data
def get_spec_with_http_info(self, product_id, spec_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_spec_with_http_info(product_id, spec_id, callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str product_id: ID of the product. (required)
:param str spec_id: ID of the spec. (required)
:param str catalog_id: ID of the catalog.
:return: BuyerSpec
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['product_id', 'spec_id', 'catalog_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_spec" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'product_id' is set
if ('product_id' not in params) or (params['product_id'] is None):
raise ValueError("Missing the required parameter `product_id` when calling `get_spec`")
# verify the required parameter 'spec_id' is set
if ('spec_id' not in params) or (params['spec_id'] is None):
raise ValueError("Missing the required parameter `spec_id` when calling `get_spec`")
resource_path = '/me/products/{productID}/specs/{specID}'.replace('{format}', 'json')
path_params = {}
if 'product_id' in params:
path_params['productID'] = params['product_id']
if 'spec_id' in params:
path_params['specID'] = params['spec_id']
query_params = {}
if 'catalog_id' in params:
query_params['catalogID'] = params['catalog_id']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerSpec',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_spending_account(self, spending_account_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_spending_account(spending_account_id, callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str spending_account_id: ID of the spending account. (required)
:return: SpendingAccount
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_spending_account_with_http_info(spending_account_id, **kwargs)
else:
(data) = self.get_spending_account_with_http_info(spending_account_id, **kwargs)
return data
def get_spending_account_with_http_info(self, spending_account_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_spending_account_with_http_info(spending_account_id, callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str spending_account_id: ID of the spending account. (required)
:return: SpendingAccount
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['spending_account_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_spending_account" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'spending_account_id' is set
if ('spending_account_id' not in params) or (params['spending_account_id'] is None):
raise ValueError("Missing the required parameter `spending_account_id` when calling `get_spending_account`")
resource_path = '/me/spendingaccounts/{spendingAccountID}'.replace('{format}', 'json')
path_params = {}
if 'spending_account_id' in params:
path_params['spendingAccountID'] = params['spending_account_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SpendingAccount',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_addresses(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_addresses(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_addresses_with_http_info(**kwargs)
else:
(data) = self.list_addresses_with_http_info(**kwargs)
return data
def list_addresses_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_addresses_with_http_info(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_addresses" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/addresses'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListBuyerAddress',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
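The `list_*` helpers translate snake_case keyword arguments into the camelCase query parameters the API expects, dropping anything the caller did not supply. A standalone sketch of that mapping, based on the `search_on`/`searchOn`-style pairs visible in `list_addresses_with_http_info` (the helper name is hypothetical):

```python
def build_query_params(**kwargs):
    # snake_case keyword -> camelCase query-string key, as used above
    mapping = {
        'search': 'search',
        'search_on': 'searchOn',
        'sort_by': 'sortBy',
        'page': 'page',
        'page_size': 'pageSize',
        'filters': 'filters',
    }
    # include only recognized, supplied parameters
    return {mapping[k]: v for k, v in kwargs.items() if k in mapping}

params = build_query_params(search='chair', page_size=50)
# params == {'search': 'chair', 'pageSize': 50}
```

The same mapping recurs in every `list_*` endpoint below; only endpoint-specific extras (e.g. `catalog_id` → `catalogID`, `_from` → `from`) are added per method.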
def list_approvable_orders(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_approvable_orders(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str _from: Lower bound of the date range in which the order was created (if outgoing) or submitted (if incoming).
:param str to: Upper bound of the date range in which the order was created (if outgoing) or submitted (if incoming).
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListOrder
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_approvable_orders_with_http_info(**kwargs)
else:
(data) = self.list_approvable_orders_with_http_info(**kwargs)
return data
def list_approvable_orders_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_approvable_orders_with_http_info(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str _from: Lower bound of the date range in which the order was created (if outgoing) or submitted (if incoming).
:param str to: Upper bound of the date range in which the order was created (if outgoing) or submitted (if incoming).
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListOrder
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['_from', 'to', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_approvable_orders" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/orders/approvable'.replace('{format}', 'json')
path_params = {}
query_params = {}
if '_from' in params:
query_params['from'] = params['_from']
if 'to' in params:
query_params['to'] = params['to']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListOrder',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_catalogs(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_catalogs(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListCatalog
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_catalogs_with_http_info(**kwargs)
else:
(data) = self.list_catalogs_with_http_info(**kwargs)
return data
def list_catalogs_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_catalogs_with_http_info(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListCatalog
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_catalogs" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/catalogs'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListCatalog',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_categories(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_categories(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str depth: Depth of the category.
:param str catalog_id: ID of the catalog.
:param str product_id: ID of the product.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListCategory
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_categories_with_http_info(**kwargs)
else:
(data) = self.list_categories_with_http_info(**kwargs)
return data
def list_categories_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_categories_with_http_info(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str depth: Depth of the category.
:param str catalog_id: ID of the catalog.
:param str product_id: ID of the product.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListCategory
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['depth', 'catalog_id', 'product_id', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_categories" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/categories'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'depth' in params:
query_params['depth'] = params['depth']
if 'catalog_id' in params:
query_params['catalogID'] = params['catalog_id']
if 'product_id' in params:
query_params['productID'] = params['product_id']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListCategory',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_cost_centers(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_cost_centers(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListCostCenter
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_cost_centers_with_http_info(**kwargs)
else:
(data) = self.list_cost_centers_with_http_info(**kwargs)
return data
def list_cost_centers_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_cost_centers_with_http_info(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListCostCenter
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_cost_centers" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/costcenters'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListCostCenter',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_credit_cards(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_credit_cards(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_credit_cards_with_http_info(**kwargs)
else:
(data) = self.list_credit_cards_with_http_info(**kwargs)
return data
def list_credit_cards_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_credit_cards_with_http_info(callback=callback_function)
:param function callback: The callback function for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_credit_cards" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/creditcards'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListBuyerCreditCard',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_orders(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_orders(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str _from: Lower bound of date range that the order was created (if outgoing) or submitted (if incoming).
:param str to: Upper bound of date range that the order was created (if outgoing) or submitted (if incoming).
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListOrder
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_orders_with_http_info(**kwargs)
else:
data = self.list_orders_with_http_info(**kwargs)
return data
def list_orders_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_orders_with_http_info(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str _from: Lower bound of date range that the order was created (if outgoing) or submitted (if incoming).
:param str to: Upper bound of date range that the order was created (if outgoing) or submitted (if incoming).
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListOrder
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['_from', 'to', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_orders" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/orders'.replace('{format}', 'json')
path_params = {}
query_params = {}
if '_from' in params:
query_params['from'] = params['_from']
if 'to' in params:
query_params['to'] = params['to']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListOrder',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_products(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_products(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str catalog_id: ID of the catalog.
:param str category_id: ID of the category.
:param str depth: Depth of the product.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerProduct
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_products_with_http_info(**kwargs)
else:
data = self.list_products_with_http_info(**kwargs)
return data
def list_products_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_products_with_http_info(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str catalog_id: ID of the catalog.
:param str category_id: ID of the category.
:param str depth: Depth of the product.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerProduct
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['catalog_id', 'category_id', 'depth', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_products" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/products'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'catalog_id' in params:
query_params['catalogID'] = params['catalog_id']
if 'category_id' in params:
query_params['categoryID'] = params['category_id']
if 'depth' in params:
query_params['depth'] = params['depth']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListBuyerProduct',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_promotions(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_promotions(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListPromotion
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_promotions_with_http_info(**kwargs)
else:
data = self.list_promotions_with_http_info(**kwargs)
return data
def list_promotions_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_promotions_with_http_info(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListPromotion
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_promotions" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/promotions'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListPromotion',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_shipment_items(self, shipment_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_shipment_items(shipment_id, callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str shipment_id: ID of the shipment. (required)
:param str order_id: ID of the order.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListShipmentItem
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_shipment_items_with_http_info(shipment_id, **kwargs)
else:
data = self.list_shipment_items_with_http_info(shipment_id, **kwargs)
return data
def list_shipment_items_with_http_info(self, shipment_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_shipment_items_with_http_info(shipment_id, callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str shipment_id: ID of the shipment. (required)
:param str order_id: ID of the order.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListShipmentItem
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['shipment_id', 'order_id', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_shipment_items" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'shipment_id' is set
if ('shipment_id' not in params) or (params['shipment_id'] is None):
raise ValueError("Missing the required parameter `shipment_id` when calling `list_shipment_items`")
resource_path = '/me/shipments/{shipmentID}/items'.replace('{format}', 'json')
path_params = {}
if 'shipment_id' in params:
path_params['shipmentID'] = params['shipment_id']
query_params = {}
if 'order_id' in params:
query_params['orderID'] = params['order_id']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListShipmentItem',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_shipments(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_shipments(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str order_id: ID of the order.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListShipment
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_shipments_with_http_info(**kwargs)
else:
data = self.list_shipments_with_http_info(**kwargs)
return data
def list_shipments_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_shipments_with_http_info(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str order_id: ID of the order.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListShipment
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['order_id', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_shipments" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/shipments'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'order_id' in params:
query_params['orderID'] = params['order_id']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListShipment',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_specs(self, product_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_specs(product_id, callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str product_id: ID of the product. (required)
:param str catalog_id: ID of the catalog.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerSpec
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_specs_with_http_info(product_id, **kwargs)
else:
data = self.list_specs_with_http_info(product_id, **kwargs)
return data
def list_specs_with_http_info(self, product_id, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_specs_with_http_info(product_id, callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str product_id: ID of the product. (required)
:param str catalog_id: ID of the catalog.
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListBuyerSpec
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['product_id', 'catalog_id', 'search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_specs" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'product_id' is set
if ('product_id' not in params) or (params['product_id'] is None):
raise ValueError("Missing the required parameter `product_id` when calling `list_specs`")
resource_path = '/me/products/{productID}/specs'.replace('{format}', 'json')
path_params = {}
if 'product_id' in params:
path_params['productID'] = params['product_id']
query_params = {}
if 'catalog_id' in params:
query_params['catalogID'] = params['catalog_id']
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListBuyerSpec',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_spending_accounts(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_spending_accounts(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListSpendingAccount
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_spending_accounts_with_http_info(**kwargs)
else:
data = self.list_spending_accounts_with_http_info(**kwargs)
return data
def list_spending_accounts_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
...     pprint(response)
...
>>> thread = api.list_spending_accounts_with_http_info(callback=callback_function)
:param callback function: The callback function
for an asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListSpendingAccount
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_spending_accounts" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/spendingAccounts'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListSpendingAccount',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def list_user_groups(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_user_groups(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListUserGroup
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_user_groups_with_http_info(**kwargs)
else:
(data) = self.list_user_groups_with_http_info(**kwargs)
return data
def list_user_groups_with_http_info(self, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_user_groups_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str search: Word or phrase to search for.
:param str search_on: Comma-delimited list of fields to search on.
:param str sort_by: Comma-delimited list of fields to sort by.
:param int page: Page of results to return. Default: 1
:param int page_size: Number of results to return per page. Default: 20, max: 100.
:param dict(str, str) filters: Any additional key/value pairs passed in the query string are interpreted as filters. Valid keys are top-level properties of the returned model or 'xp.???'
:return: ListUserGroup
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['search', 'search_on', 'sort_by', 'page', 'page_size', 'filters']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_user_groups" % key
)
params[key] = val
del params['kwargs']
resource_path = '/me/usergroups'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'search' in params:
query_params['search'] = params['search']
if 'search_on' in params:
query_params['searchOn'] = params['search_on']
if 'sort_by' in params:
query_params['sortBy'] = params['sort_by']
if 'page' in params:
query_params['page'] = params['page']
if 'page_size' in params:
query_params['pageSize'] = params['page_size']
if 'filters' in params:
query_params['filters'] = params['filters']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListUserGroup',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def patch(self, partial_me_user, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.patch(partial_me_user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param MeUser partial_me_user: (required)
:return: MeUser
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.patch_with_http_info(partial_me_user, **kwargs)
else:
(data) = self.patch_with_http_info(partial_me_user, **kwargs)
return data
def patch_with_http_info(self, partial_me_user, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.patch_with_http_info(partial_me_user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param MeUser partial_me_user: (required)
:return: MeUser
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['partial_me_user']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method patch" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'partial_me_user' is set
if ('partial_me_user' not in params) or (params['partial_me_user'] is None):
raise ValueError("Missing the required parameter `partial_me_user` when calling `patch`")
resource_path = '/me'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'partial_me_user' in params:
body_params = params['partial_me_user']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MeUser',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def patch_address(self, address_id, partial_buyer_address, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.patch_address(address_id, partial_buyer_address, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:param BuyerAddress partial_buyer_address: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.patch_address_with_http_info(address_id, partial_buyer_address, **kwargs)
else:
(data) = self.patch_address_with_http_info(address_id, partial_buyer_address, **kwargs)
return data
def patch_address_with_http_info(self, address_id, partial_buyer_address, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.patch_address_with_http_info(address_id, partial_buyer_address, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:param BuyerAddress partial_buyer_address: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['address_id', 'partial_buyer_address']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_address" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'address_id' is set
if ('address_id' not in params) or (params['address_id'] is None):
raise ValueError("Missing the required parameter `address_id` when calling `patch_address`")
# verify the required parameter 'partial_buyer_address' is set
if ('partial_buyer_address' not in params) or (params['partial_buyer_address'] is None):
raise ValueError("Missing the required parameter `partial_buyer_address` when calling `patch_address`")
resource_path = '/me/addresses/{addressID}'.replace('{format}', 'json')
path_params = {}
if 'address_id' in params:
path_params['addressID'] = params['address_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'partial_buyer_address' in params:
body_params = params['partial_buyer_address']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def patch_credit_card(self, creditcard_id, partial_buyer_credit_card, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.patch_credit_card(creditcard_id, partial_buyer_credit_card, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:param BuyerCreditCard partial_buyer_credit_card: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.patch_credit_card_with_http_info(creditcard_id, partial_buyer_credit_card, **kwargs)
else:
(data) = self.patch_credit_card_with_http_info(creditcard_id, partial_buyer_credit_card, **kwargs)
return data
def patch_credit_card_with_http_info(self, creditcard_id, partial_buyer_credit_card, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.patch_credit_card_with_http_info(creditcard_id, partial_buyer_credit_card, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:param BuyerCreditCard partial_buyer_credit_card: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['creditcard_id', 'partial_buyer_credit_card']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_credit_card" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'creditcard_id' is set
if ('creditcard_id' not in params) or (params['creditcard_id'] is None):
raise ValueError("Missing the required parameter `creditcard_id` when calling `patch_credit_card`")
# verify the required parameter 'partial_buyer_credit_card' is set
if ('partial_buyer_credit_card' not in params) or (params['partial_buyer_credit_card'] is None):
raise ValueError("Missing the required parameter `partial_buyer_credit_card` when calling `patch_credit_card`")
resource_path = '/me/creditcards/{creditcardID}'.replace('{format}', 'json')
path_params = {}
if 'creditcard_id' in params:
path_params['creditcardID'] = params['creditcard_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'partial_buyer_credit_card' in params:
body_params = params['partial_buyer_credit_card']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def register(self, anon_user_token, me_user, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.register(anon_user_token, me_user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str anon_user_token: Anon user token of the user. (required)
:param MeUser me_user: (required)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.register_with_http_info(anon_user_token, me_user, **kwargs)
else:
(data) = self.register_with_http_info(anon_user_token, me_user, **kwargs)
return data
def register_with_http_info(self, anon_user_token, me_user, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.register_with_http_info(anon_user_token, me_user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str anon_user_token: Anon user token of the user. (required)
:param MeUser me_user: (required)
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['anon_user_token', 'me_user']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method register" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'anon_user_token' is set
if ('anon_user_token' not in params) or (params['anon_user_token'] is None):
raise ValueError("Missing the required parameter `anon_user_token` when calling `register`")
# verify the required parameter 'me_user' is set
if ('me_user' not in params) or (params['me_user'] is None):
raise ValueError("Missing the required parameter `me_user` when calling `register`")
resource_path = '/me/register'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'anon_user_token' in params:
query_params['anonUserToken'] = params['anon_user_token']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'me_user' in params:
body_params = params['me_user']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def reset_password_by_token(self, token_password_reset, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.reset_password_by_token(token_password_reset, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param TokenPasswordReset token_password_reset: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.reset_password_by_token_with_http_info(token_password_reset, **kwargs)
else:
(data) = self.reset_password_by_token_with_http_info(token_password_reset, **kwargs)
return data
def reset_password_by_token_with_http_info(self, token_password_reset, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.reset_password_by_token_with_http_info(token_password_reset, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param TokenPasswordReset token_password_reset: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['token_password_reset']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method reset_password_by_token" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'token_password_reset' is set
if ('token_password_reset' not in params) or (params['token_password_reset'] is None):
raise ValueError("Missing the required parameter `token_password_reset` when calling `reset_password_by_token`")
resource_path = '/me/password'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'token_password_reset' in params:
body_params = params['token_password_reset']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def save(self, me_user, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.save(me_user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param MeUser me_user: (required)
:return: MeUser
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.save_with_http_info(me_user, **kwargs)
else:
(data) = self.save_with_http_info(me_user, **kwargs)
return data
def save_with_http_info(self, me_user, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.save_with_http_info(me_user, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param MeUser me_user: (required)
:return: MeUser
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['me_user']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method save" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'me_user' is set
if ('me_user' not in params) or (params['me_user'] is None):
raise ValueError("Missing the required parameter `me_user` when calling `save`")
resource_path = '/me'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'me_user' in params:
body_params = params['me_user']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MeUser',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def save_address(self, address_id, buyer_address, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.save_address(address_id, buyer_address, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:param BuyerAddress buyer_address: (required)
:return: BuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.save_address_with_http_info(address_id, buyer_address, **kwargs)
else:
(data) = self.save_address_with_http_info(address_id, buyer_address, **kwargs)
return data
def save_address_with_http_info(self, address_id, buyer_address, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.save_address_with_http_info(address_id, buyer_address, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str address_id: ID of the address. (required)
:param BuyerAddress buyer_address: (required)
:return: BuyerAddress
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['address_id', 'buyer_address']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method save_address" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'address_id' is set
if ('address_id' not in params) or (params['address_id'] is None):
raise ValueError("Missing the required parameter `address_id` when calling `save_address`")
# verify the required parameter 'buyer_address' is set
if ('buyer_address' not in params) or (params['buyer_address'] is None):
raise ValueError("Missing the required parameter `buyer_address` when calling `save_address`")
resource_path = '/me/addresses/{addressID}'.replace('{format}', 'json')
path_params = {}
if 'address_id' in params:
path_params['addressID'] = params['address_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'buyer_address' in params:
body_params = params['buyer_address']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerAddress',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def save_credit_card(self, creditcard_id, buyer_credit_card, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.save_credit_card(creditcard_id, buyer_credit_card, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:param BuyerCreditCard buyer_credit_card: (required)
:return: BuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.save_credit_card_with_http_info(creditcard_id, buyer_credit_card, **kwargs)
else:
(data) = self.save_credit_card_with_http_info(creditcard_id, buyer_credit_card, **kwargs)
return data
def save_credit_card_with_http_info(self, creditcard_id, buyer_credit_card, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.save_credit_card_with_http_info(creditcard_id, buyer_credit_card, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str creditcard_id: ID of the creditcard. (required)
:param BuyerCreditCard buyer_credit_card: (required)
:return: BuyerCreditCard
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['creditcard_id', 'buyer_credit_card']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method save_credit_card" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'creditcard_id' is set
if ('creditcard_id' not in params) or (params['creditcard_id'] is None):
raise ValueError("Missing the required parameter `creditcard_id` when calling `save_credit_card`")
# verify the required parameter 'buyer_credit_card' is set
if ('buyer_credit_card' not in params) or (params['buyer_credit_card'] is None):
raise ValueError("Missing the required parameter `buyer_credit_card` when calling `save_credit_card`")
resource_path = '/me/creditcards/{creditcardID}'.replace('{format}', 'json')
path_params = {}
if 'creditcard_id' in params:
path_params['creditcardID'] = params['creditcard_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'buyer_credit_card' in params:
body_params = params['buyer_credit_card']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='BuyerCreditCard',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def transfer_anon_user_order(self, anon_user_token, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.transfer_anon_user_order(anon_user_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str anon_user_token: Anon user token of the current user. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.transfer_anon_user_order_with_http_info(anon_user_token, **kwargs)
else:
(data) = self.transfer_anon_user_order_with_http_info(anon_user_token, **kwargs)
return data
def transfer_anon_user_order_with_http_info(self, anon_user_token, **kwargs):
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.transfer_anon_user_order_with_http_info(anon_user_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str anon_user_token: Anon user token of the current user. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['anon_user_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method transfer_anon_user_order" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'anon_user_token' is set
if ('anon_user_token' not in params) or (params['anon_user_token'] is None):
raise ValueError("Missing the required parameter `anon_user_token` when calling `transfer_anon_user_order`")
resource_path = '/me/orders'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'anon_user_token' in params:
query_params['anonUserToken'] = params['anon_user_token']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'text/plain; charset=utf-8'])
# Authentication setting
auth_settings = ['oauth2']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
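The sync-versus-callback convention these generated wrappers follow can be sketched with a stand-in client; `fake_call_api` below is purely illustrative and not part of the generated API:

```python
import threading

def fake_call_api(callback=None):
    """Stand-in for api_client.call_api: synchronous by default,
    returns the worker thread when a callback is supplied."""
    def work():
        return {'status': 'transferred'}

    if callback is not None:
        thread = threading.Thread(target=lambda: callback(work()))
        thread.start()
        return thread  # caller is handed the request thread
    return work()      # caller is handed the response data

# Synchronous call: data comes back directly.
data = fake_call_api()

# Asynchronous call: the response is delivered to the callback.
results = []
t = fake_call_api(callback=results.append)
t.join()
```

This mirrors why the wrapper returns `data` in the synchronous branch but a thread object when `callback` is set.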

# ASGama CTF/[CRYPTO] base64/solver.py (repo: bemrdo/CTF-2019, MIT license)
from base64 import b64decode as decode
c = "QVNHYW1he2phbGFuX2hla2Vsa3VfbXVsYWlfZGFyaV9iZWxhamFyX2VuY29kaW5nfQ=="
print(decode(c))

# twootfeed/tests/test_mastodon_feed_generation.py (repo: SamR1/python-twootfeed, MIT license)
import re
from ..mastodon.generate_toots_feed import (
    format_toot,
    generate_mastodon_feed,
    generate_xml,
)
from .data import (
    empty_toot_feed,
    empty_toot_search_feed,
    formatted_toot1,
    formatted_toot2,
    invalid_param as param,
    toot1,
    toot2,
    toot_1_bookmarks_feed,
    toot_1_favorites_feed,
    toot_1_feed,
    toot_1_search_feed,
    toot_100_bookmarks_feed,
    toot_100_favorites_feed,
    toot_100_feed,
)
from .utils import MastodonApi


def remove_build_date(val):
    """Blank out the volatile <lastBuildDate> element so feeds compare deterministically."""
    return re.sub(
        r'(<lastBuildDate>)(.*)(</lastBuildDate>)',
        '<lastBuildDate></lastBuildDate>',
        val,
    )


def test_format_toot():
    assert format_toot(toot1, 100) == formatted_toot1
    assert format_toot(toot2, 10) == formatted_toot2


def test_generate_feed():
    val = generate_mastodon_feed(
        [toot1],
        param,
        'Recherche Mastodon : "test"',
        'https://mastodon.social/web/timelines/tag/test',
        'Résultat d\'une recherche Mastodon retournée dans un flux RSS.',
    )
    assert remove_build_date(val) == toot_1_feed


def test_generate_feed_200_toots():
    val = generate_mastodon_feed(
        [toot1] * 200,
        param,
        'Recherche Mastodon : "test"',
        'https://mastodon.social/web/timelines/tag/test',
        'Résultat d\'une recherche Mastodon retournée dans un flux RSS.',
    )
    assert remove_build_date(val) == toot_100_feed


def test_generate_xml_no_api():
    val, code = generate_xml(None, param, {'hashtag': 'test'})
    assert val == 'error - Mastodon parameters not defined'
    assert code == 401


def test_generate_xml_no_toots():
    api = MastodonApi([])
    val, code = generate_xml(api, param, {'hashtag': 'test'})
    assert remove_build_date(val) == empty_toot_feed
    assert code == 200


def test_generate_xml_query_ok():
    api = MastodonApi([toot1])
    val, code = generate_xml(api, param, {'hashtag': 'test'})
    assert remove_build_date(val) == toot_1_feed
    assert code == 200


def test_generate_xml_query_limit_ok():
    api = MastodonApi([toot1] * 200)
    val, code = generate_xml(api, param, {'hashtag': 'test'})
    assert remove_build_date(val) == toot_100_feed
    assert code == 200


def test_generate_xml_search_no_toots():
    api = MastodonApi([])
    val, code = generate_xml(api, param, {'query': 'test'})
    assert remove_build_date(val) == empty_toot_search_feed
    assert code == 200


def test_generate_xml_search_ok():
    api = MastodonApi([toot1])
    val, code = generate_xml(api, param, {'query': 'test'})
    assert remove_build_date(val) == toot_1_search_feed
    assert code == 200


def test_generate_xml_favorites_ok():
    api = MastodonApi([toot1])
    val, code = generate_xml(api, param, favorites=True)
    assert remove_build_date(val) == toot_1_favorites_feed
    assert code == 200


def test_generate_xml_favorites_limit_ok():
    api = MastodonApi([toot1] * 150)
    val, code = generate_xml(api, param, favorites=True)
    assert remove_build_date(val) == toot_100_favorites_feed
    assert code == 200


def test_generate_xml_bookmarks_ok():
    api = MastodonApi([toot1])
    val, code = generate_xml(api, param)
    assert remove_build_date(val) == toot_1_bookmarks_feed
    assert code == 200


def test_generate_xml_bookmarks_limit_ok():
    api = MastodonApi([toot1] * 150)
    val, code = generate_xml(api, param)
    assert remove_build_date(val) == toot_100_bookmarks_feed
    assert code == 200
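The date-scrubbing idiom these tests depend on — blanking the volatile `<lastBuildDate>` element before comparing feeds — works like this in isolation:

```python
import re

def remove_build_date(val):
    # Replace whatever sits between the tags with nothing, keeping the tags.
    return re.sub(
        r'(<lastBuildDate>)(.*)(</lastBuildDate>)',
        '<lastBuildDate></lastBuildDate>',
        val,
    )

feed = '<channel><lastBuildDate>Mon, 30 Jul 2018 13:10:00 GMT</lastBuildDate></channel>'
scrubbed = remove_build_date(feed)
```

Scrubbing is idempotent, so a feed that was already blanked compares equal to itself after another pass.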

# scripts/init_self_play.py (repo: emdoyle/chess_ai, MIT license)
# The header is fully systematic: per-player piece planes (p, n, b, r, q, k)
# over 64 squares, then 64 turn columns, then probs and value.
pieces = 'pnbrqk'
columns = [
    '{}{}_{}'.format(player, piece, square)
    for player in (0, 1)
    for piece in pieces
    for square in range(64)
]
columns += ['turn{}'.format(square) for square in range(64)]
columns += ['probs', 'value']

with open('ACZData/self_play.csv', 'w') as f:
    f.write(','.join(columns))

# functions/tridimensional_functions.py (repo: brendaferrari/AutoPaDELPy, MIT license)
import cmd


class TridimensionalFunctions(cmd.Cmd):
    FP_LIST = ['Autocorrelation3D',
               'CPSA',
               'GravitationalIndex',
               'LengthOverBreadth',
               'MomentOfInertia',
               'PetitjeanShapeIndex',
               'RDF',
               'WHIM']

    file = None

    def __init__(self):
        cmd.Cmd.__init__(self)
        self.prompt = '(tridimensional descriptor) '
        self.intro = ('Which tridimensional descriptor do you wish to calculate? '
                      'Type help or ? to list commands. '
                      'Write finish if you want to move on to the preprocessing step.\n')
        self.completekey = 'tab'

    def _calculate(self, fingerprint, mol_dir):
        """Shared PaDEL driver for all tridimensional fingerprints."""
        import glob
        from padelpy import padeldescriptor

        # The descriptor XML files sort into the same order as FP_LIST.
        xml_files = glob.glob("functions/descriptors/tridimensional_descriptors/*.xml")
        xml_files.sort()
        fp = dict(zip(self.FP_LIST, xml_files))

        # Use the file supplied on the command line, defaulting to dataset.smi.
        mol_file = mol_dir.strip() or 'dataset.smi'
        padeldescriptor(mol_dir=mol_file,
                        d_file=fingerprint + '.csv',
                        descriptortypes=fp[fingerprint],
                        detectaromaticity=True,
                        standardizenitro=True,
                        standardizetautomers=True,
                        threads=2,
                        removesalt=True,
                        log=True,
                        fingerprints=True,
                        convert3d=True,
                        d_3d=True)
        return False

    def do_Autocorrelation3D(self, mol_dir):
        """Autocorrelation3D [mol_dir]
        Calculate the Autocorrelation3D fingerprint"""
        return self._calculate('Autocorrelation3D', mol_dir)

    def do_CPSA(self, mol_dir):
        """CPSA [mol_dir]
        Calculate the CPSA fingerprint"""
        return self._calculate('CPSA', mol_dir)

    def do_GravitationalIndex(self, mol_dir):
        """GravitationalIndex [mol_dir]
        Calculate the GravitationalIndex fingerprint"""
        return self._calculate('GravitationalIndex', mol_dir)

    def do_LengthOverBreadth(self, mol_dir):
        """LengthOverBreadth [mol_dir]
        Calculate the LengthOverBreadth fingerprint"""
        return self._calculate('LengthOverBreadth', mol_dir)

    def do_MomentOfInertia(self, mol_dir):
        """MomentOfInertia [mol_dir]
        Calculate the MomentOfInertia fingerprint"""
        return self._calculate('MomentOfInertia', mol_dir)

    def do_PetitjeanShapeIndex(self, mol_dir):
        """PetitjeanShapeIndex [mol_dir]
        Calculate the PetitjeanShapeIndex fingerprint"""
        return self._calculate('PetitjeanShapeIndex', mol_dir)

    def do_RDF(self, mol_dir):
        """RDF [mol_dir]
        Calculate the RDF fingerprint"""
        return self._calculate('RDF', mol_dir)

    def do_WHIM(self, mol_dir):
        """WHIM [mol_dir]
        Calculate the WHIM fingerprint"""
        return self._calculate('WHIM', mol_dir)

    def do_finish(self, arg):
        """finish [self, arg]
        Finish running Tridimensional Descriptors and move on to the next descriptor."""
        print('Moving on to data preprocessing...')
        self.close()
        return True

    def close(self):
        if self.file:
            self.file.close()
            self.file = None


if __name__ == '__main__':
    TridimensionalFunctions().cmdloop()
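The fingerprint-to-XML lookup above hinges on `dict(zip(...))` pairing two identically ordered lists; a minimal check, with hypothetical file names standing in for the sorted glob results:

```python
fp_list = ['Autocorrelation3D', 'CPSA', 'RDF', 'WHIM']
# Stand-in for glob.glob(...) + sort(); real paths depend on the repo layout.
xml_files = ['Autocorrelation3D.xml', 'CPSA.xml', 'RDF.xml', 'WHIM.xml']
fp = dict(zip(fp_list, xml_files))
```

The pairing is only correct because both lists sort into the same order, which is the implicit contract the descriptor XML file names must honour.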

# falmer/events/migrations/0028_auto_20180730_1310.py (repo: sussexstudent/falmer, MIT license)
# Generated by Django 2.0.7 on 2018-07-30 12:10
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('events', '0027_auto_20180730_1235'),
    ]

    operations = [
        migrations.AddField(
            model_name='event',
            name='contains_flashing_lights',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_flashing_lights_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_loud_music',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_loud_music_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_low_light',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_low_light_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_uneven_ground',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='contains_uneven_ground_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='has_accessible_toilets',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='has_accessible_toilets_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='has_changing_facilities',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='has_changing_facilities_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='has_gender_neutral_toilets',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='has_gender_neutral_toilets_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='event',
            name='has_level_access',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='event',
            name='has_level_access_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_flashing_lights',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_flashing_lights_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_loud_music',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_loud_music_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_low_light',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_low_light_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_uneven_ground',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='contains_uneven_ground_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_accessible_toilets',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_accessible_toilets_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_changing_facilities',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_changing_facilities_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_gender_neutral_toilets',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_gender_neutral_toilets_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_level_access',
            field=models.SmallIntegerField(choices=[(0, 'Not set/inherit'), (1, 'Negative'), (2, 'Positive')], default=0),
        ),
        migrations.AddField(
            model_name='venue',
            name='has_level_access_reasoning',
            field=models.TextField(blank=True, default=''),
        ),
    ]
cdedc70eedb11e9bcb4d4eb1a16e3e9a02df1f66 | 50 | py | Python | Chapter 01/ch1_18.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 01/ch1_18.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 01/ch1_18.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | print('{0:3d}, {2:6.1f}, {1:4d} '.format(1,2,3.0)) | 50 | 50 | 0.5 | 13 | 50 | 1.923077 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234043 | 0.06 | 50 | 1 | 50 | 50 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0.490196 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
a81972a9d13873bce049590b2076f39f8078da1f | 91 | py | Python | IPython/external/simplegeneric/__init__.py | chebee7i/ipython | 85b169fa3afc3d374973295c7f1409ededddbaca | [
"BSD-3-Clause-Clear"
] | 8 | 2021-12-14T21:30:01.000Z | 2022-02-14T11:30:03.000Z | IPython/external/simplegeneric/__init__.py | chebee7i/ipython | 85b169fa3afc3d374973295c7f1409ededddbaca | [
"BSD-3-Clause-Clear"
] | 7 | 2021-02-08T20:22:15.000Z | 2022-03-11T23:19:41.000Z | IPython/external/simplegeneric/__init__.py | chebee7i/ipython | 85b169fa3afc3d374973295c7f1409ededddbaca | [
"BSD-3-Clause-Clear"
] | 2 | 2016-12-19T02:27:46.000Z | 2019-07-29T02:53:54.000Z | try:
    from simplegeneric import *
except ImportError:
    from ._simplegeneric import *

# data.py (repo: grushaprasad/neural-complexity, Apache-2.0 license)
import os
import torch
import gzip
from nltk import sent_tokenize


class Dictionary(object):
    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

    def __len__(self):
        return len(self.idx2word)
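A quick self-contained check of the word-to-index bookkeeping (the class is restated here so the snippet runs on its own):

```python
class Dictionary(object):
    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

    def __len__(self):
        return len(self.idx2word)

d = Dictionary()
first = d.add_word('<eos>')
again = d.add_word('<eos>')  # re-adding returns the existing index
second = d.add_word('the')
```

Indices are assigned in first-seen order and duplicates never grow the vocabulary, which is what the corpus tokenizers below rely on.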
class SentenceCorpus(object):
def __init__(self, path, vocab_file, testflag=False, interactflag=False,
trainfname='train.txt',
validfname='valid.txt',
testfname='test.txt'):
if not testflag:
self.dictionary = Dictionary()
self.train = self.tokenize(os.path.join(path, trainfname))
self.valid = self.tokenize_with_unks(os.path.join(path, validfname))
self.vocab_file = self.save_dict(vocab_file)
else:
if vocab_file[-3:] == 'bin':
self.load_dict(vocab_file)
else:
self.dictionary = Dictionary()
self.load_dict(vocab_file)
if not interactflag:
self.test = self.sent_tokenize_with_unks(os.path.join(path, testfname))
def save_dict(self, path):
if path[-3:] == 'bin':
# This check actually seems to be faster than passing in a binary flag
# Assume dict is binarized
import dill
with open(path, 'wb') as f:
torch.save(self.dictionary, f, pickle_module=dill)
else:
# Assume dict is plaintext
with open(path, 'w') as f:
for word in self.dictionary.idx2word:
f.write(word+'\n')
def load_dict(self, path):
assert os.path.exists(path)
if path[-3:] == 'bin':
# This check actually seems to be faster than passing in a binary flag
# Assume dict is binarized
import dill
with open(path, 'rb') as f:
fdata = torch.load(f, pickle_module=dill)
if isinstance(fdata, tuple):
                    # Compatibility with old pytorch LM saving
                    self.dictionary = fdata[3]
                else:
                    self.dictionary = fdata
else:
# Assume dict is plaintext
with open(path, 'r') as f:
for line in f:
self.dictionary.add_word(line.strip())
def tokenize(self, path):
"""Tokenizes a text file."""
assert os.path.exists(path)
# Add words to the dictionary
if path[-2:] == 'gz':
with gzip.open(path, 'rb') as f:
tokens = 0
FIRST = True
for fchunk in f.readlines():
for line in sent_tokenize(fchunk.decode("utf-8")):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
tokens += len(words)
for word in words:
self.dictionary.add_word(word)
# Tokenize file content
with gzip.open(path, 'rb') as f:
ids = torch.LongTensor(tokens)
token = 0
FIRST = True
for fchunk in f.readlines():
for line in sent_tokenize(fchunk.decode("utf-8")):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
for word in words:
ids[token] = self.dictionary.word2idx[word]
token += 1
else:
with open(path, 'r') as f:
tokens = 0
FIRST = True
for fchunk in f:
for line in sent_tokenize(fchunk):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
tokens += len(words)
for word in words:
self.dictionary.add_word(word)
# Tokenize file content
with open(path, 'r') as f:
ids = torch.LongTensor(tokens)
token = 0
FIRST = True
for fchunk in f:
for line in sent_tokenize(fchunk):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
for word in words:
ids[token] = self.dictionary.word2idx[word]
token += 1
return ids
def tokenize_with_unks(self, path):
"""Tokenizes a text file, adding unks if needed."""
assert os.path.exists(path)
if path[-2:] == 'gz':
# Determine the length of the corpus
with gzip.open(path, 'rb') as f:
tokens = 0
FIRST = True
for fchunk in f.readlines():
for line in sent_tokenize(fchunk.decode("utf-8")):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
tokens += len(words)
# Tokenize file content
with gzip.open(path, 'rb') as f:
ids = torch.LongTensor(tokens)
token = 0
FIRST = True
for fchunk in f.readlines():
for line in sent_tokenize(fchunk.decode("utf-8")):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
for word in words:
# Convert OOV to <unk>
if word not in self.dictionary.word2idx:
ids[token] = self.dictionary.add_word("<unk>")
else:
ids[token] = self.dictionary.word2idx[word]
token += 1
else:
# Determine the length of the corpus
with open(path, 'r') as f:
tokens = 0
FIRST = True
for fchunk in f:
for line in sent_tokenize(fchunk):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
tokens += len(words)
# Tokenize file content
with open(path, 'r') as f:
ids = torch.LongTensor(tokens)
token = 0
FIRST = True
for fchunk in f:
for line in sent_tokenize(fchunk):
if line.strip() == '':
# Ignore blank lines
continue
if FIRST:
words = ['<eos>'] + line.split() + ['<eos>']
FIRST = False
else:
words = line.split() + ['<eos>']
for word in words:
# Convert OOV to <unk>
if word not in self.dictionary.word2idx:
ids[token] = self.dictionary.add_word("<unk>")
else:
ids[token] = self.dictionary.word2idx[word]
token += 1
return ids
def sent_tokenize_with_unks(self, path):
"""Tokenizes a text file into sentences, adding unks if needed."""
assert os.path.exists(path)
all_ids = []
sents = []
        if path[-2:] == 'gz':
with gzip.open(path, 'rb') as f:
for fchunk in f.readlines():
for line in sent_tokenize(fchunk.decode("utf-8")):
if line.strip() == '':
# Ignore blank lines
continue
sents.append(line.strip())
words = ['<eos>'] + line.split() + ['<eos>']
tokens = len(words)
# Tokenize file content
ids = torch.LongTensor(tokens)
token = 0
for word in words:
# Convert OOV to <unk>
if word not in self.dictionary.word2idx:
ids[token] = self.dictionary.add_word("<unk>")
else:
ids[token] = self.dictionary.word2idx[word]
token += 1
all_ids.append(ids)
else:
with open(path, 'r') as f:
for fchunk in f:
for line in sent_tokenize(fchunk):
if line.strip() == '':
# Ignore blank lines
continue
sents.append(line.strip())
words = ['<eos>'] + line.split() + ['<eos>']
tokens = len(words)
# Tokenize file content
ids = torch.LongTensor(tokens)
token = 0
for word in words:
# Convert OOV to <unk>
if word not in self.dictionary.word2idx:
ids[token] = self.dictionary.add_word("<unk>")
else:
ids[token] = self.dictionary.word2idx[word]
token += 1
all_ids.append(ids)
return (sents, all_ids)
def online_tokenize_with_unks(self, line):
"""Tokenizes an input sentence, adding unks if needed."""
all_ids = []
sents = [line.strip()]
words = ['<eos>'] + line.strip().split() + ['<eos>']
tokens = len(words)
# Tokenize file content
ids = torch.LongTensor(tokens)
token = 0
for word in words:
# Convert OOV to <unk>
if word not in self.dictionary.word2idx:
ids[token] = self.dictionary.add_word("<unk>")
else:
ids[token] = self.dictionary.word2idx[word]
token += 1
all_ids.append(ids)
return (sents, all_ids)
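The core of the corpus machinery above is the bidirectional word/index mapping in `Dictionary`. A minimal, self-contained sketch of its behavior (re-declared here so it runs without torch or nltk):

```python
class Dictionary:
    """Bidirectional word <-> index mapping, as used by SentenceCorpus."""

    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        # Idempotent: re-adding an existing word returns its existing index.
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

    def __len__(self):
        return len(self.idx2word)


d = Dictionary()
first = d.add_word("<eos>")
again = d.add_word("<eos>")   # same word, same index
unk = d.add_word("<unk>")
```

Because `add_word` returns the index either way, callers such as `tokenize_with_unks` can map OOV tokens with a single `add_word("<unk>")` call.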
| 40.805281 | 87 | 0.402135 | 1,121 | 12,364 | 4.377342 | 0.115968 | 0.07418 | 0.044019 | 0.053801 | 0.787854 | 0.766456 | 0.761158 | 0.726717 | 0.711636 | 0.676584 | 0 | 0.008459 | 0.502831 | 12,364 | 302 | 88 | 40.940397 | 0.789816 | 0.081527 | 0 | 0.805668 | 0 | 0 | 0.023355 | 0 | 0 | 0 | 0 | 0 | 0.016194 | 1 | 0.040486 | false | 0 | 0.024292 | 0.004049 | 0.097166 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b572462fcfeb9a9b50e1b89304d6cee49960ab75 | 152 | py | Python | CryostatGUI/Sequence/__init__.py | Cryostat-GUI/Cryostat-GUI | 9b538ecaef4f1c0758907b9ee32d79ffd6793867 | [
"MIT"
] | 2 | 2018-11-23T15:59:19.000Z | 2019-01-28T20:18:58.000Z | CryostatGUI/Sequence/__init__.py | Cryostat-GUI/Cryostat-GUI | 9b538ecaef4f1c0758907b9ee32d79ffd6793867 | [
"MIT"
] | 54 | 2018-10-16T20:03:32.000Z | 2021-11-09T09:07:03.000Z | CryostatGUI/Sequence/__init__.py | Cryostat-GUI/Cryostat-GUI | 9b538ecaef4f1c0758907b9ee32d79ffd6793867 | [
"MIT"
] | 9 | 2018-11-04T17:37:30.000Z | 2021-05-03T21:15:33.000Z | # from Sequence import *
from .Sequence import Sequence_Thread
from .Sequence import OneShot_Thread
from .Sequence import OneShot_Thread_multichannel
| 21.714286 | 49 | 0.842105 | 19 | 152 | 6.526316 | 0.315789 | 0.387097 | 0.580645 | 0.387097 | 0.548387 | 0.548387 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 152 | 6 | 50 | 25.333333 | 0.932331 | 0.144737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
b58bfcce4c97488fc4cf45408e8f412aecd1a387 | 6,448 | py | Python | lambda/tests/test_handler.py | brentley/serverless-southwest-check-in | de97827786e37e88ece7a62a30c162c453d8962a | [
"MIT"
] | null | null | null | lambda/tests/test_handler.py | brentley/serverless-southwest-check-in | de97827786e37e88ece7a62a30c162c453d8962a | [
"MIT"
] | null | null | null | lambda/tests/test_handler.py | brentley/serverless-southwest-check-in | de97827786e37e88ece7a62a30c162c453d8962a | [
"MIT"
] | 1 | 2022-01-22T03:40:26.000Z | 2022-01-22T03:40:26.000Z | import logging
import unittest
import mock
import responses
import handler
import util
from lib import exceptions
# Prevent the handler function from logging during test runs
logging.disable(logging.CRITICAL)
class TestScheduleCheckIn(unittest.TestCase):
def setUp(self):
self.mock_event = {
'first_name': 'George',
'last_name': 'Bush',
'confirmation_number': 'ABC123',
'email': 'gwb@example.com'
}
@mock.patch('handler.email.send_confirmation')
@responses.activate
def test_schedule_check_in(self, email_mock):
expected = {
'passengers': [
{"firstName": "GEORGE", "lastName": "BUSH"}
],
'confirmation_number': 'ABC123',
'check_in_times': {
'remaining': ['2099-08-21T07:35:05-05:00'],
'next': '2099-08-17T18:50:05-05:00'
},
'email': 'gwb@example.com'
}
responses.add(
responses.GET,
'https://api-extensions.southwest.com/v1/mobile/reservations/record-locator/ABC123',
json=util.load_fixture('get_reservation'),
status=200
)
result = handler.schedule_check_in(self.mock_event, None)
assert result == expected
@mock.patch('handler.email.send_confirmation')
@responses.activate
def test_schedule_multi_passenger_check_in(self, email_mock):
expected = {
'passengers': [
{"firstName": "GEORGE", "lastName": "BUSH"},
{"firstName": "LAURA", "lastName": "BUSH"},
],
'confirmation_number': 'ABC123',
'check_in_times': {
'remaining': ['2099-05-13T15:10:05-05:00'],
'next': '2099-05-12T08:55:05-05:00'
},
'email': 'gwb@example.com'
}
responses.add(
responses.GET,
'https://api-extensions.southwest.com/v1/mobile/reservations/record-locator/ABC123',
json=util.load_fixture('get_multi_passenger_reservation'),
status=200
)
result = handler.schedule_check_in(self.mock_event, None)
assert result == expected
class TestCheckIn(unittest.TestCase):
@responses.activate
def test_multi_passenger_check_in(self):
fake_event = {
'passengers': [
{"firstName": "GEORGE", "lastName": "BUSH"},
{"firstName": "LAURA", "lastName": "BUSH"},
],
'confirmation_number': 'ABC123',
'check_in_times': {
'remaining': [],
'next': '2017-05-12T08:55:00-05:00'
},
'email': 'gwb@example.com'
}
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/reservations/'
'record-locator/ABC123/boarding-passes',
json=util.load_fixture('check_in_success'),
status=200
)
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/record-locator/'
'ABC123/operation-infos/mobile-boarding-pass/notifications',
json=util.load_fixture('email_boarding_pass'),
status=200
)
assert(handler.check_in(fake_event, None))
@responses.activate
def test_not_last_check_in(self):
fake_event = {
'passengers': [
{"firstName": "GEORGE", "lastName": "BUSH"},
{"firstName": "LAURA", "lastName": "BUSH"},
],
'confirmation_number': 'ABC123',
'check_in_times': {
'remaining': ['2017-05-13T15:10:00-05:00'],
'next': '2017-05-12T08:55:00-05:00'
},
'email': 'gwb@example.com'
}
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/reservations/'
'record-locator/ABC123/boarding-passes',
json=util.load_fixture('check_in_success'),
status=200
)
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/record-locator/'
'ABC123/operation-infos/mobile-boarding-pass/notifications',
json=util.load_fixture('email_boarding_pass'),
status=200
)
        with self.assertRaises(exceptions.NotLastCheckIn):
            handler.check_in(fake_event, None)
@responses.activate
def test_old_format_check_in(self):
fake_event = {
'first_name': "GEORGE",
'last_name': "BUSH",
'confirmation_number': 'ABC123',
'check_in_times': {
'remaining': [],
'next': '2017-05-12T08:55:00-05:00'
},
'email': 'gwb@example.com'
}
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/reservations/'
'record-locator/ABC123/boarding-passes',
json=util.load_fixture('check_in_success'),
status=200
)
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/record-locator/'
'ABC123/operation-infos/mobile-boarding-pass/notifications',
json=util.load_fixture('email_boarding_pass'),
status=200
)
assert(handler.check_in(fake_event, None))
@responses.activate
def test_cancelled_check_in(self):
fake_event = {
'first_name': "GEORGE",
'last_name': "BUSH",
'confirmation_number': 'ABC123',
'check_in_times': {
'remaining': [],
'next': '2017-05-12T08:55:00-05:00'
},
'email': 'gwb@example.com'
}
responses.add(
responses.POST,
'https://api-extensions.southwest.com/v1/mobile/reservations/'
'record-locator/ABC123/boarding-passes',
json=util.load_fixture('check_in_reservation_cancelled'),
status=404
)
        assert handler.check_in(fake_event, None) is False
@responses.activate
def test_failed_check_in(self):
pass
| 30.851675 | 96 | 0.547301 | 628 | 6,448 | 5.457006 | 0.173567 | 0.04698 | 0.028888 | 0.070908 | 0.825795 | 0.809454 | 0.801576 | 0.791946 | 0.791946 | 0.791946 | 0 | 0.056344 | 0.322891 | 6,448 | 208 | 97 | 31 | 0.728585 | 0.008995 | 0 | 0.666667 | 0 | 0.011494 | 0.345076 | 0.104431 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0.045977 | false | 0.109195 | 0.04023 | 0 | 0.097701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
b58e89e0b90dd75ee23732de92e9807f4d6602f0 | 31,479 | py | Python | timepiece/entries/migrations/0001_initial.py | icekernel/django-timepiece | 883cfcd50da3d1b411a43f3b6116342b49117ace | [
"MIT"
] | null | null | null | timepiece/entries/migrations/0001_initial.py | icekernel/django-timepiece | 883cfcd50da3d1b411a43f3b6116342b49117ace | [
"MIT"
] | null | null | null | timepiece/entries/migrations/0001_initial.py | icekernel/django-timepiece | 883cfcd50da3d1b411a43f3b6116342b49117ace | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Also includes initial migrations for crm, to avoid circular
# dependencies. See #787.
# Adding model 'UserProfile'
db.create_table('timepiece_userprofile', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.OneToOneField')(related_name='profile', unique=True, to=orm['auth.User'])),
('hours_per_week', self.gf('django.db.models.fields.DecimalField')(default=40, max_digits=8, decimal_places=2)),
))
db.send_create_signal('crm', ['UserProfile'])
# Adding model 'Attribute'
db.create_table('timepiece_attribute', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('type', self.gf('django.db.models.fields.CharField')(max_length=32)),
('label', self.gf('django.db.models.fields.CharField')(max_length=255)),
('sort_order', self.gf('django.db.models.fields.SmallIntegerField')(null=True, blank=True)),
('enable_timetracking', self.gf('django.db.models.fields.BooleanField')(default=False)),
('billable', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('crm', ['Attribute'])
# Adding unique constraint on 'Attribute', fields ['type', 'label']
db.create_unique('timepiece_attribute', ['type', 'label'])
# Adding model 'Business'
db.create_table('timepiece_business', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('short_name', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('email', self.gf('django.db.models.fields.EmailField')(max_length=75, blank=True)),
('description', self.gf('django.db.models.fields.TextField')(blank=True)),
('notes', self.gf('django.db.models.fields.TextField')(blank=True)),
('external_id', self.gf('django.db.models.fields.CharField')(max_length=32, blank=True)),
))
db.send_create_signal('crm', ['Business'])
# Adding model 'Project'
db.create_table('timepiece_project', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('tracker_url', self.gf('django.db.models.fields.CharField')(default='', max_length=255, blank=True)),
('business', self.gf('django.db.models.fields.related.ForeignKey')(related_name='new_business_projects', to=orm['crm.Business'])),
('point_person', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
('activity_group', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='activity_group', null=True, to=orm['entries.ActivityGroup'])),
('type', self.gf('django.db.models.fields.related.ForeignKey')(related_name='projects_with_type', to=orm['crm.Attribute'])),
('status', self.gf('django.db.models.fields.related.ForeignKey')(related_name='projects_with_status', to=orm['crm.Attribute'])),
('description', self.gf('django.db.models.fields.TextField')()),
))
db.send_create_signal('crm', ['Project'])
# Adding model 'RelationshipType'
db.create_table('timepiece_relationshiptype', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=255)),
))
db.send_create_signal('crm', ['RelationshipType'])
# Adding model 'ProjectRelationship'
db.create_table('timepiece_projectrelationship', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(related_name='project_relationships', to=orm['auth.User'])),
('project', self.gf('django.db.models.fields.related.ForeignKey')(related_name='project_relationships', to=orm['crm.Project'])),
))
db.send_create_signal('crm', ['ProjectRelationship'])
# Adding unique constraint on 'ProjectRelationship', fields ['user', 'project']
db.create_unique('timepiece_projectrelationship', ['user_id', 'project_id'])
# Adding M2M table for field types on 'ProjectRelationship'
db.create_table('timepiece_projectrelationship_types', (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('projectrelationship', models.ForeignKey(orm['crm.projectrelationship'], null=False)),
('relationshiptype', models.ForeignKey(orm['crm.relationshiptype'], null=False))
))
db.create_unique('timepiece_projectrelationship_types', ['projectrelationship_id', 'relationshiptype_id'])
# Adding model 'Activity'
db.create_table('timepiece_activity', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('code', self.gf('django.db.models.fields.CharField')(unique=True, max_length=5)),
('name', self.gf('django.db.models.fields.CharField')(max_length=50)),
('billable', self.gf('django.db.models.fields.BooleanField')(default=True)),
))
db.send_create_signal('entries', ['Activity'])
# Adding model 'ActivityGroup'
db.create_table('timepiece_activitygroup', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
))
db.send_create_signal('entries', ['ActivityGroup'])
# Adding M2M table for field activities on 'ActivityGroup'
db.create_table('timepiece_activitygroup_activities', (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('activitygroup', models.ForeignKey(orm['entries.activitygroup'], null=False)),
('activity', models.ForeignKey(orm['entries.activity'], null=False))
))
db.create_unique('timepiece_activitygroup_activities', ['activitygroup_id', 'activity_id'])
# Adding model 'Location'
db.create_table('timepiece_location', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
('slug', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
))
db.send_create_signal('entries', ['Location'])
# Adding model 'Entry'
db.create_table('timepiece_entry', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('user', self.gf('django.db.models.fields.related.ForeignKey')(related_name='timepiece_entries', to=orm['auth.User'])),
('project', self.gf('django.db.models.fields.related.ForeignKey')(related_name='entries', to=orm['crm.Project'])),
('activity', self.gf('django.db.models.fields.related.ForeignKey')(related_name='entries', to=orm['entries.Activity'])),
('location', self.gf('django.db.models.fields.related.ForeignKey')(related_name='entries', to=orm['entries.Location'])),
('status', self.gf('django.db.models.fields.CharField')(default='unverified', max_length=24)),
('start_time', self.gf('django.db.models.fields.DateTimeField')()),
('end_time', self.gf('django.db.models.fields.DateTimeField')(db_index=True, null=True, blank=True)),
('seconds_paused', self.gf('django.db.models.fields.PositiveIntegerField')(default=0)),
('pause_time', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
('comments', self.gf('django.db.models.fields.TextField')(blank=True)),
('date_updated', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, blank=True)),
('hours', self.gf('django.db.models.fields.DecimalField')(default=0, max_digits=8, decimal_places=2)),
))
db.send_create_signal('entries', ['Entry'])
# Adding model 'ProjectHours'
db.create_table('timepiece_projecthours', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('week_start', self.gf('django.db.models.fields.DateField')()),
('project', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['crm.Project'])),
('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
('hours', self.gf('django.db.models.fields.DecimalField')(default=0, max_digits=8, decimal_places=2)),
('published', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('entries', ['ProjectHours'])
# Adding unique constraint on 'ProjectHours', fields ['week_start', 'project', 'user']
db.create_unique('timepiece_projecthours', ['week_start', 'project_id', 'user_id'])
def backwards(self, orm):
# Removing unique constraint on 'ProjectHours', fields ['week_start', 'project', 'user']
db.delete_unique('timepiece_projecthours', ['week_start', 'project_id', 'user_id'])
# Deleting model 'Activity'
db.delete_table('timepiece_activity')
# Deleting model 'ActivityGroup'
db.delete_table('timepiece_activitygroup')
# Removing M2M table for field activities on 'ActivityGroup'
db.delete_table('timepiece_activitygroup_activities')
# Deleting model 'Location'
db.delete_table('timepiece_location')
# Deleting model 'Entry'
db.delete_table('timepiece_entry')
# Deleting model 'ProjectHours'
db.delete_table('timepiece_projecthours')
# Removing unique constraint on 'ProjectRelationship', fields ['user', 'project']
db.delete_unique('timepiece_projectrelationship', ['user_id', 'project_id'])
# Removing unique constraint on 'Attribute', fields ['type', 'label']
db.delete_unique('timepiece_attribute', ['type', 'label'])
# Deleting model 'UserProfile'
db.delete_table('timepiece_userprofile')
# Deleting model 'Attribute'
db.delete_table('timepiece_attribute')
# Deleting model 'Business'
db.delete_table('timepiece_business')
# Deleting model 'Project'
db.delete_table('timepiece_project')
# Deleting model 'RelationshipType'
db.delete_table('timepiece_relationshiptype')
# Deleting model 'ProjectRelationship'
db.delete_table('timepiece_projectrelationship')
# Removing M2M table for field types on 'ProjectRelationship'
db.delete_table('timepiece_projectrelationship_types')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'crm.attribute': {
'Meta': {'ordering': "('sort_order',)", 'unique_together': "(('type', 'label'),)", 'object_name': 'Attribute', 'db_table': "'timepiece_attribute'"},
'billable': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'enable_timetracking': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'label': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'sort_order': ('django.db.models.fields.SmallIntegerField', [], {'null': 'True', 'blank': 'True'}),
'type': ('django.db.models.fields.CharField', [], {'max_length': '32'})
},
'crm.business': {
'Meta': {'ordering': "('name',)", 'object_name': 'Business', 'db_table': "'timepiece_business'"},
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'external_id': ('django.db.models.fields.CharField', [], {'max_length': '32', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'notes': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'short_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
'crm.project': {
'Meta': {'ordering': "('name', 'status', 'type')", 'object_name': 'Project', 'db_table': "'timepiece_project'"},
'activity_group': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'activity_group'", 'null': 'True', 'to': "orm['entries.ActivityGroup']"}),
'business': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'new_business_projects'", 'to': "orm['crm.Business']"}),
'description': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'point_person': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'status': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_with_status'", 'to': "orm['crm.Attribute']"}),
'tracker_url': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_with_type'", 'to': "orm['crm.Attribute']"}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'user_projects'", 'symmetrical': 'False', 'through': "orm['crm.ProjectRelationship']", 'to': "orm['auth.User']"})
},
'crm.projectrelationship': {
'Meta': {'unique_together': "(('user', 'project'),)", 'object_name': 'ProjectRelationship', 'db_table': "'timepiece_projectrelationship'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'project_relationships'", 'to': "orm['crm.Project']"}),
'types': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'project_relationships'", 'blank': 'True', 'to': "orm['crm.RelationshipType']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'project_relationships'", 'to': "orm['auth.User']"})
},
'crm.relationshiptype': {
'Meta': {'object_name': 'RelationshipType', 'db_table': "'timepiece_relationshiptype'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'})
},
'entries.activity': {
'Meta': {'ordering': "('name',)", 'object_name': 'Activity', 'db_table': "'timepiece_activity'"},
'billable': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'code': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '5'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'entries.activitygroup': {
'Meta': {'object_name': 'ActivityGroup', 'db_table': "'timepiece_activitygroup'"},
'activities': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'activity_group'", 'symmetrical': 'False', 'to': "orm['entries.Activity']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'entries.entry': {
'Meta': {'ordering': "('-start_time',)", 'object_name': 'Entry', 'db_table': "'timepiece_entry'"},
'activity': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'entries'", 'to': "orm['entries.Activity']"}),
'comments': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'end_time': ('django.db.models.fields.DateTimeField', [], {'db_index': 'True', 'null': 'True', 'blank': 'True'}),
'hours': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '8', 'decimal_places': '2'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'entries'", 'to': "orm['entries.Location']"}),
'pause_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'entries'", 'to': "orm['crm.Project']"}),
'seconds_paused': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'start_time': ('django.db.models.fields.DateTimeField', [], {}),
'status': ('django.db.models.fields.CharField', [], {'default': "'unverified'", 'max_length': '24'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'timepiece_entries'", 'to': "orm['auth.User']"})
},
'entries.location': {
'Meta': {'object_name': 'Location', 'db_table': "'timepiece_location'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'slug': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'entries.projecthours': {
'Meta': {'unique_together': "(('week_start', 'project', 'user'),)", 'object_name': 'ProjectHours', 'db_table': "'timepiece_projecthours'"},
'hours': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '8', 'decimal_places': '2'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['crm.Project']"}),
'published': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'week_start': ('django.db.models.fields.DateField', [], {})
},
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'crm.attribute': {
'Meta': {'ordering': "('sort_order',)", 'unique_together': "(('type', 'label'),)", 'object_name': 'Attribute', 'db_table': "'timepiece_attribute'"},
'billable': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'enable_timetracking': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'label': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'sort_order': ('django.db.models.fields.SmallIntegerField', [], {'null': 'True', 'blank': 'True'}),
'type': ('django.db.models.fields.CharField', [], {'max_length': '32'})
},
'crm.business': {
'Meta': {'ordering': "('name',)", 'object_name': 'Business', 'db_table': "'timepiece_business'"},
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'external_id': ('django.db.models.fields.CharField', [], {'max_length': '32', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'notes': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'short_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
'crm.project': {
'Meta': {'ordering': "('name', 'status', 'type')", 'object_name': 'Project', 'db_table': "'timepiece_project'"},
'activity_group': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'activity_group'", 'null': 'True', 'to': "orm['entries.ActivityGroup']"}),
'business': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'new_business_projects'", 'to': "orm['crm.Business']"}),
'description': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'point_person': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'status': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_with_status'", 'to': "orm['crm.Attribute']"}),
'tracker_url': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects_with_type'", 'to': "orm['crm.Attribute']"}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'user_projects'", 'symmetrical': 'False', 'through': "orm['crm.ProjectRelationship']", 'to': "orm['auth.User']"})
},
'crm.projectrelationship': {
'Meta': {'unique_together': "(('user', 'project'),)", 'object_name': 'ProjectRelationship', 'db_table': "'timepiece_projectrelationship'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'project_relationships'", 'to': "orm['crm.Project']"}),
'types': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'project_relationships'", 'blank': 'True', 'to': "orm['crm.RelationshipType']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'project_relationships'", 'to': "orm['auth.User']"})
},
'crm.relationshiptype': {
'Meta': {'object_name': 'RelationshipType', 'db_table': "'timepiece_relationshiptype'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'})
},
'crm.userprofile': {
'Meta': {'object_name': 'UserProfile', 'db_table': "'timepiece_userprofile'"},
'hours_per_week': ('django.db.models.fields.DecimalField', [], {'default': '40', 'max_digits': '8', 'decimal_places': '2'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'profile'", 'unique': 'True', 'to': "orm['auth.User']"})
},
'entries.activity': {
'Meta': {'ordering': "('name',)", 'object_name': 'Activity', 'db_table': "'timepiece_activity'"},
'billable': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'code': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '5'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'entries.activitygroup': {
'Meta': {'object_name': 'ActivityGroup', 'db_table': "'timepiece_activitygroup'"},
'activities': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'activity_group'", 'symmetrical': 'False', 'to': "orm['entries.Activity']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
}
}
complete_apps = ['entries', 'crm']
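The `models` dict above uses South's frozen-ORM convention: each field is a triple of (dotted field-class path, positional args, keyword args), with every kwarg value stored as a string repr. As a hedged sketch (not part of the migration itself, and the function name is my own), the triples can be decoded back into a class name and typed kwargs without importing Django:

```python
import ast

def decode_frozen_field(spec):
    """Decode a South-style frozen field triple.

    spec is (dotted_class_path, positional_args, kwargs_of_string_reprs),
    e.g. ('django.db.models.fields.CharField', [], {'max_length': '255'}).
    """
    path, args, kwargs = spec
    cls_name = path.rsplit('.', 1)[-1]  # 'CharField'
    typed = {}
    for key, raw in kwargs.items():
        try:
            # 'True' -> True, '255' -> 255, "'unverified'" -> 'unverified'
            typed[key] = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            # non-literal reprs like "orm['auth.User']" stay as strings
            typed[key] = raw
    return cls_name, typed

name, kw = decode_frozen_field(
    ('django.db.models.fields.CharField', [],
     {'unique': 'True', 'max_length': '255'})
)
```

This is only an illustration of the frozen format; South itself resolves the dotted path to the real field class and rebuilds `orm[...]` references against the frozen model graph.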
# files/runs_small/cores_8/ocean.cont/power.py
# (repo: ST4NSB/sniper-simulator-predictions, MIT license)
power = {'BUSES': {'Area': 3.70399,
'Bus/Area': 3.70399,
'Bus/Gate Leakage': 0.00993673,
'Bus/Peak Dynamic': 2.39417,
'Bus/Runtime Dynamic': 0.294528,
'Bus/Subthreshold Leakage': 0.103619,
'Bus/Subthreshold Leakage with power gating': 0.0388573,
'Gate Leakage': 0.00993673,
'Peak Dynamic': 2.39417,
'Runtime Dynamic': 0.294528,
'Subthreshold Leakage': 0.103619,
'Subthreshold Leakage with power gating': 0.0388573},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.138543,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.311506,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.23702,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.848216,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.507633,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.879037,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.567015,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.95368,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.312121,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.433916,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.41358,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.233699,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0184021,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.165062,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.136095,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.398761,
'Execution Unit/Register Files/Runtime Dynamic': 0.154497,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.436813,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.24336,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.94518,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00126781,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00126781,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00109828,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000421887,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00195501,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00558891,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0123696,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.130831,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.318147,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.444362,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.911299,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.225889,
'L2/Runtime Dynamic': 0.0744946,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.98831,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.88637,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.12136,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.12136,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.56373,
'Load Store Unit/Runtime Dynamic': 2.60623,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.299253,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.598507,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.106206,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.109576,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0518135,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.742103,
'Memory Management Unit/Runtime Dynamic': 0.161389,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.4757,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.815324,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0357686,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.247441,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.09853,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.79713,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.137315,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.310542,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.22795,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.844228,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.503011,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.871033,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.562292,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.93633,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.308941,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.431207,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.39225,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.231987,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0182345,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.16349,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.134856,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.395477,
'Execution Unit/Register Files/Runtime Dynamic': 0.15309,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.432678,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.23352,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.90893,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.0012457,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.0012457,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00107925,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000414643,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00193721,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00550786,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0121495,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.12964,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.316105,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.440316,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.903719,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.224983,
'L2/Runtime Dynamic': 0.0745787,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.96369,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.87486,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.120563,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.120563,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.53533,
'Load Store Unit/Runtime Dynamic': 2.59,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.297289,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.594577,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.105509,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.108866,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.051493,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.740898,
'Memory Management Unit/Runtime Dynamic': 0.160359,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.4239,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.809348,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0354603,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.245172,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.08998,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.72756,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.13904,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.311897,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.23962,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.849363,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.507287,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.878438,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.566765,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.95249,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.311381,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.433475,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.41681,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.234192,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0183896,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.165162,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.136002,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.399353,
'Execution Unit/Register Files/Runtime Dynamic': 0.154392,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.43719,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.24386,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.94548,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00125867,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00125867,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00109027,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000418763,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00195368,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00556129,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0122835,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.130742,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.318605,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.444059,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.911251,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.222759,
'L2/Runtime Dynamic': 0.073456,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.98588,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.88361,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.121281,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.121281,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.56093,
'Load Store Unit/Runtime Dynamic': 2.60301,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.299059,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.598118,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.106137,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.109457,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0518486,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.741984,
'Memory Management Unit/Runtime Dynamic': 0.161306,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.4729,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.817042,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0357716,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.247217,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.10003,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.79453,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.137641,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.310798,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.22751,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.844032,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.501543,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.868492,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.560915,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.93095,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.307561,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.430183,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.38998,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.231902,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0181813,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.163318,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.134462,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.39522,
'Execution Unit/Register Files/Runtime Dynamic': 0.152644,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.432351,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.23158,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.90019,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00123458,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00123458,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00106953,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.00041087,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00193156,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00547026,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0120437,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.129262,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.317,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.439032,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.902808,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.22253,
'L2/Runtime Dynamic': 0.0736449,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.95058,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.86639,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.120139,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.120139,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.52022,
'Load Store Unit/Runtime Dynamic': 2.57902,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.296243,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.592486,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.105138,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.108456,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0516047,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.740256,
'Memory Management Unit/Runtime Dynamic': 0.16006,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.4034,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.809054,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0353817,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.244406,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.08884,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.70456,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.138881,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.311772,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.23664,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.84805,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.505995,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.8762,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.565449,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.94764,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.310559,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.432735,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.41105,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.233628,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0183427,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.164849,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.135656,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.398476,
'Execution Unit/Register Files/Runtime Dynamic': 0.153998,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.43639,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.24114,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.93534,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00125509,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00125509,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.0010872,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000417599,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0019487,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00554608,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0122475,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.130409,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.318805,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.442928,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.909935,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.222789,
'L2/Runtime Dynamic': 0.0734579,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.97325,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.87726,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.120873,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.120873,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.54636,
'Load Store Unit/Runtime Dynamic': 2.59423,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.298052,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.596103,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.105779,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.109101,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0518998,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.741366,
'Memory Management Unit/Runtime Dynamic': 0.161001,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.452,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.815074,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0356818,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.246584,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.09734,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.7713,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.137315,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.310542,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.22795,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.844228,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.502978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.870976,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.562258,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.93621,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.308911,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.431177,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.39219,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.231987,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0182333,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.163482,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.134847,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.395469,
'Execution Unit/Register Files/Runtime Dynamic': 0.15308,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.432659,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.23346,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.9087,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00124552,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00124552,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00107905,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000414548,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00193708,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00550717,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.012149,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.129632,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.31606,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.440288,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.903635,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.224581,
'L2/Runtime Dynamic': 0.0744802,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.96344,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.87455,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.120555,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.120555,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.53505,
'Load Store Unit/Runtime Dynamic': 2.58965,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.297269,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.594538,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.105502,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.108853,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.05149,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.740886,
'Memory Management Unit/Runtime Dynamic': 0.160343,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.4231,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.809348,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0354586,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.245154,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.08996,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.72677,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.138543,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.311506,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.23702,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.848216,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.507452,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.878724,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.56682,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.953,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.311944,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.433762,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.41322,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.233699,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0183955,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.165015,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.136046,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.398715,
'Execution Unit/Register Files/Runtime Dynamic': 0.154442,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.4367,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.24305,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.94397,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00126617,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00126617,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00109684,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000421328,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00195432,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00558349,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0123539,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.130785,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.317933,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.444204,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.91086,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.225344,
'L2/Runtime Dynamic': 0.0747545,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.98679,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.88625,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.121311,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.121311,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.56198,
'Load Store Unit/Runtime Dynamic': 2.60583,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.299132,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.598263,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.106163,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.109525,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0517893,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.742029,
'Memory Management Unit/Runtime Dynamic': 0.161315,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.473,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.815324,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0357593,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.247347,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.09843,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.79516,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.137789,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.310914,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.23038,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.845298,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.502795,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.87066,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.562192,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.93565,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.308357,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.4309,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.39551,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.232446,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0182267,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.163616,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.134798,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.396062,
'Execution Unit/Register Files/Runtime Dynamic': 0.153025,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.433113,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.23421,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.90999,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00123847,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00123847,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00107294,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000412198,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00193638,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00548627,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0120805,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.129585,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.316631,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.440127,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.90391,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.222754,
'L2/Runtime Dynamic': 0.0738638,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.96268,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.8728,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.120531,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.120531,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.53417,
'Load Store Unit/Runtime Dynamic': 2.58774,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.297208,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.594417,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.10548,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.108802,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0515461,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.740848,
'Memory Management Unit/Runtime Dynamic': 0.160348,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.4237,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.810952,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0354686,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.245018,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.09144,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.7273,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 2.1272279653105817,
'Runtime Dynamic': 2.1272279653105817,
'Subthreshold Leakage': 8.504,
'Subthreshold Leakage with power gating': 8.504},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.747029,
'Runtime Dynamic': 0.498578,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364},
{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.747965,
'Runtime Dynamic': 0.499268,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 388.384,
'Gate Leakage': 3.09074,
'Peak Dynamic': 223.437,
'Peak Power': 289.983,
'Runtime Dynamic': 79.3367,
'Subthreshold Leakage': 63.4555,
'Subthreshold Leakage with power gating': 28.0572,
'Total Cores/Area': 260.865,
'Total Cores/Gate Leakage': 2.98397,
'Total Cores/Peak Dynamic': 219.548,
'Total Cores/Runtime Dynamic': 78.0443,
'Total Cores/Subthreshold Leakage': 49.7502,
'Total Cores/Subthreshold Leakage with power gating': 20.6649,
'Total L3s/Area': 123.815,
'Total L3s/Gate Leakage': 0.0968273,
'Total L3s/Peak Dynamic': 1.49499,
'Total L3s/Runtime Dynamic': 0.997846,
'Total L3s/Subthreshold Leakage': 13.6017,
'Total L3s/Subthreshold Leakage with power gating': 6.64728,
'Total Leakage': 66.5462,
'Total NoCs/Area': 3.70399,
'Total NoCs/Gate Leakage': 0.00993673,
'Total NoCs/Peak Dynamic': 2.39417,
'Total NoCs/Runtime Dynamic': 0.294528,
'Total NoCs/Subthreshold Leakage': 0.103619,
'Total NoCs/Subthreshold Leakage with power gating': 0.0388573}}
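The dictionaries above encode the component hierarchy in their keys, using `/` as a separator (e.g. `'Execution Unit/Integer ALUs/Runtime Dynamic'`). A small sketch of how such a flat dict can be queried — the sample values are copied from the core stats above, and the helper name `top_level_runtime` is ours, not part of any McPAT tooling:

```python
# Sketch: extract per-component "Runtime Dynamic" values from a flat,
# slash-delimited stats dict like the ones above. Only depth-1 components
# (one path segment before the metric name) are kept.

core = {
    'Runtime Dynamic': 9.7273,                                  # whole-core total
    'Execution Unit/Runtime Dynamic': 4.90999,
    'Execution Unit/Integer ALUs/Runtime Dynamic': 0.4309,      # depth 2: skipped
    'Load Store Unit/Runtime Dynamic': 2.58774,
    'L2/Runtime Dynamic': 0.0738638,
}

def top_level_runtime(stats):
    """Return {component: runtime dynamic power} for depth-1 components only."""
    out = {}
    for key, value in stats.items():
        parts = key.split('/')
        # Keep keys shaped "<component>/Runtime Dynamic"; deeper or
        # top-level totals fall through.
        if len(parts) == 2 and parts[1] == 'Runtime Dynamic':
            out[parts[0]] = value
    return out

print(top_level_runtime(core))
```

The same pattern applies to any of the other metrics (`Area`, `Gate Leakage`, `Subthreshold Leakage`) by swapping the metric name in the comparison.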
a988ec389cdb72482263d56d3d5b502e906bf5e8 | 86 | py | Python | contrib/python/numpy/numpy/core/umath/__init__.py | HeyLey/catboost | f472aed90604ebe727537d9d4a37147985e10ec2 | ["Apache-2.0"] | 1 | 2019-01-26T02:58:50.000Z | 2019-01-26T02:58:50.000Z | contrib/python/numpy/numpy/core/umath/__init__.py | HeyLey/catboost | f472aed90604ebe727537d9d4a37147985e10ec2 | ["Apache-2.0"] | null | null | null | contrib/python/numpy/numpy/core/umath/__init__.py | HeyLey/catboost | f472aed90604ebe727537d9d4a37147985e10ec2 | ["Apache-2.0"] | 1 | 2018-08-06T14:13:12.000Z | 2018-08-06T14:13:12.000Z |
from umath import *
from umath import _add_newdoc_ufunc
from umath import _UFUNC_API
a98a2b31aca8e80c00be770bd08826bb78a01be3 | 146 | py | Python | discord/types/interactions.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | ["MIT"] | null | null | null | discord/types/interactions.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | ["MIT"] | null | null | null | discord/types/interactions.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | ["MIT"] | null | null | null |
from disnake.types.interactions import *
from disnake.types.interactions import __dict__ as __original_dict__
locals().update(__original_dict__)