hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
872e4d3f25a9f2f42aba1b2c436ff6316b6a3b2a | 122 | py | Python | src/modules/podcast/tasks/__init__.py | DmitryBurnaev/podcast-service | 53349a3f9aed22a8024d0c83380f9a02464962a3 | [
"MIT"
] | 5 | 2021-07-01T16:31:29.000Z | 2022-01-29T14:32:13.000Z | src/modules/podcast/tasks/__init__.py | DmitryBurnaev/podcast-service | 53349a3f9aed22a8024d0c83380f9a02464962a3 | [
"MIT"
] | 45 | 2020-10-25T19:41:26.000Z | 2022-03-25T06:31:58.000Z | src/modules/podcast/tasks/__init__.py | DmitryBurnaev/podcast-service | 53349a3f9aed22a8024d0c83380f9a02464962a3 | [
"MIT"
] | 1 | 2022-01-27T11:30:07.000Z | 2022-01-27T11:30:07.000Z | from .base import RQTask # noqa:F401,F403
from .download import * # noqa:F401,F403
from .rss import * # noqa:F401,F403
| 30.5 | 42 | 0.713115 | 19 | 122 | 4.578947 | 0.473684 | 0.275862 | 0.413793 | 0.367816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178218 | 0.172131 | 122 | 3 | 43 | 40.666667 | 0.683168 | 0.360656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
8738d09a63ab592e8cdb707575b3d4abdc0bf6ce | 81 | py | Python | fabricchef/tasks/__init__.py | crossroad0201/fabric-chef | e9537c23958993c436a84f0936b132aff97c7653 | [
"MIT"
] | null | null | null | fabricchef/tasks/__init__.py | crossroad0201/fabric-chef | e9537c23958993c436a84f0936b132aff97c7653 | [
"MIT"
] | null | null | null | fabricchef/tasks/__init__.py | crossroad0201/fabric-chef | e9537c23958993c436a84f0936b132aff97c7653 | [
"MIT"
] | null | null | null | # TODO Add cookbook tasks.
# TODO Add databag tasks.
# TODO Add node.apply task.
| 20.25 | 27 | 0.728395 | 13 | 81 | 4.538462 | 0.615385 | 0.355932 | 0.40678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 81 | 3 | 28 | 27 | 0.893939 | 0.91358 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.333333 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
87732043023b6787e670cf55fe984cfe291488d7 | 5,860 | py | Python | src/tests/test_c3s.py | kjhall-iri/cpt-tools | 7c0a43c3332e6c51253fe4a530c47a2b839d6075 | [
"MIT"
] | null | null | null | src/tests/test_c3s.py | kjhall-iri/cpt-tools | 7c0a43c3332e6c51253fe4a530c47a2b839d6075 | [
"MIT"
] | null | null | null | src/tests/test_c3s.py | kjhall-iri/cpt-tools | 7c0a43c3332e6c51253fe4a530c47a2b839d6075 | [
"MIT"
] | null | null | null | import pytest
from .. import *
def make_args(model):
predictor_domain = Geo(20,50, -110, -70)
attrs = [recursive_getattr(model, i) for i in model.walk() ]
args = [(predictor_domain, attrs[i]) for i in range(len(attrs))]
return args
@pytest.mark.SEASONAL
@pytest.mark.C3S_SPSV3P5
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.SPSv3p5))
def test_spsv3p5_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.C3S_SPSV3P5
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.SPSv3p5))
def test_spsv3p5_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_SPSV3P0
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.SPSv3p0))
def test_spsv3_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_SPSV3P0
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.xfail
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.SPSv3p0))
def test_spsv3_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GCFS2P1
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GCFS2p1))
def test_gcfs2p1_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GCFS2P1
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GCFS2p1))
def test_gcfs2p1_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GCFS2P0
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GCFS2p0))
def test_gcfs2p0_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GCFS2P0
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.xfail
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GCFS2p0))
def test_gcfs2p0_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_SEAS5
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.SEAS5))
def test_seas5_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_SEAS5
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.SEAS5))
def test_seas5_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_CPS2
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.CPS2))
def test_cps2_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_CPS2
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.CPS2))
def test_cps2_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_METEOFRANCE7
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.METEOFRANCE7))
def test_mf7_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_METEOFRANCE7
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.xfail
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.METEOFRANCE7))
def test_mf7_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_METEOFRANCE8
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.xfail
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.METEOFRANCE8))
def test_mf8_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_METEOFRANCE8
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.xfail
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.METEOFRANCE8))
def test_mf8_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GLOSEA5
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GLOSEA5))
def test_glosea5_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GLOSEA5
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.xfail
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GLOSEA5))
def test_glosea5_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GLOSEA6
@pytest.mark.hindcast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GLOSEA6))
def test_glosea6_hcst(predictor_domain, entry):
entry.hindcasts(predictor_domain, target='Jun-Sep')
@pytest.mark.SEASONAL
@pytest.mark.C3S_GLOSEA6
@pytest.mark.forecast
@pytest.mark.C3S
@pytest.mark.parametrize('predictor_domain,entry', make_args(SEASONAL.C3S.GLOSEA6))
def test_glosea6_fcst(predictor_domain, entry):
entry.forecasts(predictor_domain, target='Jun-Sep') | 32.555556 | 88 | 0.799488 | 816 | 5,860 | 5.564951 | 0.064951 | 0.233429 | 0.114512 | 0.105704 | 0.964545 | 0.964545 | 0.953975 | 0.953975 | 0.945386 | 0.945386 | 0 | 0.027565 | 0.065188 | 5,860 | 180 | 89 | 32.555556 | 0.801387 | 0 | 0 | 0.823529 | 0 | 0 | 0.098959 | 0.075073 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137255 | false | 0 | 0.013072 | 0 | 0.156863 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5e7b71d091919fb41bfbc0d915999bdc5196afc0 | 5,660 | py | Python | day_14/test_14.py | allisonhonold/advent_of_code | a0f19a7b23e3112698944512efa2d23794c02a82 | [
"MIT"
] | null | null | null | day_14/test_14.py | allisonhonold/advent_of_code | a0f19a7b23e3112698944512efa2d23794c02a82 | [
"MIT"
] | null | null | null | day_14/test_14.py | allisonhonold/advent_of_code | a0f19a7b23e3112698944512efa2d23794c02a82 | [
"MIT"
] | null | null | null | from d14 import parse_rxns, calc_cost
from d14_2 import search
import math
def test_parse_rxns():
lines = ['10 ORE => 10 A', '1 ORE => 1 B', '7 A, 1 B => 1 C',
'7 A, 1 C => 1 D', '7 A, 1 D => 1 E', '7 A, 1 E => 1 FUEL']
elems = parse_rxns(lines)
assert elems['FUEL']['n_out'] == 1
assert elems['FUEL']['inputs'] == [[7, 'A'], [1, 'E']]
def test_calc_cost():
lines = ['157 ORE => 5 NZVS',
'165 ORE => 6 DCFZ',
'44 XJWVT, 5 KHKGT, 1 QDVJ, 29 NZVS, 9 GPVTF, 48 HKGWZ => 1 FUEL',
'12 HKGWZ, 1 GPVTF, 8 PSHF => 9 QDVJ',
'179 ORE => 7 PSHF',
'177 ORE => 5 HKGWZ',
'7 DCFZ, 7 PSHF => 2 XJWVT',
'165 ORE => 2 GPVTF',
'3 DCFZ, 7 NZVS, 5 HKGWZ, 10 PSHF => 8 KHKGT']
elements = parse_rxns(lines)
assert calc_cost(elements) == 13312
def test_calc_cost2():
lines = [
'2 VPVL, 7 FWMGM, 2 CXFTF, 11 MNCFX => 1 STKFG',
'17 NVRVD, 3 JNWZP => 8 VPVL',
'53 STKFG, 6 MNCFX, 46 VJHF, 81 HVMC, 68 CXFTF, 25 GNMV => 1 FUEL',
'22 VJHF, 37 MNCFX => 5 FWMGM',
'139 ORE => 4 NVRVD',
'144 ORE => 7 JNWZP',
'5 MNCFX, 7 RFSQX, 2 FWMGM, 2 VPVL, 19 CXFTF => 3 HVMC',
'5 VJHF, 7 MNCFX, 9 VPVL, 37 CXFTF => 6 GNMV',
'145 ORE => 6 MNCFX',
'1 NVRVD => 8 CXFTF',
'1 VJHF, 6 MNCFX => 4 RFSQX',
'176 ORE => 6 VJHF',
]
elements = parse_rxns(lines)
assert calc_cost(elements) == 180697
def test_calc_cost3():
lines = [
'171 ORE => 8 CNZTR',
'7 ZLQW, 3 BMBT, 9 XCVML, 26 XMNCP, 1 WPTQ, 2 MZWV, 1 RJRHP => 4 PLWSL',
'114 ORE => 4 BHXH',
'14 VRPVC => 6 BMBT',
'6 BHXH, 18 KTJDG, 12 WPTQ, 7 PLWSL, 31 FHTLT, 37 ZDVW => 1 FUEL',
'6 WPTQ, 2 BMBT, 8 ZLQW, 18 KTJDG, 1 XMNCP, 6 MZWV, 1 RJRHP => 6 FHTLT',
'15 XDBXC, 2 LTCX, 1 VRPVC => 6 ZLQW',
'13 WPTQ, 10 LTCX, 3 RJRHP, 14 XMNCP, 2 MZWV, 1 ZLQW => 1 ZDVW',
'5 BMBT => 4 WPTQ',
'189 ORE => 9 KTJDG',
'1 MZWV, 17 XDBXC, 3 XCVML => 2 XMNCP',
'12 VRPVC, 27 CNZTR => 2 XDBXC',
'15 KTJDG, 12 BHXH => 5 XCVML',
'3 BHXH, 2 VRPVC => 7 MZWV',
'121 ORE => 7 VRPVC',
'7 XCVML => 6 RJRHP',
'5 BHXH, 4 VRPVC => 5 LTCX'
]
elements = parse_rxns(lines)
assert calc_cost(elements) == 2210736
def test_search():
lines = ['157 ORE => 5 NZVS',
'165 ORE => 6 DCFZ',
'44 XJWVT, 5 KHKGT, 1 QDVJ, 29 NZVS, 9 GPVTF, 48 HKGWZ => 1 FUEL',
'12 HKGWZ, 1 GPVTF, 8 PSHF => 9 QDVJ',
'179 ORE => 7 PSHF',
'177 ORE => 5 HKGWZ',
'7 DCFZ, 7 PSHF => 2 XJWVT',
'165 ORE => 2 GPVTF',
'3 DCFZ, 7 NZVS, 5 HKGWZ, 10 PSHF => 8 KHKGT']
ore_per_fuel = 13312
# parse input into reactions
elements = parse_rxns(lines)
ore_input = 1000000000000
min_fuel = math.floor(ore_input/ore_per_fuel)
max_fuel = min_fuel * 1.2
assert search(elements, max_fuel, min_fuel, ore_input) == 82892753
def test_search_2():
lines = [
'2 VPVL, 7 FWMGM, 2 CXFTF, 11 MNCFX => 1 STKFG',
'17 NVRVD, 3 JNWZP => 8 VPVL',
'53 STKFG, 6 MNCFX, 46 VJHF, 81 HVMC, 68 CXFTF, 25 GNMV => 1 FUEL',
'22 VJHF, 37 MNCFX => 5 FWMGM',
'139 ORE => 4 NVRVD',
'144 ORE => 7 JNWZP',
'5 MNCFX, 7 RFSQX, 2 FWMGM, 2 VPVL, 19 CXFTF => 3 HVMC',
'5 VJHF, 7 MNCFX, 9 VPVL, 37 CXFTF => 6 GNMV',
'145 ORE => 6 MNCFX',
'1 NVRVD => 8 CXFTF',
'1 VJHF, 6 MNCFX => 4 RFSQX',
'176 ORE => 6 VJHF',
]
ore_per_fuel = 180697
# parse input into reactions
elements = parse_rxns(lines)
ore_input = 1000000000000
min_fuel = math.floor(ore_input/ore_per_fuel)
max_fuel = min_fuel * 1.2
assert search(elements, max_fuel, min_fuel, ore_input) == 5586022
def test_search_3():
lines = [
'171 ORE => 8 CNZTR',
'7 ZLQW, 3 BMBT, 9 XCVML, 26 XMNCP, 1 WPTQ, 2 MZWV, 1 RJRHP => 4 PLWSL',
'114 ORE => 4 BHXH',
'14 VRPVC => 6 BMBT',
'6 BHXH, 18 KTJDG, 12 WPTQ, 7 PLWSL, 31 FHTLT, 37 ZDVW => 1 FUEL',
'6 WPTQ, 2 BMBT, 8 ZLQW, 18 KTJDG, 1 XMNCP, 6 MZWV, 1 RJRHP => 6 FHTLT',
'15 XDBXC, 2 LTCX, 1 VRPVC => 6 ZLQW',
'13 WPTQ, 10 LTCX, 3 RJRHP, 14 XMNCP, 2 MZWV, 1 ZLQW => 1 ZDVW',
'5 BMBT => 4 WPTQ',
'189 ORE => 9 KTJDG',
'1 MZWV, 17 XDBXC, 3 XCVML => 2 XMNCP',
'12 VRPVC, 27 CNZTR => 2 XDBXC',
'15 KTJDG, 12 BHXH => 5 XCVML',
'3 BHXH, 2 VRPVC => 7 MZWV',
'121 ORE => 7 VRPVC',
'7 XCVML => 6 RJRHP',
'5 BHXH, 4 VRPVC => 5 LTCX'
]
ore_per_fuel = 2210736
# parse input into reactions
elements = parse_rxns(lines)
ore_input = 1000000000000
min_fuel = math.floor(ore_input/ore_per_fuel)
max_fuel = min_fuel * 1.2
assert search(elements, max_fuel, min_fuel, ore_input) == 460664 | 37.483444 | 88 | 0.465018 | 785 | 5,660 | 3.272611 | 0.151592 | 0.03153 | 0.043597 | 0.051382 | 0.859478 | 0.859478 | 0.859478 | 0.859478 | 0.808097 | 0.808097 | 0 | 0.15593 | 0.413074 | 5,660 | 151 | 89 | 37.483444 | 0.617399 | 0.014134 | 0 | 0.772358 | 0 | 0.03252 | 0.446835 | 0 | 0 | 0 | 0 | 0 | 0.065041 | 1 | 0.056911 | false | 0 | 0.02439 | 0 | 0.081301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5eb237f47b23ad27a20ef58ef8c156714c9310b5 | 520 | py | Python | CommandLineTools/updateCLT.py | MooersLab/jupyterlabcctbxsnipsplus | 80a380046adcc9b16581ed1681884017514edbb7 | [
"MIT"
] | null | null | null | CommandLineTools/updateCLT.py | MooersLab/jupyterlabcctbxsnipsplus | 80a380046adcc9b16581ed1681884017514edbb7 | [
"MIT"
] | null | null | null | CommandLineTools/updateCLT.py | MooersLab/jupyterlabcctbxsnipsplus | 80a380046adcc9b16581ed1681884017514edbb7 | [
"MIT"
] | null | null | null | # Description: Update the command line tools for Xcode on Mac OS X.
# Source: NA
"""
sudo touch /tmp/.com.apple.dt.CommandLineTools.installondemand.in-progress
softwareupdate -l
# Update command line tools via software update.
sudo rm /tmp/.com.apple.dt.CommandLineTools.installondemand.in-progress""" | 40 | 74 | 0.796154 | 72 | 520 | 5.75 | 0.402778 | 0.057971 | 0.10628 | 0.125604 | 0.855072 | 0.855072 | 0.855072 | 0.855072 | 0.855072 | 0.855072 | 0 | 0 | 0.101923 | 520 | 13 | 75 | 40 | 0.88651 | 0.242308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5ebfc5928b513e230647799568f78d35e3a88de7 | 1,573 | py | Python | test/statements/for3.py | abjugard/MagicPython | 2802ded681e0ab1a1057821c1da287147d639505 | [
"MIT"
] | 1,482 | 2015-10-16T21:59:32.000Z | 2022-03-30T11:44:40.000Z | test/statements/for3.py | abjugard/MagicPython | 2802ded681e0ab1a1057821c1da287147d639505 | [
"MIT"
] | 226 | 2015-10-15T15:53:44.000Z | 2022-03-25T03:08:27.000Z | test/statements/for3.py | abjugard/MagicPython | 2802ded681e0ab1a1057821c1da287147d639505 | [
"MIT"
] | 129 | 2015-10-20T02:41:49.000Z | 2022-03-22T01:44:36.000Z | for(a, b), c, invariable in[2 in q, 2 in w]:
pass
for : keyword.control.flow.python, source.python
( : punctuation.parenthesis.begin.python, source.python
a : source.python
, : punctuation.separator.element.python, source.python
: source.python
b : source.python
) : punctuation.parenthesis.end.python, source.python
, : punctuation.separator.element.python, source.python
: source.python
c : source.python
, : punctuation.separator.element.python, source.python
: source.python
invariable : source.python
: source.python
in : keyword.control.flow.python, source.python
[ : punctuation.definition.list.begin.python, source.python
2 : constant.numeric.dec.python, source.python
: source.python
in : keyword.operator.logical.python, source.python
: source.python
q : source.python
, : punctuation.separator.element.python, source.python
: source.python
2 : constant.numeric.dec.python, source.python
: source.python
in : keyword.operator.logical.python, source.python
: source.python
w : source.python
] : punctuation.definition.list.end.python, source.python
: : punctuation.separator.colon.python, source.python
: source.python
pass : keyword.control.flow.python, source.python
| 40.333333 | 71 | 0.582962 | 156 | 1,573 | 5.878205 | 0.185897 | 0.418757 | 0.51036 | 0.261723 | 0.858233 | 0.769902 | 0.647764 | 0.545256 | 0.545256 | 0.545256 | 0 | 0.003756 | 0.32295 | 1,573 | 38 | 72 | 41.394737 | 0.857277 | 0 | 0 | 0.529412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.058824 | 0 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
0dca6e3663a1d2d4097ec3b68066b1f2a172d9b1 | 4,624 | py | Python | tests/functions/spin_fields/test_anisotropy_interaction_field.py | jdalzatec/llg | c0acd728d29a9a821ebadc4f1e17e0327d7e238c | [
"MIT"
] | 4 | 2019-09-02T19:18:55.000Z | 2021-05-05T15:04:54.000Z | tests/functions/spin_fields/test_anisotropy_interaction_field.py | lufvelasquezgo/llg | c0acd728d29a9a821ebadc4f1e17e0327d7e238c | [
"MIT"
] | 116 | 2020-02-09T05:19:52.000Z | 2022-03-27T18:47:17.000Z | tests/functions/spin_fields/test_anisotropy_interaction_field.py | lufvelasquezgo/llg | c0acd728d29a9a821ebadc4f1e17e0327d7e238c | [
"MIT"
] | null | null | null | from llg.functions import spin_fields
import pytest
import numpy
def compute_anisotropy_field(
num_sites, state, magnitude_spin_moment, anisotropy_constant, anisotropy_vector
):
total = numpy.zeros(shape=(num_sites, 3))
for i in range(num_sites):
total[i] += (
2.0
* anisotropy_constant[i]
* numpy.dot(state[i], anisotropy_vector[i])
* anisotropy_vector[i]
)
total /= magnitude_spin_moment[:, numpy.newaxis]
return total
@pytest.mark.repeat(10)
def test_anisotropy_interaction_field_null_spin_moments(
random_state_spins,
build_sample,
random_anisotropy_constant,
random_anisotropy_vector,
):
num_sites, _, _, _ = build_sample
spin_moments = numpy.zeros(shape=num_sites)
assert numpy.all(
numpy.isinf(
spin_fields.anisotropy_interaction_field(
random_state_spins,
spin_moments,
random_anisotropy_constant,
random_anisotropy_vector,
)
)
)
@pytest.mark.repeat(10)
def test_anisotropy_interaction_field_null_anisotropy_constant(
random_state_spins, build_sample, random_anisotropy_vector
):
num_sites, _, _, _ = build_sample
spin_moments = numpy.ones(shape=num_sites)
anisotropy_constant = numpy.zeros(shape=num_sites)
total = compute_anisotropy_field(
num_sites,
random_state_spins,
spin_moments,
anisotropy_constant,
random_anisotropy_vector,
)
assert numpy.allclose(
spin_fields.anisotropy_interaction_field(
random_state_spins,
spin_moments,
anisotropy_constant,
random_anisotropy_vector,
),
total,
)
@pytest.mark.repeat(10)
def test_anisotropy_interaction_field_null_anisotropy_vector(
random_state_spins, build_sample, random_anisotropy_constant
):
num_sites, _, _, _ = build_sample
spin_moments = numpy.ones(shape=num_sites)
anisotropy_vector = numpy.zeros(shape=(num_sites, 3))
total = compute_anisotropy_field(
num_sites,
random_state_spins,
spin_moments,
random_anisotropy_constant,
anisotropy_vector,
)
assert numpy.allclose(
spin_fields.anisotropy_interaction_field(
random_state_spins,
spin_moments,
random_anisotropy_constant,
anisotropy_vector,
),
total,
)
@pytest.mark.repeat(10)
def test_anisotropy_interaction_field_random_spin_moments(
random_state_spins, build_sample, random_spin_moments
):
num_sites, _, _, _ = build_sample
anisotropy_constant = numpy.ones(shape=num_sites)
anisotropy_vector = numpy.ones(shape=(num_sites, 3))
total = compute_anisotropy_field(
num_sites,
random_state_spins,
random_spin_moments,
anisotropy_constant,
anisotropy_vector,
)
assert numpy.allclose(
spin_fields.anisotropy_interaction_field(
random_state_spins,
random_spin_moments,
anisotropy_constant,
anisotropy_vector,
),
total,
)
@pytest.mark.repeat(10)
def test_anisotropy_interaction_field_random_anisotropy_constant(
random_state_spins, build_sample, random_anisotropy_constant
):
num_sites, _, _, _ = build_sample
spin_moments = numpy.ones(shape=num_sites)
anisotropy_vector = numpy.ones(shape=(num_sites, 3))
total = compute_anisotropy_field(
num_sites,
random_state_spins,
spin_moments,
random_anisotropy_constant,
anisotropy_vector,
)
assert numpy.allclose(
spin_fields.anisotropy_interaction_field(
random_state_spins,
spin_moments,
random_anisotropy_constant,
anisotropy_vector,
),
total,
)
@pytest.mark.repeat(10)
def test_anisotropy_interaction_field_random_anisotropy_vector(
random_state_spins, build_sample, random_anisotropy_vector
):
num_sites, _, _, _ = build_sample
spin_moments = numpy.ones(shape=num_sites)
anisotropy_constants = numpy.ones(shape=num_sites)
total = compute_anisotropy_field(
num_sites,
random_state_spins,
spin_moments,
anisotropy_constants,
random_anisotropy_vector,
)
assert numpy.allclose(
spin_fields.anisotropy_interaction_field(
random_state_spins,
spin_moments,
anisotropy_constants,
random_anisotropy_vector,
),
total,
)
| 27.688623 | 83 | 0.669118 | 485 | 4,624 | 5.894845 | 0.101031 | 0.069955 | 0.095138 | 0.100735 | 0.890871 | 0.853795 | 0.835957 | 0.835957 | 0.798881 | 0.789087 | 0 | 0.005286 | 0.263625 | 4,624 | 166 | 84 | 27.855422 | 0.834361 | 0 | 0 | 0.713333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 1 | 0.046667 | false | 0 | 0.02 | 0 | 0.073333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
df3a65ed7818d74c5d3630fbe34780ae2dd3d040 | 119 | py | Python | nz_crawl_demo/day8/wh1904/start.py | gaohj/nzflask_bbs | 36a94c380b78241ed5d1e07edab9618c3e8d477b | [
"Apache-2.0"
] | null | null | null | nz_crawl_demo/day8/wh1904/start.py | gaohj/nzflask_bbs | 36a94c380b78241ed5d1e07edab9618c3e8d477b | [
"Apache-2.0"
] | 27 | 2020-02-12T07:55:58.000Z | 2022-03-12T00:19:09.000Z | nz_crawl_demo/day8/wh1904/start.py | gaohj/nzflask_bbs | 36a94c380b78241ed5d1e07edab9618c3e8d477b | [
"Apache-2.0"
] | 2 | 2020-02-18T01:54:55.000Z | 2020-02-21T11:36:28.000Z | from scrapy import cmdline
cmdline.execute("scrapy crawl bsbdj".split())
# cmdline.execute(['scrapy','crawl','bsbdj']) | 29.75 | 45 | 0.739496 | 15 | 119 | 5.866667 | 0.533333 | 0.318182 | 0.454545 | 0.568182 | 0.681818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07563 | 119 | 4 | 46 | 29.75 | 0.8 | 0.361345 | 0 | 0 | 0 | 0 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
df3bdef06068888e7c437ff12419c5e6fffffb06 | 270 | py | Python | Server/Python/src/dbs/dao/MySQL/FileBuffer/FindDuplicates.py | vkuznet/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 8 | 2015-08-14T04:01:32.000Z | 2021-06-03T00:56:42.000Z | Server/Python/src/dbs/dao/MySQL/FileBuffer/FindDuplicates.py | yuyiguo/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 162 | 2015-01-07T21:34:47.000Z | 2021-10-13T09:42:41.000Z | Server/Python/src/dbs/dao/MySQL/FileBuffer/FindDuplicates.py | yuyiguo/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 16 | 2015-01-22T15:27:29.000Z | 2021-04-28T09:23:28.000Z | #!/usr/bin/env python
"""
This module provides FileBuffer.FindDuplicates data access object.
"""
from dbs.dao.Oracle.FileBuffer.FindDuplicates import FindDuplicates as OraFileBufferFindDuplicates
class FindDuplicates(OraFileBufferFindDuplicates):
pass
| 27 | 98 | 0.785185 | 26 | 270 | 8.153846 | 0.807692 | 0.226415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140741 | 270 | 9 | 99 | 30 | 0.913793 | 0.322222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
10c0b07ca891fe90521dd8648cbad5ff5d62d477 | 1,516 | py | Python | reorg-outputs.py | momo-the-monster/vqgan-clip-app | 56cfc0a53928d6d8f90ed8c79439afb4430bc118 | [
"MIT"
] | null | null | null | reorg-outputs.py | momo-the-monster/vqgan-clip-app | 56cfc0a53928d6d8f90ed8c79439afb4430bc118 | [
"MIT"
] | null | null | null | reorg-outputs.py | momo-the-monster/vqgan-clip-app | 56cfc0a53928d6d8f90ed8c79439afb4430bc118 | [
"MIT"
] | null | null | null | import os
from pathlib import Path
from shutil import copyfile
def reorg(folder):
for subdir, dirs, files in os.walk(folder):
for filename in files:
filepath = subdir + os.sep + filename
if filepath.endswith("output.PNG"):
# turn into FilePath
p = Path(filepath)
# directory becomes guid
guid = p.parent.name
# parent dir becomes prefix
prefix = p.parent.parent.name
renamed = f"{p.parent.parent.parent.parent.absolute()}\\~collected\\{prefix}_{guid}.png"
print (f"copying {filepath} to {renamed}")
# DO IT
copyfile(filepath, renamed)
def reorg_single(folder):
for subdir, dirs, files in os.walk(folder):
for filename in files:
filepath = subdir + os.sep + filename
if filepath.endswith("output.PNG"):
# turn into FilePath
p = Path(filepath)
# directory becomes guid
guid = p.parent.name
# parent dir becomes prefix
prefix = p.parent.parent.name
renamed = f"{p.parent.parent.parent.absolute()}\\~{prefix}\\{guid}.png"
print (f"copying {filepath} to {renamed}")
# DO IT
copyfile(filepath, renamed)
# reorg("G:\\My Drive\\ai-art\\outputs")
reorg_single("G:\\My Drive\\ai-art\\outputs\\fam") | 39.894737 | 104 | 0.527704 | 166 | 1,516 | 4.807229 | 0.307229 | 0.105263 | 0.065163 | 0.047619 | 0.842105 | 0.842105 | 0.79198 | 0.79198 | 0.79198 | 0.79198 | 0 | 0 | 0.364776 | 1,516 | 38 | 105 | 39.894737 | 0.827622 | 0.122691 | 0 | 0.692308 | 0 | 0.038462 | 0.188636 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.115385 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8001f9ed56d6ece5085c7fd4bfbbbb45bf5ca050 | 4,032 | py | Python | tests/test_onnxruntime.py | openppl-public/ppq | 0fdea7d4982bc57feb6bb8548c7f012707fbd607 | [
"Apache-2.0"
] | 100 | 2021-12-31T09:34:06.000Z | 2022-03-25T02:54:51.000Z | tests/test_onnxruntime.py | openppl-public/ppq | 0fdea7d4982bc57feb6bb8548c7f012707fbd607 | [
"Apache-2.0"
] | 12 | 2021-12-31T10:28:15.000Z | 2022-03-31T07:08:44.000Z | tests/test_onnxruntime.py | openppl-public/ppq | 0fdea7d4982bc57feb6bb8548c7f012707fbd607 | [
"Apache-2.0"
] | 21 | 2021-12-31T09:51:02.000Z | 2022-03-30T12:21:55.000Z | import onnxruntime
from ppq.IR.morph import GraphFormatter
from tests.tmodel import *
from tests.tscheme import *
from ppq import *
from ppq.api import *
from ppq import layerwise_error_analyse
import sys
import time  # time.strftime is used in the error handlers below
DEVICE = 'cuda'
PLATFORM = TargetPlatform.ORT_OOS_INT8
for case in TORCH_TEST_CASES:
    try:
        print(f'PPQ System test(Onnxruntime) start with model {case.model_name}')
        dataset = [case.input_generator().to(DEVICE) for _ in range(8)]
        model = case.model_builder().to(DEVICE)
        quantized = quantize_torch_model(
            model=model,
            calib_dataloader=dataset,
            calib_steps=8,
            input_shape=case.input_generator().shape,
            platform=PLATFORM,
            setting=QuantizationSettingFactory.default_setting())
        executor = TorchExecutor(quantized)
        sample_output = [(executor(sample)[0].cpu().unsqueeze(0)) for sample in dataset]
        export_ppq_graph(
            graph=quantized,
            platform=TargetPlatform.ONNXRUNTIME,
            graph_save_to='onnxruntime',
            config_save_to='export.json')
        # graph has only 1 input and output.
        for name in quantized.outputs: output_name = name
        for name in quantized.inputs: input_name = name
        session = onnxruntime.InferenceSession('onnxruntime.onnx', providers=onnxruntime.get_available_providers())
        onnxruntime_outputs = [session.run([output_name], {input_name: convert_any_to_numpy(sample)}) for sample in dataset]
        onnxruntime_outputs = [convert_any_to_torch_tensor(sample) for sample in onnxruntime_outputs]
        error = []
        for ref, real in zip(sample_output, onnxruntime_outputs):
            error.append(torch_snr_error(ref, real))
        error = sum(error) / len(error) * 100
        print(f'Simulating Error: {error: .4f}%')
        assert error < 1
    except NotImplementedError as e:
        print(f'{time.strftime("%Y-%m-%d %H:%M:%S")} | Error occurred: {e}')
        sys.exit(1)

for case in TORCH_TEST_CASES:
    try:
        print(f'PPQ System test(Onnxruntime) start with model {case.model_name}')
        dataset = [case.input_generator().to(DEVICE) for _ in range(8)]
        model = case.model_builder().to(DEVICE)
        quantized = quantize_torch_model(
            model=model,
            calib_dataloader=dataset,
            calib_steps=8,
            input_shape=case.input_generator().shape,
            platform=PLATFORM,
            setting=QuantizationSettingFactory.default_setting())
        '''
        quantized.outputs.clear()
        quantized.mark_variable_as_graph_output(quantized.variables['onnx::Conv_26'])
        processor = GraphFormatter(quantized)
        processor.truncate_on_var(quantized.variables['onnx::Conv_26'], mark_as_output=True)
        '''
        executor = TorchExecutor(quantized)
        sample_output = [(executor(sample)[0].cpu().unsqueeze(0)) for sample in dataset]
        export_ppq_graph(
            graph=quantized,
            platform=TargetPlatform.ORT_OOS_INT8,
            graph_save_to='onnx',
            config_save_to='export.json')
        # graph has only 1 input and output.
        for name in quantized.outputs: output_name = name
        for name in quantized.inputs: input_name = name
        session = onnxruntime.InferenceSession('onnx.onnx', providers=onnxruntime.get_available_providers())
        onnxruntime_outputs = [session.run([output_name], {input_name: convert_any_to_numpy(sample)}) for sample in dataset]
        onnxruntime_outputs = [convert_any_to_torch_tensor(sample) for sample in onnxruntime_outputs]
        error = []
        for ref, real in zip(sample_output, onnxruntime_outputs):
            error.append(torch_snr_error(ref, real))
        error = sum(error) / len(error) * 100
        print(f'Simulating Error: {error: .4f}%')
        # assert error < 1
    except NotImplementedError as e:
        print(f'{time.strftime("%Y-%m-%d %H:%M:%S")} | Error occurred: {e}')
        sys.exit(1)
| 39.529412 | 124 | 0.657738 | 482 | 4,032 | 5.302905 | 0.240664 | 0.056338 | 0.025822 | 0.028169 | 0.85759 | 0.819249 | 0.819249 | 0.819249 | 0.819249 | 0.819249 | 0 | 0.009126 | 0.239087 | 4,032 | 101 | 125 | 39.920792 | 0.82399 | 0.021329 | 0 | 0.773333 | 0 | 0.026667 | 0.100872 | 0.013086 | 0 | 0 | 0 | 0 | 0.013333 | 1 | 0 | false | 0 | 0.106667 | 0 | 0.106667 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
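The pass criterion in the test above averages `torch_snr_error` between PPQ's simulated outputs and onnxruntime's real outputs. A dependency-free sketch of a noise-power-over-signal-power metric in the same spirit — this is my approximation for illustration, not PPQ's exact definition of `torch_snr_error`:

```python
def snr_error(ref, real):
    """Noise power relative to signal power; 0.0 means identical outputs."""
    noise = sum((a - b) ** 2 for a, b in zip(ref, real))
    signal = sum(a ** 2 for a in ref)
    return noise / signal

ref = [1.0, 2.0, 3.0]            # stand-in for the simulator's output
quantized_out = [1.1, 1.9, 3.0]  # stand-in for onnxruntime's output
error = snr_error(ref, quantized_out) * 100  # percent, as printed in the script
print(f'Simulating Error: {error: .4f}%')
```

The script asserts this averaged percentage stays below 1, i.e. quantization noise must be tiny relative to the reference signal.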
8049ea531524c6743c7a62fbbad660c9633ffc98 | 93 | py | Python | Algorithms/2. Implementation/46 - Beautiful Triplets.py | rosiejh/HackerRank | bfb07b8add04d3f3b67a61754db483f88a79e5a5 | [
"Apache-2.0"
] | null | null | null | Algorithms/2. Implementation/46 - Beautiful Triplets.py | rosiejh/HackerRank | bfb07b8add04d3f3b67a61754db483f88a79e5a5 | [
"Apache-2.0"
] | null | null | null | Algorithms/2. Implementation/46 - Beautiful Triplets.py | rosiejh/HackerRank | bfb07b8add04d3f3b67a61754db483f88a79e5a5 | [
"Apache-2.0"
] | null | null | null | def beautifulTriplets(d, arr):
return sum(a + d in arr and a + 2 * d in arr for a in arr) | 46.5 | 62 | 0.645161 | 20 | 93 | 3 | 0.55 | 0.25 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0.258065 | 93 | 2 | 62 | 46.5 | 0.855072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
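The one-liner above runs its membership tests against `arr`, a list, so each `in` is a linear scan. Since the HackerRank input is strictly increasing (all elements distinct), a set gives the same count in O(n); the variant name below is mine, and the sample values are the problem statement's example with d = 3:

```python
def beautiful_triplets_fast(d, arr):
    # same counting idea as the one-liner above, with O(1) set membership;
    # assumes arr has distinct elements, as the problem guarantees
    s = set(arr)
    return sum(a + d in s and a + 2 * d in s for a in arr)

# triplets here: (1, 4, 7), (2, 5, 8), (4, 7, 10)
print(beautiful_triplets_fast(3, [1, 2, 4, 5, 7, 8, 10]))  # -> 3
```

Each element is counted as the start of at most one triplet, which matches the problem's definition because indices follow from the distinct, sorted values.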
338891c471e9907f2a470ab7703e48b6c14410ba | 14,703 | py | Python | stories/api/views.py | Emrahgs/Food-stories | b3465ca25024e2370cd5ce42af83c1b92a8982d4 | [
"MIT"
] | 1 | 2021-08-19T19:20:11.000Z | 2021-08-19T19:20:11.000Z | stories/api/views.py | Emrahgs/Food-stories | b3465ca25024e2370cd5ce42af83c1b92a8982d4 | [
"MIT"
] | null | null | null | stories/api/views.py | Emrahgs/Food-stories | b3465ca25024e2370cd5ce42af83c1b92a8982d4 | [
"MIT"
] | null | null | null | from rest_framework.response import Response
from rest_framework import status
from stories.models import Recipe, Story, Category, Comment, Subscriber, Tag
from stories.api.serializers import RecipeSerializer, RecipeCreateSerializer, StorySerializer, StoryCreateSerializer, CategorySerializer, CategoryCreateSerializer, CommentSerializer, CommentCreateSerializer, SubscriberSerializer, TagSerializer
# from rest_framework.decorators import api_view
from rest_framework.views import APIView
from rest_framework.generics import ListAPIView, ListCreateAPIView, RetrieveUpdateDestroyAPIView, CreateAPIView
import json
from rest_framework.exceptions import NotFound
from rest_framework.permissions import IsAuthenticatedOrReadOnly
from accounts.api.serializers import UserSerializer
# -----------Method 3 ---------------------
# Class Based
class SubscribeAPIView(CreateAPIView):
    queryset = Subscriber.objects.filter(is_active=True)
    serializer_class = SubscriberSerializer

class TagAPIView(ListAPIView):
    queryset = Tag.objects.all()
    serializer_class = TagSerializer

class CommentAPIView(APIView):
    def get(self, request, *args, **kwargs):
        comment = Comment.objects.all()
        serializer = CommentSerializer(comment, many=True, context={'request': request})
        return Response(serializer.data)

    def post(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Recipe.objects.filter(pk=recipe_id).first()
        serializer = CommentSerializer(data=recipe_data, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save(recipe=recipe)
        return Response(serializer.data, status=status.HTTP_201_CREATED)

class CommentDetailAPIView(APIView):
    def get(self, request, *args, **kwargs):
        comment_id = kwargs.get('pk')
        comment = Comment.objects.filter(pk=comment_id)
        serializer = CommentSerializer(comment, many=True, context={'request': request})
        return Response(serializer.data)

    def put(self, request, *args, **kwargs):
        comment_id = kwargs.get('pk')
        comment = Comment.objects.filter(pk=comment_id).first()
        serializer = CommentCreateSerializer(comment, data=request.data, partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def delete(self, request, *args, **kwargs):
        comment_id = kwargs.get('pk')
        comment = Comment.objects.filter(pk=comment_id).first()
        comment.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

class RecipeAPIView(APIView):
    permission_classes = (IsAuthenticatedOrReadOnly,)

    def get(self, request, *args, **kwargs):
        recipes = Recipe.objects.filter(is_published=True)
        filter_by = json.loads(json.dumps(request.GET))
        if filter_by:
            recipes = recipes.filter(**filter_by)  # title=resept
        serializer = RecipeSerializer(recipes, many=True, context={'request': request})
        return Response(serializer.data)

    def post(self, request, *args, **kwargs):
        recipe_data = request.data
        serializer = RecipeCreateSerializer(data=recipe_data, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_201_CREATED)

class RecipeDetailAPIView(APIView):
    permission_classes = (IsAuthenticatedOrReadOnly,)

    def get(self, request, *args, **kwargs):
        recipe_id = kwargs.get('pk')
        recipe = Recipe.objects.filter(pk=recipe_id, is_published=True).first()
        if not recipe:
            raise NotFound
        serializer = RecipeSerializer(recipe, context={'request': request})
        return Response(serializer.data)

    def put(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Recipe.objects.filter(pk=recipe_id, is_published=True).first()
        if not recipe:
            raise NotFound
        serializer = RecipeCreateSerializer(data=recipe_data, instance=recipe, partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def patch(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Recipe.objects.filter(pk=recipe_id, is_published=True).first()
        if not recipe:
            raise NotFound
        serializer = RecipeCreateSerializer(data=recipe_data, instance=recipe,
                                            partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def delete(self, request, *args, **kwargs):
        recipe_id = kwargs.get('pk')
        recipe = Recipe.objects.filter(pk=recipe_id, is_published=True)
        if not recipe:
            raise NotFound
        recipe.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

# -------------Story--------------
class StoryAPIView(APIView):
    def get(self, request, *args, **kwargs):
        recipes = Story.objects.filter(is_published=True)
        filter_by = json.loads(json.dumps(request.GET))
        if filter_by:
            recipes = recipes.filter(**filter_by)  # title=resept
        serializer = StorySerializer(recipes, many=True, context={'request': request})
        return Response(serializer.data)

    def post(self, request, *args, **kwargs):
        recipe_data = request.data
        serializer = StoryCreateSerializer(data=recipe_data, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_201_CREATED)

class StoryDetailAPIView(APIView):
    def get(self, request, *args, **kwargs):
        recipe_id = kwargs.get('pk')
        recipe = Story.objects.filter(pk=recipe_id, is_published=True).first()
        if not recipe:
            raise NotFound
        serializer = StorySerializer(recipe, context={'request': request})
        return Response(serializer.data)

    def put(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Story.objects.filter(pk=recipe_id, is_published=True).first()
        if not recipe:
            raise NotFound
        serializer = StoryCreateSerializer(data=recipe_data, instance=recipe, partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def patch(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Story.objects.filter(pk=recipe_id, is_published=True).first()
        if not recipe:
            raise NotFound
        serializer = StoryCreateSerializer(data=recipe_data, instance=recipe,
                                           partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def delete(self, request, *args, **kwargs):
        recipe_id = kwargs.get('pk')
        recipe = Story.objects.filter(pk=recipe_id, is_published=True)
        if not recipe:
            raise NotFound
        recipe.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

# ----------- Category --------------------
class CategoryAPIView(APIView):
    def get(self, request, *args, **kwargs):
        recipes = Category.objects.all()
        filter_by = json.loads(json.dumps(request.GET))
        if filter_by:
            recipes = recipes.filter(**filter_by)  # title=resept
        serializer = CategorySerializer(recipes, many=True, context={'request': request})
        return Response(serializer.data)

    def post(self, request, *args, **kwargs):
        recipe_data = request.data
        serializer = CategoryCreateSerializer(data=recipe_data, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_201_CREATED)

class CategoryDetailAPIView(APIView):
    def get(self, request, *args, **kwargs):
        recipe_id = kwargs.get('pk')
        recipe = Category.objects.filter(pk=recipe_id).first()
        if not recipe:
            raise NotFound
        serializer = CategorySerializer(recipe, context={'request': request})
        return Response(serializer.data)

    def put(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Category.objects.filter(pk=recipe_id).first()
        if not recipe:
            raise NotFound
        serializer = CategoryCreateSerializer(data=recipe_data, instance=recipe, partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def patch(self, request, *args, **kwargs):
        recipe_data = request.data
        recipe_id = kwargs.get('pk')
        recipe = Category.objects.filter(pk=recipe_id).first()
        if not recipe:
            raise NotFound
        serializer = CategoryCreateSerializer(data=recipe_data, instance=recipe,
                                              partial=True, context={'request': request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=status.HTTP_200_OK)

    def delete(self, request, *args, **kwargs):
        recipe_id = kwargs.get('pk')
        recipe = Category.objects.filter(pk=recipe_id)
        if not recipe:
            raise NotFound
        recipe.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)
# -----------Method 2 ---------------------
# class RecipeList(ListCreateAPIView):
#     queryset = Recipe.objects.filter(is_published=True)
#     serializer_class = RecipeSerializer
#
#     def get_serializer_class(self):
#         if self.request.method == 'GET':
#             return self.serializer_class
#         return RecipeCreateSerializer

# class RecipeDetail(RetrieveUpdateDestroyAPIView):
#     permission_classes = (IsAuthenticated,)
#     queryset = Recipe.objects.filter(is_published=True)
#     serializer_class = RecipeSerializer
#
#     def get_serializer_class(self):
#         if self.request.method == 'GET':
#             return self.serializer_class
#         return RecipeCreateSerializer

# class StoryList(ListCreateAPIView):
#     queryset = Recipe.objects.filter(is_published=True)
#     serializer_class = RecipeSerializer
#
#     def get_serializer_class(self):
#         if self.request.method == 'GET':
#             return self.serializer_class
#         return RecipeCreateSerializer

# class StoryDetail(RetrieveUpdateDestroyAPIView):
#     permission_classes = (IsAuthenticated,)
#     queryset = Recipe.objects.filter(is_published=True)
#     serializer_class = RecipeSerializer
#
#     def get_serializer_class(self):
#         if self.request.method == 'GET':
#             return self.serializer_class
#         return RecipeCreateSerializer

# -----------Method 1 ---------------------
# @api_view(('GET', 'POST'))
# def recipes(request):
#     if request.method == 'POST':
#         recipe_data = request.data
#         serializer = RecipeCreateSerializer(data=recipe_data, context={'request': request})
#         serializer.is_valid(raise_exception=True)
#         serializer.save()
#         return Response(serializer.data, status=status.HTTP_201_CREATED)
#     recipes = Recipe.objects.filter(is_published=True)
#     serializer = RecipeSerializer(recipes, many=True, context={'request': request})
#     # serialized_recipe_list = [recipe.serialized_data for recipe in recipes]
#     # json_data = {
#     #     'recipes': serialized_recipe_list
#     # }
#     return Response(serializer.data)

# @api_view(('GET', 'POST'))
# def stories(request):
#     if request.method == 'POST':
#         recipe_data = request.data
#         serializer = StoryCreateSerializer(data=recipe_data, context={'request': request})
#         serializer.is_valid(raise_exception=True)
#         serializer.save()
#         return Response(serializer.data, status=status.HTTP_201_CREATED)
#     recipes = Story.objects.filter(is_published=True)
#     serializer = StorySerializer(recipes, many=True, context={'request': request})
#     # serialized_recipe_list = [recipe.serialized_data for recipe in recipes]
#     # json_data = {
#     #     'recipes': serialized_recipe_list
#     # }
#     return Response(serializer.data)

# @api_view(['GET', 'PUT', 'DELETE'])
# def recipe_detail(request, slug):
#     try:
#         recipe = Recipe.objects.get(slug=slug)
#     except Recipe.DoesNotExist:
#         return Response(status=status.HTTP_404_NOT_FOUND)
#     if request.method == 'GET':
#         serializer = RecipeSerializer(recipe)
#         return Response(serializer.data)
#     elif request.method == 'PUT':
#         serializer = RecipeSerializer(recipe, data=request.data, partial=True)
#         if serializer.is_valid():
#             serializer.save()
#             return Response(serializer.data)
#         return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
#     elif request.method == 'DELETE':
#         recipe.delete()
#         return Response(status=status.HTTP_204_NO_CONTENT)

# @api_view(['GET', 'PUT', 'DELETE'])
# def story_detail(request, slug):
#     try:
#         recipe = Story.objects.get(slug=slug)
#     except Story.DoesNotExist:
#         return Response(status=status.HTTP_404_NOT_FOUND)
#     if request.method == 'GET':
#         serializer = StorySerializer(recipe)
#         return Response(serializer.data)
#     elif request.method == 'PUT':
#         serializer = StorySerializer(recipe, data=request.data, partial=True)
#         if serializer.is_valid():
#             serializer.save()
#             return Response(serializer.data)
#         return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
#     elif request.method == 'DELETE':
#         recipe.delete()
# return Response(status=status.HTTP_204_NO_CONTENT) | 38.18961 | 243 | 0.66483 | 1,559 | 14,703 | 6.132777 | 0.082104 | 0.054178 | 0.072796 | 0.079071 | 0.853363 | 0.835687 | 0.831608 | 0.822717 | 0.799916 | 0.794896 | 0 | 0.006248 | 0.216214 | 14,703 | 385 | 244 | 38.18961 | 0.823412 | 0.301707 | 0 | 0.738693 | 0 | 0 | 0.016251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115578 | false | 0 | 0.050251 | 0 | 0.361809 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
33fa265d458dc2d94212acd01c515b8fa500c7f3 | 11,395 | py | Python | main.py | owlskin/Internet-Speed-Tester | 649ae11398d4318f0f51ddd50fd10e22f30c6a75 | [
"MIT"
] | null | null | null | main.py | owlskin/Internet-Speed-Tester | 649ae11398d4318f0f51ddd50fd10e22f30c6a75 | [
"MIT"
] | null | null | null | main.py | owlskin/Internet-Speed-Tester | 649ae11398d4318f0f51ddd50fd10e22f30c6a75 | [
"MIT"
] | null | null | null | import os
import sys
import time
import threading
import subprocess
import re
import datetime
import math
import urllib
import urllib2
import json
import requests
import argparse
from collections import OrderedDict
from pprint import pprint
#------------------------------------------------------------------------------
def get_speed(url):
    """
    Get the download speed from a URL
    """
    start = time.time()
    try:
        urllib.urlretrieve(url, '/dev/null')
    except:
        return 0
    end = time.time()
    return (end - start) * 8 / 1000

#------------------------------------------------------------------------------
def get_speedtest():
    """
    Get the download speed from speedtest.net
    """
    url = 'http://www.speedtest.net/speedtest-config.php'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    server_config = json.loads(data)
    url = 'http://www.speedtest.net/speedtest-servers.php'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    servers = json.loads(data)
    server_list = []
    for server in servers['servers']:
        if server['country'].lower() == 'russia':
            server_list.append(server)
    best_server = None
    for server in server_list:
        if not best_server:
            best_server = server
        elif server['latency'] < best_server['latency']:
            best_server = server
    url = '%s/latency.txt' % best_server['url']
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    time_list = data.split('\n')
    time_list.pop()
    time_list = [float(time) for time in time_list]
    time_list.sort()
    median_time = time_list[len(time_list) / 2]
    url = '%s/latency.txt' % best_server['url']
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    time_list = data.split('\n')
    time_list.pop()
    time_list = [float(time) for time in time_list]
    time_list.sort()
    median_time = time_list[len(time_list) / 2]
    url = '%s/download.php?x=%s' % (best_server['url'], urllib.quote_plus(str(datetime.datetime.now())))
    download_speed = get_speed(url)
    url = '%s/upload.php?x=%s' % (best_server['url'], urllib.quote_plus(str(datetime.datetime.now())))
    upload_speed = get_speed(url)
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_net_monitor():
    """
    Get the download speed from a network monitor
    """
    url = 'http://www.speedtest.net/my-result/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    download_speed = float(re.findall('Download: (.*?) Mbit', data)[0])
    upload_speed = float(re.findall('Upload: (.*?) Mbit', data)[0])
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_speedtest():
    """
    Get the download speed from speedtest.net
    """
    url = 'http://www.speedtest.net/speedtest-config.php'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    server_config = json.loads(data)
    url = 'http://www.speedtest.net/speedtest-servers.php'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    servers = json.loads(data)
    server_list = []
    for server in servers['servers']:
        if server['country'].lower() == 'russia':
            server_list.append(server)
    best_server = None
    for server in server_list:
        if not best_server:
            best_server = server
        elif server['latency'] < best_server['latency']:
            best_server = server
    url = '%s/latency.txt' % best_server['url']
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    time_list = data.split('\n')
    time_list.pop()
    time_list = [float(time) for time in time_list]
    time_list.sort()
    median_time = time_list[len(time_list) / 2]
    url = '%s/latency.txt' % best_server['url']
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    time_list = data.split('\n')
    time_list.pop()
    time_list = [float(time) for time in time_list]
    time_list.sort()
    median_time = time_list[len(time_list) / 2]
    url = '%s/download.php?x=%s' % (best_server['url'], urllib.quote_plus(str(datetime.datetime.now())))
    download_speed = get_speed(url)
    url = '%s/upload.php?x=%s' % (best_server['url'], urllib.quote_plus(str(datetime.datetime.now())))
    upload_speed = get_speed(url)
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_speedtest_cli():
    """
    Get the download speed from speedtest.net
    """
    url = 'http://speedtest.net/speedtest-config.php'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    server_config = json.loads(data)
    url = 'http://speedtest.net/speedtest-servers.php'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    servers = json.loads(data)
    server_list = []
    for server in servers['servers']:
        if server['country'].lower() == 'russia':
            server_list.append(server)
    best_server = None
    for server in server_list:
        if not best_server:
            best_server = server
        elif server['latency'] < best_server['latency']:
            best_server = server
    url = '%s/latency.txt' % best_server['url']
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    time_list = data.split('\n')
    time_list.pop()
    time_list = [float(time) for time in time_list]
    time_list.sort()
    median_time = time_list[len(time_list) / 2]
    url = '%s/latency.txt' % best_server['url']
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    time_list = data.split('\n')
    time_list.pop()
    time_list = [float(time) for time in time_list]
    time_list.sort()
    median_time = time_list[len(time_list) / 2]
    url = '%s/download.php?x=%s' % (best_server['url'], urllib.quote_plus(str(datetime.datetime.now())))
    download_speed = get_speed(url)
    url = '%s/upload.php?x=%s' % (best_server['url'], urllib.quote_plus(str(datetime.datetime.now())))
    upload_speed = get_speed(url)
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_fast_com():
    """
    Get the download speed from fast.com
    """
    url = 'https://fast.com/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    start_string = '<div id="speed-value">'
    end_string = '<div id="speed-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    download_speed = float(data[start:end])
    start_string = '<div id="upload-value">'
    end_string = '<div id="upload-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    upload_speed = float(data[start:end])
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_fast_com_v2():
    """
    Get the download speed from fast.com
    """
    url = 'https://fast.com/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    start_string = '<div id="speed-value">'
    end_string = '<div id="speed-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    download_speed = float(data[start:end])
    start_string = '<div id="upload-value">'
    end_string = '<div id="upload-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    upload_speed = float(data[start:end])
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_fast_com_v3():
    """
    Get the download speed from fast.com
    """
    url = 'https://fast.com/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    start_string = '<div id="speed-value">'
    end_string = '<div id="speed-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    download_speed = float(data[start:end])
    start_string = '<div id="upload-value">'
    end_string = '<div id="upload-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    upload_speed = float(data[start:end])
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_fast_com_v4():
    """
    Get the download speed from fast.com
    """
    url = 'https://fast.com/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    start_string = '<div id="speed-value">'
    end_string = '<div id="speed-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    download_speed = float(data[start:end])
    start_string = '<div id="upload-value">'
    end_string = '<div id="upload-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    upload_speed = float(data[start:end])
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_fast_com_v5():
    """
    Get the download speed from fast.com
    """
    url = 'https://fast.com/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    start_string = '<div id="speed-value">'
    end_string = '<div id="speed-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    download_speed = float(data[start:end])
    start_string = '<div id="upload-value">'
    end_string = '<div id="upload-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    upload_speed = float(data[start:end])
    return download_speed, upload_speed

#------------------------------------------------------------------------------
def get_speed_from_fast_com_v6():
    """
    Get the download speed from fast.com
    """
    url = 'https://fast.com/'
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    data = response.read()
    start_string = '<div id="speed-value">'
    end_string = '<div id="speed-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    download_speed = float(data[start:end])
    start_string = '<div id="upload-value">'
    end_string = '<div id="upload-units">'
    start = data.find(start_string) + len(start_string)
    end = data.find(end_string)
    upload_speed = float(data[start:end])
    return download_speed, upload_speed
| 34.847095 | 104 | 0.605265 | 1,406 | 11,395 | 4.733997 | 0.071124 | 0.050481 | 0.039663 | 0.06851 | 0.933143 | 0.929838 | 0.922626 | 0.922626 | 0.922626 | 0.916316 | 0 | 0.006281 | 0.189645 | 11,395 | 326 | 105 | 34.953988 | 0.714533 | 0.112857 | 0 | 0.853282 | 0 | 0 | 0.134602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042471 | false | 0 | 0.057915 | 0 | 0.146718 | 0.003861 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
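Note that `get_speed` above only times the transfer: `(end - start) * 8 / 1000` never multiplies by the number of bytes moved, so the value it returns is not actually a bitrate. The usual conversion is bytes × 8 / seconds; a minimal sketch of that arithmetic (the function name is mine):

```python
def throughput_mbps(num_bytes, seconds):
    """Convert a transfer of num_bytes completed in `seconds` to Mbit/s."""
    if seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return num_bytes * 8 / seconds / 1_000_000

# 25 MB downloaded in 2 s -> 100 Mbit/s
print(throughput_mbps(25_000_000, 2.0))  # -> 100.0
```

In the real script the byte count would come from the downloaded payload size (e.g. `len(data)` or the response's `Content-Length` header) and the elapsed time from the same `time.time()` bracketing already used above.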
1d6c7521df863f3ebadc247a3fe4245463a99d2d | 4,828 | py | Python | tests/test_deprecation.py | MuhammadAlBarham/Maha | c28e0b7ca69942905548f013a1e35208ef8de7e7 | [
"BSD-3-Clause"
] | 152 | 2021-09-18T08:18:47.000Z | 2022-03-14T13:23:17.000Z | tests/test_deprecation.py | vikrambala/Maha | 67020437e745b8fca4770186608326b81073d4b7 | [
"BSD-3-Clause"
] | 65 | 2021-09-20T06:00:41.000Z | 2022-03-20T22:44:39.000Z | tests/test_deprecation.py | vikrambala/Maha | 67020437e745b8fca4770186608326b81073d4b7 | [
"BSD-3-Clause"
] | 10 | 2021-09-18T11:56:57.000Z | 2021-11-20T09:05:16.000Z | import pytest
from maha.deprecation import deprecated_default, deprecated_fn, deprecated_param
def _get_warning_msg(recwarn):
    w = recwarn.pop(DeprecationWarning)
    return w.message.args[0]

def test_deprecated_default(recwarn):
    @deprecated_default(from_v="0.2.0", to_v="0.5.0", depr_param="arg11")
    def f(arg11=1):
        pass

    f()
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "arg11" in msg
    assert "arg11=1" in msg

def test_deprecated_default_with_alt(recwarn):
    @deprecated_default(from_v="0.2.0", to_v="0.5.0", depr_param="arg11", alt_value=999)
    def f(arg11=1):
        pass

    f()
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "arg11" in msg
    assert "999" in msg
    assert "arg11=1" in msg

def test_deprecated_default_with_passed_argument(recwarn):
    @deprecated_default(
        from_v="0.2.0", to_v="0.5.0", depr_param="arg11", alt_value="None"
    )
    def f(arg11=1):
        pass

    f(arg11=3)
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "arg11" in msg
    assert "None" in msg
    assert "arg11=1" not in msg

def test_deprecated_default_invalid_parameter():
    with pytest.raises(ValueError):

        @deprecated_default(
            from_v="0.2.0", to_v="0.5.0", depr_param="x1", alt_value=999
        )
        def f(x=1):
            return x

def test_deprecated_default_with_message(recwarn):
    @deprecated_default(
        from_v="0.2.0",
        to_v="0.5.0",
        depr_param="arg11",
        alt_value="None",
        message="This is a message",
    )
    def f(arg11=1):
        pass

    f(arg11=3)
    msg = _get_warning_msg(recwarn)
    assert "This is a message" in msg

def test_deprecated_fn(recwarn):
    @deprecated_fn(
        from_v="0.2.0",
        to_v="0.5.0",
        alt_fn="my_alt_fn",
    )
    def f():
        pass

    f()
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "my_alt_fn" in msg

def test_deprecated_fn_with_message(recwarn):
    @deprecated_fn(
        from_v="0.2.0",
        to_v="0.5.0",
        alt_fn="my_alt_fn",
        message="This is a message",
    )
    def f():
        pass

    f()
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "my_alt_fn" in msg
    assert "This is a message" in msg

def test_deprecated_param_with_arg_input(recwarn):
    @deprecated_param(from_v="0.2.0", to_v="0.5.0", depr_param="arg11")
    def f(a, b, arg11=3, arg12=1):
        pass

    f(1, 2, 3)
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "arg11" in msg

def test_deprecated_param_with_arg_input2(recwarn):
    @deprecated_param(from_v="0.2.0", to_v="0.5.0", depr_param="arg12")
    def f(a, b, arg11=3, arg12=1):
        pass

    f(1, 2, 3)
    with pytest.raises(AssertionError):
        _get_warning_msg(recwarn)

def test_deprecated_param_with_arg_input3(recwarn):
    @deprecated_param(from_v="0.2.0", to_v="0.5.0", depr_param="arg13")
    def f(a, b, arg11=3, arg12=1, arg13=1):
        pass

    f(1, 2, 3, arg13=10)
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "arg13" in msg

def test_deprecated_param_with_kwarg_input(recwarn):
    @deprecated_param(from_v="0.2.0", to_v="0.5.0", depr_param="arg11")
    def f(arg11=1):
        pass

    f(arg11=10)
    msg = _get_warning_msg(recwarn)
    assert "0.2.0" in msg
    assert "0.5.0" in msg
    assert "arg11" in msg

def test_deprecated_param_without_input(recwarn):
    @deprecated_param(from_v="0.2.0", to_v="0.5.0", depr_param="arg11")
def f(arg11=1):
pass
f()
with pytest.raises(AssertionError):
_get_warning_msg(recwarn)
def test_deprecated_param_with_alt_argument(recwarn):
@deprecated_param(
from_v="0.2.0", to_v="0.5.0", depr_param="arg11", alt_param="arg12"
)
def f(arg11=1, arg12=1):
pass
f(arg11=3)
msg = _get_warning_msg(recwarn)
assert "0.2.0" in msg
assert "0.5.0" in msg
assert "arg11" in msg
assert "arg12" in msg
def test_deprecated_param_with_alt_argument_not_found():
with pytest.raises(ValueError):
@deprecated_param(
from_v="0.2.0", to_v="0.5.0", depr_param="arg11", alt_param="arg12"
)
def f(arg11=1):
pass
def test_deprecated_param_with_message(recwarn):
@deprecated_param(
from_v="0.2.0",
to_v="0.5.0",
depr_param="arg11",
message="This is a message",
)
def f(arg11=1):
pass
f(arg11=3)
msg = _get_warning_msg(recwarn)
assert "This is a message" in msg
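A decorator with the behaviour these tests pin down can be sketched as follows. This is a hypothetical minimal version for illustration only — it warns solely when the deprecated keyword is passed by name, whereas the real `maha.deprecation` decorators also detect positional use and cover the `deprecated_default` and `deprecated_fn` variants:

```python
import functools
import warnings


def deprecated_param(from_v, to_v, depr_param, alt_param=None, message=None):
    """Warn when the deprecated keyword argument is explicitly passed."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if depr_param in kwargs:
                msg = (f"{depr_param!r} is deprecated from v{from_v} "
                       f"and will be removed in v{to_v}.")
                if alt_param is not None:
                    msg += f" Use {alt_param!r} instead."
                if message is not None:
                    msg += f" {message}"
                warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@deprecated_param(from_v="0.2.0", to_v="0.5.0", depr_param="arg11")
def f(arg11=1):
    return arg11
```

Calling `f(arg11=3)` emits one `DeprecationWarning` mentioning both versions and the parameter name; calling `f()` emits nothing, matching `test_deprecated_param_without_input`.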
| 22.990476 | 88 | 0.618476 | 794 | 4,828 | 3.528967 | 0.080605 | 0.06424 | 0.098144 | 0.077088 | 0.885082 | 0.833333 | 0.811206 | 0.771949 | 0.758744 | 0.758744 | 0 | 0.080209 | 0.24855 | 4,828 | 209 | 89 | 23.100478 | 0.692117 | 0 | 0 | 0.710692 | 0 | 0 | 0.108948 | 0 | 0 | 0 | 0 | 0 | 0.238994 | 1 | 0.194969 | false | 0.09434 | 0.012579 | 0.006289 | 0.220126 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
d53348bdd0e3ae5b58c9b969e09cac952df56196 | 4,491 | py | Python | tests/test_git_log.py | stevezieglerva/git-log-to-csv | 1e7909e0dad8d633bc2bf40df49196d108bedf66 | [
"MIT"
] | null | null | null | tests/test_git_log.py | stevezieglerva/git-log-to-csv | 1e7909e0dad8d633bc2bf40df49196d108bedf66 | [
"MIT"
] | null | null | null | tests/test_git_log.py | stevezieglerva/git-log-to-csv | 1e7909e0dad8d633bc2bf40df49196d108bedf66 | [
"MIT"
] | null | null | null | import unittest
from unittest.mock import patch, Mock, MagicMock, PropertyMock
from git_log_to_csv import *
class UnitTests(unittest.TestCase):
def test_process__given_two_commits_three_files__then_three_lines_created(self):
# Arrange
input = """^^c8ed1ef--1576592170--2019-12-17T09:16:10-05:00--Steve Ziegler
3 5 README.md
0 1 sam-app/add_cw_log_error_metric/CloudFormationReplicator.py
^^a999999--1576592605--2019-12-17T09:23:25-05:00--Steve Ziegler
2 1 sam-app/add_cw_log_error_metric/CloudFormationReplicator.py
"""
# Act
results = process_git_log(input)
print(results)
# Assert
expected = """commit_hash,epoch,timestamp,date,year,month,day,author,file,churn_count,dir_1,dir_2,dir_3,dir_4
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",README.md,8,,,,
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",sam-app/add_cw_log_error_metric/CloudFormationReplicator.py,1,sam-app,add_cw_log_error_metric,,
a999999,1576592605,2019-12-17T09:23:25,2019-12-17,2019,12,17,"Steve Ziegler",sam-app/add_cw_log_error_metric/CloudFormationReplicator.py,3,sam-app,add_cw_log_error_metric,,
"""
self.assertEqual(results, expected)
def test_process__given_insertion_is_dash__then_churn_set_to_two(self):
# Arrange
input = """^^c8ed1ef--1576592170--2019-12-17T09:16:10-05:00--Steve Ziegler
- - README.md
"""
# Act
results = process_git_log(input)
print(results)
# Assert
expected = """commit_hash,epoch,timestamp,date,year,month,day,author,file,churn_count,dir_1,dir_2,dir_3,dir_4
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",README.md,2,,,,
"""
self.assertEqual(results, expected)
def test_process__given_file_in_one_dir__then_dir_1_correct(self):
# Arrange
input = """^^c8ed1ef--1576592170--2019-12-17T09:16:10-05:00--Steve Ziegler
- - test1/README.md
"""
# Act
results = process_git_log(input)
print(results)
# Assert
expected = """commit_hash,epoch,timestamp,date,year,month,day,author,file,churn_count,dir_1,dir_2,dir_3,dir_4
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",test1/README.md,2,test1,,,
"""
self.assertEqual(results, expected)
def test_process__given_file_in_four_dir__then_dirs_correct(self):
# Arrange
input = """^^c8ed1ef--1576592170--2019-12-17T09:16:10-05:00--Steve Ziegler
- - test1/test2/test3/test4/README.md
"""
# Act
results = process_git_log(input)
print(results)
# Assert
expected = """commit_hash,epoch,timestamp,date,year,month,day,author,file,churn_count,dir_1,dir_2,dir_3,dir_4
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",test1/test2/test3/test4/README.md,2,test1,test2,test3,test4
"""
self.assertEqual(results, expected)
def test_process__given_files_with_different_dirs__then_dirs_correct(self):
# Arrange
input = """^^c8ed1ef--1576592170--2019-12-17T09:16:10-05:00--Steve Ziegler
- - README.md
- - test1/test2/README.md
- - test1/test2/test3/test4/README.md
"""
# Act
results = process_git_log(input)
print(results)
# Assert
expected = """commit_hash,epoch,timestamp,date,year,month,day,author,file,churn_count,dir_1,dir_2,dir_3,dir_4
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",README.md,2,,,,
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",test1/test2/README.md,2,test1,test2,,
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",test1/test2/test3/test4/README.md,2,test1,test2,test3,test4
"""
self.assertEqual(results, expected)
def test_process__given_timezone_is_gmt__then_results_are_correct(self):
# Arrange
input = """^^c8ed1ef--1576592170--2019-12-17T09:16:10+00:00--Steve Ziegler
1 1 README.md
"""
# Act
results = process_git_log(input)
print(results)
# Assert
expected = """commit_hash,epoch,timestamp,date,year,month,day,author,file,churn_count,dir_1,dir_2,dir_3,dir_4
c8ed1ef,1576592170,2019-12-17T09:16:10,2019-12-17,2019,12,17,"Steve Ziegler",README.md,2,,,,
"""
self.assertEqual(results, expected)
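The numeric rules these tests encode — churn is insertions plus deletions, with git's `-` placeholder for binary files counting as 1 per column, and the leading directory components padded out to the `dir_1`…`dir_4` columns — can be sketched with two hypothetical helpers (the real `process_git_log` may structure this differently):

```python
def churn_count(insertions: str, deletions: str) -> int:
    # git log --numstat prints "-" for binary files; treat each "-" as 1,
    # so "- -" yields a churn of 2, matching the tests above.
    to_int = lambda col: 1 if col == "-" else int(col)
    return to_int(insertions) + to_int(deletions)


def dir_columns(path: str, depth: int = 4) -> list:
    # Split off the directories (everything before the file name), keep at
    # most `depth` of them, and pad with empty strings for the CSV columns.
    dirs = path.split("/")[:-1][:depth]
    return dirs + [""] * (depth - len(dirs))
```

For example, `churn_count("3", "5")` returns `8` (the README.md churn in the first test) and `dir_columns("test1/test2/README.md")` returns `["test1", "test2", "", ""]`.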
if __name__ == "__main__":
unittest.main() | 34.813953 | 172 | 0.703852 | 676 | 4,491 | 4.451183 | 0.149408 | 0.073779 | 0.053174 | 0.114656 | 0.876039 | 0.867065 | 0.867065 | 0.837488 | 0.812562 | 0.812562 | 0 | 0.187599 | 0.152527 | 4,491 | 129 | 173 | 34.813953 | 0.602995 | 0.025161 | 0 | 0.666667 | 0 | 0.306667 | 0.594266 | 0.53922 | 0 | 0 | 0 | 0 | 0.08 | 1 | 0.08 | false | 0 | 0.04 | 0 | 0.133333 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d562f129338507ccca881c90c89b782617a19f7b | 163 | py | Python | or_gym/envs/supply_network/__init__.py | grossmann-group/or-gym | 8ebad3bdc84cc7787592d7bf347fdd39eb24d982 | [
"MIT"
] | 6 | 2020-12-03T21:08:39.000Z | 2021-12-26T08:40:33.000Z | or_gym/envs/supply_network/__init__.py | grossmann-group/or-gym | 8ebad3bdc84cc7787592d7bf347fdd39eb24d982 | [
"MIT"
] | 9 | 2020-09-21T19:03:54.000Z | 2022-03-07T09:05:56.000Z | or_gym/envs/supply_network/__init__.py | grossmann-group/or-gym | 8ebad3bdc84cc7787592d7bf347fdd39eb24d982 | [
"MIT"
] | 1 | 2022-03-12T13:40:50.000Z | 2022-03-12T13:40:50.000Z | from or_gym.envs.supply_network.inventory_management import NetInvMgmtBacklogEnv
from or_gym.envs.supply_network.inventory_management import NetInvMgmtLostSalesEnv | 81.5 | 82 | 0.920245 | 20 | 163 | 7.2 | 0.55 | 0.083333 | 0.125 | 0.180556 | 0.708333 | 0.708333 | 0.708333 | 0.708333 | 0.708333 | 0 | 0 | 0 | 0.042945 | 163 | 2 | 82 | 81.5 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 10 |
d56a0399e901bed022f3a02bb463d3043902dc6d | 21,954 | py | Python | modules/gapi/samples/pipeline_modeling_tool/test_pipeline_modeling_tool.py | yash112-lang/opencv | be38d4ea932bc3a0d06845ed1a2de84acc2a09de | [
"Apache-2.0"
] | 2 | 2022-03-26T11:12:18.000Z | 2022-03-30T13:07:32.000Z | modules/gapi/samples/pipeline_modeling_tool/test_pipeline_modeling_tool.py | yash112-lang/opencv | be38d4ea932bc3a0d06845ed1a2de84acc2a09de | [
"Apache-2.0"
] | null | null | null | modules/gapi/samples/pipeline_modeling_tool/test_pipeline_modeling_tool.py | yash112-lang/opencv | be38d4ea932bc3a0d06845ed1a2de84acc2a09de | [
"Apache-2.0"
] | 1 | 2020-12-13T22:09:12.000Z | 2020-12-13T22:09:12.000Z | import os
import subprocess
pipeline_modeling_tool = os.getenv('PIPELINE_MODELING_TOOL')
def get_output(exec_str):
try:
out = subprocess.check_output(exec_str,
stderr=subprocess.STDOUT,
shell=True).strip().decode()
except subprocess.CalledProcessError as exc:
out = exc.output.strip().decode()
return out
def test_error_no_config_specified():
out = get_output(pipeline_modeling_tool)
assert out.startswith('Config must be specified via --cfg option')
def test_error_no_config_exists():
cfg_file = 'not_existing_cfg.yml'
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert 'Failed to open config file: not_existing_cfg.yml' in out
def test_error_no_work_time():
cfg_file = """\"%YAML:1.0\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Config must contain field: work_time')
def test_error_work_time_not_positive():
cfg_file = """\"%YAML:1.0
work_time: -1\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('work_time must be positive')
def test_error_no_pipelines():
cfg_file = """\"%YAML:1.0
work_time: 1000\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Config must contain field: Pipelines')
def test_error_pipelines_node_not_map():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Pipelines field must be a map')
def test_error_config_not_contain_pl():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:\" """
exec_str = '{} --cfg={} --exec_list=PL2'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Pipelines must contain field: PL2')
def test_error_no_source():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('PL1 must contain field: source')
def test_error_source_no_name():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('source must contain field: name')
def test_error_source_no_latency():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('source must contain field: latency')
def test_error_source_no_output():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('source must contain field: output')
def test_error_source_output_no_dims():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('output must contain field: dims')
def test_error_source_output_no_precision():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('output must contain field: precision')
def test_error_no_nodes():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('PL1 must contain field: nodes')
def test_error_nodes_not_sequence():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('nodes in PL1 must be a sequence')
def test_error_node_no_name():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
-\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('node must contain field: name')
def test_error_node_no_type():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('node must contain field: type')
def test_error_node_unknown_type():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Unknown'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Unsupported node type: Unknown')
def test_error_node_dummy_no_time():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node0 must contain field: time')
def test_error_node_dummy_not_positive_time():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: -0.2\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node0 time must be positive')
def test_error_node_dummy_no_output():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node0 must contain field: output')
def test_error_node_infer_no_model_path():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Infer'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
error_msg = """Path to OpenVINO model must be specified in either of two formats:
1.
xml: path to *.xml
bin: path to *.bin
2.
blob: path to *.blob"""
assert out.startswith(error_msg)
def test_error_node_infer_no_input_layers():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Infer'
blob: model.blob
device: 'CPU'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node0 must contain field: input_layers')
def test_error_node_infer_input_layers_are_empty():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Infer'
blob: model.blob
device: 'CPU'
input_layers:
\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('input_layers in Node0 must be a sequence')
def test_error_node_infer_no_output_layers():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Infer'
blob: model.blob
device: 'CPU'
input_layers:
- 'layer_name'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node0 must contain field: output_layers')
def test_error_node_infer_output_layers_are_empty():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Infer'
blob: model.blob
device: 'CPU'
input_layers:
- 'layer_name'
output_layers:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('output_layers in Node0 must be a sequence')
def test_error_no_edges():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('PL1 must contain field: edges')
def test_error_edges_not_sequence():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('edges in PL1 must be a sequence')
def test_error_edges_no_from():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
-\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('edge must contain field: from')
def test_error_edges_no_to():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Node0'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('edge must contain field: to')
def test_error_edges_from_not_exists():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Node1'
to: 'Node2'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Failed to find node: Node1')
def test_error_edges_from_port_not_exists():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Node0:10'
to: 'Node2'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Failed to access node: Node0 by out port: 10')
def test_error_edges_to_not_exists():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node2'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Failed to find node: Node2')
def test_error_edges_to_port_not_exists():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0:3'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Failed to access node: Node0 by in port: 3')
def test_error_connect_to_source():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Node0'
to: 'Src'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Failed to access node: Src by in port: 0')
def test_error_double_edge():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0'
- from: 'Src'
to: 'Node0'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node: Node0 already connected by in port: 0')
def test_node_has_dangling_input():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
- name: 'Node1'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Node0'
to: 'Node1'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Node: Node0 in Pipeline: PL1 has dangling input by in port: 0')
def test_error_has_cycle_0():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node'
type: 'Infer'
blob: 'model.blob'
device: 'CPU'
input_layers:
- 'in_layer_name_0'
- 'in_layer_name_1'
output_layers:
- 'out_layer_name'
edges:
- from: 'Src'
to: 'Node:0'
- from: 'Node:0'
to: 'Node:1'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Pipeline: PL1 has cyclic dependencies')
def test_error_has_cycle_1():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Infer'
blob: 'model.blob'
device: 'CPU'
input_layers:
- 'in_layer_name_0'
- 'in_layer_name_1'
output_layers:
- 'out_layer_name'
- name: 'Node1'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0:0'
- from: 'Node0:0'
to: 'Node1:0'
- from: 'Node1'
to: 'Node0:1'\" """
exec_str = '{} --cfg={}'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('Pipeline: PL1 has cyclic dependencies')
def test_error_no_load_config_exists():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0'\" """
exec_str = '{} --cfg={} --load_config=not_existing.yml'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert 'Failed to load config: not_existing.yml' in out
def test_error_invalid_app_mode():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0'\" """
    exec_str = '{} --cfg={} --app_mode=unknown'.format(pipeline_modeling_tool, cfg_file)
    out = get_output(exec_str)
    assert out.startswith('Unsupported AppMode: unknown\n'
                          'Please chose between: realtime and benchmark')
def test_error_invalid_pl_mode():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0'\" """
    exec_str = '{} --cfg={} --pl_mode=unknown'.format(pipeline_modeling_tool, cfg_file)
    out = get_output(exec_str)
    assert out.startswith('Unsupported PLMode: unknown\n'
                          'Please chose between: streaming and regular')
def test_error_drop_frames_with_streaming():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
time: 0.2
output:
dims: [1,2,3,4]
precision: 'U8'
edges:
- from: 'Src'
to: 'Node0'\" """
exec_str = '{} --cfg={} --pl_mode=streaming --drop_frames'.format(pipeline_modeling_tool, cfg_file)
out = get_output(exec_str)
assert out.startswith('--drop_frames option is supported only for pipelines in "regular" mode')
def test_incorrect_call_every_nth():
cfg_file = """\"%YAML:1.0
work_time: 1000
Pipelines:
PL1:
source:
name: 'Src'
latency: 20
output:
dims: [1,2,3,4]
precision: 'U8'
nodes:
- name: 'Node0'
type: 'Dummy'
call_every_nth: {}\" """
error = 'Node0 call_every_nth must be greater than zero\nCurrent call_every_nth: {}'
def check(cfg_file, call_every_nth):
out = get_output('{} --cfg={}'.format(pipeline_modeling_tool, cfg_file.format(call_every_nth)))
assert out.startswith(error.format(call_every_nth))
check(cfg_file, -3)
check(cfg_file, 0)
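Most of the failures exercised above are required-field checks on nested config maps. The pattern can be sketched in a few lines; this is an illustrative Python stand-in only, since the tool itself is C++ and reads the config through `cv::FileStorage`:

```python
def require_fields(node_name, node, fields):
    # Fail fast on the first missing key, mirroring the tool's
    # "<node> must contain field: <field>" error text.
    for field in fields:
        if field not in node:
            raise ValueError(f"{node_name} must contain field: {field}")
```

For instance, `require_fields('source', {'name': 'Src', 'latency': 20}, ['name', 'latency', 'output'])` raises `ValueError('source must contain field: output')`, the same message `test_error_source_no_output` asserts on.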
| 22.288325 | 103 | 0.57042 | 2,857 | 21,954 | 4.164858 | 0.052853 | 0.053534 | 0.047147 | 0.051433 | 0.873603 | 0.841079 | 0.822254 | 0.806454 | 0.803681 | 0.795445 | 0 | 0.048307 | 0.284322 | 21,954 | 984 | 104 | 22.310976 | 0.709012 | 0 | 0 | 0.819831 | 0 | 0 | 0.60235 | 0.002369 | 0 | 0 | 0 | 0 | 0.054414 | 1 | 0.056832 | false | 0 | 0.002418 | 0 | 0.060459 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d5abb3795ed5d212a1115feb2511cb2c1ad33b55 | 1,638 | py | Python | company/InfyTQ/practice/specialChar.py | vikkastiwari/cpp-coding-questions | 020790e1a3b26c7b24991427004730b3f0785c71 | [
"MIT"
] | null | null | null | company/InfyTQ/practice/specialChar.py | vikkastiwari/cpp-coding-questions | 020790e1a3b26c7b24991427004730b3f0785c71 | [
"MIT"
] | null | null | null | company/InfyTQ/practice/specialChar.py | vikkastiwari/cpp-coding-questions | 020790e1a3b26c7b24991427004730b3f0785c71 | [
"MIT"
] | null | null | null | def special_string(str):
    special_char = "!$%&'()*+,-./:;<=>?@[\]^_`{|}~#\""
    count = 0
    for each in str:
        if each in special_char:
            count += 1
    list1 = []
    for each in str:
        if count % 2 != 0 and each.isdigit() and int(each) % 2 != 0:
            list1.append(each)
            break
        if count % 2 == 0 and each.isdigit() and int(each) % 2 == 0:
            list1.append(each)
            break
    for each in str:
        if each.isdigit() and int(each) % 2 != 0 and each != list1[0]:
            list1.append(each)
        elif each.isdigit() and int(each) % 2 == 0 and each != list1[0]:
            list1.append(each)
        else:
            pass
    return "".join(list1)
string = "@2$1374%$"
print(special_string(string))
| 25.59375 | 72 | 0.494505 | 206 | 1,638 | 3.883495 | 0.135922 | 0.03 | 0.05 | 0.09 | 0.945 | 0.945 | 0.945 | 0.945 | 0.945 | 0.945 | 0 | 0.074654 | 0.337607 | 1,638 | 63 | 73 | 26 | 0.662673 | 0 | 0 | 0.941176 | 0 | 0.078431 | 0.050061 | 0.039072 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.039216 | 0 | null | null | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
63a474a854b2ca366554b916810cd236f53d8550 | 3,068 | py | Python | tests/unit_tests/test_properties/test_transformers/test_CanonicalTransformer/test_LessThanOrEqual.py | samysweb/dnnv | 58fb95b7300914d9da28eed86c39eca473b1aaef | [
"MIT"
] | 5 | 2022-01-28T20:30:34.000Z | 2022-03-17T09:26:52.000Z | tests/unit_tests/test_properties/test_transformers/test_CanonicalTransformer/test_LessThanOrEqual.py | samysweb/dnnv | 58fb95b7300914d9da28eed86c39eca473b1aaef | [
"MIT"
] | 9 | 2022-01-27T03:50:28.000Z | 2022-02-08T18:42:17.000Z | tests/unit_tests/test_properties/test_transformers/test_CanonicalTransformer/test_LessThanOrEqual.py | samysweb/dnnv | 58fb95b7300914d9da28eed86c39eca473b1aaef | [
"MIT"
] | 2 | 2022-02-03T17:32:43.000Z | 2022-03-24T16:38:49.000Z | from dnnv.properties import *
from dnnv.properties.transformers import CanonicalTransformer
def test_LessThanOrEqual_symbols():
transformer = CanonicalTransformer()
expr = LessThanOrEqual(Symbol("a"), Symbol("b"))
new_expr = transformer.visit(expr)
assert new_expr is not expr
assert isinstance(new_expr, Or)
assert len(new_expr.expressions) == 1
assert isinstance(new_expr.expressions[0], And)
assert len(new_expr.expressions[0].expressions) == 1
assert isinstance(new_expr.expressions[0].expressions[0], LessThanOrEqual)
new_expr_le = new_expr.expressions[0].expressions[0]
assert isinstance(new_expr_le.expr1, Add)
assert len(new_expr_le.expr1.expressions) == 2
assert Multiply(Constant(1), Symbol("a")) in new_expr_le.expr1.expressions
assert Multiply(Constant(-1), Symbol("b")) in new_expr_le.expr1.expressions
assert isinstance(new_expr_le.expr2, Constant)
assert new_expr_le.expr2 is Constant(0)
def test_LessThanOrEqual_constants():
transformer = CanonicalTransformer()
expr = LessThanOrEqual(Constant(9), Constant(5))
new_expr = transformer.visit(expr)
assert new_expr is not expr
assert new_expr is Constant(False)
def test_LessThanOrEqual_mixed_symbols_constants_0():
transformer = CanonicalTransformer()
a, b = Symbol("a"), Symbol("b")
expr = LessThanOrEqual(a + b, Constant(5))
new_expr = transformer.visit(expr)
assert new_expr is not expr
assert isinstance(new_expr, Or)
assert len(new_expr.expressions) == 1
assert isinstance(new_expr.expressions[0], And)
assert len(new_expr.expressions[0].expressions) == 1
assert isinstance(new_expr.expressions[0].expressions[0], LessThanOrEqual)
new_expr_le = new_expr.expressions[0].expressions[0]
assert isinstance(new_expr_le.expr1, Add)
assert len(new_expr_le.expr1.expressions) == 2
assert Multiply(Constant(1), Symbol("a")) in new_expr_le.expr1.expressions
assert Multiply(Constant(1), Symbol("b")) in new_expr_le.expr1.expressions
assert isinstance(new_expr_le.expr2, Constant)
assert new_expr_le.expr2 is Constant(5)
def test_LessThanOrEqual_mixed_symbols_constants_1():
transformer = CanonicalTransformer()
a, b = Symbol("a"), Symbol("b")
expr = LessThanOrEqual(Constant(5), a + b)
new_expr = transformer.visit(expr)
assert new_expr is not expr
assert isinstance(new_expr, Or)
assert len(new_expr.expressions) == 1
assert isinstance(new_expr.expressions[0], And)
assert len(new_expr.expressions[0].expressions) == 1
assert isinstance(new_expr.expressions[0].expressions[0], LessThanOrEqual)
new_expr_le = new_expr.expressions[0].expressions[0]
assert isinstance(new_expr_le.expr1, Add)
assert len(new_expr_le.expr1.expressions) == 2
assert Multiply(Constant(-1), Symbol("a")) in new_expr_le.expr1.expressions
assert Multiply(Constant(-1), Symbol("b")) in new_expr_le.expr1.expressions
assert isinstance(new_expr_le.expr2, Constant)
assert new_expr_le.expr2 is Constant(-5)
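The tests above all assert the same canonical shape: a comparison becomes an Or-of-And of inequalities `sum(coeff * symbol) <= constant`, with symbols collected on the left and constants on the right, and signs negated when the constant started on the left. A minimal sketch of that normalization rule, using plain dicts rather than dnnv's actual expression classes (the function name and representation here are illustrative assumptions, not dnnv's API):

```python
def canonicalize_le(symbols, const, flip=False):
    """Normalize `sum(symbols) <= const` (or, when flip=True, the reversed
    form `const <= sum(symbols)`) into ({symbol: coefficient}, constant)."""
    if not flip:
        # a + b <= 5  ==>  1*a + 1*b <= 5
        return {sym: 1 for sym in symbols}, const
    # 5 <= a + b  ==>  -1*a + -1*b <= -5
    return {sym: -1 for sym in symbols}, -const
```

For example, `canonicalize_le(["a", "b"], 5)` yields `({"a": 1, "b": 1}, 5)` as in `test_LessThanOrEqual_mixed_symbols_constants_0`, while the flipped call yields `({"a": -1, "b": -1}, -5)` as in `test_LessThanOrEqual_mixed_symbols_constants_1`.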
"""
:Authors: - Wilker Aziz
"""
import unittest
import grasp.semiring as semiring
import numpy as np
from collections import Counter
class ProbTestCase(unittest.TestCase):
def setUp(self):
self.semiring = semiring.prob
def test_attributes(self):
self.assertFalse(self.semiring.idempotent)
self.assertFalse(self.semiring.LOG)
def test_identities(self):
self.assertEqual(self.semiring.one, 1.0)
self.assertEqual(self.semiring.zero, 0.0)
self.assertSequenceEqual(self.semiring.zeros(10), [0.0] * 10)
self.assertSequenceEqual(self.semiring.ones(10), [1.0] * 10)
def test_conversions(self):
self.assertEqual(self.semiring.as_real(0.5), 0.5)
self.assertEqual(self.semiring.from_real(0.5), 0.5)
def test_times(self):
self.assertEqual(self.semiring.times.identity, self.semiring.one)
self.assertEqual(self.semiring.times(self.semiring.one, self.semiring.one), self.semiring.one)
self.assertEqual(self.semiring.times(self.semiring.one, self.semiring.zero), self.semiring.zero)
self.assertAlmostEqual(self.semiring.times(0.2, 0.4), 0.08)
self.assertAlmostEqual(self.semiring.times.reduce([0.1, 0.5, 0.2]), 0.01)
self.assertEqual(self.semiring.times.reduce([]), self.semiring.one)
self.assertAlmostEqual(self.semiring.divide(0.5, 0.2), 2.5)
def test_plus(self):
self.assertEqual(self.semiring.plus.identity, self.semiring.zero)
self.assertEqual(self.semiring.plus(self.semiring.one, self.semiring.zero), self.semiring.one)
self.assertEqual(self.semiring.plus(self.semiring.zero, self.semiring.zero), self.semiring.zero)
self.assertEqual(self.semiring.plus(self.semiring.one, self.semiring.one), 2.0)
self.assertAlmostEqual(self.semiring.plus(0.2, 0.4), 0.6)
self.assertAlmostEqual(self.semiring.plus.reduce([0.1, 0.5, 0.2]), 0.8)
self.assertEqual(self.semiring.plus.reduce([]), self.semiring.zero)
def test_choice(self):
with self.assertRaises(ValueError):
self.semiring.plus.choice(np.array([0.1, 0.2, 0.3, 0.6]))
c = Counter(self.semiring.plus.choice(np.array([0.1, 0.6, 0.3])) for i in range(1000))
self.assertSequenceEqual([i for i, n in c.most_common()], [1, 2, 0])
class InsideTestCase(unittest.TestCase):
def setUp(self):
self.semiring = semiring.inside
def test_attributes(self):
self.assertFalse(self.semiring.idempotent)
self.assertTrue(self.semiring.LOG)
def test_identities(self):
self.assertEqual(self.semiring.one, 0.0)
self.assertEqual(self.semiring.zero, -float('inf'))
self.assertSequenceEqual(self.semiring.zeros(10), [-float('inf')] * 10)
self.assertSequenceEqual(self.semiring.ones(10), [0.0] * 10)
def test_conversions(self):
self.assertEqual(self.semiring.as_real(-0.69314718055994529), 0.5)
self.assertEqual(self.semiring.from_real(0.5), -0.69314718055994529)
self.assertEqual(self.semiring.as_real(0.69314718055994529), 2.0)
self.assertEqual(self.semiring.from_real(2.0), 0.69314718055994529)
def test_invariants(self):
self.assertEqual(self.semiring.times.identity, self.semiring.one)
self.assertEqual(self.semiring.times(self.semiring.one, self.semiring.one), self.semiring.one)
self.assertEqual(self.semiring.times(self.semiring.one, self.semiring.zero), self.semiring.zero)
self.assertEqual(self.semiring.times(self.semiring.zero, self.semiring.zero), self.semiring.zero)
self.assertEqual(self.semiring.plus.identity, self.semiring.zero)
self.assertEqual(self.semiring.plus(self.semiring.one, self.semiring.zero), self.semiring.one)
self.assertEqual(self.semiring.plus(self.semiring.zero, self.semiring.zero), self.semiring.zero)
def test_times(self):
self.assertAlmostEqual(self.semiring.times(1, 2), 3)
self.assertAlmostEqual(self.semiring.times.reduce([1, 2, 3]), 6)
self.assertEqual(self.semiring.times.reduce([]), self.semiring.one)
self.assertAlmostEqual(self.semiring.divide(2, 1), 1)
def test_plus(self):
self.assertEqual(self.semiring.plus(self.semiring.one, self.semiring.one), 0.69314718055994529)
self.assertAlmostEqual(self.semiring.plus.reduce([-0.69314718055994529, 0.0]), 0.40546510810816438)
self.assertEqual(self.semiring.plus.reduce([]), self.semiring.zero)
def test_choice(self):
with self.assertRaises(ValueError):
self.semiring.plus.choice(np.array([self.semiring.from_real(0.1),
self.semiring.from_real(0.2),
self.semiring.from_real(0.3),
self.semiring.from_real(0.6)]))
c = Counter(self.semiring.plus.choice(np.array([self.semiring.from_real(0.1),
self.semiring.from_real(0.6),
self.semiring.from_real(0.3)])) for i in range(1000))
self.assertSequenceEqual([i for i, n in c.most_common()], [1, 2, 0])
class ViterbiTestCase(unittest.TestCase):
def setUp(self):
self.semiring = semiring.viterbi
def test_attributes(self):
self.assertTrue(self.semiring.idempotent)
self.assertTrue(self.semiring.LOG)
def test_identities(self):
self.assertEqual(self.semiring.one, 0.0)
self.assertEqual(self.semiring.zero, -float('inf'))
self.assertSequenceEqual(self.semiring.zeros(10), [-float('inf')] * 10)
self.assertSequenceEqual(self.semiring.ones(10), [0.0] * 10)
def test_conversions(self):
self.assertEqual(self.semiring.as_real(-0.69314718055994529), 0.5)
self.assertEqual(self.semiring.from_real(0.5), -0.69314718055994529)
self.assertEqual(self.semiring.as_real(0.69314718055994529), 2.0)
self.assertEqual(self.semiring.from_real(2.0), 0.69314718055994529)
def test_invariants(self):
self.assertEqual(self.semiring.times.identity, self.semiring.one)
self.assertEqual(self.semiring.times(self.semiring.one, self.semiring.one), self.semiring.one)
self.assertEqual(self.semiring.times(self.semiring.one, self.semiring.zero), self.semiring.zero)
self.assertEqual(self.semiring.times(self.semiring.zero, self.semiring.zero), self.semiring.zero)
self.assertEqual(self.semiring.plus.identity, self.semiring.zero)
self.assertEqual(self.semiring.plus(self.semiring.one, self.semiring.zero), self.semiring.one)
self.assertEqual(self.semiring.plus(self.semiring.zero, self.semiring.zero), self.semiring.zero)
def test_times(self):
self.assertAlmostEqual(self.semiring.times(1, 2), 3)
self.assertAlmostEqual(self.semiring.times.reduce([1, 2, 3]), 6)
self.assertEqual(self.semiring.times.reduce([]), self.semiring.one)
self.assertAlmostEqual(self.semiring.divide(2, 1), 1)
def test_plus(self):
self.assertEqual(self.semiring.plus(self.semiring.one, self.semiring.one), self.semiring.one)
self.assertAlmostEqual(self.semiring.plus.reduce([-0.69314718055994529, 0.0]), 0.0)
self.assertEqual(self.semiring.plus.reduce([]), self.semiring.zero)
def test_choice(self):
c = Counter(self.semiring.plus.choice(np.array([self.semiring.from_real(0.1),
self.semiring.from_real(0.9),
self.semiring.from_real(0.3)])) for i in range(1000))
self.assertSequenceEqual(c.most_common(), [(1, 1000)])
if __name__ == '__main__':
unittest.main()
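The three test cases above exercise the same algebra under different carriers: `prob` adds and multiplies real weights, `inside` works in log space with log-sum-exp as addition, and `viterbi` works in log space with max as addition. A minimal standalone sketch of those operations (independent of grasp; the names here are illustrative, not grasp's API):

```python
import math

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b)): the `inside` plus."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log1p(math.exp(-abs(a - b)))

# zero is the additive identity, one the multiplicative identity
PROB = {"zero": 0.0, "one": 1.0,
        "plus": lambda a, b: a + b, "times": lambda a, b: a * b}
INSIDE = {"zero": -math.inf, "one": 0.0,
          "plus": logaddexp, "times": lambda a, b: a + b}
VITERBI = {"zero": -math.inf, "one": 0.0,
           "plus": max, "times": lambda a, b: a + b}
```

This reproduces the constants asserted in the tests: `logaddexp(log(0.5), 0.0)` is `log(1.5) ≈ 0.40546510810816438`, while the viterbi plus of the same pair is simply `0.0`.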
# coding: utf-8
# Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from __future__ import absolute_import
from oci._vendor import requests # noqa: F401
from oci._vendor import six
from oci import retry # noqa: F401
from oci.base_client import BaseClient
from oci.config import get_config_value_or_default, validate_config
from oci.signer import Signer
from oci.util import Sentinel
from .models import database_type_mapping
missing = Sentinel("Missing")
class DatabaseClient(object):
"""
The API for the Database Service. Use this API to manage resources such as databases and DB Systems. For more information, see [Overview of the Database Service](/iaas/Content/Database/Concepts/databaseoverview.htm).
"""
def __init__(self, config, **kwargs):
"""
Creates a new service client
:param dict config:
Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.
The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate_config
the dict using :py:meth:`~oci.config.validate_config`
:param str service_endpoint: (optional)
The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is
not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit
need to specify a service endpoint.
:param timeout: (optional)
The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided
as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If
a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.
:type timeout: float or tuple(float, float)
:param signer: (optional)
The signer to use when signing requests made by the service client. The default is to use a :py:class:`~oci.signer.Signer` based on the values
provided in the config parameter.
One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__
by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument
:type signer: :py:class:`~oci.signer.AbstractBaseSigner`
:param obj retry_strategy: (optional)
A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.
Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.
Any value provided at the operation level will override whatever is specified at the client level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
"""
validate_config(config, signer=kwargs.get('signer'))
if 'signer' in kwargs:
signer = kwargs['signer']
else:
signer = Signer(
tenancy=config["tenancy"],
user=config["user"],
fingerprint=config["fingerprint"],
private_key_file_location=config.get("key_file"),
pass_phrase=get_config_value_or_default(config, "pass_phrase"),
private_key_content=config.get("key_content")
)
base_client_init_kwargs = {
'regional_client': True,
'service_endpoint': kwargs.get('service_endpoint'),
'timeout': kwargs.get('timeout'),
'base_path': '/20160918',
'service_endpoint_template': 'https://database.{region}.{secondLevelDomain}',
'skip_deserialization': kwargs.get('skip_deserialization', False)
}
self.base_client = BaseClient("database", config, signer, database_type_mapping, **base_client_init_kwargs)
self.retry_strategy = kwargs.get('retry_strategy')
self._config = config
self._kwargs = kwargs
def activate_exadata_infrastructure(self, exadata_infrastructure_id, activate_exadata_infrastructure_details, **kwargs):
"""
Activates the specified Exadata infrastructure.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param ActivateExadataInfrastructureDetails activate_exadata_infrastructure_details: (required)
The activation details for the Exadata infrastructure.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExadataInfrastructure`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/actions/activate"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id",
"opc_retry_token"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"activate_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing),
"opc-retry-token": kwargs.get("opc_retry_token", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=activate_exadata_infrastructure_details,
response_type="ExadataInfrastructure")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=activate_exadata_infrastructure_details,
response_type="ExadataInfrastructure")
def change_autonomous_container_database_compartment(self, change_compartment_details, autonomous_container_database_id, **kwargs):
"""
Move the Autonomous Container Database and its dependent resources to the specified compartment.
For more information about moving Autonomous Container Databases, see
`Moving Database Resources to a Different Compartment`__.
__ https://docs.cloud.oracle.com/Content/Database/Concepts/databaseoverview.htm#moveRes
:param ChangeCompartmentDetails change_compartment_details: (required)
Request to move Autonomous Container Database to a different compartment
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousContainerDatabases/{autonomousContainerDatabaseId}/actions/changeCompartment"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_retry_token",
"opc_request_id",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"change_autonomous_container_database_compartment got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousContainerDatabaseId": autonomous_container_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"opc-request-id": kwargs.get("opc_request_id", missing),
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=change_compartment_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=change_compartment_details)
def change_autonomous_database_compartment(self, change_compartment_details, autonomous_database_id, **kwargs):
"""
Move the Autonomous Database and its dependent resources to the specified compartment.
For more information about moving Autonomous Databases, see
`Moving Database Resources to a Different Compartment`__.
__ https://docs.cloud.oracle.com/Content/Database/Concepts/databaseoverview.htm#moveRes
:param ChangeCompartmentDetails change_compartment_details: (required)
Request to move Autonomous Database to a different compartment
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/changeCompartment"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_retry_token",
"opc_request_id",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"change_autonomous_database_compartment got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"opc-request-id": kwargs.get("opc_request_id", missing),
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=change_compartment_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=change_compartment_details)
def change_autonomous_exadata_infrastructure_compartment(self, change_compartment_details, autonomous_exadata_infrastructure_id, **kwargs):
"""
Move the Autonomous Exadata Infrastructure and its dependent resources to the specified compartment.
For more information about moving Autonomous Exadata Infrastructures, see
`Moving Database Resources to a Different Compartment`__.
__ https://docs.cloud.oracle.com/Content/Database/Concepts/databaseoverview.htm#moveRes
:param ChangeCompartmentDetails change_compartment_details: (required)
Request to move Autonomous Exadata Infrastructure to a different compartment
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousExadataInfrastructures/{autonomousExadataInfrastructureId}/actions/changeCompartment"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_retry_token",
"opc_request_id",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"change_autonomous_exadata_infrastructure_compartment got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousExadataInfrastructureId": autonomous_exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"opc-request-id": kwargs.get("opc_request_id", missing),
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=change_compartment_details)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=change_compartment_details)

    def change_backup_destination_compartment(self, change_compartment_details, backup_destination_id, **kwargs):
        """
        Move the backup destination and its dependent resources to the specified compartment.
        For more information about moving backup destinations, see
        `Moving Database Resources to a Different Compartment`__.

        __ https://docs.cloud.oracle.com/Content/Database/Concepts/databaseoverview.htm#moveRes


        :param ChangeCompartmentDetails change_compartment_details: (required)
            Request to move backup destination to a different compartment

        :param str backup_destination_id: (required)
            The `OCID`__ of the backup destination.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/backupDestinations/{backupDestinationId}/actions/changeCompartment"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "change_backup_destination_compartment got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "backupDestinationId": backup_destination_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_compartment_details)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_compartment_details)

    def change_db_system_compartment(self, change_compartment_details, db_system_id, **kwargs):
        """
        Move the DB system and its dependent resources to the specified compartment.
        For more information about moving DB systems, see
        `Moving Database Resources to a Different Compartment`__.

        __ https://docs.cloud.oracle.com/Content/Database/Concepts/databaseoverview.htm#moveRes


        :param ChangeCompartmentDetails change_compartment_details: (required)
            Request to move the DB system to a different compartment

        :param str db_system_id: (required)
            The DB system `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbSystems/{dbSystemId}/actions/changeCompartment"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "change_db_system_compartment got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "dbSystemId": db_system_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_compartment_details)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_compartment_details)
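# The path-parameter guard above (drop the `missing` sentinel, then reject
# None or whitespace-only values before URL substitution) can be sketched
# independently of the SDK; `missing` below is a stand-in object and
# `clean_path_params` an illustrative helper, not SDK API:

```python
missing = object()  # stand-in sentinel for "argument not supplied"


def clean_path_params(path_params):
    """Drop sentinel entries, then reject unusable path parameter values."""
    path_params = {k: v for (k, v) in path_params.items() if v is not missing}
    for (k, v) in path_params.items():
        if v is None or (isinstance(v, str) and len(v.strip()) == 0):
            raise ValueError(
                'Parameter {} cannot be None, whitespace or empty string'.format(k))
    return path_params


clean_path_params({"dbSystemId": "ocid1.dbsystem.oc1..example"})  # accepted
try:
    clean_path_params({"dbSystemId": "   "})
except ValueError as err:
    print(err)  # Parameter dbSystemId cannot be None, whitespace or empty string
```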

    def change_exadata_infrastructure_compartment(self, change_exadata_infrastructure_compartment_details, exadata_infrastructure_id, **kwargs):
        """
        To move an Exadata infrastructure and its dependent resources to another compartment, use the
        :func:`change_exadata_infrastructure_compartment` operation.


        :param ChangeExadataInfrastructureCompartmentDetails change_exadata_infrastructure_compartment_details: (required)
            Request to move Exadata infrastructure to a different compartment

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/actions/changeCompartment"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "change_exadata_infrastructure_compartment got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "exadataInfrastructureId": exadata_infrastructure_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_exadata_infrastructure_compartment_details)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_exadata_infrastructure_compartment_details)

    def change_vm_cluster_compartment(self, change_vm_cluster_compartment_details, vm_cluster_id, **kwargs):
        """
        To move a VM cluster and its dependent resources to another compartment, use the
        :func:`change_vm_cluster_compartment` operation.


        :param ChangeVmClusterCompartmentDetails change_vm_cluster_compartment_details: (required)
            Request to move VM cluster to a different compartment

        :param str vm_cluster_id: (required)
            The VM cluster `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/vmClusters/{vmClusterId}/actions/changeCompartment"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "change_vm_cluster_compartment got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "vmClusterId": vm_cluster_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_vm_cluster_compartment_details)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=change_vm_cluster_compartment_details)
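# The retry handling shared by these operations gives a per-call
# `retry_strategy` kwarg precedence over the client-level default, and only
# stamps an idempotency token when retries are actually possible. A simplified
# stand-alone model of that precedence (the class and function names here are
# illustrative, not the SDK's own):

```python
class NoneRetryStrategy(object):
    """Marker strategy: explicitly perform no retries."""


def resolve_retry_strategy(client_default, kwargs):
    """Per-call override wins; otherwise fall back to the client default."""
    return kwargs.get('retry_strategy') or client_default


def needs_retry_token(strategy):
    """A retry token only matters when the call might actually be retried."""
    return strategy is not None and not isinstance(strategy, NoneRetryStrategy)


client_default = object()  # stand-in for a client-level strategy
per_call = NoneRetryStrategy()

assert resolve_retry_strategy(client_default, {}) is client_default
assert resolve_retry_strategy(client_default, {'retry_strategy': per_call}) is per_call
assert needs_retry_token(client_default)
assert not needs_retry_token(per_call)
```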

    def complete_external_backup_job(self, backup_id, complete_external_backup_job_details, **kwargs):
        """
        Changes the status of the standalone backup resource to `ACTIVE` after the backup is created from the on-premises database and placed in Oracle Cloud Infrastructure Object Storage.

        **Note:** This API is used by an Oracle Cloud Infrastructure Python script that is packaged with the Oracle Cloud Infrastructure CLI. Oracle recommends that you use the script instead of using the API directly. See `Migrating an On-Premises Database to Oracle Cloud Infrastructure by Creating a Backup in the Cloud`__ for more information.

        __ https://docs.cloud.oracle.com/Content/Database/Tasks/mig-onprembackup.htm


        :param str backup_id: (required)
            The backup `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param CompleteExternalBackupJobDetails complete_external_backup_job_details: (required)
            Updates the status of the backup resource.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExternalBackupJob`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/externalBackupJobs/{backupId}/actions/complete"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "complete_external_backup_job got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "backupId": backup_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing),
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=complete_external_backup_job_details,
                response_type="ExternalBackupJob")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=complete_external_backup_job_details,
                response_type="ExternalBackupJob")

    def create_autonomous_container_database(self, create_autonomous_container_database_details, **kwargs):
        """
        Create a new Autonomous Container Database in the specified Autonomous Exadata Infrastructure.


        :param CreateAutonomousContainerDatabaseDetails create_autonomous_container_database_details: (required)
            Request to create an Autonomous Container Database in a specified Autonomous Exadata Infrastructure.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousContainerDatabase`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousContainerDatabases"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_autonomous_container_database got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_container_database_details,
                response_type="AutonomousContainerDatabase")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_container_database_details,
                response_type="AutonomousContainerDatabase")

    def create_autonomous_data_warehouse(self, create_autonomous_data_warehouse_details, **kwargs):
        """
        **Deprecated.** To create a new Autonomous Data Warehouse, use the :func:`create_autonomous_database` operation and specify `DW` as the workload type.


        :param CreateAutonomousDataWarehouseDetails create_autonomous_data_warehouse_details: (required)
            Request to create a new Autonomous Data Warehouse.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouse`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouses"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_data_warehouse_details,
                response_type="AutonomousDataWarehouse")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_data_warehouse_details,
                response_type="AutonomousDataWarehouse")

    def create_autonomous_data_warehouse_backup(self, create_autonomous_data_warehouse_backup_details, **kwargs):
        """
        **Deprecated.** To create a new Autonomous Data Warehouse backup for a specified database, use the :func:`create_autonomous_database_backup` operation.


        :param CreateAutonomousDataWarehouseBackupDetails create_autonomous_data_warehouse_backup_details: (required)
            Request to create a new Autonomous Data Warehouse backup.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouseBackup`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouseBackups"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_autonomous_data_warehouse_backup got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_data_warehouse_backup_details,
                response_type="AutonomousDataWarehouseBackup")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_data_warehouse_backup_details,
                response_type="AutonomousDataWarehouseBackup")

    def create_autonomous_database(self, create_autonomous_database_details, **kwargs):
        """
        Creates a new Autonomous Database.


        :param CreateAutonomousDatabaseBase create_autonomous_database_details: (required)
            Request to create a new Autonomous Database.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_database_details,
                response_type="AutonomousDatabase")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_database_details,
                response_type="AutonomousDatabase")
    def create_autonomous_database_backup(self, create_autonomous_database_backup_details, **kwargs):
        """
        Creates a new Autonomous Database backup for the specified database based on the provided request parameters.

        :param CreateAutonomousDatabaseBackupDetails create_autonomous_database_backup_details: (required)
            Request to create a new Autonomous Database backup.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabaseBackup`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabaseBackups"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_autonomous_database_backup got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_database_backup_details,
                response_type="AutonomousDatabaseBackup")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_autonomous_database_backup_details,
                response_type="AutonomousDatabaseBackup")
    def create_backup(self, create_backup_details, **kwargs):
        """
        Creates a new backup in the specified database based on the request parameters you provide. If you previously used RMAN or dbcli to configure backups and then you switch to using the Console or the API for backups, a new backup configuration is created and associated with your database. This means that you can no longer rely on your previously configured unmanaged backups to work.

        :param CreateBackupDetails create_backup_details: (required)
            Request to create a new database backup.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Backup`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/backups"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_backup got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_backup_details,
                response_type="Backup")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_backup_details,
                response_type="Backup")
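# The ``opc-retry-token`` header documented above is an idempotency token: when
# the caller does not supply one, the SDK generates a token before the first
# attempt so that automatic retries replay the *same* token and the service can
# deduplicate the creation. A hypothetical analog of
# ``BaseClient.add_opc_retry_token_if_needed`` (the real implementation lives
# in the SDK's base client; this sketch only shows the idea):

```python
import uuid


def add_opc_retry_token_if_needed(header_params):
    """Hypothetical analog of BaseClient.add_opc_retry_token_if_needed.

    Injects a generated idempotency token only when the caller did not
    supply `opc-retry-token` themselves; a caller-provided token is
    never overwritten.
    """
    if 'opc-retry-token' not in header_params:
        header_params['opc-retry-token'] = str(uuid.uuid4())


auto = {}
add_opc_retry_token_if_needed(auto)          # token generated for the caller
caller = {'opc-retry-token': 'my-token'}
add_opc_retry_token_if_needed(caller)        # caller's token preserved
```

# Note the guard in each operation above: the token is only injected when a
# real (non-``NoneRetryStrategy``) retry strategy will actually retry the call.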
    def create_backup_destination(self, create_backup_destination_details, **kwargs):
        """
        Creates a backup destination.

        :param CreateBackupDestinationDetails create_backup_destination_details: (required)
            Request to create a new backup destination.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.BackupDestination`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/backupDestinations"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_backup_destination got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_backup_destination_details,
                response_type="BackupDestination")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_backup_destination_details,
                response_type="BackupDestination")
    def create_console_connection(self, create_console_connection_details, db_node_id, **kwargs):
        """
        Creates a new console connection to the specified dbNode.
        After the console connection has been created and is available,
        you connect to the console using SSH.

        :param CreateConsoleConnectionDetails create_console_connection_details: (required)
            Request object for creating a console connection.

        :param str db_node_id: (required)
            The database node `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ConsoleConnection`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbNodes/{dbNodeId}/consoleConnections"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_console_connection got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "dbNodeId": db_node_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=create_console_connection_details,
                response_type="ConsoleConnection")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=create_console_connection_details,
                response_type="ConsoleConnection")
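# Operations with templated resource paths (here
# "/dbNodes/{dbNodeId}/consoleConnections") validate every path parameter
# before interpolation, since an empty segment would silently change the URL.
# A standalone sketch of that check (``validate_path_params`` is an
# illustrative name, not an SDK function):

```python
def validate_path_params(path_params):
    """Reject unusable path parameters (sketch of the per-operation check).

    Values are interpolated into the resource path, so None, empty, and
    whitespace-only strings must be rejected up front with a clear error.
    """
    for k, v in path_params.items():
        if v is None or (isinstance(v, str) and len(v.strip()) == 0):
            raise ValueError(
                'Parameter {} cannot be None, whitespace or empty string'.format(k))


validate_path_params({"dbNodeId": "ocid1.dbnode.oc1..examplevalue"})  # accepted

try:
    validate_path_params({"dbNodeId": "   "})  # whitespace-only: rejected
    raised = False
except ValueError:
    raised = True
```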
    def create_data_guard_association(self, database_id, create_data_guard_association_details, **kwargs):
        """
        Creates a new Data Guard association. A Data Guard association represents the replication relationship between the
        specified database and a peer database. For more information, see `Using Oracle Data Guard`__.

        All Oracle Cloud Infrastructure resources, including Data Guard associations, get an Oracle-assigned, unique ID
        called an Oracle Cloud Identifier (OCID). When you create a resource, you can find its OCID in the response.
        You can also retrieve a resource's OCID by using a List API operation on that resource type, or by viewing the
        resource in the Console. For more information, see
        `Resource Identifiers`__.

        __ https://docs.cloud.oracle.com/Content/Database/Tasks/usingdataguard.htm
        __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param CreateDataGuardAssociationDetails create_data_guard_association_details: (required)
            A request to create a Data Guard association.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DataGuardAssociation`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/databases/{databaseId}/dataGuardAssociations"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_data_guard_association got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "databaseId": database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=create_data_guard_association_details,
                response_type="DataGuardAssociation")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=create_data_guard_association_details,
                response_type="DataGuardAssociation")
    def create_database(self, create_new_database_details, **kwargs):
        """
        Creates a new database in the specified Database Home. If the database version is provided, it must match the version of the Database Home. Applies only to Exadata DB systems.

        :param CreateDatabaseBase create_new_database_details: (required)
            Request to create a new database.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Database`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/databases"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_database got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_new_database_details,
                response_type="Database")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_new_database_details,
                response_type="Database")
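# The "Don't accept unknown kwargs" guard at the top of every operation above
# raises on any keyword outside the documented set instead of silently
# dropping it, which surfaces typos like ``opc_retry_tokn`` immediately. A
# standalone sketch of that guard (``reject_unknown_kwargs`` is an
# illustrative name, not an SDK function):

```python
def reject_unknown_kwargs(operation_name, expected_kwargs, kwargs):
    """Sketch of the kwargs guard each operation applies.

    Any keyword argument outside the documented set raises a ValueError
    naming the operation and the offending keys.
    """
    extra_kwargs = [_key for _key in kwargs if _key not in expected_kwargs]
    if extra_kwargs:
        raise ValueError(
            "{} got unknown kwargs: {!r}".format(operation_name, extra_kwargs))


expected = ["retry_strategy", "opc_retry_token", "opc_request_id"]
reject_unknown_kwargs("create_database", expected,
                      {"opc_retry_token": "t"})        # accepted

try:
    reject_unknown_kwargs("create_database", expected,
                          {"opc_retry_tokn": "t"})     # typo: rejected
    raised = False
except ValueError:
    raised = True
```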
    def create_db_home(self, create_db_home_with_db_system_id_details, **kwargs):
        """
        Creates a new Database Home in the specified DB system based on the request parameters you provide. Applies only to bare metal and Exadata DB systems.

        :param CreateDbHomeBase create_db_home_with_db_system_id_details: (required)
            Request to create a new Database Home.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbHome`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbHomes"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_db_home got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_db_home_with_db_system_id_details,
                response_type="DbHome")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_db_home_with_db_system_id_details,
                response_type="DbHome")
    def create_exadata_infrastructure(self, create_exadata_infrastructure_details, **kwargs):
        """
        Creates Exadata infrastructure.

        :param CreateExadataInfrastructureDetails create_exadata_infrastructure_details: (required)
            Request to create Exadata infrastructure.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExadataInfrastructure`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/exadataInfrastructures"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_exadata_infrastructure_details,
                response_type="ExadataInfrastructure")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_exadata_infrastructure_details,
                response_type="ExadataInfrastructure")
    def create_external_backup_job(self, create_external_backup_job_details, **kwargs):
        """
        Creates a new backup resource and returns the information the caller needs to back up an on-premises Oracle Database to Oracle Cloud Infrastructure.

        **Note:** This API is used by an Oracle Cloud Infrastructure Python script that is packaged with the Oracle Cloud Infrastructure CLI. Oracle recommends that you use the script instead of using the API directly. See `Migrating an On-Premises Database to Oracle Cloud Infrastructure by Creating a Backup in the Cloud`__ for more information.

        __ https://docs.cloud.oracle.com/Content/Database/Tasks/mig-onprembackup.htm

        :param CreateExternalBackupJobDetails create_external_backup_job_details: (required)
            Request to create a cloud backup resource for a database running outside the cloud.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExternalBackupJob`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/externalBackupJobs"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_external_backup_job got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_external_backup_job_details,
                response_type="ExternalBackupJob")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_external_backup_job_details,
                response_type="ExternalBackupJob")
    def create_vm_cluster(self, create_vm_cluster_details, **kwargs):
        """
        Creates a VM cluster.

        :param CreateVmClusterDetails create_vm_cluster_details: (required)
            Request to create a VM cluster.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmCluster`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/vmClusters"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "create_vm_cluster got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_vm_cluster_details,
                response_type="VmCluster")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=create_vm_cluster_details,
                response_type="VmCluster")
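# Optional headers in every operation above default to a module-level
# ``missing`` sentinel rather than ``None``, and the dict comprehension then
# strips both: headers the caller never supplied are omitted from the request
# entirely instead of being sent with a null value. A standalone sketch of the
# pattern (the local ``missing`` object here is a stand-in for the SDK's
# internal sentinel):

```python
missing = object()  # stand-in for the SDK-internal `missing` sentinel


def build_header_params(**kwargs):
    """Build request headers the way the operations above do (sketch).

    Optional headers default to the `missing` sentinel and are then
    filtered out along with explicit None values.
    """
    header_params = {
        "accept": "application/json",
        "content-type": "application/json",
        "opc-retry-token": kwargs.get("opc_retry_token", missing),
        "opc-request-id": kwargs.get("opc_request_id", missing)
    }
    return {k: v for (k, v) in header_params.items()
            if v is not missing and v is not None}


# Only the headers the caller actually supplied survive the filter.
headers = build_header_params(opc_request_id="req-1")
```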
def create_vm_cluster_network(self, exadata_infrastructure_id, vm_cluster_network_details, **kwargs):
"""
Creates the VM cluster network.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param VmClusterNetworkDetails vm_cluster_network_details: (required)
Request to create the VM cluster network.
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmClusterNetwork`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_retry_token",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"create_vm_cluster_network got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=vm_cluster_network_details,
response_type="VmClusterNetwork")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=vm_cluster_network_details,
response_type="VmClusterNetwork")
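# Every operation in this client rejects keyword arguments it does not
# recognize. The standalone sketch below mirrors that validation pattern;
# `validate_kwargs` is a hypothetical helper, not part of the SDK.

```python
def validate_kwargs(expected, **kwargs):
    """Reject any keyword argument not listed in `expected`, as each client method does."""
    extra_kwargs = [_key for _key in kwargs if _key not in expected]
    if extra_kwargs:
        raise ValueError("got unknown kwargs: {!r}".format(extra_kwargs))
```

# Passing, say, `retry_token=...` instead of `opc_retry_token=...` fails fast
# with a ValueError rather than being silently ignored.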
def db_node_action(self, db_node_id, action, **kwargs):
"""
Performs one of the following power actions on the specified DB node:

- start - power on
- stop - power off
- softreset - ACPI shutdown and power on
- reset - power off and power on

**Note:** Stopping a node affects billing differently, depending on the type of DB system:

*Bare metal and Exadata DB systems* - The _stop_ state has no effect on the resources you consume.
Billing continues for DB nodes that you stop, and related resources continue
to apply against any relevant quotas. You must terminate the DB system
(:func:`terminate_db_system`)
to remove its resources from billing and quotas.

*Virtual machine DB systems* - Stopping a node stops billing for all OCPUs associated with that node, and billing resumes when you restart the node.
:param str db_node_id: (required)
The database node `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str action: (required)
The action to perform on the DB Node.
Allowed values are: "STOP", "START", "SOFTRESET", "RESET"
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbNode`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbNodes/{dbNodeId}"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_retry_token",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"db_node_action got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbNodeId": db_node_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"action": action
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing),
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="DbNode")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="DbNode")
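# The client passes `action` through as a query parameter and the service
# validates it server-side. A hypothetical caller-side guard (not part of the
# SDK) for the documented values could look like this:

```python
ALLOWED_DB_NODE_ACTIONS = {"STOP", "START", "SOFTRESET", "RESET"}

def check_db_node_action(action):
    # Normalize and verify against the values documented for db_node_action.
    normalized = action.upper()
    if normalized not in ALLOWED_DB_NODE_ACTIONS:
        raise ValueError("invalid DB node action: {!r}".format(action))
    return normalized
```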
def delete_autonomous_data_warehouse(self, autonomous_data_warehouse_id, **kwargs):
"""
**Deprecated.** To delete an Autonomous Data Warehouse, use the :func:`delete_autonomous_database` operation.
:param str autonomous_data_warehouse_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDataWarehouseId": autonomous_data_warehouse_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def delete_autonomous_database(self, autonomous_database_id, **kwargs):
"""
Deletes the specified Autonomous Database.
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
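# Each method resolves its retry strategy the same way: a truthy
# `retry_strategy` kwarg overrides the client-level default. A minimal sketch
# of that precedence (`effective_retry_strategy` is a hypothetical helper,
# not part of the SDK):

```python
def effective_retry_strategy(client_level, per_call=None):
    # Mirrors the per-method logic: kwargs.get('retry_strategy'), when truthy,
    # wins over self.retry_strategy; otherwise the client-level value is used.
    if per_call:
        return per_call
    return client_level
```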
def delete_backup(self, backup_id, **kwargs):
"""
Deletes a full backup. You cannot delete automatic backups using this API.
:param str backup_id: (required)
The backup `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/backups/{backupId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_backup got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"backupId": backup_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def delete_backup_destination(self, backup_destination_id, **kwargs):
"""
Deletes a backup destination.
:param str backup_destination_id: (required)
The `OCID`__ of the backup destination.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/backupDestinations/{backupDestinationId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_backup_destination got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"backupDestinationId": backup_destination_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def delete_console_connection(self, db_node_id, console_connection_id, **kwargs):
"""
Deletes the specified Db node console connection.
:param str db_node_id: (required)
The database node `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str console_connection_id: (required)
The OCID of the console connection.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbNodes/{dbNodeId}/consoleConnections/{consoleConnectionId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_console_connection got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbNodeId": db_node_id,
"consoleConnectionId": console_connection_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def delete_database(self, database_id, **kwargs):
"""
Deletes the database. Applies only to Exadata DB systems.

The data in this database is local to the DB system and will be lost when the database is deleted. Oracle recommends that you back up any data in the DB system prior to deleting it. You can use the `performFinalBackup` parameter to have the Exadata DB system database backed up before it is deleted.
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param bool perform_final_backup: (optional)
Whether to perform a final backup of the database. Defaults to false.
If you previously used RMAN or dbcli to configure backups and then you switch to using the Console or the API for backups, a new backup configuration is created and associated with your database. This means that you can no longer rely on your previously configured unmanaged backups to work.
This parameter is used in multiple APIs. Refer to the API description for details on how the operation uses it.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/databases/{databaseId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"perform_final_backup",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"databaseId": database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"performFinalBackup": kwargs.get("perform_final_backup", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params)
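# Optional headers and query parameters use a `missing` sentinel so that only
# values the caller actually supplied are sent. A self-contained sketch of
# that filtering, where MISSING stands in for the SDK's `missing` object:

```python
MISSING = object()  # stand-in for the SDK's `missing` sentinel

def prune_params(params):
    # Keep only entries that were supplied and are not None, mirroring how the
    # header_params and query_params dicts are filtered before each call.
    return {k: v for k, v in params.items() if v is not MISSING and v is not None}
```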
def delete_db_home(self, db_home_id, **kwargs):
"""
Deletes a Database Home. Applies only to bare metal and Exadata DB systems.

The Database Home and its database data are local to the DB system, and on a bare metal DB system, both are lost when you delete the Database Home. Oracle recommends that you back up any data on the DB system before you delete it. You can use the `performFinalBackup` parameter with this operation on bare metal DB systems.

On an Exadata DB system, the delete request is rejected if the Database Home is not empty. You must terminate all databases in the Database Home before you delete the home. The `performFinalBackup` parameter is not used with this operation on Exadata DB systems.
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param bool perform_final_backup: (optional)
Whether to perform a final backup of the database. Defaults to false.
If you previously used RMAN or dbcli to configure backups and then you switch to using the Console or the API for backups, a new backup configuration is created and associated with your database. This means that you can no longer rely on your previously configured unmanaged backups to work.
This parameter is used in multiple APIs. Refer to the API description for details on how the operation uses it.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbHomes/{dbHomeId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"perform_final_backup"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_db_home got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbHomeId": db_home_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"performFinalBackup": kwargs.get("perform_final_backup", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params)
def delete_exadata_infrastructure(self, exadata_infrastructure_id, **kwargs):
"""
Deletes the Exadata infrastructure.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"delete_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)

    def delete_vm_cluster(self, vm_cluster_id, **kwargs):
        """
        Deletes the specified VM cluster.

        :param str vm_cluster_id: (required)
            The VM cluster `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/vmClusters/{vmClusterId}"
        method = "DELETE"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "delete_vm_cluster got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "vmClusterId": vm_cluster_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)
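Each operation validates its path parameters before making the HTTP call: a value may not be `None`, empty, or whitespace-only, since it would produce a malformed resource path. A standalone sketch of that check (the helper name is ours; the SDK inlines this loop and uses `six.iteritems` for Python 2/3 compatibility):

```python
def validate_path_params(path_params):
    """Reject path parameter values that would corrupt the resource path.

    Mirrors the inline check each SDK operation performs; this helper
    function is illustrative and not part of the generated client.
    """
    for k, v in path_params.items():
        if v is None or (isinstance(v, str) and len(v.strip()) == 0):
            raise ValueError(
                'Parameter {} cannot be None, whitespace or empty string'.format(k))


# A well-formed OCID-shaped value passes; the value itself is made up.
validate_path_params({"vmClusterId": "ocid1.vmcluster.oc1..exampleuniqueid"})

# A whitespace-only value is rejected before any network call is made.
try:
    validate_path_params({"vmClusterId": "   "})
except ValueError as exc:
    print(exc)
```

Failing fast here means a bad identifier surfaces as a clear `ValueError` in the caller's stack rather than a confusing 404 from the service.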

    def delete_vm_cluster_network(self, exadata_infrastructure_id, vm_cluster_network_id, **kwargs):
        """
        Deletes the specified VM cluster network.

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str vm_cluster_network_id: (required)
            The VM cluster network `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}"
        method = "DELETE"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "delete_vm_cluster_network got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "exadataInfrastructureId": exadata_infrastructure_id,
            "vmClusterNetworkId": vm_cluster_network_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)

    def deregister_autonomous_database_data_safe(self, autonomous_database_id, **kwargs):
        """
        Asynchronously deregisters this Autonomous Database with Data Safe.

        :param str autonomous_database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/deregisterDataSafe"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "deregister_autonomous_database_data_safe got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseId": autonomous_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)

    def download_exadata_infrastructure_config_file(self, exadata_infrastructure_id, **kwargs):
        """
        Downloads the configuration file for the specified Exadata infrastructure.

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type stream
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/actions/downloadConfigFile"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "download_exadata_infrastructure_config_file got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "exadataInfrastructureId": exadata_infrastructure_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/octet-stream",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="stream")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="stream")
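The download operations return a `Response` whose `data` is a stream rather than a parsed model, so the caller is responsible for draining it, typically by copying it to a local file. A sketch of that consumption pattern, using an in-memory `BytesIO` as a hypothetical stand-in for the stream payload (the real one comes from the `data` attribute of the `Response` returned by `download_exadata_infrastructure_config_file`; the file name and contents here are made up):

```python
import io
import shutil

# Stand-in for the streamed config-file payload of a real response.
fake_stream = io.BytesIO(b"# exadata infrastructure config\nkey=value\n")

# Copy the stream to disk in chunks without loading it all into memory.
with open("exadata_config_sketch.txt", "wb") as f:
    shutil.copyfileobj(fake_stream, f)
```

Note the `accept` header for these operations is `application/octet-stream`, matching the binary payload.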

    def download_vm_cluster_network_config_file(self, exadata_infrastructure_id, vm_cluster_network_id, **kwargs):
        """
        Downloads the configuration file for the specified VM cluster network.

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str vm_cluster_network_id: (required)
            The VM cluster network `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type stream
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}/actions/downloadConfigFile"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "download_vm_cluster_network_config_file got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "exadataInfrastructureId": exadata_infrastructure_id,
            "vmClusterNetworkId": vm_cluster_network_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/octet-stream",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="stream")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="stream")

    def failover_data_guard_association(self, database_id, data_guard_association_id, failover_data_guard_association_details, **kwargs):
        """
        Performs a failover to transition the standby database identified by the `databaseId` parameter into the
        specified Data Guard association's primary role after the existing primary database fails or becomes unreachable.

        A failover might result in data loss depending on the protection mode in effect at the time of the primary
        database failure.

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str data_guard_association_id: (required)
            The Data Guard association's `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param FailoverDataGuardAssociationDetails failover_data_guard_association_details: (required)
            A request to perform a failover, transitioning a standby database into a primary database.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DataGuardAssociation`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}/actions/failover"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "failover_data_guard_association got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "databaseId": database_id,
            "dataGuardAssociationId": data_guard_association_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=failover_data_guard_association_details,
                response_type="DataGuardAssociation")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=failover_data_guard_association_details,
                response_type="DataGuardAssociation")

    def generate_autonomous_data_warehouse_wallet(self, autonomous_data_warehouse_id, generate_autonomous_data_warehouse_wallet_details, **kwargs):
        """
        **Deprecated.** To create and download a wallet for an Autonomous Data Warehouse, use the :func:`generate_autonomous_database_wallet` operation.

        :param str autonomous_data_warehouse_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param GenerateAutonomousDataWarehouseWalletDetails generate_autonomous_data_warehouse_wallet_details: (required)
            Request to create a new Autonomous Data Warehouse wallet.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type stream
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}/actions/generateWallet"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "generate_autonomous_data_warehouse_wallet got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDataWarehouseId": autonomous_data_warehouse_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/octet-stream",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=generate_autonomous_data_warehouse_wallet_details,
                response_type="stream")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=generate_autonomous_data_warehouse_wallet_details,
                response_type="stream")

    def generate_autonomous_database_wallet(self, autonomous_database_id, generate_autonomous_database_wallet_details, **kwargs):
        """
        Creates and downloads a wallet for the specified Autonomous Database.

        :param str autonomous_database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param GenerateAutonomousDatabaseWalletDetails generate_autonomous_database_wallet_details: (required)
            Request to create a new Autonomous Database wallet.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type stream
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/generateWallet"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "generate_autonomous_database_wallet got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseId": autonomous_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/octet-stream",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=generate_autonomous_database_wallet_details,
                response_type="stream")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=generate_autonomous_database_wallet_details,
                response_type="stream")
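The POST operations that accept `opc_retry_token` pair it with the retry machinery: when a retry strategy is active (and is not `NoneRetryStrategy`), the base client injects a random token if the caller did not supply one, so a retried POST is recognized by the service as the same request rather than executed twice. A self-contained sketch of that injection logic (the standalone helper and the token format below are illustrative approximations, not the SDK's actual implementation):

```python
import random
import string


class NoneRetryStrategy:
    """Stand-in for oci.retry.NoneRetryStrategy (disables retries)."""


def add_opc_retry_token_if_needed(header_params, token_length=32):
    # Inject a random idempotency token only when the caller has not
    # already supplied one; a caller-provided token is never overwritten.
    if 'opc-retry-token' not in header_params:
        header_params['opc-retry-token'] = ''.join(
            random.choice(string.ascii_uppercase + string.digits)
            for _ in range(token_length))


headers = {}
retry_strategy = object()  # any strategy other than NoneRetryStrategy
if retry_strategy and not isinstance(retry_strategy, NoneRetryStrategy):
    add_opc_retry_token_if_needed(headers)
```

Without this, a POST retried after a timeout could, for example, generate two wallets or create the same resource twice.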

    def generate_recommended_vm_cluster_network(self, exadata_infrastructure_id, generate_recommended_network_details, **kwargs):
        """
        Generates a recommended VM cluster network configuration.

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param GenerateRecommendedNetworkDetails generate_recommended_network_details: (required)
            Request to generate a recommended VM cluster network configuration.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmClusterNetworkDetails`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/actions/generateRecommendedNetwork"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "generate_recommended_vm_cluster_network got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "exadataInfrastructureId": exadata_infrastructure_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing),
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=generate_recommended_network_details,
                response_type="VmClusterNetworkDetails")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=generate_recommended_network_details,
                response_type="VmClusterNetworkDetails")

    def get_autonomous_container_database(self, autonomous_container_database_id, **kwargs):
        """
        Gets information about the specified Autonomous Container Database.

        :param str autonomous_container_database_id: (required)
            The Autonomous Container Database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousContainerDatabase`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousContainerDatabases/{autonomousContainerDatabaseId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_container_database got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousContainerDatabaseId": autonomous_container_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousContainerDatabase")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousContainerDatabase")

    def get_autonomous_data_warehouse(self, autonomous_data_warehouse_id, **kwargs):
        """
        **Deprecated.** To get the details of an Autonomous Data Warehouse, use the :func:`get_autonomous_database` operation.

        :param str autonomous_data_warehouse_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouse`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDataWarehouseId": autonomous_data_warehouse_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDataWarehouse")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDataWarehouse")

    def get_autonomous_data_warehouse_backup(self, autonomous_data_warehouse_backup_id, **kwargs):
        """
        **Deprecated.** To get information about a specified Autonomous Data Warehouse backup, use the :func:`get_autonomous_database_backup` operation.

        :param str autonomous_data_warehouse_backup_id: (required)
            The `OCID`__ of the Autonomous Data Warehouse backup.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouseBackup`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouseBackups/{autonomousDataWarehouseBackupId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_data_warehouse_backup got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDataWarehouseBackupId": autonomous_data_warehouse_backup_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDataWarehouseBackup")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDataWarehouseBackup")

    def get_autonomous_database(self, autonomous_database_id, **kwargs):
        """
        Gets the details of the specified Autonomous Database.

        :param str autonomous_database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/{autonomousDatabaseId}"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseId": autonomous_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDatabase")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDatabase")
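    # Illustrative usage sketch (not part of the generated client; the config
    # profile and OCID below are placeholders you would replace):
    #
    #     import oci
    #
    #     config = oci.config.from_file()  # reads ~/.oci/config by default
    #     database_client = oci.database.DatabaseClient(config)
    #     response = database_client.get_autonomous_database(
    #         autonomous_database_id="ocid1.autonomousdatabase.oc1..exampleuniqueID")
    #     autonomous_database = response.data  # oci.database.models.AutonomousDatabase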

    def get_autonomous_database_backup(self, autonomous_database_backup_id, **kwargs):
        """
        Gets information about the specified Autonomous Database backup.

        :param str autonomous_database_backup_id: (required)
            The `OCID`__ of the Autonomous Database backup.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabaseBackup`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabaseBackups/{autonomousDatabaseBackupId}"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_database_backup got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseBackupId": autonomous_database_backup_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDatabaseBackup")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDatabaseBackup")

    def get_autonomous_database_regional_wallet(self, **kwargs):
        """
        Gets the Autonomous Database regional wallet details.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabaseWallet`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/wallet"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_database_regional_wallet got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                response_type="AutonomousDatabaseWallet")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                response_type="AutonomousDatabaseWallet")

    def get_autonomous_database_wallet(self, autonomous_database_id, **kwargs):
        """
        Gets the wallet details for the specified Autonomous Database.

        :param str autonomous_database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabaseWallet`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/{autonomousDatabaseId}/wallet"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_database_wallet got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseId": autonomous_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDatabaseWallet")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousDatabaseWallet")
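    # Illustrative usage sketch showing the per-call retry override described in
    # the docstring (assumes an already-configured database_client; the OCID is
    # a placeholder):
    #
    #     from oci.retry import DEFAULT_RETRY_STRATEGY
    #
    #     response = database_client.get_autonomous_database_wallet(
    #         autonomous_database_id="ocid1.autonomousdatabase.oc1..exampleuniqueID",
    #         retry_strategy=DEFAULT_RETRY_STRATEGY)
    #     wallet = response.data  # oci.database.models.AutonomousDatabaseWallet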

    def get_autonomous_exadata_infrastructure(self, autonomous_exadata_infrastructure_id, **kwargs):
        """
        Gets information about the specified Autonomous Exadata Infrastructure.

        :param str autonomous_exadata_infrastructure_id: (required)
            The Autonomous Exadata Infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousExadataInfrastructure`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousExadataInfrastructures/{autonomousExadataInfrastructureId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_autonomous_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousExadataInfrastructureId": autonomous_exadata_infrastructure_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousExadataInfrastructure")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="AutonomousExadataInfrastructure")

    def get_backup(self, backup_id, **kwargs):
        """
        Gets information about the specified backup.

        :param str backup_id: (required)
            The backup `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Backup`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/backups/{backupId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_backup got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "backupId": backup_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="Backup")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="Backup")

    def get_backup_destination(self, backup_destination_id, **kwargs):
        """
        Gets information about the specified backup destination.

        :param str backup_destination_id: (required)
            The `OCID`__ of the backup destination.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.BackupDestination`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/backupDestinations/{backupDestinationId}"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_backup_destination got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "backupDestinationId": backup_destination_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="BackupDestination")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="BackupDestination")

    def get_console_connection(self, db_node_id, console_connection_id, **kwargs):
        """
        Gets the specified database node console connection's information.

        :param str db_node_id: (required)
            The database node `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str console_connection_id: (required)
            The OCID of the console connection.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ConsoleConnection`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbNodes/{dbNodeId}/consoleConnections/{consoleConnectionId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_console_connection got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "dbNodeId": db_node_id,
            "consoleConnectionId": console_connection_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="ConsoleConnection")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="ConsoleConnection")

    def get_data_guard_association(self, database_id, data_guard_association_id, **kwargs):
        """
        Gets the specified Data Guard association's configuration information.

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str data_guard_association_id: (required)
            The Data Guard association's `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DataGuardAssociation`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_data_guard_association got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "databaseId": database_id,
            "dataGuardAssociationId": data_guard_association_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="DataGuardAssociation")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="DataGuardAssociation")

    def get_database(self, database_id, **kwargs):
        """
        Gets information about a specific database.

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Database`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/databases/{databaseId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_database got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "databaseId": database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="Database")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="Database")
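    # Illustrative usage sketch (assumes an already-configured database_client;
    # the OCID is a placeholder). The Response wrapper exposes the HTTP status
    # and request id alongside the deserialized model:
    #
    #     response = database_client.get_database(
    #         database_id="ocid1.database.oc1..exampleuniqueID")
    #     print(response.status)        # HTTP status code, e.g. 200
    #     print(response.request_id)    # opc-request-id echoed by the service
    #     print(response.data.db_name)  # field on oci.database.models.Database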

    def get_db_home(self, db_home_id, **kwargs):
        """
        Gets information about the specified Database Home.

        :param str db_home_id: (required)
            The Database Home `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbHome`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbHomes/{dbHomeId}"
        method = "GET"

        expected_kwargs = ["retry_strategy"]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "get_db_home got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "dbHomeId": db_home_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="DbHome")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                response_type="DbHome")
def get_db_home_patch(self, db_home_id, patch_id, **kwargs):
"""
Gets information about a specified patch package.
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str patch_id: (required)
The `OCID`__ of the patch.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Patch`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbHomes/{dbHomeId}/patches/{patchId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_db_home_patch got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbHomeId": db_home_id,
"patchId": patch_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="Patch")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="Patch")
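Every getter in this client validates its path parameters the same way before building the request. The check can be sketched in isolation (a standalone stand-in for the `six`-based loop used above; `validate_path_params` is a hypothetical helper name, not part of the SDK):

```python
def validate_path_params(path_params):
    """Reject None, empty, or whitespace-only path parameters,
    mirroring the per-method validation loop in the client above."""
    for k, v in path_params.items():
        if v is None or (isinstance(v, str) and len(v.strip()) == 0):
            raise ValueError(
                'Parameter {} cannot be None, whitespace or empty string'.format(k))

# An OCID-shaped string passes; a whitespace-only value raises.
validate_path_params({"dbHomeId": "ocid1.dbhome.oc1..example"})
try:
    validate_path_params({"patchId": "   "})
except ValueError as e:
    print(e)  # Parameter patchId cannot be None, whitespace or empty string
```

This is why passing an empty string for `db_home_id` or `patch_id` fails client-side before any HTTP call is made.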
def get_db_home_patch_history_entry(self, db_home_id, patch_history_entry_id, **kwargs):
"""
Gets the patch history details for the specified patchHistoryEntryId.
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str patch_history_entry_id: (required)
The `OCID`__ of the patch history entry.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.PatchHistoryEntry`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbHomes/{dbHomeId}/patchHistoryEntries/{patchHistoryEntryId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_db_home_patch_history_entry got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbHomeId": db_home_id,
"patchHistoryEntryId": patch_history_entry_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="PatchHistoryEntry")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="PatchHistoryEntry")
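The keyword-argument screening at the top of each method follows a single pattern: anything outside `expected_kwargs` is rejected up front. A standalone sketch (`check_kwargs` is a hypothetical name; the real methods inline this logic):

```python
def check_kwargs(op_name, expected_kwargs, kwargs):
    """Raise if the caller passed any keyword argument the
    operation does not support, as each client method above does."""
    extra_kwargs = [k for k in kwargs if k not in expected_kwargs]
    if extra_kwargs:
        raise ValueError(
            "{} got unknown kwargs: {!r}".format(op_name, extra_kwargs))

# A supported kwarg is accepted; a misspelled one fails fast.
check_kwargs("get_db_node", ["retry_strategy"], {"retry_strategy": None})
```

Failing fast here means a typo such as `retry_stategy` surfaces as a `ValueError` rather than being silently ignored.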
def get_db_node(self, db_node_id, **kwargs):
"""
Gets information about the specified database node.
:param str db_node_id: (required)
The database node `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbNode`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbNodes/{dbNodeId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_db_node got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbNodeId": db_node_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="DbNode")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="DbNode")
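Each method also applies the same retry precedence: a truthy operation-level `retry_strategy` kwarg overrides the client-level default. That precedence can be exercised in isolation (`resolve_retry_strategy` is a hypothetical helper name extracted from the inline logic above):

```python
def resolve_retry_strategy(client_default, kwargs):
    """Operation-level retry_strategy, when truthy, overrides the
    client-level default -- the same precedence every method applies."""
    retry_strategy = client_default
    if kwargs.get('retry_strategy'):
        retry_strategy = kwargs.get('retry_strategy')
    return retry_strategy

class _ClientDefault: pass
class _PerCall: pass

default, per_call = _ClientDefault(), _PerCall()
assert resolve_retry_strategy(default, {}) is default
assert resolve_retry_strategy(default, {'retry_strategy': per_call}) is per_call
```

Note the truthiness test: passing `retry_strategy=None` explicitly still falls back to the client-level strategy, which is why `NoneRetryStrategy` (an object, not `None`) exists to disable retries.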
def get_db_system(self, db_system_id, **kwargs):
"""
Gets information about the specified DB system.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbSystem`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_db_system got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="DbSystem")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="DbSystem")
def get_db_system_patch(self, db_system_id, patch_id, **kwargs):
"""
Gets information about a specified patch package.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str patch_id: (required)
The `OCID`__ of the patch.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Patch`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}/patches/{patchId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_db_system_patch got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id,
"patchId": patch_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="Patch")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="Patch")
def get_db_system_patch_history_entry(self, db_system_id, patch_history_entry_id, **kwargs):
"""
Gets the patch history details for the specified patchHistoryEntryId.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str patch_history_entry_id: (required)
The `OCID`__ of the patch history entry.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.PatchHistoryEntry`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}/patchHistoryEntries/{patchHistoryEntryId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_db_system_patch_history_entry got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id,
"patchHistoryEntryId": patch_history_entry_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="PatchHistoryEntry")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="PatchHistoryEntry")
def get_exadata_infrastructure(self, exadata_infrastructure_id, **kwargs):
"""
Gets information about the specified Exadata infrastructure.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExadataInfrastructure`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ExadataInfrastructure")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ExadataInfrastructure")
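Methods that accept optional headers such as `opc_request_id` use a `missing` sentinel so that unset values are dropped from `header_params` rather than sent as empty headers. A self-contained sketch of that filtering (the `missing` object here stands in for the module-level sentinel the SDK imports; `build_headers` is a hypothetical name):

```python
missing = object()  # stand-in for the module-level sentinel used above

def build_headers(opc_request_id=missing):
    header_params = {
        "accept": "application/json",
        "content-type": "application/json",
        "opc-request-id": opc_request_id,
    }
    # Drop headers the caller never supplied, exactly as the client does.
    return {k: v for k, v in header_params.items()
            if v is not missing and v is not None}

assert "opc-request-id" not in build_headers()
assert build_headers(opc_request_id="req-123")["opc-request-id"] == "req-123"
```

Using a sentinel instead of `None` lets the client distinguish "not passed" from "passed as None"; the comprehension above filters out both.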
def get_exadata_infrastructure_ocpus(self, autonomous_exadata_infrastructure_id, **kwargs):
"""
Gets details of the available and consumed OCPUs for the specified Autonomous Exadata Infrastructure instance.
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.OCPUs`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousExadataInfrastructures/{autonomousExadataInfrastructureId}/ocpus"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_exadata_infrastructure_ocpus got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousExadataInfrastructureId": autonomous_exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="OCPUs")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="OCPUs")
def get_exadata_iorm_config(self, db_system_id, **kwargs):
"""
Gets the IORM configuration settings for the requested Exadata DB system.
Default IORM settings are pre-created on every Exadata DB system.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExadataIormConfig`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}/ExadataIormConfig"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_exadata_iorm_config got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ExadataIormConfig")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ExadataIormConfig")
def get_external_backup_job(self, backup_id, **kwargs):
"""
Gets information about the specified external backup job.
**Note:** This API is used by an Oracle Cloud Infrastructure Python script that is packaged with the Oracle Cloud Infrastructure CLI. Oracle recommends that you use the script instead of using the API directly. See `Migrating an On-Premises Database to Oracle Cloud Infrastructure by Creating a Backup in the Cloud`__ for more information.
__ https://docs.cloud.oracle.com/Content/Database/Tasks/mig-onprembackup.htm
:param str backup_id: (required)
The backup `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExternalBackupJob`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/externalBackupJobs/{backupId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_external_backup_job got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"backupId": backup_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ExternalBackupJob")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="ExternalBackupJob")
def get_maintenance_run(self, maintenance_run_id, **kwargs):
"""
Gets information about the specified Maintenance Run.
:param str maintenance_run_id: (required)
The Maintenance Run OCID.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.MaintenanceRun`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/maintenanceRuns/{maintenanceRunId}"
method = "GET"
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_maintenance_run got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"maintenanceRunId": maintenance_run_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="MaintenanceRun")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="MaintenanceRun")
def get_vm_cluster(self, vm_cluster_id, **kwargs):
"""
Gets information about the specified VM cluster.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmCluster`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/vmClusters/{vmClusterId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_vm_cluster got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"vmClusterId": vm_cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="VmCluster")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="VmCluster")
def get_vm_cluster_network(self, exadata_infrastructure_id, vm_cluster_network_id, **kwargs):
"""
Gets information about the specified VM cluster network.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str vm_cluster_network_id: (required)
The VM cluster network `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmClusterNetwork`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"get_vm_cluster_network got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id,
"vmClusterNetworkId": vm_cluster_network_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="VmClusterNetwork")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="VmClusterNetwork")
def launch_autonomous_exadata_infrastructure(self, launch_autonomous_exadata_infrastructure_details, **kwargs):
"""
Launches a new Autonomous Exadata Infrastructure in the specified compartment and availability domain.
:param LaunchAutonomousExadataInfrastructureDetails launch_autonomous_exadata_infrastructure_details: (required)
Request to launch an Autonomous Exadata Infrastructure.
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousExadataInfrastructure`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousExadataInfrastructures"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_retry_token"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"launch_autonomous_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-retry-token": kwargs.get("opc_retry_token", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
header_params=header_params,
body=launch_autonomous_exadata_infrastructure_details,
response_type="AutonomousExadataInfrastructure")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
header_params=header_params,
body=launch_autonomous_exadata_infrastructure_details,
response_type="AutonomousExadataInfrastructure")
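
    # The operation above builds its headers around a `missing` sentinel so that
    # an unset `opc_retry_token` is dropped from the request entirely. A minimal,
    # self-contained sketch of that filtering step (the `MISSING` name and
    # `build_headers` helper are illustrative stand-ins, not SDK internals):
    #
    # ```python
    # MISSING = object()  # illustrative stand-in for the SDK's `missing` sentinel
    #
    # def build_headers(opc_retry_token=MISSING):
    #     # Assemble headers with the sentinel as the default, then drop any
    #     # entry that is still unset (or None) before the request is sent.
    #     header_params = {
    #         "accept": "application/json",
    #         "content-type": "application/json",
    #         "opc-retry-token": opc_retry_token,
    #     }
    #     return {k: v for (k, v) in header_params.items()
    #             if v is not MISSING and v is not None}
    # ```
    #
    # With no token, the `opc-retry-token` header is simply omitted; with one,
    # it is sent verbatim so the service can deduplicate retried creations.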

    def launch_db_system(self, launch_db_system_details, **kwargs):
        """
        Creates a new DB system in the specified compartment and availability domain. The Oracle
        Database edition that you specify applies to all the databases on that DB system. The selected edition cannot be changed.

        An initial database is created on the DB system based on the request parameters you provide and some default
        options. For detailed information about default options, see the following:

        - `Bare metal and virtual machine DB system default options`__
        - `Exadata DB system default options`__

        __ https://docs.cloud.oracle.com/Content/Database/Tasks/creatingDBsystem.htm#DefaultOptionsfortheInitialDatabase
        __ https://docs.cloud.oracle.com/Content/Database/Tasks/exacreatingDBsystem.htm#DefaultOptionsfortheInitialDatabase

        :param LaunchDbSystemBase launch_db_system_details: (required)
            Request to launch a DB system.

        :param str opc_retry_token: (optional)
            A token that uniquely identifies a request so it can be retried in case of a timeout or
            server error without risk of executing that same action again. Retry tokens expire after 24
            hours, but can be invalidated before then due to conflicting operations (for example, if a resource
            has been deleted and purged from the system, then a retry of the original creation request
            may be rejected).

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbSystem`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbSystems"
        method = "POST"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_retry_token"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "launch_db_system got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-retry-token": kwargs.get("opc_retry_token", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            if not isinstance(retry_strategy, retry.NoneRetryStrategy):
                self.base_client.add_opc_retry_token_if_needed(header_params)
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=launch_db_system_details,
                response_type="DbSystem")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=launch_db_system_details,
                response_type="DbSystem")
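
    # Every operation in this client ends with the same dispatch: if a retry
    # strategy is in effect, the underlying `call_api` is wrapped by
    # `make_retrying_call`; otherwise it is invoked directly. A toy sketch of
    # that contract (this `SimpleRetryStrategy` is illustrative only and is not
    # the SDK's actual retry implementation):
    #
    # ```python
    # class SimpleRetryStrategy:
    #     """Retry a call up to `max_attempts` times on any exception."""
    #
    #     def __init__(self, max_attempts=3):
    #         self.max_attempts = max_attempts
    #
    #     def make_retrying_call(self, func, *args, **kwargs):
    #         for attempt in range(1, self.max_attempts + 1):
    #             try:
    #                 return func(*args, **kwargs)
    #             except Exception:
    #                 if attempt == self.max_attempts:
    #                     raise  # out of attempts: surface the last error
    #
    # attempts = []
    #
    # def flaky_call_api(**kwargs):
    #     # Stand-in for base_client.call_api: fails twice, then succeeds.
    #     attempts.append(kwargs)
    #     if len(attempts) < 3:
    #         raise RuntimeError("transient failure")
    #     return "DbSystem"
    #
    # result = SimpleRetryStrategy(max_attempts=3).make_retrying_call(
    #     flaky_call_api, resource_path="/dbSystems", method="POST")
    # ```
    #
    # The real strategies in :py:mod:`oci.retry` add backoff and error
    # classification, but the call shape is the same.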

    def list_autonomous_container_databases(self, compartment_id, **kwargs):
        """
        Gets a list of the Autonomous Container Databases in the specified compartment.

        :param str compartment_id: (required)
            The compartment `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str autonomous_exadata_infrastructure_id: (optional)
            The Autonomous Exadata Infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param int limit: (optional)
            The maximum number of items to return per page.

        :param str page: (optional)
            The pagination token to continue listing from.

        :param str sort_by: (optional)
            The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.

            **Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.

            Allowed values are: "TIMECREATED", "DISPLAYNAME"

        :param str sort_order: (optional)
            The sort order to use, either ascending (`ASC`) or descending (`DESC`).

            Allowed values are: "ASC", "DESC"

        :param str lifecycle_state: (optional)
            A filter to return only resources that match the given lifecycle state exactly.

            Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED", "BACKUP_IN_PROGRESS", "RESTORING", "RESTORE_FAILED", "RESTARTING", "MAINTENANCE_IN_PROGRESS"

        :param str availability_domain: (optional)
            A filter to return only resources that match the given availability domain exactly.

        :param str display_name: (optional)
            A filter to return only resources that match the entire display name given. The match is not case sensitive.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousContainerDatabaseSummary`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousContainerDatabases"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "autonomous_exadata_infrastructure_id",
            "limit",
            "page",
            "sort_by",
            "sort_order",
            "lifecycle_state",
            "availability_domain",
            "display_name"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "list_autonomous_container_databases got unknown kwargs: {!r}".format(extra_kwargs))

        if 'sort_by' in kwargs:
            sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
            if kwargs['sort_by'] not in sort_by_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
                )

        if 'sort_order' in kwargs:
            sort_order_allowed_values = ["ASC", "DESC"]
            if kwargs['sort_order'] not in sort_order_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
                )

        if 'lifecycle_state' in kwargs:
            lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED", "BACKUP_IN_PROGRESS", "RESTORING", "RESTORE_FAILED", "RESTARTING", "MAINTENANCE_IN_PROGRESS"]
            if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
                raise ValueError(
                    "Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
                )

        query_params = {
            "compartmentId": compartment_id,
            "autonomousExadataInfrastructureId": kwargs.get("autonomous_exadata_infrastructure_id", missing),
            "limit": kwargs.get("limit", missing),
            "page": kwargs.get("page", missing),
            "sortBy": kwargs.get("sort_by", missing),
            "sortOrder": kwargs.get("sort_order", missing),
            "lifecycleState": kwargs.get("lifecycle_state", missing),
            "availabilityDomain": kwargs.get("availability_domain", missing),
            "displayName": kwargs.get("display_name", missing)
        }
        query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousContainerDatabaseSummary]")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousContainerDatabaseSummary]")
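
    # The repeated "Allowed values are: ..." checks above all follow one shape:
    # if the kwarg was passed and its value is outside the documented set, raise
    # ValueError. A compact sketch of that pattern as a standalone helper (the
    # `validate_enum_kwarg` name is hypothetical, not part of the SDK):
    #
    # ```python
    # def validate_enum_kwarg(kwargs, name, allowed_values):
    #     # Raise if the caller passed `name` with a value outside the
    #     # documented allowed set; absent kwargs are left alone.
    #     if name in kwargs and kwargs[name] not in allowed_values:
    #         raise ValueError(
    #             "Invalid value for `{0}`, must be one of {1}".format(name, allowed_values))
    #
    # validate_enum_kwarg({"sort_order": "ASC"}, "sort_order", ["ASC", "DESC"])  # passes silently
    # ```
    #
    # Note the validation is purely client-side: it catches typos before a
    # request is ever sent, mirroring the server's own enum constraints.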

    def list_autonomous_data_warehouse_backups(self, **kwargs):
        """
        **Deprecated.** To get a list of Autonomous Data Warehouse backups, use the :func:`list_autonomous_database_backups` operation.

        :param str autonomous_data_warehouse_id: (optional)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str compartment_id: (optional)
            The compartment `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param int limit: (optional)
            The maximum number of items to return per page.

        :param str page: (optional)
            The pagination token to continue listing from.

        :param str sort_by: (optional)
            The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.

            **Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.

            Allowed values are: "TIMECREATED", "DISPLAYNAME"

        :param str sort_order: (optional)
            The sort order to use, either ascending (`ASC`) or descending (`DESC`).

            Allowed values are: "ASC", "DESC"

        :param str lifecycle_state: (optional)
            A filter to return only resources that match the given lifecycle state exactly.

            Allowed values are: "CREATING", "ACTIVE", "DELETING", "DELETED", "FAILED"

        :param str display_name: (optional)
            A filter to return only resources that match the entire display name given. The match is not case sensitive.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousDataWarehouseBackupSummary`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouseBackups"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "autonomous_data_warehouse_id",
            "compartment_id",
            "limit",
            "page",
            "sort_by",
            "sort_order",
            "lifecycle_state",
            "display_name"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "list_autonomous_data_warehouse_backups got unknown kwargs: {!r}".format(extra_kwargs))

        if 'sort_by' in kwargs:
            sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
            if kwargs['sort_by'] not in sort_by_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
                )

        if 'sort_order' in kwargs:
            sort_order_allowed_values = ["ASC", "DESC"]
            if kwargs['sort_order'] not in sort_order_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
                )

        if 'lifecycle_state' in kwargs:
            lifecycle_state_allowed_values = ["CREATING", "ACTIVE", "DELETING", "DELETED", "FAILED"]
            if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
                raise ValueError(
                    "Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
                )

        query_params = {
            "autonomousDataWarehouseId": kwargs.get("autonomous_data_warehouse_id", missing),
            "compartmentId": kwargs.get("compartment_id", missing),
            "limit": kwargs.get("limit", missing),
            "page": kwargs.get("page", missing),
            "sortBy": kwargs.get("sort_by", missing),
            "sortOrder": kwargs.get("sort_order", missing),
            "lifecycleState": kwargs.get("lifecycle_state", missing),
            "displayName": kwargs.get("display_name", missing)
        }
        query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDataWarehouseBackupSummary]")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDataWarehouseBackupSummary]")

    def list_autonomous_data_warehouses(self, compartment_id, **kwargs):
        """
        **Deprecated.** To get a list of Autonomous Data Warehouses, use the :func:`list_autonomous_databases` operation and specify `DW` as the workload type.

        :param str compartment_id: (required)
            The compartment `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param int limit: (optional)
            The maximum number of items to return per page.

        :param str page: (optional)
            The pagination token to continue listing from.

        :param str sort_by: (optional)
            The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.

            **Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.

            Allowed values are: "TIMECREATED", "DISPLAYNAME"

        :param str sort_order: (optional)
            The sort order to use, either ascending (`ASC`) or descending (`DESC`).

            Allowed values are: "ASC", "DESC"

        :param str lifecycle_state: (optional)
            A filter to return only resources that match the given lifecycle state exactly.

            Allowed values are: "PROVISIONING", "AVAILABLE", "STOPPING", "STOPPED", "STARTING", "TERMINATING", "TERMINATED", "UNAVAILABLE", "RESTORE_IN_PROGRESS", "BACKUP_IN_PROGRESS", "SCALE_IN_PROGRESS", "AVAILABLE_NEEDS_ATTENTION", "UPDATING"

        :param str display_name: (optional)
            A filter to return only resources that match the entire display name given. The match is not case sensitive.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousDataWarehouseSummary`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouses"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "limit",
            "page",
            "sort_by",
            "sort_order",
            "lifecycle_state",
            "display_name"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "list_autonomous_data_warehouses got unknown kwargs: {!r}".format(extra_kwargs))

        if 'sort_by' in kwargs:
            sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
            if kwargs['sort_by'] not in sort_by_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
                )

        if 'sort_order' in kwargs:
            sort_order_allowed_values = ["ASC", "DESC"]
            if kwargs['sort_order'] not in sort_order_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
                )

        if 'lifecycle_state' in kwargs:
            lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "STOPPING", "STOPPED", "STARTING", "TERMINATING", "TERMINATED", "UNAVAILABLE", "RESTORE_IN_PROGRESS", "BACKUP_IN_PROGRESS", "SCALE_IN_PROGRESS", "AVAILABLE_NEEDS_ATTENTION", "UPDATING"]
            if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
                raise ValueError(
                    "Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
                )

        query_params = {
            "compartmentId": compartment_id,
            "limit": kwargs.get("limit", missing),
            "page": kwargs.get("page", missing),
            "sortBy": kwargs.get("sort_by", missing),
            "sortOrder": kwargs.get("sort_order", missing),
            "lifecycleState": kwargs.get("lifecycle_state", missing),
            "displayName": kwargs.get("display_name", missing)
        }
        query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

        header_params = {
            "accept": "application/json",
            "content-type": "application/json"
        }

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDataWarehouseSummary]")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDataWarehouseSummary]")

    def list_autonomous_database_backups(self, **kwargs):
        """
        Gets a list of Autonomous Database backups based on either the `autonomousDatabaseId` or `compartmentId` specified as a query parameter.

        :param str autonomous_database_id: (optional)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str compartment_id: (optional)
            The compartment `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param int limit: (optional)
            The maximum number of items to return per page.

        :param str page: (optional)
            The pagination token to continue listing from.

        :param str sort_by: (optional)
            The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.

            **Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.

            Allowed values are: "TIMECREATED", "DISPLAYNAME"

        :param str sort_order: (optional)
            The sort order to use, either ascending (`ASC`) or descending (`DESC`).

            Allowed values are: "ASC", "DESC"

        :param str lifecycle_state: (optional)
            A filter to return only resources that match the given lifecycle state exactly.

            Allowed values are: "CREATING", "ACTIVE", "DELETING", "DELETED", "FAILED"

        :param str display_name: (optional)
            A filter to return only resources that match the entire display name given. The match is not case sensitive.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousDatabaseBackupSummary`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabaseBackups"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "autonomous_database_id",
            "compartment_id",
            "limit",
            "page",
            "sort_by",
            "sort_order",
            "lifecycle_state",
            "display_name",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "list_autonomous_database_backups got unknown kwargs: {!r}".format(extra_kwargs))

        if 'sort_by' in kwargs:
            sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
            if kwargs['sort_by'] not in sort_by_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
                )

        if 'sort_order' in kwargs:
            sort_order_allowed_values = ["ASC", "DESC"]
            if kwargs['sort_order'] not in sort_order_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
                )

        if 'lifecycle_state' in kwargs:
            lifecycle_state_allowed_values = ["CREATING", "ACTIVE", "DELETING", "DELETED", "FAILED"]
            if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
                raise ValueError(
                    "Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
                )

        query_params = {
            "autonomousDatabaseId": kwargs.get("autonomous_database_id", missing),
            "compartmentId": kwargs.get("compartment_id", missing),
            "limit": kwargs.get("limit", missing),
            "page": kwargs.get("page", missing),
            "sortBy": kwargs.get("sort_by", missing),
            "sortOrder": kwargs.get("sort_order", missing),
            "lifecycleState": kwargs.get("lifecycle_state", missing),
            "displayName": kwargs.get("display_name", missing)
        }
        query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDatabaseBackupSummary]")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDatabaseBackupSummary]")
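
    # Each list operation maps its snake_case Python kwargs onto the camelCase
    # query-parameter names used on the wire (autonomous_database_id becomes
    # autonomousDatabaseId, lifecycle_state becomes lifecycleState) and drops
    # anything the caller did not set. A hypothetical helper illustrating that
    # mapping (not part of the SDK; Python 3 syntax):
    #
    # ```python
    # def build_query_params(**kwargs):
    #     # Convert snake_case names to camelCase and filter unset (None) values.
    #     def to_camel(name):
    #         head, *rest = name.split("_")
    #         return head + "".join(part.capitalize() for part in rest)
    #     return {to_camel(k): v for k, v in kwargs.items() if v is not None}
    #
    # params = build_query_params(
    #     compartment_id="ocid1.compartment.oc1..example",  # hypothetical OCID
    #     lifecycle_state="ACTIVE",
    #     page=None)  # unset: will not appear in the request
    # ```
    #
    # The generated code spells the mapping out explicitly per operation, which
    # keeps each method self-describing at the cost of repetition.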

    def list_autonomous_databases(self, compartment_id, **kwargs):
        """
        Gets a list of Autonomous Databases.

        :param str compartment_id: (required)
            The compartment `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str autonomous_container_database_id: (optional)
            The Autonomous Container Database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param int limit: (optional)
            The maximum number of items to return per page.

        :param str page: (optional)
            The pagination token to continue listing from.

        :param str sort_by: (optional)
            The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.

            **Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.

            Allowed values are: "TIMECREATED", "DISPLAYNAME"

        :param str sort_order: (optional)
            The sort order to use, either ascending (`ASC`) or descending (`DESC`).

            Allowed values are: "ASC", "DESC"

        :param str lifecycle_state: (optional)
            A filter to return only resources that match the given lifecycle state exactly.

            Allowed values are: "PROVISIONING", "AVAILABLE", "STOPPING", "STOPPED", "STARTING", "TERMINATING", "TERMINATED", "UNAVAILABLE", "RESTORE_IN_PROGRESS", "RESTORE_FAILED", "BACKUP_IN_PROGRESS", "SCALE_IN_PROGRESS", "AVAILABLE_NEEDS_ATTENTION", "UPDATING", "MAINTENANCE_IN_PROGRESS", "RESTARTING", "UPGRADING"

        :param str db_workload: (optional)
            A filter to return only autonomous database resources that match the specified workload type.

            Allowed values are: "OLTP", "DW"

        :param str db_version: (optional)
            A filter to return only autonomous database resources that match the specified dbVersion.

        :param bool is_free_tier: (optional)
            Filter on the value of the resource's 'isFreeTier' property. A value of `true` returns only Always Free resources.
            A value of `false` excludes Always Free resources from the returned results. Omitting this parameter returns both Always Free and paid resources.

        :param str display_name: (optional)
            A filter to return only resources that match the entire display name given. The match is not case sensitive.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousDatabaseSummary`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "autonomous_container_database_id",
            "limit",
            "page",
            "sort_by",
            "sort_order",
            "lifecycle_state",
            "db_workload",
            "db_version",
            "is_free_tier",
            "display_name",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "list_autonomous_databases got unknown kwargs: {!r}".format(extra_kwargs))

        if 'sort_by' in kwargs:
            sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
            if kwargs['sort_by'] not in sort_by_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
                )

        if 'sort_order' in kwargs:
            sort_order_allowed_values = ["ASC", "DESC"]
            if kwargs['sort_order'] not in sort_order_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
                )

        if 'lifecycle_state' in kwargs:
            lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "STOPPING", "STOPPED", "STARTING", "TERMINATING", "TERMINATED", "UNAVAILABLE", "RESTORE_IN_PROGRESS", "RESTORE_FAILED", "BACKUP_IN_PROGRESS", "SCALE_IN_PROGRESS", "AVAILABLE_NEEDS_ATTENTION", "UPDATING", "MAINTENANCE_IN_PROGRESS", "RESTARTING", "UPGRADING"]
            if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
                raise ValueError(
                    "Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
                )

        if 'db_workload' in kwargs:
            db_workload_allowed_values = ["OLTP", "DW"]
            if kwargs['db_workload'] not in db_workload_allowed_values:
                raise ValueError(
                    "Invalid value for `db_workload`, must be one of {0}".format(db_workload_allowed_values)
                )

        query_params = {
            "compartmentId": compartment_id,
            "autonomousContainerDatabaseId": kwargs.get("autonomous_container_database_id", missing),
            "limit": kwargs.get("limit", missing),
            "page": kwargs.get("page", missing),
            "sortBy": kwargs.get("sort_by", missing),
            "sortOrder": kwargs.get("sort_order", missing),
            "lifecycleState": kwargs.get("lifecycle_state", missing),
            "dbWorkload": kwargs.get("db_workload", missing),
            "dbVersion": kwargs.get("db_version", missing),
            "isFreeTier": kwargs.get("is_free_tier", missing),
            "displayName": kwargs.get("display_name", missing)
        }
        query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDatabaseSummary]")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
                response_type="list[AutonomousDatabaseSummary]")
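
    # The `limit`/`page` parameters above implement token-based pagination: each
    # response carries a token for the next page, which the caller feeds back via
    # `page` until no token remains. A toy, offline sketch of that loop (the
    # `fake_list`/`PAGES` fixtures are illustrative, not SDK behavior):
    #
    # ```python
    # PAGES = {
    #     None: (["db1", "db2"], "p2"),  # first page and its next-page token
    #     "p2": (["db3"], None),         # last page: no further token
    # }
    #
    # def fake_list(page=None):
    #     """Stand-in for a list call: returns (items, next_page_token)."""
    #     return PAGES[page]
    #
    # def list_all(list_call):
    #     # Follow next-page tokens until the service reports no more pages.
    #     items, page = [], None
    #     while True:
    #         batch, page = list_call(page=page)
    #         items.extend(batch)
    #         if page is None:
    #             return items
    #
    # all_items = list_all(fake_list)
    # ```
    #
    # In practice the SDK's :py:mod:`oci.pagination` helpers (for example
    # `list_call_get_all_results`) perform this loop for you, reading the
    # `opc-next-page` response header.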

    def list_autonomous_db_preview_versions(self, compartment_id, **kwargs):
        """
        Gets a list of supported Autonomous Database versions. Note that preview version software is only available for
        databases with `shared Exadata infrastructure`__.

        __ https://docs.cloud.oracle.com/Content/Database/Concepts/adboverview.htm#AEI

        :param str compartment_id: (required)
            The compartment `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param int limit: (optional)
            The maximum number of items to return per page.

        :param str page: (optional)
            The pagination token to continue listing from.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param str sort_by: (optional)
            The field to sort by. You can provide one sort order (`sortOrder`). Default order for DBWORKLOAD is ascending.

            **Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.

            Allowed values are: "DBWORKLOAD"

        :param str sort_order: (optional)
            The sort order to use, either ascending (`ASC`) or descending (`DESC`).

            Allowed values are: "ASC", "DESC"

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousDbPreviewVersionSummary`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDbPreviewVersions"
        method = "GET"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "limit",
            "page",
            "opc_request_id",
            "sort_by",
            "sort_order"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "list_autonomous_db_preview_versions got unknown kwargs: {!r}".format(extra_kwargs))

        if 'sort_by' in kwargs:
            sort_by_allowed_values = ["DBWORKLOAD"]
            if kwargs['sort_by'] not in sort_by_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
                )

        if 'sort_order' in kwargs:
            sort_order_allowed_values = ["ASC", "DESC"]
            if kwargs['sort_order'] not in sort_order_allowed_values:
                raise ValueError(
                    "Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
                )

        query_params = {
            "compartmentId": compartment_id,
            "limit": kwargs.get("limit", missing),
            "page": kwargs.get("page", missing),
            "sortBy": kwargs.get("sort_by", missing),
            "sortOrder": kwargs.get("sort_order", missing)
        }
        query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                query_params=query_params,
                header_params=header_params,
response_type="list[AutonomousDbPreviewVersionSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousDbPreviewVersionSummary]")
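Every list operation above starts by rejecting unexpected keyword arguments, so a typo in an optional parameter fails fast with a `ValueError` instead of being silently dropped. A minimal, self-contained sketch of that pattern (the `validate_kwargs` helper and its arguments are illustrative, not part of the SDK):

```python
# Sketch of the kwargs-validation pattern used by these list operations:
# anything outside the expected set is rejected before any request is built.

def validate_kwargs(operation_name, expected_kwargs, kwargs):
    # Collect every key the caller passed that the operation does not accept.
    extra_kwargs = [key for key in kwargs if key not in expected_kwargs]
    if extra_kwargs:
        raise ValueError(
            "{} got unknown kwargs: {!r}".format(operation_name, extra_kwargs))

expected = ["retry_strategy", "limit", "page", "sort_by", "sort_order"]
validate_kwargs("list_autonomous_db_preview_versions", expected, {"limit": 10})
try:
    # "limitt" is a deliberate typo; it is reported rather than ignored.
    validate_kwargs("list_autonomous_db_preview_versions", expected, {"limitt": 10})
except ValueError as exc:
    print(exc)
```

Failing early here means the server never sees a malformed request built from a misspelled parameter.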
def list_autonomous_db_versions(self, compartment_id, **kwargs):
"""
Gets a list of supported Autonomous Database versions.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str db_workload: (optional)
A filter to return only autonomous database resources that match the specified workload type.
Allowed values are: "OLTP", "DW"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousDbVersionSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDbVersions"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"opc_request_id",
"db_workload",
"sort_order"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_autonomous_db_versions got unknown kwargs: {!r}".format(extra_kwargs))
if 'db_workload' in kwargs:
db_workload_allowed_values = ["OLTP", "DW"]
if kwargs['db_workload'] not in db_workload_allowed_values:
raise ValueError(
"Invalid value for `db_workload`, must be one of {0}".format(db_workload_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"dbWorkload": kwargs.get("db_workload", missing),
"sortOrder": kwargs.get("sort_order", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousDbVersionSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousDbVersionSummary]")
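Optional query parameters in these methods default to a unique `missing` sentinel (distinct from `None`), and both sentinel and `None` values are filtered out before the request is sent, so only parameters the caller actually supplied reach the wire. A hedged stand-alone sketch, where `missing = object()` stands in for the SDK's sentinel object:

```python
# Sketch of the sentinel-filtering step: unset optional parameters default to
# a unique `missing` object, and both `missing` and explicit None are dropped.

missing = object()  # stand-in for the SDK's Sentinel; any unique object works

kwargs = {"limit": 25, "page": None}  # caller set limit, passed page=None
query_params = {
    "compartmentId": "ocid1.compartment.oc1..example",  # hypothetical OCID
    "limit": kwargs.get("limit", missing),
    "page": kwargs.get("page", missing),
    "sortOrder": kwargs.get("sort_order", missing),  # never supplied
}
# Identity checks (`is not`) keep falsy-but-real values like 0 or "".
query_params = {k: v for (k, v) in query_params.items()
                if v is not missing and v is not None}
print(sorted(query_params))  # only compartmentId and limit survive
```

Using an identity-checked sentinel rather than `None` lets the filter distinguish "not provided" from "provided as None" while still dropping both.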
def list_autonomous_exadata_infrastructure_shapes(self, availability_domain, compartment_id, **kwargs):
"""
Gets a list of the shapes that can be used to launch a new Autonomous Exadata Infrastructure DB system. The shape determines the resources (CPU cores, memory, and storage) allocated to the DB system.
:param str availability_domain: (required)
The name of the Availability Domain.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousExadataInfrastructureShapeSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousExadataInfrastructureShapes"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_autonomous_exadata_infrastructure_shapes got unknown kwargs: {!r}".format(extra_kwargs))
query_params = {
"availabilityDomain": availability_domain,
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousExadataInfrastructureShapeSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousExadataInfrastructureShapeSummary]")
def list_autonomous_exadata_infrastructures(self, compartment_id, **kwargs):
"""
Gets a list of the Autonomous Exadata Infrastructures in the specified compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
**Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.
Allowed values are: "TIMECREATED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED", "MAINTENANCE_IN_PROGRESS"
:param str availability_domain: (optional)
A filter to return only resources that match the given availability domain exactly.
:param str display_name: (optional)
A filter to return only resources that match the entire display name given. The match is not case sensitive.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.AutonomousExadataInfrastructureSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousExadataInfrastructures"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state",
"availability_domain",
"display_name"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_autonomous_exadata_infrastructures got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED", "MAINTENANCE_IN_PROGRESS"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"availabilityDomain": kwargs.get("availability_domain", missing),
"displayName": kwargs.get("display_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousExadataInfrastructureSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[AutonomousExadataInfrastructureSummary]")
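Each operation resolves its retry behavior the same way: a per-call `retry_strategy` kwarg overrides the client-level default, and if a strategy is present the API call is routed through its `make_retrying_call`. The `ToyRetryStrategy` below only illustrates that interface; it is not the SDK's implementation (see `oci.retry` for the real strategies):

```python
# Sketch of the retry-selection logic: wrap the call function and retry on
# transient errors up to a fixed number of attempts.

class ToyRetryStrategy:
    def __init__(self, max_attempts):
        self.max_attempts = max_attempts

    def make_retrying_call(self, func, **kwargs):
        for attempt in range(1, self.max_attempts + 1):
            try:
                return func(**kwargs)
            except IOError:
                # Re-raise once the attempt budget is exhausted.
                if attempt == self.max_attempts:
                    raise

calls = []

def flaky_call_api(**kwargs):
    # Simulated base_client.call_api that fails once, then succeeds.
    calls.append(kwargs)
    if len(calls) < 2:
        raise IOError("transient failure")
    return "ok"

retry_strategy = ToyRetryStrategy(max_attempts=3)
result = retry_strategy.make_retrying_call(flaky_call_api, resource_path="/backups")
print(result, len(calls))  # succeeds on the second attempt
```

When no strategy is configured at either level, the methods fall through to a single direct `call_api` invocation, which is why both branches pass identical arguments.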
def list_backup_destination(self, compartment_id, **kwargs):
"""
Gets a list of backup destinations in the specified compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str type: (optional)
A filter to return only resources that match the given backup destination type.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.BackupDestinationSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/backupDestinations"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"opc_request_id",
"type"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_backup_destination got unknown kwargs: {!r}".format(extra_kwargs))
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"type": kwargs.get("type", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[BackupDestinationSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[BackupDestinationSummary]")
def list_backups(self, **kwargs):
"""
Gets a list of backups based on the databaseId or compartmentId specified. One of these two query parameters must be provided.
:param str database_id: (optional)
The `OCID`__ of the database.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str compartment_id: (optional)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.BackupSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/backups"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"database_id",
"compartment_id",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_backups got unknown kwargs: {!r}".format(extra_kwargs))
query_params = {
"databaseId": kwargs.get("database_id", missing),
"compartmentId": kwargs.get("compartment_id", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[BackupSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[BackupSummary]")
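The `limit`/`page` parameters on these operations implement token-based pagination: each response carries one page of results plus an opaque token for the next page, and callers loop until no token remains. A self-contained sketch with a fake paged backend (`fake_list_backups` and its canned pages are invented for illustration; real code would read the response's `next_page` and `data` attributes, or use the `oci.pagination` helpers):

```python
# Sketch of driving the `page` token by hand against a simulated service.

# Canned pages: request token -> (items on that page, token for next page).
PAGES = {None: (["b1", "b2"], "tok1"), "tok1": (["b3"], None)}

def fake_list_backups(page=None):
    data, next_page = PAGES[page]
    return data, next_page

all_backups = []
page = None
while True:
    data, page = fake_list_backups(page=page)
    all_backups.extend(data)
    if page is None:  # no further pages advertised by the service
        break
print(all_backups)  # all three items, collected across two pages
```

Treat the token as opaque: it is only valid when passed back unchanged to the same list operation.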
def list_console_connections(self, db_node_id, **kwargs):
"""
Lists the console connections for the specified database node.
:param str db_node_id: (required)
The database node `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.ConsoleConnectionSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbNodes/{dbNodeId}/consoleConnections"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = ["retry_strategy"]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_console_connections got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbNodeId": db_node_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="list[ConsoleConnectionSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="list[ConsoleConnectionSummary]")
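Operations with templated resource paths (here `/dbNodes/{dbNodeId}/consoleConnections`) guard every path parameter before substitution: a `None`, empty, or whitespace-only value would otherwise produce a malformed URL that targets the wrong resource. A minimal sketch of that guard (the helper name is illustrative):

```python
# Sketch of the path-parameter guard: values substituted into the resource
# path must be non-None, non-empty, non-whitespace strings.

def check_path_params(path_params):
    for (k, v) in path_params.items():
        if v is None or (isinstance(v, str) and len(v.strip()) == 0):
            raise ValueError(
                'Parameter {} cannot be None, whitespace or empty string'.format(k))

check_path_params({"dbNodeId": "ocid1.dbnode.oc1..example"})  # hypothetical OCID; passes
try:
    check_path_params({"dbNodeId": "   "})  # whitespace-only is rejected
except ValueError as exc:
    print(exc)
```

Query parameters can simply be dropped when absent, but path parameters are structural, which is why they get this stricter check.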
def list_data_guard_associations(self, database_id, **kwargs):
"""
Lists all Data Guard associations for the specified database.
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DataGuardAssociationSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/databases/{databaseId}/dataGuardAssociations"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_data_guard_associations got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"databaseId": database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[DataGuardAssociationSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[DataGuardAssociationSummary]")
def list_databases(self, compartment_id, **kwargs):
"""
Gets a list of the databases in the specified compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str db_home_id: (optional)
A Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str system_id: (optional)
The `OCID`__ of the Exadata DB system that you want to filter the database results by. Applies only to Exadata DB systems.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DBNAME is ascending. The DBNAME sort order is case sensitive.
Allowed values are: "DBNAME", "TIMECREATED"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "BACKUP_IN_PROGRESS", "TERMINATING", "TERMINATED", "RESTORE_FAILED", "FAILED"
:param str db_name: (optional)
A filter to return only resources that match the entire database name given. The match is not case sensitive.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DatabaseSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/databases"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"db_home_id",
"system_id",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state",
"db_name"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_databases got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["DBNAME", "TIMECREATED"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "BACKUP_IN_PROGRESS", "TERMINATING", "TERMINATED", "RESTORE_FAILED", "FAILED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"dbHomeId": kwargs.get("db_home_id", missing),
"systemId": kwargs.get("system_id", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"dbName": kwargs.get("db_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DatabaseSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DatabaseSummary]")
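Enum-like filters such as `sort_by`, `sort_order`, `db_workload`, and `lifecycle_state` are all validated the same way: the value must match one of the documented strings exactly (the comparison is case-sensitive), otherwise a `ValueError` names the legal values. A hedged sketch of that check (the `check_enum` helper is invented; the SDK inlines this logic per parameter):

```python
# Sketch of the allowed-values check applied to enum-like filter parameters.

def check_enum(name, value, allowed_values):
    if value not in allowed_values:
        raise ValueError(
            "Invalid value for `{}`, must be one of {}".format(name, allowed_values))

check_enum("sort_order", "ASC", ["ASC", "DESC"])  # exact match passes
try:
    check_enum("sort_order", "asc", ["ASC", "DESC"])  # wrong case is rejected
except ValueError as exc:
    print(exc)
```

Validating client-side keeps the error local and descriptive instead of surfacing as a generic 400 response from the service.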
def list_db_home_patch_history_entries(self, db_home_id, **kwargs):
"""
Gets the history of patch actions performed on the specified Database Home.
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.PatchHistoryEntrySummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbHomes/{dbHomeId}/patchHistoryEntries"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_home_patch_history_entries got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbHomeId": db_home_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchHistoryEntrySummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchHistoryEntrySummary]")
def list_db_home_patches(self, db_home_id, **kwargs):
"""
Lists patches applicable to the requested Database Home.
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.PatchSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbHomes/{dbHomeId}/patches"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_home_patches got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbHomeId": db_home_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchSummary]")
def list_db_homes(self, compartment_id, **kwargs):
"""
Gets a list of Database Homes in the specified DB system and compartment. A Database Home is a directory where Oracle Database software is installed.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str db_system_id: (optional)
The DB system `OCID`__. If provided, filters the results to the set of Database Homes in the specified DB system.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str vm_cluster_id: (optional)
The `OCID`__ of the VM cluster.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str backup_id: (optional)
The `OCID`__ of the backup. Specify a backupId to list only the DB systems or DB homes that support creating a database using this backup in this compartment.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
Allowed values are: "TIMECREATED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED"
:param str display_name: (optional)
A filter to return only resources that match the entire display name given. The match is not case sensitive.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DbHomeSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbHomes"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"db_system_id",
"vm_cluster_id",
"backup_id",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state",
"display_name"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_homes got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"dbSystemId": kwargs.get("db_system_id", missing),
"vmClusterId": kwargs.get("vm_cluster_id", missing),
"backupId": kwargs.get("backup_id", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"displayName": kwargs.get("display_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbHomeSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbHomeSummary]")
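Enum-style parameters such as `sort_by`, `sort_order`, and `lifecycle_state` are validated against a fixed allow-list before the request is sent. The same check in isolation (the helper name is illustrative):

```python
def _validate_enum(name, value, allowed_values):
    """Mirror the allowed-values checks applied to sort_by/sort_order/lifecycle_state."""
    if value not in allowed_values:
        raise ValueError(
            "Invalid value for `{}`, must be one of {}".format(name, allowed_values))

_validate_enum("sort_order", "ASC", ["ASC", "DESC"])  # passes silently

# A near-miss like "ascending" is caught client-side, before any HTTP call.
try:
    _validate_enum("sort_order", "ascending", ["ASC", "DESC"])
    enum_error = None
except ValueError as exc:
    enum_error = str(exc)
```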
def list_db_nodes(self, compartment_id, **kwargs):
"""
Gets a list of database nodes in the specified DB system and compartment. A database node is a server running database software.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str db_system_id: (optional)
The DB system `OCID`__. If provided, filters the results to the set of database nodes in the specified DB system.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str vm_cluster_id: (optional)
The `OCID`__ of the VM cluster.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
Sort by TIMECREATED. Default order for TIMECREATED is descending.
Allowed values are: "TIMECREATED"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "STOPPING", "STOPPED", "STARTING", "TERMINATING", "TERMINATED", "FAILED"
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DbNodeSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbNodes"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"db_system_id",
"vm_cluster_id",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_nodes got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "STOPPING", "STOPPED", "STARTING", "TERMINATING", "TERMINATED", "FAILED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"dbSystemId": kwargs.get("db_system_id", missing),
"vmClusterId": kwargs.get("vm_cluster_id", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbNodeSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbNodeSummary]")
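The `missing` sentinel used throughout these methods distinguishes "the caller never passed this kwarg" from an explicit `None`, so only supplied parameters ever reach the query string. A sketch with a local stand-in sentinel (the SDK's real sentinel lives in `oci.util`):

```python
class _Missing(object):
    """Stand-in for the SDK's `missing` sentinel."""
    def __repr__(self):
        return "<missing>"

missing = _Missing()

kwargs = {"limit": 25}  # caller supplied only `limit`
query_params = {
    "limit": kwargs.get("limit", missing),
    "page": kwargs.get("page", missing),
}
# Drop anything the caller never supplied (or set to None), so it
# never appears in the outgoing request's query string.
query_params = {k: v for (k, v) in query_params.items()
                if v is not missing and v is not None}
```

After filtering, `query_params` contains only `limit`; `page` is absent rather than serialized as an empty value.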
def list_db_system_patch_history_entries(self, db_system_id, **kwargs):
"""
Gets the history of the patch actions performed on the specified DB system.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.PatchHistoryEntrySummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}/patchHistoryEntries"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_system_patch_history_entries got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchHistoryEntrySummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchHistoryEntrySummary]")
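Each operation resolves its retry behavior with the same precedence: a per-call `retry_strategy` kwarg overrides the client-level default, and if neither is set the call is made directly. A sketch of that resolution with stub strategy objects (class names here are invented for illustration):

```python
class _NoRetry(object):
    name = "none"

class _ExponentialBackoff(object):
    name = "exponential"

def _effective_retry_strategy(client_level, kwargs):
    """Per-call retry_strategy overrides the client-level default."""
    retry_strategy = client_level
    if kwargs.get("retry_strategy"):
        retry_strategy = kwargs.get("retry_strategy")
    return retry_strategy

client_default = _ExponentialBackoff()
per_call = _NoRetry()

chosen_default = _effective_retry_strategy(client_default, {})
chosen_override = _effective_retry_strategy(
    client_default, {"retry_strategy": per_call})
```

This is why passing an instance of `oci.retry.NoneRetryStrategy` on a single call disables retries for that call only.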
def list_db_system_patches(self, db_system_id, **kwargs):
"""
Lists the patches applicable to the requested DB system.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.PatchSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}/patches"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_system_patches got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
query_params = {
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[PatchSummary]")
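Path parameters are checked before URL templating so that a blank `db_system_id` fails with a clear error instead of producing a malformed path like `/dbSystems//patches`. The same validation in isolation (the helper name and the OCID value are placeholders):

```python
def _validate_path_params(path_params):
    """Reject None/blank path segments before they reach URL templating."""
    for k, v in path_params.items():
        if v is None or (isinstance(v, str) and len(v.strip()) == 0):
            raise ValueError(
                "Parameter {} cannot be None, whitespace or empty string".format(k))

_validate_path_params({"dbSystemId": "ocid1.dbsystem.oc1..example"})  # ok

# Whitespace-only values are rejected just like None.
try:
    _validate_path_params({"dbSystemId": "   "})
    path_error = None
except ValueError as exc:
    path_error = str(exc)
```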
def list_db_system_shapes(self, compartment_id, **kwargs):
"""
Gets a list of the shapes that can be used to launch a new DB system. The shape determines the resources allocated to the DB system: CPU cores and memory for VM shapes; CPU cores, memory, and storage for non-VM (bare metal) shapes.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str availability_domain: (optional)
The name of the availability domain.
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DbSystemShapeSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystemShapes"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"availability_domain",
"limit",
"page"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_system_shapes got unknown kwargs: {!r}".format(extra_kwargs))
query_params = {
"availabilityDomain": kwargs.get("availability_domain", missing),
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbSystemShapeSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbSystemShapeSummary]")
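These list operations paginate: each `Response` carries the data for one page plus a token for the next, which the caller feeds back through the `page` kwarg. A hedged sketch of that loop against a stub list operation (the stub, its shape names, and the token scheme are invented; the real SDK exposes the token via the `opc-next-page` response header, surfaced as `response.next_page`):

```python
class _StubResponse(object):
    """Minimal stand-in for oci.response.Response (data + next-page token)."""
    def __init__(self, data, next_page):
        self.data = data
        self.next_page = next_page
        self.has_next_page = next_page is not None

def _stub_list_shapes(limit, page=None):
    """Pretend server: five shapes total, served `limit` at a time."""
    shapes = ["VM.Standard2.1", "VM.Standard2.2", "VM.Standard2.4",
              "BM.DenseIO2.52", "Exadata.Quarter2.92"]
    start = int(page) if page else 0
    chunk = shapes[start:start + limit]
    next_page = str(start + limit) if start + limit < len(shapes) else None
    return _StubResponse(chunk, next_page)

# Drive the `page` token until the server reports no further pages.
all_shapes, page = [], None
while True:
    response = _stub_list_shapes(limit=2, page=page)
    all_shapes.extend(response.data)
    if not response.has_next_page:
        break
    page = response.next_page
```

In practice the convenience helpers in `oci.pagination` wrap this loop for you.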
def list_db_systems(self, compartment_id, **kwargs):
"""
Gets a list of the DB systems in the specified compartment. You can specify a backupId to list only the DB systems that support creating a database using this backup in this compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str backup_id: (optional)
The `OCID`__ of the backup. Specify a backupId to list only the DB systems or DB homes that support creating a database using this backup in this compartment.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
**Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.
Allowed values are: "TIMECREATED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED"
:param str availability_domain: (optional)
A filter to return only resources that match the given availability domain exactly.
:param str display_name: (optional)
A filter to return only resources that match the entire display name given. The match is not case sensitive.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DbSystemSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"backup_id",
"sort_by",
"sort_order",
"lifecycle_state",
"availability_domain",
"display_name"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_systems got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"backupId": kwargs.get("backup_id", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"availabilityDomain": kwargs.get("availability_domain", missing),
"displayName": kwargs.get("display_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbSystemSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbSystemSummary]")
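After validation, each method maps its snake_case kwargs onto the camelCase query-parameter names the REST API expects (`lifecycle_state` becomes `lifecycleState`, and so on). A small sketch of that mapping step (the helper, the mapping table, and the OCID value are illustrative):

```python
def _to_query_params(compartment_id, kwargs, mapping):
    """Map supplied snake_case kwargs to camelCase wire names; skip the rest."""
    missing = object()  # local sentinel: "caller never passed this kwarg"
    params = {"compartmentId": compartment_id}
    for py_name, wire_name in mapping.items():
        value = kwargs.get(py_name, missing)
        if value is not missing and value is not None:
            params[wire_name] = value
    return params

mapping = {"lifecycle_state": "lifecycleState",
           "display_name": "displayName",
           "sort_by": "sortBy"}
params = _to_query_params(
    "ocid1.compartment.oc1..example",
    {"lifecycle_state": "AVAILABLE", "sort_by": "TIMECREATED"},
    mapping)
```

Only the supplied filters appear in the result; `displayName` is omitted entirely rather than sent empty.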
def list_db_versions(self, compartment_id, **kwargs):
"""
Gets a list of supported Oracle Database versions.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str db_system_shape: (optional)
If provided, filters the results to the set of database versions which are supported for the given shape.
:param str db_system_id: (optional)
The DB system `OCID`__. If provided, filters the results to the set of database versions which are supported for the DB system.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str storage_management: (optional)
The DB system storage management option. Used to list database versions available for that storage manager. Valid values are:
* ASM - Automatic storage management
* LVM - Logical volume management
Allowed values are: "ASM", "LVM"
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.DbVersionSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbVersions"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"db_system_shape",
"db_system_id",
"storage_management"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_db_versions got unknown kwargs: {!r}".format(extra_kwargs))
if 'storage_management' in kwargs:
storage_management_allowed_values = ["ASM", "LVM"]
if kwargs['storage_management'] not in storage_management_allowed_values:
raise ValueError(
"Invalid value for `storage_management`, must be one of {0}".format(storage_management_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"dbSystemShape": kwargs.get("db_system_shape", missing),
"dbSystemId": kwargs.get("db_system_id", missing),
"storageManagement": kwargs.get("storage_management", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbVersionSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[DbVersionSummary]")
def list_exadata_infrastructures(self, compartment_id, **kwargs):
"""
Gets a list of the Exadata infrastructure resources in the specified compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
Allowed values are: "TIMECREATED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "CREATING", "REQUIRES_ACTIVATION", "ACTIVATING", "ACTIVE", "ACTIVATION_FAILED", "FAILED", "UPDATING", "DELETING", "DELETED", "OFFLINE"
:param str display_name: (optional)
A filter to return only resources that match the entire display name given. The match is not case sensitive.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.ExadataInfrastructureSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"opc_request_id",
"sort_by",
"sort_order",
"lifecycle_state",
"display_name"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_exadata_infrastructures got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["CREATING", "REQUIRES_ACTIVATION", "ACTIVATING", "ACTIVE", "ACTIVATION_FAILED", "FAILED", "UPDATING", "DELETING", "DELETED", "OFFLINE"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"displayName": kwargs.get("display_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[ExadataInfrastructureSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[ExadataInfrastructureSummary]")
def list_gi_versions(self, compartment_id, **kwargs):
"""
Gets a list of supported GI versions for a VM cluster.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str shape: (optional)
If provided, filters the results for the given shape.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.GiVersionSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/giVersions"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"sort_order",
"shape"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_gi_versions got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortOrder": kwargs.get("sort_order", missing),
"shape": kwargs.get("shape", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[GiVersionSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[GiVersionSummary]")
def list_maintenance_runs(self, compartment_id, **kwargs):
"""
Gets a list of the Maintenance Runs in the specified compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str target_resource_id: (optional)
The target resource ID.
:param str target_resource_type: (optional)
The type of the target resource.
Allowed values are: "AUTONOMOUS_EXADATA_INFRASTRUCTURE", "AUTONOMOUS_CONTAINER_DATABASE", "EXADATA_DB_SYSTEM"
:param str maintenance_type: (optional)
The maintenance type.
Allowed values are: "PLANNED", "UNPLANNED"
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIME_SCHEDULED and TIME_ENDED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
**Note:** If you do not include the availability domain filter, the resources are grouped by availability domain, then sorted.
Allowed values are: "TIME_SCHEDULED", "TIME_ENDED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "SCHEDULED", "IN_PROGRESS", "SUCCEEDED", "SKIPPED", "FAILED", "UPDATING", "DELETING", "DELETED"
:param str availability_domain: (optional)
A filter to return only resources that match the given availability domain exactly.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.MaintenanceRunSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/maintenanceRuns"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"target_resource_id",
"target_resource_type",
"maintenance_type",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state",
"availability_domain"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_maintenance_runs got unknown kwargs: {!r}".format(extra_kwargs))
if 'target_resource_type' in kwargs:
target_resource_type_allowed_values = ["AUTONOMOUS_EXADATA_INFRASTRUCTURE", "AUTONOMOUS_CONTAINER_DATABASE", "EXADATA_DB_SYSTEM"]
if kwargs['target_resource_type'] not in target_resource_type_allowed_values:
raise ValueError(
"Invalid value for `target_resource_type`, must be one of {0}".format(target_resource_type_allowed_values)
)
if 'maintenance_type' in kwargs:
maintenance_type_allowed_values = ["PLANNED", "UNPLANNED"]
if kwargs['maintenance_type'] not in maintenance_type_allowed_values:
raise ValueError(
"Invalid value for `maintenance_type`, must be one of {0}".format(maintenance_type_allowed_values)
)
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIME_SCHEDULED", "TIME_ENDED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["SCHEDULED", "IN_PROGRESS", "SUCCEEDED", "SKIPPED", "FAILED", "UPDATING", "DELETING", "DELETED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"targetResourceId": kwargs.get("target_resource_id", missing),
"targetResourceType": kwargs.get("target_resource_type", missing),
"maintenanceType": kwargs.get("maintenance_type", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"availabilityDomain": kwargs.get("availability_domain", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json"
}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[MaintenanceRunSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[MaintenanceRunSummary]")
def list_vm_cluster_networks(self, exadata_infrastructure_id, compartment_id, **kwargs):
"""
Gets a list of the VM cluster networks in the specified compartment.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
Allowed values are: "TIMECREATED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "CREATING", "REQUIRES_VALIDATION", "VALIDATING", "VALIDATED", "VALIDATION_FAILED", "UPDATING", "ALLOCATED", "TERMINATING", "TERMINATED", "FAILED"
:param str display_name: (optional)
A filter to return only resources that match the entire display name given. The match is not case sensitive.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.VmClusterNetworkSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state",
"display_name",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_vm_cluster_networks got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["CREATING", "REQUIRES_VALIDATION", "VALIDATING", "VALIDATED", "VALIDATION_FAILED", "UPDATING", "ALLOCATED", "TERMINATING", "TERMINATED", "FAILED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"displayName": kwargs.get("display_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[VmClusterNetworkSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
query_params=query_params,
header_params=header_params,
response_type="list[VmClusterNetworkSummary]")
def list_vm_clusters(self, compartment_id, **kwargs):
"""
Gets a list of the VM clusters in the specified compartment.
:param str compartment_id: (required)
The compartment `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str exadata_infrastructure_id: (optional)
If provided, filters the results for the given Exadata infrastructure.
:param int limit: (optional)
The maximum number of items to return per page.
:param str page: (optional)
The pagination token to continue listing from.
:param str sort_by: (optional)
The field to sort by. You can provide one sort order (`sortOrder`). Default order for TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME sort order is case sensitive.
Allowed values are: "TIMECREATED", "DISPLAYNAME"
:param str sort_order: (optional)
The sort order to use, either ascending (`ASC`) or descending (`DESC`).
Allowed values are: "ASC", "DESC"
:param str lifecycle_state: (optional)
A filter to return only resources that match the given lifecycle state exactly.
Allowed values are: "PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED"
:param str display_name: (optional)
A filter to return only resources that match the entire display name given. The match is not case sensitive.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.database.models.VmClusterSummary`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/vmClusters"
method = "GET"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"exadata_infrastructure_id",
"limit",
"page",
"sort_by",
"sort_order",
"lifecycle_state",
"display_name",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"list_vm_clusters got unknown kwargs: {!r}".format(extra_kwargs))
if 'sort_by' in kwargs:
sort_by_allowed_values = ["TIMECREATED", "DISPLAYNAME"]
if kwargs['sort_by'] not in sort_by_allowed_values:
raise ValueError(
"Invalid value for `sort_by`, must be one of {0}".format(sort_by_allowed_values)
)
if 'sort_order' in kwargs:
sort_order_allowed_values = ["ASC", "DESC"]
if kwargs['sort_order'] not in sort_order_allowed_values:
raise ValueError(
"Invalid value for `sort_order`, must be one of {0}".format(sort_order_allowed_values)
)
if 'lifecycle_state' in kwargs:
lifecycle_state_allowed_values = ["PROVISIONING", "AVAILABLE", "UPDATING", "TERMINATING", "TERMINATED", "FAILED"]
if kwargs['lifecycle_state'] not in lifecycle_state_allowed_values:
raise ValueError(
"Invalid value for `lifecycle_state`, must be one of {0}".format(lifecycle_state_allowed_values)
)
query_params = {
"compartmentId": compartment_id,
"exadataInfrastructureId": kwargs.get("exadata_infrastructure_id", missing),
"limit": kwargs.get("limit", missing),
"page": kwargs.get("page", missing),
"sortBy": kwargs.get("sort_by", missing),
"sortOrder": kwargs.get("sort_order", missing),
"lifecycleState": kwargs.get("lifecycle_state", missing),
"displayName": kwargs.get("display_name", missing)
}
query_params = {k: v for (k, v) in six.iteritems(query_params) if v is not missing and v is not None}
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[VmClusterSummary]")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
query_params=query_params,
header_params=header_params,
response_type="list[VmClusterSummary]")
def register_autonomous_database_data_safe(self, autonomous_database_id, **kwargs):
"""
Asynchronously registers this Autonomous Database with Data Safe.
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/registerDataSafe"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"register_autonomous_database_data_safe got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
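The retry-strategy selection repeated in every method gives the per-call kwarg precedence over the client-level default. The same logic in isolation; `ClientStub` is a hypothetical container used only for this sketch:

```python
# Per-call retry_strategy overrides the client-level strategy.
class ClientStub:
    def __init__(self, retry_strategy=None):
        self.retry_strategy = retry_strategy

def effective_retry_strategy(client, **kwargs):
    retry_strategy = client.retry_strategy
    if kwargs.get('retry_strategy'):
        retry_strategy = kwargs.get('retry_strategy')
    return retry_strategy
```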
def reinstate_data_guard_association(self, database_id, data_guard_association_id, reinstate_data_guard_association_details, **kwargs):
"""
Reinstates the database identified by the `databaseId` parameter into the standby role in a Data Guard association.
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param ReinstateDataGuardAssociationDetails reinstate_data_guard_association_details: (required)
A request to reinstate a database in a standby role.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DataGuardAssociation`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}/actions/reinstate"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"reinstate_data_guard_association got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"databaseId": database_id,
"dataGuardAssociationId": data_guard_association_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=reinstate_data_guard_association_details,
response_type="DataGuardAssociation")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=reinstate_data_guard_association_details,
response_type="DataGuardAssociation")
def restart_autonomous_container_database(self, autonomous_container_database_id, **kwargs):
"""
Performs a rolling restart of the specified Autonomous Container Database.
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousContainerDatabase`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousContainerDatabases/{autonomousContainerDatabaseId}/actions/restart"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"restart_autonomous_container_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousContainerDatabaseId": autonomous_container_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousContainerDatabase")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousContainerDatabase")
def restart_autonomous_database(self, autonomous_database_id, **kwargs):
"""
Restarts the specified Autonomous Database. Restarting is supported only for databases that use dedicated Exadata infrastructure.
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/restart"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"restart_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDatabase")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDatabase")
def restore_autonomous_data_warehouse(self, autonomous_data_warehouse_id, restore_autonomous_data_warehouse_details, **kwargs):
"""
**Deprecated.** To restore an Autonomous Data Warehouse, use the :func:`restore_autonomous_database` operation.
:param str autonomous_data_warehouse_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param RestoreAutonomousDataWarehouseDetails restore_autonomous_data_warehouse_details: (required)
Request to perform an Autonomous Data Warehouse restore.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouse`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}/actions/restore"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"restore_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDataWarehouseId": autonomous_data_warehouse_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=restore_autonomous_data_warehouse_details,
response_type="AutonomousDataWarehouse")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=restore_autonomous_data_warehouse_details,
response_type="AutonomousDataWarehouse")
def restore_autonomous_database(self, autonomous_database_id, restore_autonomous_database_details, **kwargs):
"""
Restores an Autonomous Database based on the provided request parameters.
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param RestoreAutonomousDatabaseDetails restore_autonomous_database_details: (required)
Request to perform an Autonomous Database restore.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/restore"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"restore_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=restore_autonomous_database_details,
response_type="AutonomousDatabase")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=restore_autonomous_database_details,
response_type="AutonomousDatabase")
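# Usage sketch (illustrative, not part of the generated client; assumes a
# configured ``oci.database.DatabaseClient`` named ``client`` and a valid
# Autonomous Database OCID string ``adb_id``):
#
#     import oci
#     from datetime import datetime
#
#     details = oci.database.models.RestoreAutonomousDatabaseDetails(
#         timestamp=datetime(2019, 6, 1, 12, 0, 0))
#     # Pass NoneRetryStrategy to explicitly disable retries for this call,
#     # overriding any client-level retry strategy.
#     response = client.restore_autonomous_database(
#         adb_id, details,
#         retry_strategy=oci.retry.NoneRetryStrategy())
#     restored = response.data  # oci.database.models.AutonomousDatabase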
def restore_database(self, database_id, restore_database_details, **kwargs):
"""
Restores a database based on the request parameters you provide.
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param RestoreDatabaseDetails restore_database_details: (required)
Request to perform a database restore.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Database`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/databases/{databaseId}/actions/restore"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"restore_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"databaseId": database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=restore_database_details,
response_type="Database")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=restore_database_details,
response_type="Database")
def start_autonomous_data_warehouse(self, autonomous_data_warehouse_id, **kwargs):
"""
**Deprecated.** To start an Autonomous Data Warehouse, use the :func:`start_autonomous_database` operation.
:param str autonomous_data_warehouse_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouse`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}/actions/start"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"start_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDataWarehouseId": autonomous_data_warehouse_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDataWarehouse")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDataWarehouse")
def start_autonomous_database(self, autonomous_database_id, **kwargs):
"""
Starts the specified Autonomous Database.
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/start"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"start_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDatabase")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDatabase")
def stop_autonomous_data_warehouse(self, autonomous_data_warehouse_id, **kwargs):
"""
**Deprecated.** To stop an Autonomous Data Warehouse, use the :func:`stop_autonomous_database` operation.
:param str autonomous_data_warehouse_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouse`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}/actions/stop"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"stop_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDataWarehouseId": autonomous_data_warehouse_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDataWarehouse")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDataWarehouse")
def stop_autonomous_database(self, autonomous_database_id, **kwargs):
"""
Stops the specified Autonomous Database.
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousDatabases/{autonomousDatabaseId}/actions/stop"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"stop_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousDatabaseId": autonomous_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDatabase")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="AutonomousDatabase")
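# Usage sketch (illustrative, not part of the generated client; assumes the
# same ``client`` and ``adb_id`` as above):
#
#     import oci
#
#     # A per-operation retry strategy overrides the client-level default.
#     response = client.stop_autonomous_database(
#         adb_id, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY)
#     stopped = response.data  # AutonomousDatabase; lifecycle_state moves
#                              # through STOPPING toward STOPPED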
def switchover_data_guard_association(self, database_id, data_guard_association_id, switchover_data_guard_association_details, **kwargs):
"""
Performs a switchover to transition the primary database of a Data Guard association into a standby role. The
standby database associated with the `dataGuardAssociationId` assumes the primary database role.
A switchover guarantees no data loss.
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param SwitchoverDataGuardAssociationDetails switchover_data_guard_association_details: (required)
Request to switchover a primary to a standby.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DataGuardAssociation`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}/actions/switchover"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"switchover_data_guard_association got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"databaseId": database_id,
"dataGuardAssociationId": data_guard_association_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=switchover_data_guard_association_details,
response_type="DataGuardAssociation")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=switchover_data_guard_association_details,
response_type="DataGuardAssociation")
def terminate_autonomous_container_database(self, autonomous_container_database_id, **kwargs):
"""
Terminates an Autonomous Container Database, which permanently deletes the container database and any databases within the container database. The database data is local to the Autonomous Exadata Infrastructure and will be lost when the container database is terminated. Oracle recommends that you back up any data in the Autonomous Container Database prior to terminating it.
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousContainerDatabases/{autonomousContainerDatabaseId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"terminate_autonomous_container_database got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousContainerDatabaseId": autonomous_container_database_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
def terminate_autonomous_exadata_infrastructure(self, autonomous_exadata_infrastructure_id, **kwargs):
"""
Terminates an Autonomous Exadata Infrastructure, which permanently deletes the Exadata Infrastructure and any container databases and databases contained in the Exadata Infrastructure. The database data is local to the Autonomous Exadata Infrastructure and will be lost when the system is terminated. Oracle recommends that you back up any data in the Autonomous Exadata Infrastructure prior to terminating it.
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type None
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/autonomousExadataInfrastructures/{autonomousExadataInfrastructureId}"
method = "DELETE"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"terminate_autonomous_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"autonomousExadataInfrastructureId": autonomous_exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params)
    def terminate_db_system(self, db_system_id, **kwargs):
        """
        Terminates a DB system and permanently deletes it and any databases running on it, and any storage volumes attached to it. The database data is local to the DB system and will be lost when the system is terminated. Oracle recommends that you back up any data in the DB system prior to terminating it.


        :param str db_system_id: (required)
            The DB system `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbSystems/{dbSystemId}"
        method = "DELETE"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "terminate_db_system got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "dbSystemId": db_system_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params)
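    # Illustrative usage sketch (not part of the generated client; the variable
    # names below are placeholders). The etag from a prior GET guards the DELETE
    # for optimistic concurrency, as described in the docstring above:
    #
    #     import oci
    #     client = oci.database.DatabaseClient(oci.config.from_file())
    #     response = client.get_db_system(db_system_id)
    #     etag = response.headers.get('etag')
    #     client.terminate_db_system(db_system_id, if_match=etag)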
    def update_autonomous_container_database(self, autonomous_container_database_id, update_autonomous_container_database_details, **kwargs):
        """
        Updates the properties of an Autonomous Container Database, such as the OCPU core count and storage size.


        :param str autonomous_container_database_id: (required)
            The Autonomous Container Database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateAutonomousContainerDatabaseDetails update_autonomous_container_database_details: (required)
            Request to update the properties of an Autonomous Container Database.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousContainerDatabase`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousContainerDatabases/{autonomousContainerDatabaseId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_autonomous_container_database got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousContainerDatabaseId": autonomous_container_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_container_database_details,
                response_type="AutonomousContainerDatabase")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_container_database_details,
                response_type="AutonomousContainerDatabase")
    def update_autonomous_data_warehouse(self, autonomous_data_warehouse_id, update_autonomous_data_warehouse_details, **kwargs):
        """
        **Deprecated.** To update the CPU core count and storage size of an Autonomous Data Warehouse, use the :func:`update_autonomous_database` operation.


        :param str autonomous_data_warehouse_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateAutonomousDataWarehouseDetails update_autonomous_data_warehouse_details: (required)
            Request to update the properties of an Autonomous Data Warehouse.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDataWarehouse`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDataWarehouses/{autonomousDataWarehouseId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_autonomous_data_warehouse got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDataWarehouseId": autonomous_data_warehouse_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_data_warehouse_details,
                response_type="AutonomousDataWarehouse")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_data_warehouse_details,
                response_type="AutonomousDataWarehouse")
    def update_autonomous_database(self, autonomous_database_id, update_autonomous_database_details, **kwargs):
        """
        Updates one or more attributes of the specified Autonomous Database. See the UpdateAutonomousDatabaseDetails resource for a full list of attributes that can be updated.


        :param str autonomous_database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateAutonomousDatabaseDetails update_autonomous_database_details: (required)
            Request to update the properties of an Autonomous Database.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousDatabase`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/{autonomousDatabaseId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_autonomous_database got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseId": autonomous_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_database_details,
                response_type="AutonomousDatabase")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_database_details,
                response_type="AutonomousDatabase")
    def update_autonomous_database_regional_wallet(self, update_autonomous_database_wallet_details, **kwargs):
        """
        Updates the Autonomous Database regional wallet.


        :param UpdateAutonomousDatabaseWalletDetails update_autonomous_database_wallet_details: (required)
            Request to update the properties of the Autonomous Database regional wallet.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/wallet"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_autonomous_database_regional_wallet got unknown kwargs: {!r}".format(extra_kwargs))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=update_autonomous_database_wallet_details)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                header_params=header_params,
                body=update_autonomous_database_wallet_details)
    def update_autonomous_database_wallet(self, autonomous_database_id, update_autonomous_database_wallet_details, **kwargs):
        """
        Updates the wallet for the specified Autonomous Database.


        :param str autonomous_database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateAutonomousDatabaseWalletDetails update_autonomous_database_wallet_details: (required)
            Request to update the properties of an Autonomous Database wallet.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type None
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousDatabases/{autonomousDatabaseId}/wallet"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_autonomous_database_wallet got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousDatabaseId": autonomous_database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_database_wallet_details)
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_database_wallet_details)
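    # Illustrative usage sketch (not part of the generated client): rotating the
    # wallet for one Autonomous Database. The `should_rotate` flag shown here is
    # a field of UpdateAutonomousDatabaseWalletDetails:
    #
    #     import oci
    #     client = oci.database.DatabaseClient(oci.config.from_file())
    #     wallet_details = oci.database.models.UpdateAutonomousDatabaseWalletDetails(
    #         should_rotate=True)
    #     client.update_autonomous_database_wallet(
    #         autonomous_database_id, wallet_details)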
    def update_autonomous_exadata_infrastructure(self, autonomous_exadata_infrastructure_id, update_autonomous_exadata_infrastructures_details, **kwargs):
        """
        Updates the properties of an Autonomous Exadata Infrastructure, such as the CPU core count.


        :param str autonomous_exadata_infrastructure_id: (required)
            The Autonomous Exadata Infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateAutonomousExadataInfrastructureDetails update_autonomous_exadata_infrastructures_details: (required)
            Request to update the properties of an Autonomous Exadata Infrastructure.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.AutonomousExadataInfrastructure`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/autonomousExadataInfrastructures/{autonomousExadataInfrastructureId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_autonomous_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "autonomousExadataInfrastructureId": autonomous_exadata_infrastructure_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_exadata_infrastructures_details,
                response_type="AutonomousExadataInfrastructure")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_autonomous_exadata_infrastructures_details,
                response_type="AutonomousExadataInfrastructure")
    def update_backup_destination(self, backup_destination_id, update_backup_destination_details, **kwargs):
        """
        If no database is associated with the backup destination:
        - For a RECOVERY_APPLIANCE backup destination, updates the connection string and/or the list of VPC users.
        - For an NFS backup destination, updates the NFS location.


        :param str backup_destination_id: (required)
            The `OCID`__ of the backup destination.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateBackupDestinationDetails update_backup_destination_details: (required)
            For a RECOVERY_APPLIANCE backup destination, request to update the connection string and/or the list of VPC users.
            For an NFS backup destination, request to update the NFS location.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param str opc_request_id: (optional)
            Unique identifier for the request.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.BackupDestination`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/backupDestinations/{backupDestinationId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match",
            "opc_request_id"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_backup_destination got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "backupDestinationId": backup_destination_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing),
            "opc-request-id": kwargs.get("opc_request_id", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_backup_destination_details,
                response_type="BackupDestination")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_backup_destination_details,
                response_type="BackupDestination")
    def update_database(self, database_id, update_database_details, **kwargs):
        """
        Updates the specified database based on the request parameters you provide.


        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateDatabaseDetails update_database_details: (required)
            Request to perform database update.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.Database`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/databases/{databaseId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_database got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "databaseId": database_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_database_details,
                response_type="Database")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_database_details,
                response_type="Database")
    def update_db_home(self, db_home_id, update_db_home_details, **kwargs):
        """
        Patches the specified dbHome.


        :param str db_home_id: (required)
            The Database Home `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param UpdateDbHomeDetails update_db_home_details: (required)
            Request to update the properties of a DB Home.

        :param str if_match: (optional)
            For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
            parameter to the value of the etag from a previous GET or POST response for that resource. The resource
            will be updated or deleted only if the etag you provide matches the resource's current etag value.

        :param obj retry_strategy: (optional)
            A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.

            This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
            is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.

            To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.

        :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbHome`
        :rtype: :class:`~oci.response.Response`
        """
        resource_path = "/dbHomes/{dbHomeId}"
        method = "PUT"

        # Don't accept unknown kwargs
        expected_kwargs = [
            "retry_strategy",
            "if_match"
        ]
        extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
        if extra_kwargs:
            raise ValueError(
                "update_db_home got unknown kwargs: {!r}".format(extra_kwargs))

        path_params = {
            "dbHomeId": db_home_id
        }

        path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}

        for (k, v) in six.iteritems(path_params):
            if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
                raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))

        header_params = {
            "accept": "application/json",
            "content-type": "application/json",
            "if-match": kwargs.get("if_match", missing)
        }
        header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}

        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            retry_strategy = kwargs.get('retry_strategy')

        if retry_strategy:
            return retry_strategy.make_retrying_call(
                self.base_client.call_api,
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_db_home_details,
                response_type="DbHome")
        else:
            return self.base_client.call_api(
                resource_path=resource_path,
                method=method,
                path_params=path_params,
                header_params=header_params,
                body=update_db_home_details,
                response_type="DbHome")
def update_db_system(self, db_system_id, update_db_system_details, **kwargs):
"""
Updates the properties of a DB system, such as the CPU core count.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param UpdateDbSystemDetails update_db_system_details: (required)
Request to update the properties of a DB system.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.DbSystem`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_db_system got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_db_system_details,
response_type="DbSystem")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_db_system_details,
response_type="DbSystem")
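Each of these update methods resolves its retry strategy the same way: an operation-level `retry_strategy` kwarg takes precedence over the client-level default. A minimal, self-contained sketch of that precedence logic (illustrative names only; the real strategies live in `oci.retry` and are not imported here):

```python
# Sketch of the per-operation retry-strategy override used by these methods.
class SketchClient:
    def __init__(self, retry_strategy=None):
        # Client-level default, used when the call site passes nothing.
        self.retry_strategy = retry_strategy

    def resolve_retry_strategy(self, **kwargs):
        retry_strategy = self.retry_strategy
        if kwargs.get('retry_strategy'):
            # Operation-level strategy overrides the client-level one.
            retry_strategy = kwargs.get('retry_strategy')
        return retry_strategy

client = SketchClient(retry_strategy='client-default')
assert client.resolve_retry_strategy() == 'client-default'
assert client.resolve_retry_strategy(retry_strategy='per-call') == 'per-call'
```

With a resolved strategy in hand, the method either wraps `call_api` in `make_retrying_call` or invokes it directly, which is why both branches pass identical arguments.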
def update_exadata_infrastructure(self, exadata_infrastructure_id, update_exadata_infrastructure_details, **kwargs):
"""
Updates the Exadata infrastructure.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param UpdateExadataInfrastructureDetails update_exadata_infrastructure_details: (required)
Request to update the properties of an Exadata infrastructure
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExadataInfrastructure`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_exadata_infrastructure got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_exadata_infrastructure_details,
response_type="ExadataInfrastructure")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_exadata_infrastructure_details,
response_type="ExadataInfrastructure")
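The header dictionaries above are built with a `missing` sentinel so that optional headers the caller never supplied are dropped before the request goes out. A standalone sketch of that filtering (the sentinel object here is local to the example; the SDK's real sentinel is defined elsewhere in the package):

```python
# `missing` distinguishes "caller never passed this kwarg" from an explicit None.
missing = object()

def build_headers(**kwargs):
    header_params = {
        "accept": "application/json",
        "content-type": "application/json",
        "if-match": kwargs.get("if_match", missing),
        "opc-request-id": kwargs.get("opc_request_id", missing),
    }
    # Drop headers that were never supplied (or were supplied as None).
    return {k: v for (k, v) in header_params.items()
            if v is not missing and v is not None}

assert "if-match" not in build_headers()
assert build_headers(if_match='"abc"')["if-match"] == '"abc"'
```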
def update_exadata_iorm_config(self, db_system_id, exadata_iorm_config_update_details, **kwargs):
"""
Update `IORM` Settings for the requested Exadata DB System.
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param ExadataIormConfigUpdateDetails exadata_iorm_config_update_details: (required)
Request to perform database update.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.ExadataIormConfig`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/dbSystems/{dbSystemId}/ExadataIormConfig"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_exadata_iorm_config got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"dbSystemId": db_system_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing),
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=exadata_iorm_config_update_details,
response_type="ExadataIormConfig")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=exadata_iorm_config_update_details,
response_type="ExadataIormConfig")
def update_maintenance_run(self, maintenance_run_id, update_maintenance_run_details, **kwargs):
"""
Updates the properties of a Maintenance Run, such as the state of a Maintenance Run.
:param str maintenance_run_id: (required)
The Maintenance Run OCID.
:param UpdateMaintenanceRunDetails update_maintenance_run_details: (required)
Request to update the properties of a Maintenance Run.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.MaintenanceRun`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/maintenanceRuns/{maintenanceRunId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_maintenance_run got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"maintenanceRunId": maintenance_run_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_maintenance_run_details,
response_type="MaintenanceRun")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_maintenance_run_details,
response_type="MaintenanceRun")
def update_vm_cluster(self, vm_cluster_id, update_vm_cluster_details, **kwargs):
"""
Updates the specified VM cluster.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param UpdateVmClusterDetails update_vm_cluster_details: (required)
Request to update the attributes of a VM cluster.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmCluster`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/vmClusters/{vmClusterId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_vm_cluster got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"vmClusterId": vm_cluster_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_vm_cluster_details,
response_type="VmCluster")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_vm_cluster_details,
response_type="VmCluster")
def update_vm_cluster_network(self, exadata_infrastructure_id, vm_cluster_network_id, update_vm_cluster_network_details, **kwargs):
"""
Updates the specified VM cluster network.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str vm_cluster_network_id: (required)
The VM cluster network `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param UpdateVmClusterNetworkDetails update_vm_cluster_network_details: (required)
Request to update the properties of a VM cluster network.
:param str if_match: (optional)
For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
parameter to the value of the etag from a previous GET or POST response for that resource. The resource
will be updated or deleted only if the etag you provide matches the resource's current etag value.
:param str opc_request_id: (optional)
Unique identifier for the request.
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmClusterNetwork`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}"
method = "PUT"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"if_match",
"opc_request_id"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"update_vm_cluster_network got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id,
"vmClusterNetworkId": vm_cluster_network_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"if-match": kwargs.get("if_match", missing),
"opc-request-id": kwargs.get("opc_request_id", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_vm_cluster_network_details,
response_type="VmClusterNetwork")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
body=update_vm_cluster_network_details,
response_type="VmClusterNetwork")
def validate_vm_cluster_network(self, exadata_infrastructure_id, vm_cluster_network_id, **kwargs):
"""
Validates the specified VM cluster network.
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str vm_cluster_network_id: (required)
The VM cluster network `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str opc_request_id: (optional)
Unique identifier for the request.
:param str opc_retry_token: (optional)
A token that uniquely identifies a request so it can be retried in case of a timeout or
server error without risk of executing that same action again. Retry tokens expire after 24
hours, but can be invalidated before then due to conflicting operations (for example, if a resource
has been deleted and purged from the system, then a retry of the original creation request
may be rejected).
:param obj retry_strategy: (optional)
A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`
is also available. The specifics of the default retry strategy are described `here <https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/sdk_behaviors/retries.html>`__.
To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.
:return: A :class:`~oci.response.Response` object with data of type :class:`~oci.database.models.VmClusterNetwork`
:rtype: :class:`~oci.response.Response`
"""
resource_path = "/exadataInfrastructures/{exadataInfrastructureId}/vmClusterNetworks/{vmClusterNetworkId}/actions/validate"
method = "POST"
# Don't accept unknown kwargs
expected_kwargs = [
"retry_strategy",
"opc_request_id",
"opc_retry_token"
]
extra_kwargs = [_key for _key in six.iterkeys(kwargs) if _key not in expected_kwargs]
if extra_kwargs:
raise ValueError(
"validate_vm_cluster_network got unknown kwargs: {!r}".format(extra_kwargs))
path_params = {
"exadataInfrastructureId": exadata_infrastructure_id,
"vmClusterNetworkId": vm_cluster_network_id
}
path_params = {k: v for (k, v) in six.iteritems(path_params) if v is not missing}
for (k, v) in six.iteritems(path_params):
if v is None or (isinstance(v, six.string_types) and len(v.strip()) == 0):
raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k))
header_params = {
"accept": "application/json",
"content-type": "application/json",
"opc-request-id": kwargs.get("opc_request_id", missing),
"opc-retry-token": kwargs.get("opc_retry_token", missing)
}
header_params = {k: v for (k, v) in six.iteritems(header_params) if v is not missing and v is not None}
retry_strategy = self.retry_strategy
if kwargs.get('retry_strategy'):
retry_strategy = kwargs.get('retry_strategy')
if retry_strategy:
if not isinstance(retry_strategy, retry.NoneRetryStrategy):
self.base_client.add_opc_retry_token_if_needed(header_params)
return retry_strategy.make_retrying_call(
self.base_client.call_api,
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="VmClusterNetwork")
else:
return self.base_client.call_api(
resource_path=resource_path,
method=method,
path_params=path_params,
header_params=header_params,
response_type="VmClusterNetwork")
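`validate_vm_cluster_network` differs from the PUT operations above in one respect: because POST is not idempotent, an `opc-retry-token` is injected before any retrying strategy runs, so the server can deduplicate replayed requests. A sketch of that guard, with simplified stand-ins for the SDK classes (the real ones are `oci.retry.NoneRetryStrategy` and `base_client.add_opc_retry_token_if_needed`):

```python
import uuid

class NoneRetryStrategy:
    """Stand-in for oci.retry.NoneRetryStrategy: never retries."""

def add_opc_retry_token_if_needed(header_params):
    # Only generate a token if the caller has not already supplied one.
    if "opc-retry-token" not in header_params:
        header_params["opc-retry-token"] = str(uuid.uuid4())

def prepare_headers(retry_strategy, header_params):
    if retry_strategy and not isinstance(retry_strategy, NoneRetryStrategy):
        # A retrying strategy may replay the POST, so make it idempotent.
        add_opc_retry_token_if_needed(header_params)
    return header_params

assert "opc-retry-token" in prepare_headers(object(), {})
assert "opc-retry-token" not in prepare_headers(NoneRetryStrategy(), {})
```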
# Source: test/connectivity/acts/tests/google/ble/scan/BleOpportunisticScanTest.py
# Repo: Keneral/atools @ 055e76621340c7dced125e9de56e2645b5e1cdfb (license: Unlicense)
#!/usr/bin/env python3.4
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
"""
This test script exercises different opportunistic scan scenarios.
It is expected that the second AndroidDevice is able to advertise.
This test script was designed with this setup in mind:
Shield box one: Android Device, Android Device
"""
from queue import Empty
from acts.test_utils.bt.BluetoothBaseTest import BluetoothBaseTest
from acts.test_utils.bt.BleEnum import ScanSettingsScanMode
from acts.test_utils.bt.bt_test_utils import batch_scan_result
from acts.test_utils.bt.bt_test_utils import cleanup_scanners_and_advertisers
from acts.test_utils.bt.bt_test_utils import generate_ble_advertise_objects
from acts.test_utils.bt.bt_test_utils import generate_ble_scan_objects
from acts.test_utils.bt.bt_test_utils import get_advanced_droid_list
from acts.test_utils.bt.bt_test_utils import reset_bluetooth
from acts.test_utils.bt.bt_test_utils import scan_result
class BleOpportunisticScanTest(BluetoothBaseTest):
default_timeout = 10
max_scan_instances = 28
report_delay = 2000
scan_callbacks = []
adv_callbacks = []
active_scan_callback_list = []
active_adv_callback_list = []
def __init__(self, controllers):
BluetoothBaseTest.__init__(self, controllers)
self.droid_list = get_advanced_droid_list(self.android_devices)
self.scn_ad = self.android_devices[0]
self.adv_ad = self.android_devices[1]
if self.droid_list[1]['max_advertisements'] == 0:
self.tests = ()
return
self.tests = (
"test_scan_result_no_advertisement",
"test_batch_scan_result_no_advertisement",
"test_scan_result",
"test_batch_scan_result_not_expected",
"test_scan_result_not_expected",
"test_max_opportunistic_scan_instances",
"test_discover_opportunistic_scan_result_off_secondary_scan_filter",
"test_negative_opportunistic_scan_filter_result_off_secondary_scan_result",
"test_opportunistic_scan_filter_result_off_secondary_scan_result",
)
if self.droid_list[0]['batch_scan_supported']:
self.tests = self.tests + (
"test_batch_scan_result",
"test_max_opportunistic_batch_scan_instances", )
def teardown_test(self):
cleanup_scanners_and_advertisers(
self.scn_ad, self.active_scan_callback_list, self.adv_ad,
self.active_adv_callback_list)
self.active_adv_callback_list = []
self.active_scan_callback_list = []
def on_exception(self, test_name, begin_time):
reset_bluetooth(self.android_devices)
def _setup_generic_advertisement(self):
adv_callback, adv_data, adv_settings = generate_ble_advertise_objects(
self.adv_ad.droid)
self.adv_ad.droid.bleStartBleAdvertising(adv_callback, adv_data,
adv_settings)
self.active_adv_callback_list.append(adv_callback)
def _verify_no_events_found(self, event_name):
try:
event = self.scn_ad.ed.pop_event(event_name, self.default_timeout)
self.log.error("Found an event when none was expected: {}".format(
event))
return False
except Empty:
self.log.info("No scan result found as expected.")
return True
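`_verify_no_events_found` inverts the usual event assertion: it pops with a timeout and treats the `Empty` exception as success. The same pattern can be sketched with a plain `queue.Queue` standing in for the event dispatcher (`pop_event` in the real test harness):

```python
from queue import Queue, Empty

def verify_no_events_found(event_queue, timeout=0.1):
    # Success here means the pop times out: nothing was delivered.
    try:
        event = event_queue.get(timeout=timeout)
        print("Found an event when none was expected: {}".format(event))
        return False
    except Empty:
        return True

q = Queue()
assert verify_no_events_found(q) is True      # empty queue: passes
q.put("onScanResults")
assert verify_no_events_found(q) is False     # unexpected event: fails
```

Keeping the timeout short in the sketch avoids a long block; the real helper uses the test's `default_timeout` because an advertisement may legitimately take several seconds to surface.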
@BluetoothBaseTest.bt_test_wrap
def test_scan_result_no_advertisement(self):
"""Test opportunistic scan with no advertisement.
Tests opportunistic scan where there are no advertisements. This should
not find any onScanResults.
Steps:
1. Initialize scanner with scan mode set to opportunistic mode.
2. Start scanning on dut 0
3. Pop onScanResults event on the scanner
Expected Result:
Find no advertisements with the opportunistic scan instance.
Returns:
Pass if True
Fail if False
TAGS: LE, Advertising, Scanning, Opportunistic Scan
Priority: 1
"""
self.scn_ad.droid.bleSetScanSettingsScanMode(
ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
self.scn_ad.droid)
self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
scan_callback)
self.active_scan_callback_list.append(scan_callback)
if not self._verify_no_events_found(scan_result.format(scan_callback)):
return False
self.scn_ad.droid.bleStopBleScan(scan_callback)
return True
@BluetoothBaseTest.bt_test_wrap
def test_batch_scan_result_no_advertisement(self):
"""Test batch opportunistic scan without an advertisement.
Tests opportunistic scan where there are no advertisements. This should
not find any onBatchScanResult.
Steps:
1. Initialize scanner with scan mode set to opportunistic mode.
2. Set report delay seconds such that onBatchScanResult events are
expected
2. Start scanning on dut 0
3. Pop onBatchScanResult event on the scanner
Expected Result:
Find no advertisements with the opportunistic scan instance.
Returns:
Pass if True
Fail if False
TAGS: LE, Advertising, Scanning, Opportunistic Scan, Batch Scanning
Priority: 1
"""
self.scn_ad.droid.bleSetScanSettingsScanMode(
ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
self.report_delay)
filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
self.scn_ad.droid)
self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
scan_callback)
self.active_scan_callback_list.append(scan_callback)
if not self._verify_no_events_found(batch_scan_result.format(
scan_callback)):
return False
self.scn_ad.droid.bleStopBleScan(scan_callback)
return True
@BluetoothBaseTest.bt_test_wrap
def test_scan_result(self):
"""Test opportunistic scan with an advertisement.
Tests opportunistic scan where it will only report scan results when
other registered scanners find results.
Steps:
1. Initialize advertiser and start advertisement on dut1
2. Initialize scanner with scan mode set to opportunistic mode on dut0
and start scanning
3. Try to find an event, expect none.
4. Start a second scanner on dut0, with any other mode set
5. Pop onScanResults event on the second scanner
6. Pop onScanResults event on the first scanner
Expected Result:
Scan result is found on the opportunistic scan instance.
Returns:
Pass if True
Fail if False
TAGS: LE, Advertising, Scanning, Opportunistic Scan
Priority: 1
"""
self._setup_generic_advertisement()
self.scn_ad.droid.bleSetScanSettingsScanMode(
ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
self.scn_ad.droid)
self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
scan_callback)
self.active_scan_callback_list.append(scan_callback)
if not self._verify_no_events_found(scan_result.format(scan_callback)):
return False
self.scn_ad.droid.bleSetScanSettingsScanMode(
ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
filter_list2, scan_settings2, scan_callback2 = (
generate_ble_scan_objects(self.scn_ad.droid))
self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
scan_callback2)
self.active_scan_callback_list.append(scan_callback2)
self.scn_ad.ed.pop_event(
scan_result.format(scan_callback2), self.default_timeout)
self.scn_ad.ed.pop_event(
scan_result.format(scan_callback), self.default_timeout)
return True
@BluetoothBaseTest.bt_test_wrap
def test_batch_scan_result(self):
"""Test batch opportunistic scan with advertisement.
Tests opportunistic scan where it will only report scan results when
other registered scanners find results. Set the report delay millis such
that an onBatchScanResult is expected.
Steps:
1. Initialize advertiser and start advertisement on dut1
2. Initialize scanner with scan mode set to opportunistic mode and
set scan settings report delay seconds such that a batch scan is
expected
3. Start scanning on dut 0
4. Try to find an event, expect none.
5. Start a second scanner on dut0, with any other mode set and set scan
settings report delay millis such that an onBatchScanResult is expected
6. Pop onBatchScanResult event on the second scanner
7. Pop onBatchScanResult event on the first scanner
Expected Result:
Find a batch scan result on both opportunistic scan instances.
Returns:
Pass if True
Fail if False
TAGS: LE, Advertising, Scanning, Opportunistic Scan, Batch Scanning
Priority: 1
"""
self._setup_generic_advertisement()
self.scn_ad.droid.bleSetScanSettingsScanMode(
ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
self.report_delay)
filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
self.scn_ad.droid)
self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
scan_callback)
self.active_scan_callback_list.append(scan_callback)
if not self._verify_no_events_found(batch_scan_result.format(
scan_callback)):
return False
self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
self.report_delay)
self.scn_ad.droid.bleSetScanSettingsScanMode(
ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
filter_list2 = self.scn_ad.droid.bleGenFilterList()
scan_settings2 = self.scn_ad.droid.bleBuildScanSetting()
scan_callback2 = self.scn_ad.droid.bleGenScanCallback()
self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
scan_callback2)
self.active_scan_callback_list.append(scan_callback2)
self.scn_ad.ed.pop_event(
batch_scan_result.format(scan_callback2), self.default_timeout)
self.scn_ad.ed.pop_event(
batch_scan_result.format(scan_callback), self.default_timeout)
return True

    @BluetoothBaseTest.bt_test_wrap
    def test_batch_scan_result_not_expected(self):
        """Test opportunistic batch scan without expecting an event.

        Tests opportunistic scan where it will only report scan results when
        other registered scanners find results. Set the report delay millis
        such that a batch scan is not expected.

        Steps:
        1. Initialize advertiser and start advertisement on dut1
        2. Initialize scanner with scan mode set to opportunistic mode and
        set scan settings report delay seconds such that a batch scan is
        expected.
        3. Start scanning on dut 0
        4. Try to find an event, expect none.
        5. Start a second scanner on dut0, with any other mode set and set
        scan settings report delay millis to 0 such that an onBatchScanResult
        is not expected.
        6. Pop onScanResults event on the second scanner
        7. Pop onBatchScanResult event on the first scanner

        Expected Result:
        Batch scan result is not expected on opportunistic scan instance.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan, Batch Scanning
        Priority: 1
        """
        self._setup_generic_advertisement()
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
        self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
            self.report_delay)
        filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
            self.scn_ad.droid)
        self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                          scan_callback)
        self.active_scan_callback_list.append(scan_callback)
        if not self._verify_no_events_found(
                batch_scan_result.format(scan_callback)):
            return False
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        self.scn_ad.ed.pop_event(
            scan_result.format(scan_callback2), self.default_timeout)
        return self._verify_no_events_found(
            batch_scan_result.format(scan_callback))

    @BluetoothBaseTest.bt_test_wrap
    def test_scan_result_not_expected(self):
        """Test opportunistic scan without expecting an event.

        Tests opportunistic scan where it will only report batch scan results
        when other registered scanners find results.

        Steps:
        1. Initialize advertiser and start advertisement on dut1
        2. Initialize scanner with scan mode set to opportunistic mode.
        3. Start scanning on dut 0
        4. Try to find an event, expect none.
        5. Start a second scanner on dut0, with any other mode set and set
        scan settings report delay millis such that an onBatchScanResult is
        expected
        6. Pop onBatchScanResult event on the second scanner
        7. Pop onScanResults event on the first scanner

        Expected Result:
        Scan result is not expected on opportunistic scan instance.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan
        Priority: 1
        """
        self._setup_generic_advertisement()
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
        filter_list = self.scn_ad.droid.bleGenFilterList()
        scan_settings = self.scn_ad.droid.bleBuildScanSetting()
        scan_callback = self.scn_ad.droid.bleGenScanCallback()
        self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                          scan_callback)
        self.active_scan_callback_list.append(scan_callback)
        if not self._verify_no_events_found(scan_result.format(scan_callback)):
            return False
        self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
            self.report_delay)
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        self.scn_ad.ed.pop_event(
            batch_scan_result.format(scan_callback2), self.default_timeout)
        return self._verify_no_events_found(scan_result.format(scan_callback))

    @BluetoothBaseTest.bt_test_wrap
    def test_max_opportunistic_scan_instances(self):
        """Test max number of opportunistic scan instances.

        Tests max instances of opportunistic scans. Each instance should
        find an onScanResults event.

        Steps:
        1. Initialize advertiser and start advertisement on dut1
        2. Set scan settings to opportunistic scan on dut0 scan instance
        3. Start scan from step 2
        4. Repeat steps two and three until there are max_scan_instances-1
        scan instances
        5. Start a regular ble scan on dut0 with the last available scan
        instance
        6. Pop onScanResults event on all scan instances

        Expected Result:
        Each opportunistic scan instance finds an advertisement.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan
        Priority: 1
        """
        self._setup_generic_advertisement()
        for _ in range(self.max_scan_instances - 1):
            self.scn_ad.droid.bleSetScanSettingsScanMode(
                ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
            filter_list = self.scn_ad.droid.bleGenFilterList()
            scan_settings = self.scn_ad.droid.bleBuildScanSetting()
            scan_callback = self.scn_ad.droid.bleGenScanCallback()
            self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                              scan_callback)
            self.active_scan_callback_list.append(scan_callback)
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        for callback in self.active_scan_callback_list:
            self.scn_ad.ed.pop_event(
                scan_result.format(callback), self.default_timeout)
        return True
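The waits above pop events whose names are built by formatting a callback id into a template string (`scan_result.format(callback)`). A standalone sketch of that naming scheme; the template strings below are assumptions modeled on these calls, not values imported from the test harness:

```python
# Hypothetical per-callback event-name templates in the style used above:
# each scan callback id is formatted into its own event name.
scan_result = "BleScan{}onScanResults"
batch_scan_result = "BleScan{}onBatchScanResult"

callback = 3
print(scan_result.format(callback))        # BleScan3onScanResults
print(batch_scan_result.format(callback))  # BleScan3onBatchScanResult
```

This is why each registered scanner can be awaited independently: the event dispatcher keys events on the formatted name, so popping one callback's event never consumes another's.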

    @BluetoothBaseTest.bt_test_wrap
    def test_max_opportunistic_batch_scan_instances(self):
        """Test max opportunistic batch scan instances.

        Tests max instances of opportunistic batch scans. Each instance
        should find an onBatchScanResult event.

        Steps:
        1. Initialize advertiser and start advertisement on dut1
        2. Set scan settings to opportunistic scan on dut0 scan instance and
        set report delay seconds such that an onBatchScanResult is expected
        3. Start scan from step 2
        4. Repeat steps two and three until there are max_scan_instances-1
        scan instances
        5. Start a regular ble scan on dut0 with the last available scan
        instance
        6. Pop onBatchScanResult event on all scan instances

        Expected Result:
        Each opportunistic scan instance finds an advertisement.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan, Batch Scanning
        Priority: 1
        """
        self._setup_generic_advertisement()
        for _ in range(self.max_scan_instances - 1):
            self.scn_ad.droid.bleSetScanSettingsScanMode(
                ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
            self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
                self.report_delay)
            filter_list = self.scn_ad.droid.bleGenFilterList()
            scan_settings = self.scn_ad.droid.bleBuildScanSetting()
            scan_callback = self.scn_ad.droid.bleGenScanCallback()
            self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                              scan_callback)
            self.active_scan_callback_list.append(scan_callback)
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        self.scn_ad.droid.bleSetScanSettingsReportDelayMillis(
            self.report_delay)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        for callback in self.active_scan_callback_list:
            self.scn_ad.ed.pop_event(
                batch_scan_result.format(callback), self.default_timeout)
        return True

    @BluetoothBaseTest.bt_test_wrap
    def test_discover_opportunistic_scan_result_off_secondary_scan_filter(
            self):
        """Test opportunistic scan result from secondary scan filter.

        Tests opportunistic scan where the secondary scan instance does not
        find an advertisement but the scan instance with scan mode set to
        opportunistic scan will find an advertisement.

        Steps:
        1. Initialize advertiser and start advertisement on dut1 (make sure
        the advertisement is not advertising the device name)
        2. Set scan settings to opportunistic scan on dut0 scan instance
        3. Start scan from step 2
        4. Try to find an event, expect none
        5. Start a second scanner on dut0, with any other mode set and set
        the scan filter device name to "opp_test"
        6. Pop onScanResults from the second scanner
        7. Expect no events
        8. Pop onScanResults from the first scanner

        Expected Result:
        Opportunistic scan instance finds an advertisement.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan
        Priority: 1
        """
        self._setup_generic_advertisement()
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
        filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
            self.scn_ad.droid)
        self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                          scan_callback)
        self.active_scan_callback_list.append(scan_callback)
        if not self._verify_no_events_found(scan_result.format(scan_callback)):
            return False
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleSetScanFilterDeviceName("opp_test")
        self.scn_ad.droid.bleBuildScanFilter(filter_list2)
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        if not self._verify_no_events_found(
                scan_result.format(scan_callback2)):
            return False
        self.scn_ad.ed.pop_event(
            scan_result.format(scan_callback), self.default_timeout)
        return True

    @BluetoothBaseTest.bt_test_wrap
    def test_negative_opportunistic_scan_filter_result_off_secondary_scan_result(
            self):
        """Test opportunistic scan not found scenario.

        Tests opportunistic scan where the secondary scan instance does find
        an advertisement but the scan instance with scan mode set to
        opportunistic scan does not find an advertisement due to mismatched
        scan filters.

        Steps:
        1. Initialize advertiser and start advertisement on dut1 (make sure
        the advertisement is not advertising the device name)
        2. Set scan settings to opportunistic scan on dut0 scan instance and
        set the scan filter device name to "opp_test"
        3. Start scan from step 2
        4. Try to find an event, expect none
        5. Start a second scanner on dut0, with any other mode set
        6. Pop onScanResults from the second scanner
        7. Pop onScanResults from the first scanner
        8. Expect no events

        Expected Result:
        Opportunistic scan instance doesn't find any advertisements.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan
        Priority: 1
        """
        self._setup_generic_advertisement()
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
        filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
            self.scn_ad.droid)
        self.scn_ad.droid.bleSetScanFilterDeviceName("opp_test")
        self.scn_ad.droid.bleBuildScanFilter(filter_list)
        self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                          scan_callback)
        self.active_scan_callback_list.append(scan_callback)
        if not self._verify_no_events_found(scan_result.format(scan_callback)):
            return False
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        self.scn_ad.ed.pop_event(
            scan_result.format(scan_callback2), self.default_timeout)
        return self._verify_no_events_found(scan_result.format(scan_callback))

    @BluetoothBaseTest.bt_test_wrap
    def test_opportunistic_scan_filter_result_off_secondary_scan_result(self):
        """Test opportunistic scan from a secondary scan result.

        Tests opportunistic scan where the scan filters are the same between
        the first scan instance with opportunistic scan set and the second
        instance with any other mode set.

        Steps:
        1. Initialize advertiser and start advertisement on dut1 (make sure
        the advertisement is not advertising the device name)
        2. Set scan settings to opportunistic scan on dut0 scan instance and
        set the scan filter device name to the advertiser's device name
        3. Start scan from step 2
        4. Try to find an event, expect none
        5. Start a second scanner on dut0, with any other mode set and set
        the scan filter device name to the advertiser's device name
        6. Pop onScanResults from the second scanner
        7. Pop onScanResults from the first scanner

        Expected Result:
        Opportunistic scan instance finds an advertisement.

        Returns:
          Pass if True
          Fail if False

        TAGS: LE, Advertising, Scanning, Opportunistic Scan
        Priority: 1
        """
        self.adv_ad.droid.bleSetAdvertiseDataIncludeDeviceName(True)
        self._setup_generic_advertisement()
        adv_device_name = self.adv_ad.droid.bluetoothGetLocalName()
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_OPPORTUNISTIC.value)
        filter_list, scan_settings, scan_callback = generate_ble_scan_objects(
            self.scn_ad.droid)
        self.scn_ad.droid.bleSetScanFilterDeviceName(adv_device_name)
        self.scn_ad.droid.bleBuildScanFilter(filter_list)
        self.scn_ad.droid.bleStartBleScan(filter_list, scan_settings,
                                          scan_callback)
        self.active_scan_callback_list.append(scan_callback)
        if not self._verify_no_events_found(scan_result.format(scan_callback)):
            return False
        self.scn_ad.droid.bleSetScanSettingsScanMode(
            ScanSettingsScanMode.SCAN_MODE_LOW_LATENCY.value)
        filter_list2, scan_settings2, scan_callback2 = (
            generate_ble_scan_objects(self.scn_ad.droid))
        self.scn_ad.droid.bleSetScanFilterDeviceName(adv_device_name)
        self.scn_ad.droid.bleBuildScanFilter(filter_list2)
        self.scn_ad.droid.bleStartBleScan(filter_list2, scan_settings2,
                                          scan_callback2)
        self.active_scan_callback_list.append(scan_callback2)
        self.scn_ad.ed.pop_event(
            scan_result.format(scan_callback2), self.default_timeout)
        self.scn_ad.ed.pop_event(
            scan_result.format(scan_callback), self.default_timeout)
        return True


# File: test/test_feed.py (repo: aqche/flaskedd, license: MIT)
from test import helpers
from passlib.hash import bcrypt

from flaskeddit import db
from flaskeddit.models import AppUser, Community, CommunityMember, Post


class TestFeed:
    def test_get_feed(self, test_client):
        """
        Test GET request to the / route to assert posts from the user's
        joined communities are displayed.
        """
        password = "Mockpassword123!"
        hashed_password = bcrypt.hash(password)
        app_user = AppUser(username="mockusername", password=hashed_password)
        community = Community(
            name="mockcommunity", description="mockdescription", app_user=app_user
        )
        community_member = CommunityMember(app_user=app_user, community=community)
        post = Post(
            title="mockposttitle",
            post="mockpost",
            app_user=app_user,
            community=community,
        )
        db.session.add(app_user)
        db.session.add(community)
        db.session.add(community_member)
        db.session.add(post)
        db.session.commit()
        helpers.login(test_client, app_user.username, password)
        response = test_client.get("/")
        assert response is not None
        assert response.status_code == 200
        assert bytes(post.title, "utf-8") in response.data

    def test_get_top_feed(self, test_client):
        """
        Test GET request to the /feed/top route to assert posts from the
        user's joined communities are displayed.
        """
        password = "Mockpassword123!"
        hashed_password = bcrypt.hash(password)
        app_user = AppUser(username="mockusername", password=hashed_password)
        community = Community(
            name="mockcommunity", description="mockdescription", app_user=app_user
        )
        community_member = CommunityMember(app_user=app_user, community=community)
        post = Post(
            title="mockposttitle",
            post="mockpost",
            app_user=app_user,
            community=community,
        )
        db.session.add(app_user)
        db.session.add(community)
        db.session.add(community_member)
        db.session.add(post)
        db.session.commit()
        helpers.login(test_client, app_user.username, password)
        response = test_client.get("/feed/top")
        assert response is not None
        assert response.status_code == 200
        assert bytes(post.title, "utf-8") in response.data
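The body checks above rely on `bytes` substring containment: the `in` operator on `bytes` performs subsequence matching, so encoding the title to UTF-8 and testing it against the raw response body works directly. A tiny standalone illustration with a mock response body (the HTML below is made up for the sketch):

```python
# Simulate the response-body assertion used in the tests: encode the title
# to UTF-8 and check for it as a byte substring of the rendered page body.
body = b"<html><body><h1>mockposttitle</h1></body></html>"
title = "mockposttitle"

found = bytes(title, "utf-8") in body
print(found)  # True
```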


# File: userbot/modules/chat.py (repo: oxyda-fox/XBot-Remix,
#       licenses: Naumen, Condor-1.1, MS-PL)
#Encript Marshal By XVenom
#https://github.com/xvenom15
import marshal
# The original module body is a single marshal-serialized code object run via
# exec(marshal.loads(b'...')). The byte blob (~29 KB of obfuscated bytecode,
# truncated in this copy) is elided here; readable strings embedded in it
# identify the module as "Userbot module containing userid, chatid and log
# commands" and show handlers for commands such as .getid, .link, .getbot,
# .logit, .kickme, .mutechat, .unmutechat, .regexninja and .chatinfo.
\x9b\x00dC\x9d\x037\x00}\'t\x04|\x02d\x1d\x83\x02\x90\x07r\xa0|\'dE|#\x9b\x00d%\x9d\x037\x00}\'|\x02j&\x90\x07r\x98|\'dF|\x02j0d\x06\x19\x00j1\x9b\x00d%\x9d\x037\x00}\'|\'dG|\x02j0d\x06\x19\x00j2\x9b\x00d%\x9d\x037\x00}\'|\'dH|\x02j0d\x06\x19\x00j3\x9b\x00dC\x9d\x037\x00}\'n\x08|\'d%7\x00}\'t\x04|\x02dI\x83\x02\x90\x07r\xbc|\x02j4\x90\x07r\xbc|\'dJ7\x00}\'t\x04|\x02d\x1e\x83\x02\x90\x07r\xd8|\'dK|$\x9b\x00dC\x9d\x037\x00}\'|\x12\x90\x07r\xee|\'dL|\x12\x9b\x00d#\x9d\x037\x00}\'|\'S\x00)MN\xda\tbroadcastFZ\x07ChannelZ\x05Groupz\t:warning:r\x01\x00\x00\x00i\xda\x07\x00\x00r7\x00\x00\x00\xe9\xff\xff\xff\xff)\x08Z\x04peerZ\toffset_idZ\x0boffset_dateZ\nadd_offset\xda\x05limitZ\x06max_idZ\x06min_id\xda\x04hashrX\x00\x00\x00Tz\x0fDeleted AccountZ\x07Unknown\xda\x12participants_count\xda\x0cadmins_count\xda\x0ckicked_count\xda\x0cbanned_count\xda\x0conline_count\xda\nstickerset\xda\x11read_inbox_max_id\xda\x12read_outbox_max_id\xda\x03ptsr3\x00\x00\x00\xda\tmegagroupz\n<b>Yes</b>Z\x02No\xda\x10slowmode_enabled\xda\nrestricted\xda\x08verifiedz\x03@{})\x05\xda\x07channelr8\x00\x00\x00\xda\x06offsetrd\x00\x00\x00re\x00\x00\x00z\x12<b>CHAT INFO:</b>\nz\nID: <code>z\x08</code>\nz\x07 name: r:\x00\x00\x00z\rFormer name: z\x0e type: Public\nz\x06Link: z\x0f type: Private\nz\tCreator: z\x1fCreator: <a href="tg://user?id=z\x02">z\x05</a>\nz\x0fCreated: <code>z\t%b %d, %Yz\x03 - z\x08</code> z\x10Data Centre ID: \xe9\x07\x00\x00\x00\xe9\x0e\x00\x00\x00rC\x00\x00\x00z\x0e level: <code>z\x19Viewable messages: <code>z\x15Messages sent: <code>z\x0fMembers: <code>z\x16Administrators: <code>z\x0cBots: <code>z\x18Currently online: <code>z\x18Restricted users: <code>z\x14Banned users: <code>z% stickers: <a href="t.me/addstickers/z\x0bSlow mode: z\x08, <code>z\ns</code>\n\nrB\x00\x00\x00z\x0cSupergroup: z\x0cRestricted: z\x0c> Platform: z\n> Reason: z\x08> Text: \xda\x04scamz\x12Scam: <b>Yes</b>\n\nz\x16Verified by Telegram: z\x14Description: 
\n<code>)5rH\x00\x00\x00r=\x00\x00\x00Z\tfull_chatr0\x00\x00\x00\xda\x07hasattrrb\x00\x00\x00\xda\x05titler\t\x00\x00\x00r\r\x00\x00\x00r\x07\x00\x00\x00r>\x00\x00\x00r[\x00\x00\x00\xda\x08messages\xda\x05usersr1\x00\x00\x00r3\x00\x00\x00\xda\x04date\xda\x04typeZ\x06actionr\x10\x00\x00\x00r\x18\x00\x00\x00Z\nchat_photor\'\x00\x00\x00Z\x05aboutrf\x00\x00\x00rg\x00\x00\x00rh\x00\x00\x00ri\x00\x00\x00rj\x00\x00\x00rk\x00\x00\x00\xda\x05countrl\x00\x00\x00rm\x00\x00\x00rn\x00\x00\x00Z\x08bot_inforo\x00\x00\x00rp\x00\x00\x00Z\x10slowmode_secondsrq\x00\x00\x00rr\x00\x00\x00r&\x00\x00\x00r\x0c\x00\x00\x00r\x11\x00\x00\x00\xda\x08strftime\xda\x04timer^\x00\x00\x00r\n\x00\x00\x00Z\nshort_nameZ\x12restriction_reason\xda\x08platform\xda\x06reason\xda\x04textrw\x00\x00\x00))r*\x00\x00\x00r)\x00\x00\x00Z\rchat_obj_inforb\x00\x00\x00Z\tchat_typeZ\nchat_titleZ\nwarn_emojiZ\x08msg_infor@\x00\x00\x00Z\x0ffirst_msg_validZ\rcreator_validZ\ncreator_idZ\x11creator_firstnameZ\x10creator_usernameZ\x07createdZ\x0cformer_titleZ\x05dc_id\xda\x08locationZ\x0bdescriptionZ\x07membersZ\x06adminsZ\x0cbanned_usersZ\x10restrcited_usersZ\x0emembers_onlineZ\x0egroup_stickersZ\x11messages_viewableZ\rmessages_sentZ\x11messages_sent_altZ\texp_countr3\x00\x00\x00Z\tbots_listZ\x04botsZ\nsupergroupZ\x08slowmodeZ\rslowmode_timerq\x00\x00\x00rr\x00\x00\x00Z\x13participants_adminsr\x06\x00\x00\x00r\\\x00\x00\x00Z\nchat_levelr+\x00\x00\x00r+\x00\x00\x00r,\x00\x00\x00rZ\x00\x00\x00\xe9\x00\x00\x00s\xfa\x00\x00\x00\x00\x02\x16\x01\x14\x01\x0c\x01\x06\x01\x08\x01\x02\x01\x16\x01\x02\x00\x02\x00\x02\x00\x02\x00\x02\xff\x12\x02\x10\x01\x04\x01\x1c\x02"\x02\x12\x01\x14\x01(\x01(\x01\x16\x01B\x01\x02\x01\x14\x01\x12\x01\x04\x01\x1a\x03\x08\x01\x1c\x01\x1a\x01\x1a\x01\x1a\x01\x1a\x01&\x01\x10\x01\x1a\x01\x1a\x01\x1a\x01\x16\x01\x08\x01\x04\x01\x1c\x01\x1c\x01 
\x01\x1c\x01\x1c\x01\x14\x01\x14\x03\n\x02\x02\x01\x10\x01\x02\x00\x02\x00\x02\xff\x0e\x02\x14\x01\x12\x01\x1c\x01\x06\x01\x08\x01\x0c\x02\x04\x01\x12\x01\n\x01\x14\x01\n\x01\x10\x01\n\x01\x0e\x01\x12\x02\x0e\x01\n\x01\x12\x01\x06\x01\x16\x01\n\x01&\x02.\x01\x10\x01\n\x01 \x01\x14\x01\n\x01\x10\x01\x06\x01\x12\x01\x06\x01\x16\x01\n\x01\x10\x01\n\x01\x10\x01\x06\x01\x10\x01\x06\x01\x10\x01\n\x01\x10\x01\n\x01\x10\x01\n\x01 \x01\x08\x01\x06\x01\x0e\x01\x14\x01\x12\x02\x08\x01\x06\x01\x10\x01\x0c\x01\x10\x01\x08\x01\x18\x01\x18\x01\x1a\x02\x08\x01\x14\x01\x08\x01\x0c\x01\x10\x01\x06\x01\x10\x01rZ\x00\x00\x00z\x13^.invite(?: |$)(.*)c\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x0b\x00\x00\x00\xc3\x00\x00\x00s<\x01\x00\x00|\x00j\x00r\nd\x00S\x00|\x00j\x01\xa0\x02d\x01\xa1\x01}\x01|\x00j\x03r0|\x00\xa0\x04d\x02\xa1\x01I\x00d\x00H\x00\x01\x00\x90\x01n\x08|\x00j\x05s\xba|\x00j\x06r\xba|\x01\xa0\x07d\x03\xa1\x01D\x00]`}\x02z$|\x00\xa0\x08t\tj\nj\x0b|\x00j\x0c|\x02d\x04d\x05\x8d\x03\xa1\x01I\x00d\x00H\x00\x01\x00W\x00qF\x04\x00t\rk\nr\xa4\x01\x00}\x03\x01\x00z\x18|\x00\xa0\x0et\x0f|\x03\x83\x01\xa1\x01I\x00d\x00H\x00\x01\x00W\x005\x00d\x00}\x03~\x03X\x00Y\x00qFX\x00qF|\x00\xa0\x04d\x06\xa1\x01I\x00d\x00H\x00\x01\x00n~|\x01\xa0\x07d\x03\xa1\x01D\x00]b}\x02z$|\x00\xa0\x08t\tj\x10j\x11|\x00j\x0c|\x02g\x01d\x07\x8d\x02\xa1\x01I\x00d\x00H\x00\x01\x00W\x00q\xc4\x04\x00t\rk\n\x90\x01r$\x01\x00}\x03\x01\x00z\x18|\x00\xa0\x0et\x0f|\x03\x83\x01\xa1\x01I\x00d\x00H\x00\x01\x00W\x005\x00d\x00}\x03~\x03X\x00Y\x00q\xc4X\x00q\xc4|\x00\xa0\x04d\x06\xa1\x01I\x00d\x00H\x00\x01\x00d\x00S\x00)\x08Nr7\x00\x00\x00z3`.invite` users to a chat, not to a Private Messager9\x00\x00\x00i@B\x0f\x00)\x03r(\x00\x00\x00\xda\x07user_idZ\tfwd_limitz\x14Invited 
Successfully)\x02rs\x00\x00\x00r{\x00\x00\x00)\x12r!\x00\x00\x00r;\x00\x00\x00r<\x00\x00\x00Z\nis_privater%\x00\x00\x00Z\nis_channelZ\x08is_group\xda\x05splitrH\x00\x00\x00r\x08\x00\x00\x00rz\x00\x00\x00Z\x12AddChatUserRequestr(\x00\x00\x00r>\x00\x00\x00Z\x05replyr\'\x00\x00\x00Z\x08channelsZ\x16InviteToChannelRequest)\x04r)\x00\x00\x00Z\x0cto_add_usersr\x85\x00\x00\x00r@\x00\x00\x00r+\x00\x00\x00r+\x00\x00\x00r,\x00\x00\x00r-\x00\x00\x00m\x01\x00\x00s2\x00\x00\x00\x00\x02\x06\x01\x04\x01\x0c\x01\x06\x01\x14\x02\x0c\x02\x0e\x01\x02\x01\n\x01\x04\x01\x02\x01\x02\xfd\x12\x05\x10\x01(\x01\x12\x03\x0e\x01\x02\x01\n\x01\x04\x01\x04\xfe\x12\x04\x12\x01(\x01r*\x00\x00\x00ac\x03\x00\x00`.getid`\nUsage: Get ID of any Telegram media, or any user\n\n`.getbot`\nUsage: Get the Bots in any chat.\n\n`.logit`\nUsage: Forwards the message you\'ve replied to in your bot logs group.\n\n`.kickme`\nUsage: Leave from a targeted group.\n\n`.unmutechat`\nUsage: Unmutes a muted chat.\n\n`.mutechat`\nUsage: Allows you to mute any chat.\n\n`.link` <username/userid> : <optional text> (or) reply to someone\'s message with .link <optional text>\nUsage: Generate a permanent link to the user\'s profile with optional custom text.\n\n`.regexninja` on/off\nUsage: Globally enable/disables the regex ninja module.\nRegex Ninja module helps to delete the regex bot\'s triggering messages.\n\n`.chatinfo [optional: <reply/tag/chat id/invite link>]`\nUsage: Gets info of a chat. 
Some info might be limited due to missing permissions..\n\n`.invite`\nUsage: Invites users to a chat, not to a private message.N)8\xda\x07__doc__Z\x07asyncior\x02\x00\x00\x00Z\x07userbotr\x03\x00\x00\x00r\x04\x00\x00\x00r\x05\x00\x00\x00r\x06\x00\x00\x00r\x07\x00\x00\x00Z\x08telethonr\x08\x00\x00\x00Z\x05emojir\t\x00\x00\x00Z\x04mathr\n\x00\x00\x00Z\x1etelethon.tl.functions.channelsr\x0b\x00\x00\x00r\x0c\x00\x00\x00Z\x1etelethon.tl.functions.messagesr\r\x00\x00\x00r\x0e\x00\x00\x00r\x0f\x00\x00\x00Z\x11telethon.tl.typesr\x10\x00\x00\x00r\x11\x00\x00\x00Z\x0ftelethon.errorsr\x12\x00\x00\x00r\x13\x00\x00\x00r\x14\x00\x00\x00r\x15\x00\x00\x00r\x16\x00\x00\x00r\x17\x00\x00\x00Z\x0etelethon.utilsr\x18\x00\x00\x00r\x19\x00\x00\x00r\x1a\x00\x00\x00Z\x0euserbot.eventsr\x1b\x00\x00\x00Z\x15userbot.modules.adminr\x1c\x00\x00\x00r\x1d\x00\x00\x00r\x1e\x00\x00\x00r-\x00\x00\x00r6\x00\x00\x00rF\x00\x00\x00rI\x00\x00\x00rM\x00\x00\x00rO\x00\x00\x00rS\x00\x00\x00rT\x00\x00\x00rV\x00\x00\x00rW\x00\x00\x00r]\x00\x00\x00rY\x00\x00\x00rZ\x00\x00\x00\xda\x06updater+\x00\x00\x00r+\x00\x00\x00r+\x00\x00\x00r,\x00\x00\x00\xda\x08<module>\x05\x00\x00\x00s`\x00\x00\x00\x04\x02\x0c\x01\x18\x01\x0c\x01\x0c\x01\x0c\x01\x0c\x01\x10\x01\x14\x01\x10\x01 \x01\x0c\x01\x10\x01\x0c\x01\x0c\x01\x0c\x01\x0c\x03\n\x01\n\x0f\n\x01\n\r\n\x01\n\x1b\n\x01\n\x14\n\x01\n\x06\n\x01\n\r\n\x01\n\x12\n\x01\n\x0e\x04\x03\n\x01\n\x07\n\x01\n\x0e\n\x01\n\x0c\x08#\x08\x7f\x00\x05\n\x01\n\x1f\x04\x01\x02\x01\x02\xfe')) | 7,252.5 | 28,939 | 0.751844 | 6,017 | 29,010 | 3.605617 | 0.161708 | 0.185296 | 0.112837 | 0.082968 | 0.447015 | 0.356718 | 0.299332 | 0.246508 | 0.21475 | 0.193731 | 0 | 0.322538 | 0.01334 | 29,010 | 4 | 28,939 | 7,252.5 | 0.435419 | 0.001792 | 0 | 0 | 0 | 7 | 0.579135 | 0.531616 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 
| 0 | 1 | 0 | 10 |
d6040c658795db514a9c969fc7da8c3464db4919 | 16,083 | py | Python | plugins/rapid7_attackerkb/icon_rapid7_attackerkb/actions/topic/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/rapid7_attackerkb/icon_rapid7_attackerkb/actions/topic/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/rapid7_attackerkb/icon_rapid7_attackerkb/actions/topic/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json
class Component:
DESCRIPTION = "Return a single topic with the specified ID"
class Input:
ID = "id"
class Output:
DATA = "data"
class TopicInput(komand.Input):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"id": {
"type": "string",
"title": "ID",
"description": "The UUID of a specific topic to return",
"order": 1
}
},
"required": [
"id"
]
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
class TopicOutput(komand.Output):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"data": {
"$ref": "#/definitions/topic",
"title": "Data",
"description": "Returned topic data",
"order": 1
}
},
"definitions": {
"metadata": {
"type": "object",
"title": "metadata",
"properties": {
"commonEnterprise": {
"type": "number",
"title": "Common Enterprise",
"description": "The 'Common in enterprise' score",
"order": 1
},
"defaultConfiguration": {
"type": "number",
"title": "Default Configuration",
"description": "The 'Present in default configuration' score",
"order": 2
},
"difficultToDevelop": {
"type": "number",
"title": "Difficult To Develop",
"description": "The 'Difficult to develop' score",
"order": 3
},
"difficultToExploit": {
"type": "number",
"title": "Difficult To Exploit",
"description": "The 'High barrier to exploitation' score",
"order": 6
},
"difficultToPatch": {
"type": "number",
"title": "Difficult To Patch",
"description": "The 'Difficult to patch' score",
"order": 7
},
"easyToDevelop": {
"type": "number",
"title": "Easy To Develop",
"description": "The 'Easy to develop' score",
"order": 8
},
"highPrivilegeAccess": {
"type": "number",
"title": "High Privilege Access",
"description": "The 'Allows high-privileged access' score",
"order": 9
},
"noUsefulData": {
"type": "number",
"title": "No Useful Data",
"description": "The 'Does not produce useful data' score",
"order": 10
},
"obscureConfiguration": {
"type": "number",
"title": "Obscure Configuration",
"description": "The 'Only present in obscure configuration' score",
"order": 11
},
"postAuth": {
"type": "number",
"title": "Post Auth",
"description": "The 'Post-Auth' score",
"order": 5
},
"preAuth": {
"type": "number",
"title": "Pre Auth",
"description": "The 'Pre-Auth' score",
"order": 4
},
"requiresInteraction": {
"type": "number",
"title": "Requires Interaction",
"description": "The 'Requires user interaction' score",
"order": 12
}
}
},
"score": {
"type": "object",
"title": "score",
"properties": {
"attackerValue": {
"type": "number",
"title": "Attacker Value",
"description": "The attacker value score",
"order": 1
},
"exploitability": {
"type": "number",
"title": "Exploitability",
"description": "The exploitability score",
"order": 2
}
}
},
"tags": {
"type": "object",
"title": "tags",
"properties": {
"id": {
"type": "string",
"title": "ID",
"description": "ID",
"order": 1
},
"metadata": {
"$ref": "#/definitions/metadata",
"title": "Metadata",
"description": "Metadata",
"order": 2
}
},
"definitions": {
"metadata": {
"type": "object",
"title": "metadata",
"properties": {
"commonEnterprise": {
"type": "number",
"title": "Common Enterprise",
"description": "The 'Common in enterprise' score",
"order": 1
},
"defaultConfiguration": {
"type": "number",
"title": "Default Configuration",
"description": "The 'Present in default configuration' score",
"order": 2
},
"difficultToDevelop": {
"type": "number",
"title": "Difficult To Develop",
"description": "The 'Difficult to develop' score",
"order": 3
},
"difficultToExploit": {
"type": "number",
"title": "Difficult To Exploit",
"description": "The 'High barrier to exploitation' score",
"order": 6
},
"difficultToPatch": {
"type": "number",
"title": "Difficult To Patch",
"description": "The 'Difficult to patch' score",
"order": 7
},
"easyToDevelop": {
"type": "number",
"title": "Easy To Develop",
"description": "The 'Easy to develop' score",
"order": 8
},
"highPrivilegeAccess": {
"type": "number",
"title": "High Privilege Access",
"description": "The 'Allows high-privileged access' score",
"order": 9
},
"noUsefulData": {
"type": "number",
"title": "No Useful Data",
"description": "The 'Does not produce useful data' score",
"order": 10
},
"obscureConfiguration": {
"type": "number",
"title": "Obscure Configuration",
"description": "The 'Only present in obscure configuration' score",
"order": 11
},
"postAuth": {
"type": "number",
"title": "Post Auth",
"description": "The 'Post-Auth' score",
"order": 5
},
"preAuth": {
"type": "number",
"title": "Pre Auth",
"description": "The 'Pre-Auth' score",
"order": 4
},
"requiresInteraction": {
"type": "number",
"title": "Requires Interaction",
"description": "The 'Requires user interaction' score",
"order": 12
}
}
}
}
},
"topic": {
"type": "object",
"title": "topic",
"properties": {
"created": {
"type": "string",
"title": "Created",
"description": "The date and time the topic was created, eg. 2019-07-02T16:22:15.879357Z",
"order": 1
},
"disclosureDate": {
"type": "string",
"title": "Disclosure Date",
"description": "The date and time the topic was disclosed, eg. 2019-07-02T16:22:15.879357Z",
"order": 8
},
"document": {
"type": "string",
"title": "Document",
"description": "The main content of the topic",
"order": 2
},
"editorId": {
"type": "string",
"title": "Editor ID",
"description": "The UUID of the contributor who last edited the topic",
"order": 3
},
"id": {
"type": "string",
"title": "ID",
"description": "The UUID of the topic",
"order": 4
},
"metadata": {
"type": "object",
"title": "Metadata",
"description": "A JSON value containing key/value pairs describing various attributes about this topic",
"order": 5
},
"name": {
"type": "string",
"title": "Name",
"description": "The name or title of the topic",
"order": 6
},
"revisionDate": {
"type": "string",
"title": "Revision Date",
"description": "The date and time the topic was last changed, eg. 2019-07-02T16:22:15.879357Z",
"order": 7
},
"score": {
"$ref": "#/definitions/score",
"title": "Score",
"description": "The topic score properties",
"order": 9
},
"tags": {
"type": "array",
"title": "Tags",
"description": "The frequencies with which various tags appear on assessments",
"items": {
"$ref": "#/definitions/tags"
},
"order": 10
}
},
"required": [
"document",
"name"
],
"definitions": {
"metadata": {
"type": "object",
"title": "metadata",
"properties": {
"commonEnterprise": {
"type": "number",
"title": "Common Enterprise",
"description": "The 'Common in enterprise' score",
"order": 1
},
"defaultConfiguration": {
"type": "number",
"title": "Default Configuration",
"description": "The 'Present in default configuration' score",
"order": 2
},
"difficultToDevelop": {
"type": "number",
"title": "Difficult To Develop",
"description": "The 'Difficult to develop' score",
"order": 3
},
"difficultToExploit": {
"type": "number",
"title": "Difficult To Exploit",
"description": "The 'High barrier to exploitation' score",
"order": 6
},
"difficultToPatch": {
"type": "number",
"title": "Difficult To Patch",
"description": "The 'Difficult to patch' score",
"order": 7
},
"easyToDevelop": {
"type": "number",
"title": "Easy To Develop",
"description": "The 'Easy to develop' score",
"order": 8
},
"highPrivilegeAccess": {
"type": "number",
"title": "High Privilege Access",
"description": "The 'Allows high-privileged access' score",
"order": 9
},
"noUsefulData": {
"type": "number",
"title": "No Useful Data",
"description": "The 'Does not produce useful data' score",
"order": 10
},
"obscureConfiguration": {
"type": "number",
"title": "Obscure Configuration",
"description": "The 'Only present in obscure configuration' score",
"order": 11
},
"postAuth": {
"type": "number",
"title": "Post Auth",
"description": "The 'Post-Auth' score",
"order": 5
},
"preAuth": {
"type": "number",
"title": "Pre Auth",
"description": "The 'Pre-Auth' score",
"order": 4
},
"requiresInteraction": {
"type": "number",
"title": "Requires Interaction",
"description": "The 'Requires user interaction' score",
"order": 12
}
}
},
"score": {
"type": "object",
"title": "score",
"properties": {
"attackerValue": {
"type": "number",
"title": "Attacker Value",
"description": "The attacker value score",
"order": 1
},
"exploitability": {
"type": "number",
"title": "Exploitability",
"description": "The exploitability score",
"order": 2
}
}
},
"tags": {
"type": "object",
"title": "tags",
"properties": {
"id": {
"type": "string",
"title": "ID",
"description": "ID",
"order": 1
},
"metadata": {
"$ref": "#/definitions/metadata",
"title": "Metadata",
"description": "Metadata",
"order": 2
}
},
"definitions": {
"metadata": {
"type": "object",
"title": "metadata",
"properties": {
"commonEnterprise": {
"type": "number",
"title": "Common Enterprise",
"description": "The 'Common in enterprise' score",
"order": 1
},
"defaultConfiguration": {
"type": "number",
"title": "Default Configuration",
"description": "The 'Present in default configuration' score",
"order": 2
},
"difficultToDevelop": {
"type": "number",
"title": "Difficult To Develop",
"description": "The 'Difficult to develop' score",
"order": 3
},
"difficultToExploit": {
"type": "number",
"title": "Difficult To Exploit",
"description": "The 'High barrier to exploitation' score",
"order": 6
},
"difficultToPatch": {
"type": "number",
"title": "Difficult To Patch",
"description": "The 'Difficult to patch' score",
"order": 7
},
"easyToDevelop": {
"type": "number",
"title": "Easy To Develop",
"description": "The 'Easy to develop' score",
"order": 8
},
"highPrivilegeAccess": {
"type": "number",
"title": "High Privilege Access",
"description": "The 'Allows high-privileged access' score",
"order": 9
},
"noUsefulData": {
"type": "number",
"title": "No Useful Data",
"description": "The 'Does not produce useful data' score",
"order": 10
},
"obscureConfiguration": {
"type": "number",
"title": "Obscure Configuration",
"description": "The 'Only present in obscure configuration' score",
"order": 11
},
"postAuth": {
"type": "number",
"title": "Post Auth",
"description": "The 'Post-Auth' score",
"order": 5
},
"preAuth": {
"type": "number",
"title": "Pre Auth",
"description": "The 'Pre-Auth' score",
"order": 4
},
"requiresInteraction": {
"type": "number",
"title": "Requires Interaction",
"description": "The 'Requires user interaction' score",
"order": 12
}
}
}
}
}
}
}
}
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
| 30.692748 | 114 | 0.422309 | 1,155 | 16,083 | 5.85974 | 0.131602 | 0.128251 | 0.115248 | 0.042553 | 0.854314 | 0.849734 | 0.845597 | 0.845597 | 0.812943 | 0.789598 | 0 | 0.01545 | 0.432569 | 16,083 | 523 | 115 | 30.751434 | 0.726167 | 0.002301 | 0 | 0.711765 | 1 | 0.005882 | 0.973074 | 0.044191 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003922 | false | 0 | 0.003922 | 0 | 0.027451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
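The generated `TopicInput`/`TopicOutput` classes in the row above each wrap a JSON Schema string and hand it to the SDK base class. As a rough illustration of the kind of required-field and type checking such a wrapper can perform, here is a stdlib-only sketch; the `validate` helper is hypothetical and not the Komand SDK's actual API.

```python
import json

# The same "Variables" schema the generated TopicInput class loads.
schema = json.loads("""
{
  "type": "object",
  "title": "Variables",
  "properties": {
    "id": {
      "type": "string",
      "title": "ID",
      "description": "The UUID of a specific topic to return",
      "order": 1
    }
  },
  "required": ["id"]
}
""")


def validate(params: dict) -> dict:
    """Hypothetical helper: enforce required keys and declared string types."""
    for key in schema.get("required", []):
        if key not in params:
            raise ValueError(f"missing required input: {key!r}")
    for key, spec in schema.get("properties", {}).items():
        if key in params and spec.get("type") == "string" and not isinstance(params[key], str):
            raise TypeError(f"input {key!r} must be a string")
    return params


# Illustrative UUID only — any string satisfies this schema.
print(validate({"id": "3b19ed9a-52ed-4b2a-a75c-a539e4b0e6c2"})["id"])
```

A real validator would also resolve the `$ref` definitions seen in `TopicOutput` (note the `metadata` definition repeated at each nesting level so every `#/definitions/...` pointer resolves locally); this sketch only covers the top-level checks.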
c3b06c3561f45e7ad5fbcf957a56d85225abd15d | 135 | py | Python | ofa/utils/my_dataloader/__init__.py | pyjhzwh/once-for-all | ee003c0302a9b3e0d6b804ef8df31f3e201fe4b3 | [
"MIT"
] | 1,518 | 2020-01-05T20:43:44.000Z | 2022-03-31T03:31:38.000Z | ofa/utils/my_dataloader/__init__.py | Diting-li/once-for-all | 77bcaba7e023de9198e3ec8f4969bc415a826478 | [
"Apache-2.0"
] | 59 | 2020-01-07T05:16:41.000Z | 2022-02-10T08:54:42.000Z | ofa/utils/my_dataloader/__init__.py | Diting-li/once-for-all | 77bcaba7e023de9198e3ec8f4969bc415a826478 | [
"Apache-2.0"
] | 280 | 2020-01-06T09:25:39.000Z | 2022-03-31T08:26:59.000Z | from .my_data_loader import *
from .my_data_worker import *
from .my_distributed_sampler import *
from .my_random_resize_crop import *
| 27 | 37 | 0.822222 | 21 | 135 | 4.857143 | 0.52381 | 0.235294 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118519 | 135 | 4 | 38 | 33.75 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
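The `__init__.py` above aggregates its package namespace by wildcard-importing four submodules. A self-contained sketch of how that re-export behaves — underscore-prefixed names are skipped when a module defines no `__all__` — using an in-memory stand-in module (the names `fake_data_loader` and `MyDataLoader` are illustrative, not the real `ofa` internals):

```python
import sys
import types

# Build a throwaway submodule in memory (stand-in for my_data_loader).
sub = types.ModuleType("fake_data_loader")
sub.MyDataLoader = type("MyDataLoader", (), {})
sub._private = "hidden"  # underscore-prefixed names are not re-exported
sys.modules["fake_data_loader"] = sub

# Equivalent of `from .my_data_loader import *` inside a package __init__:
ns = {}
exec("from fake_data_loader import *", ns)

public = sorted(k for k in ns if not k.startswith("_"))
print(public)  # only the public class crosses the wildcard import
```

Defining `__all__` in each submodule (as some projects do) would make the exported surface explicit instead of relying on the underscore convention.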
c3b11fb13d983c01f0c5ee23f13f78d19d91586e | 87,388 | py | Python | Emotion_Recognition/slice_png0.py | SENT117/homework | c7cc6c4a218dd922caabf00831a27c1d2435ca43 | [
"MIT"
] | 1 | 2020-06-30T13:46:24.000Z | 2020-06-30T13:46:24.000Z | slice_png.py | ArtistVan1/- | eaac0ebdf6f93bda9f19297a546e3e7d3c425e24 | [
"MIT"
] | null | null | null | slice_png.py | ArtistVan1/- | eaac0ebdf6f93bda9f19297a546e3e7d3c425e24 | [
"MIT"
] | null | null | null | img = "iVBORw0KGgoAAAANSUhEUgAAASwAAAD6CAYAAAAbbXrz…"  (content: base64-encoded PNG payload elided)
W+QwZsxlo5wKUWzyLizmYZh8z3y07mweKnKZ0zankVezIZrLkNM2lSEfFDltU1HsZvuF3TA2p0LMi1zmofKweZdvjM1DWCGvCnOaFNvUcGc32xfcKBb3rxRLorybzZuMMC+yPMndR8uLPGQ+2uZUHtq8yWkOG3OIklez3SiVySdhKHLJZS63echDFoZcykMOm40wH4Q8m8v8f6bNq81hXlWWU37l/qvNN0ZeTGX//T/+q7wZ5rfIoZRDbg15KMw35tU2NnkxZn6rypPYsOHGnfrVkBebIYdReZiZdznMJYfMw+jmobFN5UkOmZzyJ4Uws9s85IMxY7hjyWE3RHeamYfcXX6xfWHDPMtpIZqfKk82Nm8KYV6VQx5mKubFMDN2o/CVorvkyZiPbn5sLKQcwsxDTpFn8yzPlme3SeQ0N6/C3P1I5RRixUa44TZGsSKUS+RyG5uHPJt35TQvNuTJPMkhTzbv8kmbN5uh8mou+cr9VxPzLIdhKvu7//yv8zCnunkz5sdymHfhnktOMT/WvJjN4ebJ5rfIoRgzNXL4amHIm+aUQ9nmMqeF2cihnBrmdJsZkkNDhEV5k08mDznktJGHMU/mxobkIfOFpe402zwkzPY73FzmWYg55JTLhnzUPU8282LIIYw5jLzJodhsKPd7ttluEkV35M8x39g82ZzKQ75RnoShPCxyKMy8ykMOmzeb01zKJ5tTTET3KGKbxJebbYiiVF7tnhxuc7rdNE9mTjlELmPGHPKqzUeVb23zpizP5k3mzeZJ0R/NV9/VMJX9zV/9ZV5tal5tvjHkMkZezGl3z8qpnJZTLo1ymsOcNqfbDfNjYZ6l2CazsDu5DLkUhshlDmMkZJtT0Zy6MbplG4aU0zyMXOaSUzFRHorNJcJ8K4aNYmNjXxDdMcxDso39Dl+8KYw53CnEHIa5RA55N89yyWVOm1NezGmRyxzumFNziUIqP7M825gXYWxO5U3ezbtcypvyUMwHYT6bF0M+2XwSiuf1O+0AACAASURBVCKXOUxjDkO5RCinzWnjNh81TxZyGfJsfrv8eTbvbgz9Sn9Eno0c5mG//8//cx427+Y08iK2qRhz2OSjrJuH5NXcvKq7Sx5mFDlVHjaHsXmzaczDGJqHhJHDPGxkxIZi+Vb3uzmEOYwRJuRVhjnl1Nhm82IukcOYZzklykNlzcNymMvdR5XdHObyhTlEcbu5jDnc2C/4guFOmMtyKjmEMYdcyp+Wy/zUxuZdPunuIVHkh+aQy4acykNlG8aweag8zJDvyWXIZaGQctpGKA/Jw243yqvKNjZvNsqrutvdk5ZtbC4RlYdtLsnhxjY1D/Md+bHNZ/MsP1U+2ZxG5mEbvnL/o/pqhiEMN4Tsv//7v8yLHMqrXHa7mcMol83mO25OebebV4XuCGGWd+UhD7nMfDCHMewLwx2jEDansZwKiznN5V5WljcNmxXyqt3YzKGI+5xmzCHM5s8ToTKzPOsuh8Jws43ubBhCEuZhpo19YV9sN9wQ8iyXaIxEiOUfJIeyjduY7whDyEOhzPflUOYQuUzKIcLmofKwzTYPic2PhbHZRijudw853POQvInbpjKHjRySw2ZoyKlCZuaSw2Izh6E8VE6bIYfYLTa5aTMsL8J8lnfzD5dXucycNvko+iP9aqbG7jRzYw53+5u/+st8VCpvysO8uI0Nk3ebFzef3TDvUiFD3i2HlEv5aL6xYd5sGPNiHpJTw1xibHGPIsJwH7v5xpzuTC5zSZjD5jRs/hzFPMy7KBU5zPaLh3tfzcxYuKs8bHO5sd/hhtgoT4Y5DLE5dVdohnLa/MMM82epzHdslI+63z1U5t0cwhAVss1p82zMz+XZkDeJMCq7Z2Fj2FxyKh81H+TV5kVeVZjNN4Ybu3lSbJ6Uj9qYH4t5l8P8JtuU0xz61b0/uLkLy2EYYezv/vd/nQ/mUN6lfLLNJ5vcGDPvxoaRQ0jd/SkrpxzyUd7NVLZhThsNYw5z
2jwkkxax0J2wOS1PRo2yXDYfJWJzGJs3xRzmVTdv5rNyGWIeRpHDnJa6WzfMaTf2C4Z817wY8yQs5JDT5kkO+fsK8zCkhbzaRg45bV5V5pBLYYhiI0TdzZyKYtgNkVNz2bzasPmuMD9VEeY0L8qpnHK5zZuFyJ8vkt2+YMy7/Gmbz/IkPzaH+aTYGDZPurvf/876ille3Cg2+5u/+ud5M/MqxeYwD5VtlMLY5qPMaczkYWzmxv2OPCTmcMNwRz5awyjfd/cm8q0xh2HmxuYS3eUyD2HIJyNUhjCs+SjvNoQ5FZsnGfNjc5g3xZzWaMyp7h7yMPPFbr8ob+ZHUti8ms/CfGNOxXyU8mSbj5IJEc2zIadtXuVQPppLebKoOzktkjlskrnkW7Oby+aTMH9SGJI15KGi2CgP21xymkuxUX6mHGKzTc3Mm/m5zZMc8ptsPglzCnPYGMr9/gfrK7Lm3Wj2t3/9L/JkTnOa+Z48ZOZ7EpGxmdnG/e6jFvtibuoueZgXG3mR5TCXkFeh2JgxKpeby8yQ5GFeJe/mo3loqWxzGeajeVEewjwrz8bMt5rPRpiHYd7FUGq237EvTiXc8o08JOHm5nuSh2Jmc2oYM8mMouRh5qM8WYoV0bypKDM2G/nGyLt5kTeL5E0OYSbJNm/KafNRwnxrvmNzKt+1eagoQz4oWz7Knzbk3YZumFc5jDnMJT+UyzZKmM/CkHfzjU0Oc5oxcrj/yv2PtmjyEM2a/e1f/4t8Yy4Zc5i5JB/NPBkih/JuZsi7VPhiu7G7zMO8GLnMIS9STnPJodhsc4kcbi5zuSPfE+bF5lXl1ebUbmzezbwrh8iLiKHyar4vh3m2sZlXk8zDtIjMbb/DF68qzEoeYgyVh22+lUM5xebd5jSEDZFTHvLRzKuk7hbyZqNQZj5qbOTF5qMZhZzybvOkmA8i74q5RB7yasZ8Y2zelE/mxSivKu53D1sYc8qhfDKHmXe5bMMIm4cwLzan8mbzLMJG+Z5EmMMo5jDzwcYImxfD6Ffuf0Re5RDu7L/823+aJ3mzyTxs81B5tc0nm1O5hJHDjb76KOFmjd1ltpvTxkLeNPJDlW0+u3mTQ7j7ZFhe1cx8q7KNTbt5tc2PJAtFVB5mLvmePNvmtLnkozanZrff4eZVMYecKts8JPIT8zB3RGFsMq+2eZbybsy8WeorzRzKuzCvkhmGhPlgDqM82SiModicioX5pFzyPZVX2zybP1+Yh2T3XO4emj9pm1OxqTxsww3zZ9t8Uk6bU/koeTWTzFzCnDafbGz0lfsfcffJnf3+P/3LHIaZ5NlcEubmYcPmIZHTbvN9Q17lcI/uTjnEbi4z04bZnJLTmIc5RSHCNpfZfBDl1V1ueTdsRCKfzLs2xubFjZDT/ea0zasVQhg55KFcIsxnYQ77Ysjdw1xa7Bf2BTdzk0PezCTKacxU7O4hl2EhhyFEOW0uI4c7m9MmJKfNzEeJ8jAj5oNCHnKI+Y57bPKQh23CEDaHMd+IMJdc5rMc8lD5JJS5tDGX+WBOxVxyiZXTMMwPVR62uZfvmZttmORbMz80b8oh35pJwjAPQy7zUQ4Ls8ZSqT+aXy3Pmv23f/fP8meZj4Y272ZoTnOYU2EscpmHlEPmYV7lRSM2QsvD5sXNPCsfzMNGLiuvknkR5jA/lo9y2YZ5yCinTdjmEsW8mQ9yKXlRHsppZpsahq8e5sXCL9x+0W40lxmSHxqTHEqYQ5jTcshnc8m7SXLZsJlDGPeyXDZivpU3uZSPyqWYy2YulW1szHfk3bxLYd7MoTzkUJ6E8mrGfFcOc6o8zCE2h3HLz5TTRuV7trEb5h+i8rDN91Qe5qMxb/KQDDPMQ7r/Eb9aedLsv/27f1b+PPNBTs1pxsjMYfOm2DxU5rA5FXOYNznk1MiLNIe82ubNhpHLKJRt/qRc5sdyyKuwzakw5oNh3qUxH+SQ79qcyptcmsvdZ1+4/YKb0+ZZ5NTm1RzmVDnl
NIchL/JsLpF3c6pcZj6YF/NZns2pmMO8uUd3wobYPNkQ8mwIYw5DTpHMvJnP8q48JA/LZ4UxLK+SV9v8jxHGhmF+Lp/Nu3mW3yo3bl+I9atTDn+gXynf2v/z13+ZQznNi81D5aObdw1hM5duN29i5bQJYQ5zmIf5KO9GDnMvk0vKITPm2ea0YeRF5LJ5tZJDOW1+qOTdHDZDLvMuw7zaMGyUUwlzmEMecgjzLGzm5tTdQzlEsd9x+0VuVuZbETaNvNvmofIwLO82yveEFTnktHmoMM9yymkbc9koYS5hCPMweYjQHbNRGNvoLq9C3g3z0TYfhSGXeZbPhuRNKJ/MIZeEmYf5+6k8i7HbF2Hd/NRCXtUwxrzIzw35vk2339n+QuLr39FXQ37l/geTJ3f2f/1v/yR5M/Mz+bFtcpmHebMRdiOHGxv7SqnYrLvT5klONXNzCSGvEpLLzDbmXd7dfdKcZpJXM3m3zakYchnlkMs8zGFzmsuQN+WQn5nDRjckdzOGUmy/Y1+0kQ8yo5zmMJcwv1Ve5WGhPBvCvCmn8m4S825sUyg2m3chlNOwOS2M8pDLHHIIY07lEEYJ27zahjwbi8hlHuZdymkjzCESosyL23xUeZgx3GMO86a8Kd91Gxu7UR4qM+bdXMopbH6L5jQPw3zU7XfmL+SO39v9K0Zf+fp73Nmcitjv//qf580wP5Mf26by2cxhZIhmtxuLolS2+a7N5YYRisWGiEQ5bepu5mGbUyRkbphXeTbkMmND3myU0+ZNeZUXMfkoYS6z+dM2p+YSQyFh+wv5hfm5WPmRvJh3m2ezeVfk50J35E9JZi7JXGZzmNOcctiQinIzP1IZ8hCGVF7NPMxhvmNeJQ9DvjXbEBul7h7qbvfYbC4jh3tOmzmU32ZOw25sTuW3Sn6bsXmYh3n3C93xld2c7l+5/x53biPck+xv/+qf5s0NY/Mj+bHNm2IizGFIy6nZ5qG82W6Yd7mMeZFTLkOxORWbU54V5YdGLsXMGmJjY/Nnmc9CDjmV5jAPm8O8C3fvhrHRnJZTDgm7/SPc5OfmEPJDzbN5snlSnmxOOcQcQiFEfiIVY+Yyp82bzWljTt1D1ryZS3mozKWyzUMll1su+b5hLrnMm2SbdzltlIfucb/bvNt8Vw75842NjWFzym9W+S02p2JzGGLDEMWGG8XXP+BXbiPc75j913/7j3Oay7wb8+Sed2PmIdmmHMJM5kWe7UbD2A0pbE6beTWX8BV5lSg2Qy6Vh20uM1SUUxjzLodRTouZUJmx2ZxCYyYPebXN98wllxX5ibs3G7s5NXnIw0YI21/gC6M8mUNehJlX+Sh/pvmO2TwphzyscJfPllMOZeWhIWyeNMYcNsxD8rDNQyKGPBvyEHmyvIiQN8kcSsbGsg2TtGwuETaXXMqictoMbci8CPkg5jDfykOYiU2bbf4hcig/ljWnYfMqVLaZu0xm7rZfCPdf1R8kw0z37Pf/4X9KDnPaPJkPJvNq5tVd5uGO4eZh82JO+0JeDDfMq5aHmVPDmMvuiBySO4XMw91DDjkMYbbJoVyizMO8mXe5ROVhxrDJGHPJN+bJHMqzeRdhLmGRQxiby3wrh832j7SbbRL5gcycyimnPOS0YS53H83IuznkMjZvis2pO+W7unsIM8KiJDNvQmNezGkuc5jvSV5twxD5gZxCeaiskIfMxIYxmifzjXLKizCnDbE5FeaTeVdeVWaIOdy4zc/Njw0hzyLft/mZMOHm8pWvf1B3G3Yj9l//13+cyGUOw+ahsjlM2L4gz5KHWdiNTRu7meHGXHLJi3mzPLnPk1uYN+WSd3d8VXdvwryYIewe5TLmkHdzyqU82ZgXY55tnkV+LMy7YnPKi9icis2T2439I5rGPGuezIsc4k4OOc1hLmGYdyFvcliYh8035lTIQ97Nw51IthvdEO5O5U2jeTPfN4cxnyQzD2He5bPlkIfKRHlTmNMchrybN0O+ccechvlt
cimXIYaN3dj8NvMun5REPps32w2RZ3OYU1/5+gckmZtif/vv/kkir2be5SEzbWYUI4cY2hfMbjc2dtNumFPzkBgzpzIfDDlVZk7lzbxrmNOGWOyu7nSnu2RLOdzMZdE9yuayeZM3CZlvDfOQPGzzXZufi3nW/FjMu8btC/7C/0sb3mhJlijaWd38dlYfvT4ykhAMvxNGWALbPIP5Ecb3dlft5YgdlZEZXVV9zmUMz2lZ5NXyqjxtkrnZGOVh81Nzk3drXo0h3yXMyXw3T+WyeThUmDklc7O5ayHvtrkL2zCZS5k8JDfNfJKfOPwgzKWdnjZ386F52PxUbvJhXpRLvpsflHfbJJrLkMtKQn5tmA8hf8/8AzZDXlWe9o3zdzZzM5f+6X/693NXLstTGIrNNnXYTncVO+iN4wtDKCuFjcLhboXZKNq8KOZmtsnN/AuMosPD2DfbN4zNULEhnWPzbps/q1waucnTwhB5KP+nNA9hLssPwjw0T3NzcPwrdpgpbN7NQ+Xn8tka8jAfctcwnwzzFOZpJjcbG0V+rYNiM6fEZpu7HF7MQ7GxYS6bu0lFYeykXEJ5dXi3TUWxYbbT00b5wdyM0Tk/F/nBzF3FEb75webdmUuiNIQY2oH82jAfQn5lbnLJTdnmXfODbT6rPO20b79ryHfT//u//7/MU15FHjaXxeZS9vZF/YbDTtrJcbBRFrkpl8IwHJgXxeanNuZfpjxt+IpvGBsbGw7msk3FRrkU+YUwf18e5ik3+ZcZ3nBiyIve8BviyGVuTnbSG/OQH2xeFMq7NTY63DU/2ny2ueRubBhDbiIf8kkumx8N+bkwNsrTxjl/qShPjXIZczdGbuZf5hwbG+WpCDsxL8qlOPJum8plMzejuYkiNB9C7jbybsw/ZqM8bS7lKd9FHsrT5mGMocLJt98xjM1d//Q//VfbXCocXpxzKTaXDjvedLwRdiBbOk8a86OyUm5yyavjYPNUnjY/mJsxZkh+otg8fGNfcXKeODxEYYyZlsuRS27yYnPJw7wK82v5hfzMnPLGRifyFPqC39jBkctOdtLY4a/Mjyrv1lwKaX5uo9i8GjvZGDaM8rQ8HZFXQ35pGyVsU3kaNi82d9vkLi9yk0ssLxKbf9jG3MxlHsrDiXlR3u3IXe4irxbD5tIsT3W4bIbcxeYfNuQmjM0PchO5iXxyEDaXMXOXsT/Y6XKO6H//H//tCmNDhzBk7EQ4bHEcOt7s+EJpbuYhvg1zt00iTzOJ3ERuojAKeYqh8pfmu9k85WbzEMLwlX21TbmZoUKMmc5hFLmJeZX//1qeYk75wk6azyb6ouNveDMPbThxsnwY5tXhZ+a7Y8KWZEWempsxFJsXDbNzjDbMNiQ38yGUS5kZclPyau6mDnc7p3wY27woNnfNq82LPJTNpfxJmKf5ZH6wMQ9lToySz3I3Y6NUlody1zmfLS9yMDfzNA/50fxC5GZsXgz5LvJdPkQuyyV3s/N39k1u5mb63/7jfzk35SZzaLOdRJhDxxfrjeOLh7EoDJM4I5dtDHkYOT3FJLl0EIq5meVmEuWnyouN3MSweTc3m7sMJ/tKMydLDso2beykyHf5kMvml/Jh/o4wT+XPZvLGThryMBPHb+o3vFkeNjk9jLnsnAzz1JtXs/nQJHOXQnkxv9Zsw6nlYTY3uWszD22eyszcxCHyXe7mZqPkwxC2+XuSF5u7be7ysM1d5SGXPMxDPmzMq81nOyY3hTG2yU3ZZgiVHfks89nclMuoWLYhD3Mpd83T3I2h/FmbH8yHfCifzXe5VDjt/IN9c9fC9E//j38378pKjXMeDnrj7Qu90cHGsFFsLmG5FGbu5l3zYazJcHD8xnHgcNmJr7ZTQ2/kVXnKw5AfDRubjYrGTvpm+8YmuRuamzGEUMxDeRjzMK/yan5ULvPdEGFDPgxv7KR5FW+/0W94o7lsLo3Nwxg2l3koLzbm54rcjPJw+PtOdnoVYm7mMjSX+bl5
yEN5Ue5mwuZSbD5sRGJ+YS5zM3fb5C4vCmGeNj+1+WxNRR7mZn4ujvlLMR8qls1NCPMQm+QpZgzl1TTMq81DXpRLHmJoQx6GP+z8qrAw/c//t/9ivpub6DgQ4jjozQpJLifmVXk15FIuuZnNU04cvP1Gv+ELxr6xP2x/sMlhclduwsxcovJuo7waNk87aBhO2zf2VW42hHyYDSU3ofxUYcxfW56aX5vNw94wdVKein7DF0S+G81l+cGw+anN/Cg3xWY+OfIqd7kJw04MhxWmsU1uymWjvNuGYS7DSJ42H6J4yzblk9xt8y+Rm/mTk7kZQh7G3MzDXMY85C7KpWg+jKHY/KAwL5qnjVipPO3wcLCxk3LZfIiYeVfZKLZpfrQ8DPMhwhBiaD4Z5x90IsZM/8d/+jfzWWyHHYeON3Ugi+Rynh5iKB/yIfJi+VBys+Hg7Tf6jb4wN9/Y75x/sK846GAM2xz+JJS/srkZUr6LYl/t/INv3+REiFw2RGWbu8qlMOSSmzBPy7/UNhVh2MneCJ1sLnnoC/2GN2uSS8OQp/lQHubFxnwol81n2/xM5UcnhS/0xr7ZTjk95C+FzWVucrcNkxO5GzqyTYWQXxvm1eEHY+Yun81l827fJt8Vmxd5COVpo9hcNjrIw2abu8pTp6dyt1wqG5aHOCnMq1xWmA+peZofbS4byw9yWSnMh512/qHjRLYp+v/+j/92mbu5i+MLfaGDjVFotuSzKIwxuZS7fDY6PMyE1Mxhx2+O44vtDcN0frX9jm+SORDmrnM+W25yV1FsPpu5DH2Tu8NDbPbtD/YVU4e7oRiKDUOE5ZNconxYXkVhzM1cis3dPOTd2BtGp8t86I1+w2HIgTBzmrkLyUxuylD+jjzM3YbNu/zczMPYSQf9KzrU2Df21dyFFJvv5iG5mZv5uZm5GzLkMjdh7jYqlbkb5iF3yd382VxGIg87XcJwYm5mPuS7ZshN2NxtU9lGSYQwtvmzZJ2U3JT5mRDLuzYvyt3cxEbuZk3+JJew5WGcdGYRhnJZLs1NhM3O3/GNwhT9r//hX4/T3Xbw9qbe6A0Hmw9jo5iH8moexlAUG8URQ5ibEKEvHH9DGMb+4DzNVLbTZ82PNk+FIZeNPAzlKTcxN9/wjX1DyId5GvIwNzMfKi/KDzowzE8NuQlzWS7Nj77QFxw09uZh7mZsMmKj3MRGYYg8lKf5kIfNi/lu3m0u7WSjg7e/4Y3G/mBfXXZQLnnYkKcw/4DhdNnB5rK5DKEYQr7LD+YhbB7yMJd5KA9zGTaGjTAcHvKwE/ODcik25rs8FcYQyqX5cLjMQxHOucxDeRUbRgjzIWzkJpdi4/TdEHLJw5Cb2Mn5O74ihuif/od/MzHRFx1viEKYF/NzY8jPDYXyYuigrNQb5bKxk52IYsP8YD6Zp/mTIXIT+Ylo7GQnTj8ay6UhD2MuQ/Oj/EnkIcyH/MLBRvODvnD8xg4z+cKGuYSdbJ7yMIQhNxHyoTD/sNN3Y9jYSXF8oTeaffvKvkrMTS6HTw4MQy65ya+NoRAbG8bczIt5lU8iH4aGXOZmyKt5Mew05CY3+XBiLkM+DLnJiyPkYS5DKMzTDpd5CB3MTeTDhnkacjMfwpibIfJzG2LIJ5GHjfN3fPMQpv/8H/6LDTu+qN/U4XJkxoZ5lzdzN0Ye5rvNTxUbcyk2hCP1Zm6GCJM2l7zYPOUvbDTGjHKJhJiHfJjvhq+2EyM3uWxI5W6+23yIzV/LpclN+bNt3nUcODhPOpEXfeHtb+awnXK4bJhJO7V5ysPYpnLJQ7mUeTd3yQ/yo81lmZMmOZe7zhOTm3koL8plQxwof882ddDBxk7Mu/lk0+m7Mea7yGe5W+S73OSy+Ss7h7nLTb473W1zl4fNQ+SmzE3pyHJpbmYjN+XFcpnL3JQ5dLxRmM6TnRhiM3fT5kMY8pSHzdyUy0Yhd/mTsNm3P/DVQ4r+9//+X0+Hvf2NvUnWOMjN5qkoNptL
86o8be62qWzTGBKxjdDhLrnkk7kUc5mbUT4Um0sxNJeNzSJRhHkVhg6GnRj7Zr4KQ6JsQ+7KZXMpl82L8knMw+Yfkps3nLbJqx1vevsbHe7mZuT0cHAOY2xTLhs28ie5qyyvchPmLndR7mYum4cw+YYDUQxzMzPMXYt5KMw2dxXlku/yoyFkhqm82+Zdbs5hyIuNzd3c5ZJLR8jysLnLfMhGYThPD3MJuZnNpc2LI+TpyN3cRLFzlNzMn8SQTw7rTW+/4TCnzq98+8NDbLZhCvOQH81lm45sLh0hhDG2qTxtdv6Ob+6G0O//z3+/eqM35FIuuZkX+TAfNht1+Hs2N/Mu72aGQx22cGIqiuWzITdlm7vcFIXTZXPZPBWhmJtZkactjc6TfcU3zKTjsLGdKn9WudvmrvIzW55GmEl+ZiaZuUtm3lV8+VfmYH7QfNiYh80vlaeN8lTkl+YmH0b+ysnmaTGfjOZpIZdyyS8tlM+aDxtOl/K0+cHmz1bm5kjuJvPizNOGYS6h3G1zl7wIxYYhyi8NyyVW3lVmdNDf1Bc62Gn7Z87fNSxPDXPZ/GBY7ra5lLvKpdxtc9cOT/tmfvdhmP6X/+7fTAfylJs8zItyyY829ubdDEOSX9ow22QIITMzdx2H5KcKI+azscilsY1N5VIu+VCelsv51faHz8plczMfkpuYT8plI5LNi8ZMMnfzIdtUtql8lujgy9/MwUZu8q75sPnB/Cg/tY1RbvIiN7nkk/m1+cH8pW0uc5m7Sd6VD6GQH2wu+VBs/tJctrnLJ7k5zc1ccjMfitwMQ+YhN3k1l83NVJSHMR/ySYQdPlsu9RvHF3Pgm/bN9lWGfDjJh9PNvDg9hGEYQpGbKHfbPO0b337HvCv6P/7TfzP5sFmRp9xFPom82nB42sxD5S/Nw3yXh9Nni8plQy6h/GBD5GFj826omIe8Cjtw4KSvnN9sp4xCLvPdkEtslO9yN5ObYkOYzaWyza9Utvmz3PTG29/Mm4yiPJ3zYnMpvzQ/tzHfjcJ8iCKfzEOYn5tXh4exkQ+LzYsNeRjlRbnkuzzMwzwVm6di80vz3ZiHTuZmfu4gNycbDpf82ubDEOXDkKeQm9iBPIzm0hu9IYzzGzs95KmTvNo8DRvmspAXRSHMw1x22rd/xlyi6P/zH//rzU1UjDWfJe9m3iVykw+5NOYmzLvNTcqreeigAwcbvrETowxt7uYu5aGYm7mUFxtzM59tlJswTx00HHS4dHJ+dX776thJkZtc5mbIPyw3YT6E+bnowNhpPhl18PYb3hjKfBedHvK0jRImn+UvzIfNTxVhcxnyyRBhyM3YEGVeFcaQmJt5MTdzWZiHWW4mURSGfJinMAx5WtiQu3w3NyMPG5vL5pJPwjAM2SgPi3wIG3K3uZnkbuhAh4e5NJcdLHdDhijkw9i8mJuTvMqH+bAhloe55CYKYz7sm+0PT7k59b/93//9boj8QvnBPOS7PIXytHla7oY2RGEuHXTgcNk3DPMw5GkImx9F/mSYp8XCvChyM4rjcDdkfP2D86uHUYi5mV8L8yr/uCGONzZ8Q37w9q/oC5sfbJ7KZfPUm0u5bC7lxeapXDbmk9HhshNzGZoPEYZ8KDZPhRCbF2FuhjHkJpZXw+nhwIGv5CYvytN5UhQb5bK5bJRLsVmnlsvGhnk1zM/F8qKRm8NlYz4URxy5bF4sOlw2zNP8fTvZPIV8lx8Uctk8DCEaix042R/sqxfR7/+v/+tuiORHQ8p3+bNtLmUbUlR+ZsOGIYWyuUnHgWxuJszJpvIq7+Zm8/fUvNtQDDuRxJG7be46DsslcZ6c/2z7JnfRwYbTr4X82vwoH4bw5uEr8tlKb3+j31zmZrZ5l5uxsPmswkEI86EwxjYPqXwY5sPhbjsxhJGbMYwhN+UhFfmwecjdNpeomJvZ3IySu3CQm8PC82IJyAAAIABJREFUThzsTTvon/DN3MxDyScb4oiNchkzNpfI3WGm
5rIxtmHeZZi7bZT82gypA7nbhiHlJo4w5LMtdbhsPttGMWzKZTuJZDs1T3OTF8lTcfgwzE0ujcUOnOx39tWL0v/8H/7t/KWUmzzMu20qm6dt3lWYykNmEuYh29xVngoxN8Mo27xK5VJs7ra5a/5kHKNcFnnY7CRx5GnMOJKbHcj2z+x3LZcONsxfO7xonjY/OnwYwhuGb8hnE2+/qS8Um6eNc95tU26i2HC4FIaRh7Kl8jSvNnKTS25GHnYyNDaKuZmfyyUPxfKD8nBiLuWycODN3HRi8hve2Ff8wU7ksrnbRrls7jpCbBwHG5uH4aSDvlDmq3YyDGXn6V2N+YetqVh+ajM3RRjJh+hw2Sg2is2H2dxMG+Wy+Ssb5Wl5Kg+LDg9D7MDJ/mBf/Vn/+T/9V8vDEOZdSOZDZu5yUy7zYu4m323utlNFbkKUy3wSwlyah/wodxvlJnITO5mbeRghD3OTSxiWSz4UeTgPhD9sf2inhzzMX8urecqfxPzE4eH0Zwv9TX1BhPlwziVsbJTLhsNTzNxVLh3Iw5hXQ17MdISx+TCMDpeNjTzMzxXlEoYwDxu5iVgHC4c2nC47KPyB05Y6fBjDPGwu5SlsDLkZRtEXOqw/ZAzDYmND5GYY5jKvcpNLc1lshEJsXuQmlg95iPIwwmLzMJfNw9gof2mjfJiHeYgOCmGGHOzk/J19pXzWP/0P/25EXsXE8jBKvdlO7/IwD2EempuxeWqe5iZEYTi8yGWG/Fm5ybt5l8Lmw2zkpjDb1OHDmKeVu8pDdGDMzTc7f9e+IRyEzU/Nj5qnMZQ/yasoNsxni/qN4ze8kYd52MzkZpjv5iHMpbybuav8i+zwbm6ifHJiXuVhzJ/M5lKsbKfjOGyTg+VhOKxDIWxsnnbSPIX5JByYbfLJPOybF0fIJQ95GM6xeYjjwPCNcPq5YnPJTZ42T7nJq8Nlo5AXG41CjG06DozNZXOZm3nIwzyEuZTL5qnITS4hFp0n335npz/rf/1v//W8K0Ju8hByKRzmdJd5sVFebJhfi3nIzbw6XMrdUP5Cnspl8zPb3FU+2+ap5KbY6ECeGt9+Z3/Q4Wl+bvOD8jDE/Fq+iw42zIvQbxxf8Ib82XZqPuRho/yVbT6rPAz5uTzMpXw4Ma9i8yvb3FXmbpIZDsmHyHd5mKeNTuTXDpfNT53fPIXytGGe5maUD5GbYcw/pvxoyIv5k/zU8eZyDiPkw+Yh5tXmX6TIh5i0r3z7A/Oi9M//8d9t5l2+K3NTOOTA4QwNk5NNYW6GvJuxUfILOzzlZl4sny2U3JRtfio3Se7mboy5G/IuN7nJZ9tUPoTc5bv9wX43c9e8mL+Ww7tt7hKxuZnkEkNuOpib02dzc/zG8RsOifJunbZpbvIhYU65KeYmn82f7MQ85AeF5CY3+Ww7MZ+F+ZAPczfkKczDsFwid2Hu5qbk4dzk9NcOl3LZ3FUuOzHmsnzYcPqzyt3cnR5yOScPc1MewrwL8125bETybtjmYSwvItEXebN9xTd1WNiJeVo4lMs25GEu892QF6GEuZu7lc6vnH/IfLbSf/7v/su5C8NcZi6hQ/uCw6KG0wzzMHf5ZB7yKh/mr+VhHoZyN+RdLrnJJU/DRnMzDBkqlXnId8XmVf5sTvY75yk3ucll89nyMDfpiHM+jKHY3C2SuRlhRm8qzpmbzV2x3ji+0BfMJRJOD5m7A/Mu78KYh9yE+Wzn3OWmXDZzM0T5UF7NZzPvcpdXYeahYjND7hJzM5fY5iG5i7CZqczNKK+GQpiHIZfNh2GG3GzWmA+lwtjMmIfinBdHlJZtGOYhSn4085CWu7kb5gcdeGPf6EQexvxJlHd5NblsLhtSzM18KOWy8w/tKxvyFP0v/+2/npuKYnO3DWPUQW94w0Hf8I3NPJSHzaX8zEzl/7S5yV8qT0OsDG3y4UTlrvIvlzl1frV9lZvytPlsjeXhUNjpr2xTudtI5qRD3tjpbnMplx1f1N9s
k1FejLkp8tTyS8Xms20ql2LzbpuavzSEYm5Oa+76/xEH/8uWHYpeV8dn7iQH31spQLQoSosnQqsQUIQ/fItbgOBJuvf8OtdavX91d3KSe67XMYrTJQ8R8qbsPL1XvsiLbe5G3swlKsxGueRbB4b5YC5jHkKHSftsO71XeZiH2dxVtnlvUqlsY6ePUvnaNg+H5MWcmL8r5ZL35p25zAc73JRLXu0Xzk8Iea9f/q9/MZdEPhozOSgciJ0YOzGviuZN5qPKN0Ixl/naNje5efJRhM2QL8ZMhpAdh8x7w8xNJSF/SDif2S+2UyJ321Q+GmIHG8eJ/D6xzDM9yRPnM3lnFP1IP/pgMy/mIe9VPgp5CPPHnH7TZqg8nOahsMM3cglzs2IuY74VCWO+sbC5K+b7SsJ8sGEeop/s6QeJ85Odf8bkJszcTPJi5tfkJsx8lJt8beYmBw5vhnkz81vmW/k1ucnXZmyU3ORNtun8BZ8or5ZJ/+nf/fP5RuQSxtxtkzyEYRj5FSGv8sFQLvmuMJch5pIPyrfGcD6b0ZM6yAcr5iGvEmHzUeSd2Q657Gf2CSEPQ15tGMUOd51eDHmzUS5hiEUnPeHAM5uv7fhJfvRBLqdXw3zU4aO86mCnP2b+stkm0TBvDuSDfFTuNvOtis3dfCsfzQfLXR1uttN7Ob2JfuT4kXB+5vwF8942zE3lbsxsU7nZppI387W8V7nZ5ibh8KrTb9nmr5V5b94kQ24iX8w+/0zPkldzOfRf/8P/vAvmITeVj2ab3ER5ON3lYb6Sj+bFJJfyjVzyZsg35qMOd5u7DTOpCBtyc5bKzUbmpnIzD5XvG3vy8AvnJ8xdbMmlITYPMQ/NexuVm20qHwy5HHSwEyeGvOoHjh/x5KNnH2w+evJB3on5vmLzKgz51uZVh7udGEbeyfcM+aLcbb6RS2weYj5qlLvN92VezHuZN1F0uNuJ+WCzDVN5sc2LPGx0+Eq2uanczZtyMy+SvBnm12zztSEvctcY+Y55E/NFJDPJQzTOzzx/ovlg0aH/+h/+p22Uu81liAibV81DLjMjr5LfY8bQ4b3kLg8xJL9qHnLJ3XwxNhui8pDJovIQO7VRblZIeTN385A8fOb8xJ4t9GSedH5WJ6LYbO4qnF7MZSh3o/LREDsQzTyzyYFodNCP9IOPTq/mOw4fneTNDt8VhjzMJXfNq7nkzeHhGSf5izZ3ueSPWcib0fwu811zieYhd/OQh3kxNnfFyDub95ZvzEPIZb6SV+W95aP5TWHmZr4YIuQyX8mLNR/lLgyR8fwL52eOEGIeOvQ3//afzSWX2OajbHNTNB/MyBdD/pi8V9l8o3KXvyB3m1excxilDmRYqbzaNLYJO6K8yJt5Z8lp5894pif6Ez3x/P9oJ2YilzxMeTX5Wn5NyMNpaB4aDvqBfkTenH7b4aNh3hy+Nd9VGPNO3hsSThpic1e+azPkO8r/l+ayeZVLbnLZMGRC2rObeTHKwxgVY5uc5JJ5mPfyIpeR+dbhe+Y78iavNnflbpuvNfLr5juKebNnnZ9wmuhQ2eam0i//179Y5WYbTm9CXlSc82Y4ySU38/tVvnW42eZ7Kn9IY75yUCaKss1NyDvlZggzLxJmLksu5yfbJx0/cPzEsv2Z50+cQ5SO3G3k1VyKMHeVN7NNhViITowdNJrN5Uf1E+XN6bcd3gzz0eGjmbnJ7xHy5kSYhzB/r8pv2ryYb/X0xHna5kVunhj22V25mancbNNcZpubGqHM94Q8DJM3c3jIw5gvxsivW8xDZXOXS9mm8mrkvflgftNi5yd9/kWHLw5vokP/+d/987nMi7kpl8xlU7kbNg/zteUvG+XVvKl8sCEPMx/V4c2QN2PMzRiJosPEcQjbKLlsXpWbIQ8zNxVmIzGXZzs/Yeow4dRONoS8ChvlxYSRS/LRzE3yEOYhd81GHfQTntzMqfy2RS6xkcu8Wj6IGaP8DjFvQthY5DJ/v/LRfFe52eaD
aN4Zix1m2olRbmbk4RyGITe5Gblk+UpejfLBwg7vLZcxGm1+VZmbech7G+VVcpeHucwH833lZucnPf/C4RJhHo6D0X/+3//Z3JSbobJNZcjDNpxe5FvzIsVMso0StnmRL8rNNncl39p8x+EmzORmbCbJwzArddCBvCoPEeYyX1t+VU6eP7FPxHpisck8hDzMXbkbio2NA/PBonIzJPNRXkQ/4AfidEreRD7ITYTNX2PehG3uyov8PSg2is3vUsxl/jbmiSUnZju1EQsbI6zTTb6Yu5lEYe7K77LDQ+aSV20+2Pw1Fsp7zZuhfLD54PzZzmcdY5FLiCPOU3/+9/985YuQh5nclbBNne7mobw3YygMk8wQ8qq0uSs327xXmYd8axuedPyI7PysfcYwQ6IwN3PpkJC5iUh0EDbmMt8XZshNODk/mc8y64lwnvIijA53m982L+ahDjfbEIV5SG5moif1gzkMx7xayEPJO8XmDwvzUbnZ5nvyW+Zb+VspNr9lvpU/Zi79pJ7c7dnOZ85PMjfbCCXDzGVDMhui8qvKd+3wkJlFctPmG5u/rSG/Vx7mzcnzL+azhMwXHUTnqf/4r/+HuSl3G0fMJXMZFU6c3oTIm/OkEEa52abyJnKZu3koNq/KzEPKR5t50g//DX7g/DPnL2zspNwVYXNXeGIzVIRCGHMJYb62uSsfnZ/s/KSyojhPGXkYcgnzkLu5RGG+MeRhHjowhHkzc+j4kZ6Yy3wUeSjyUGz+uDBvchc2fzfy2+aPmRcbyYuZyh8xh46f6EcrnZ95/mz7pPMzhogdaSdmc5kW5iGLXMIYykMxH4UdHjJf5K7Nb9r8MXmY78tHYR5inzl/Ns9y4MDcdZhp05///T+bu9yVD8ZMIpzPXpW7Ipy+yF3e5Is8zEMYxkZPhPliHsJBI8wXM+n4E8cT+8z5mc02DFMHuYyNYiF35Rsd5BLmG5u78mo4P3P+gpMOOhga88XczUN5yF15GPOtYvMwOih3mzezXH7S8SPG/Ip8kIf8HQnz18sHeZh35u/KkL9gHsKYQ8dPHD8hjPMT58+cn2iUN/PBmfdmOlzCPERs7vJOWB7CyCV3m4/mLxnya8L8qrmEPIyhcHJ+Ys+YhzxEmVOj//J//NORh5C78o3N9uxFsbmrKJZX+SKvis1DFMVOzEPuNooN0RNOmofYTDh0PLGTnWzm5vSiwqyxkIcQUt6Jw/eVV5s3cX7m/AXPJvkBczOTSzG2U4XczHwtkUvuNsrDcJqDsnHkizCGfuT40aLlg81721ReLGY2d5X3Kn9YJNu8qGzzrfm9tqlUtvn70nwwh44f6YnjiZ08f+b8hGc6/aZhyCWau7KN8kfkrzd/TJtXC9mmcrdR7JnzF3N6L5eeLOxZS3/zr/7x3OQS81DkcmDMF6dtyiWvNo6Dzatc8lHehAPR2Elj8xCFEItOH2wYclc+2HzQEGJhCAfHE7ENpzbzUC65C+X7Ys+cP7NnOhBmyGUuQz6au/Jimxy+Lw8nHcjDGIoNo4OnP+EJ88GQNzGUSxhjHuajYz6YS/6AfGsqzMa8yUd5mP//NA/z0fEDHZxjz3jmwLB5GPPR4ZJvhHkodxsx+TX5K+Vu8ytmSG7yYszDDt8a+8T5C0IehuEgZlr6+f/8H+cyucnN3BUOZCY3p20e8qKYFDY3c8kXuSkPY24O5TLb5GBzM1QcsRA7MYqNuZzE3CS5GcLcDAlzE5LhoB/s6UfEPrPPOp8tl9yVF5VthPIisWeef2afOZ4sDxvSXEYx5GHMKOVuS8vDbFTuwmaG1GEeMkMuY+H4Uf3JR/MQuVthHsIw5Ls2bV6sKL/XkDdD3pvfFuZm3uTvT/Owudkmlw0haxgRNpcxd3mnIWJuUmw+KOYm8mo+CvliyPfNQz6YPy6XuYyFfG1Ozl90fjaHN8PUYS5NO/Sf/vU/mcuGSB5GhxdDYWNDmEkusdGRh3k15BLlbvOtsTyEUd7EfGWYuzAf
dSDMq4YDMXRwPPH0J4rzkz1/0p7dDLmUuzB3c4nkIXby/GeclIf8qjzMQ97JQ5i7YgjniTxE2SbDPESjH+hPlO+LwjDkZt7JN5rLvFj5Q4ZItrkpl8w8zO8yr8pv28gl8ybfmuQSc5nLvGge5ou5O08Pke/axsglD81duQtjCJNcykPem4/yznwrlzFf5L15KG+G/IZ5iPlovvjM+QtOdpiHXHIJoyH9l3/z388X81BsU4dXedhJechDhA2Rh/liiFA+GvP7FcubYT6YN6HY3BW5HBQ73B0Hx0842Cc7P8nMzeRS7sZM5WabOhBizzz/GTMkQ8Y8FIYwbyK/IvKwUe7mEuWjk2EjdNBP9KNX+UoYIq+WX5VfMV/Mt/JB3gx5M7+tYf6wIZdsPqh5bw65yc0M86LFXOZhmLu5jCGXMK8232iIotgwNgoHeSe/23yRV83dvDOG/HUW8o2d+IXzs4d8lLtOL/rzv/2nc9lmKJfY3OQSyjlqOvKQN2Em7828l3yt8jB/Wd7b5lvzqknOTaUOG4kOhHGEH6zwzJ4RY6aobGxTKXcbFTKxZz3/GTOXDnebF/PQaKcXE+WDGJJymXMcsbG5y6U4nrTRyemLkcsPPP3JhJBymTdD7mLzwfJBS94pb8Zc5k3kzVzmTeQPGM3dPOR3G87Ni6TImy0PETPMTaMd3gyjudlwzl3kZm62KXcbM0aROA7EhmHueiJ/QB7m+zJkbjZmftP8tlxieS/JbJ/t/EXnKTdRzGUcudmGEf3Nv/yHU16FcnfOXA6Um7yZIUblu8rd5rs6KHfDTsyrw1dCGPMdY/Mmio3Q4W55CKPRE4WTZgthcikPQ7bZKJfUQdgzzz/j2ZaOJ4p5OOfV5g87crNNxTxsdHAcbnZ+lpD57OHJjj85np5YiFDuNjrdlbvFvDrzweEy70TYbFP5TcVmm5vKzTaVv6jRmC+G/EXlbrPyIpfNrypz2QyVzrzYpuZu82KbmwrD3GzzazoOOux8ZlNRdPhomN8v5M3M3I18MeRvpwOxKDZ3hWeef7bzk+byZKXjYLPNTYbZTkZ//rf/dPmivBrbSamsQ0+H7VSzc8xDJMKwIYp8sCg2D8eT+gnZ+bPOz35TKC/mnY1hc1Nh5NWEJBsVhVFWHmaj8pfkMpconPb5FzktVjoONjZt7nK3+W2L3IXNF0PyZi6lnmwnhhTbhHXo6R/YnmjKJczDDJWbGXmVzBebu8IwlvfyO20eQl41781NXu3EVG42l/mLSm5CPjp9sAy5lJttSG7G5i6EYWMo2+kuMsrdOfNR+WjIJUKH97bJTeQyD2Hu5k15MTenvLOYh3zf5qMIm0nlITPJOti0T3z+GWMI4TgMiXOYu42D/u9/9Y+WS7nrYPNim44DhzNq7nZK1tyNjcObuZSbcrd8dBw6/uTu/Nmen9XhZjuRvFke8jAqYhsjX5tXhRBziQ6KxpBLhryZbxXmYYeFPfP8M4Y4UpjLcLobcsmMIZd8FGEY+XXzUE9m7CSX3LRZ6fgH9IOVfLTNB8eEbZR8kUvmxWgs5GGS3zYfxfLm9N5Eh5uhnTLKmyGMuZshN3mnmA+204vkZi4ll3lnbF7FwonNR8N8NA8Ryc1cNkr5Indlo3Kz0Ua5WeRhHpo3eTU3k7krxMKYy5iH8q0xlG8NuTk7HDvt88+cv6hswwgdHAdjz8+KRONI//F/+YervJjkEoZwZGd2pE5Gm5uFIYwKY8zlcMnNNpViLucIx+Gmc4YKs1FezXeUvNmmcrO5zAfl1VKHuyKXMDdDeZhLxDYKY1TudhD2bM8/q9M56hBmHuZN3iuXfGs2d/la3pupJ8r2zNxVttFBB8dP6geEMeRu88VkxDYvKjaEKIzGkEuYh/x15iET8mqn8pUhrzZ35W4jl3xr5jJ35Z0whrzasHnVGOYyQ/limG1CY+VmyKUDQ+rwYi7zkEvysM3vETZvwqZmozK5yaXMZfNe3kyYhFG+tlFxfuL5
F9uzMDdTmUNlw57JJcLSn//1P5kP8rXlcnCkxsY5d7kbymUUm5nJceRmI4eH2TmJg+2Uw/JRudt8V7nJb8mLGYYQHXIQjtxtzGXI3UaRj8qrHTT2yZ5/oTEqQ4Z8bRuRzPcMkbt8LS9mbvJE2Z7d5M0chONH+pM62NyVbRiSm2GEYXNXHg5z2eh0U3kY8ofkkl8zIa82Ob2Yy6bytbkZc1e5mUmYIXkx5J3mbtHcnXkzmm2cc1NhZpj3GnPJF1E4lEsMQ7kZim1uchnK3eZmm8pdHsaMUcxDZkgWiXk15J3yYpu7TeVrK0wun//M8ycr5jLEMTkoO+cmX8TQ3/zL/24uifyG3B2xE8M8RGwuUxGbS+RVi3K3YTbKJWsUm6EOM8nDvEhmKtuQXDZvIhZhLiW52dJxIMJ8Md+YhzwUmyFZYZyfOX/G1IHMZAxlkUtslA/miw3JTYR5U75no3I3bIg8NOuJ40/qCblZtGHIwxjyMO+EmMvIF2NzV8xllG+FvBnNm4N5yK84bVQe5r1tvieXYmHuNsrdXOauPMxDft3JZpubzN1OiiFvhtDh4UBelQ8239hMHuYmkTex81TZ5iaXcrNNseX3KN8RQt6baZ/s85/lNIeHIZMjZu6GvFn6r//qH8+l/EUbHdmelbuNcneeUymUu+Uubzow5jIbhSLsdFcs29wkinI35hLb3BxiwzzkZkdEZd7szPF0IHdDXm1TuRvyUB5mqEwY5yedv7CT40C2E8Mhc5rKRodLcikvtrnJ4W4zJK/yjWGbpA42NkQuo6x0/EQ/UiyLcpm7zc2QL+adfN88DDGXURTGkEvIwzzMq2Jhft085GHe2+a9ygdzmVdDGNvUQf6AsdlONzV38x1jHjoQMrkp3xpD+WAuo7wpNszNNi+KjXK3UdmG+XURybeimDeF057/rPOThzCbu44nztEwm6+kP/+bfzyXbW7KO7HMZXOTy053sU1FuTtis81Nw0a5WVGYm8rd5u44bKdkaKPYzNRB2UbUE+fcFQ1jQ3ZOh0uWy+FFHbaIMJeNXFJsuckljKG8E0fEjOdPev4Fpx1P6rA9I0xD0eE8p+ZhlHxRiA53cxnF5s28N7PTJXUIM9vksjiG1BNPP9ETuYR5sc1DOmKzDSl321Rutql8bXOZys02N3W4y9/eRrnZsBF5s83XOg53O22nh4SZF9tUKsY2dyVfhDE303y0Z0SYy3wUQoyVD+Yr8xB5KJW7nRiFGMJmOz2k2GbIQ2VOzK+L5aayTYXocLd5M/bZ+flnx8GGUb6IDvaMMZSv9R//1/92XoW5K6/mzcKJMV8ZHe42NysPwxBSzFiYF5VXRSg2G3kxRIdtKg+nV8WGMAqZ3OSgJ8ybYd4cCPPefNRxYG52/qzzFzcremKnh8nMoX6k2C+2Ie9VCFFe5ZKH0TzMm1jM5WAuMzPhs6PYYcePOn6ggyIfbHMThuZNYex0Vwj5YHNXzBdzMyPvpPJmXg0hrybkt7R5CPOq2Mnm1ZCHzatyN+Rhs1HeFJs3Y3N3xMa8mS9CiI1yt/lGsXkVK7kUTm9CHk7bdBxsXmwu80fkK4WQbcqb8+T5Z/OJpQ4zpFwyY8OJ5PDR9Of/7R9NHvKmvJpXkzaMzQcbIl9Evpi7odxtmJu5eZLhdDOHjjCGonxjJ8WYuakIQ2EUYijGUC4HzUMUm4cwD2E+KMrDac+/sE8YkifMXWOcnvT0o4rnn9koH+QS8kEhd6GTYvOq3A07vBmGkUs4OH6iHwzJq2In5m4oNnIJYx5yOSh3Gxth8xD5Sj4K82buhiLv5GtDeTPkEhvmYbZJGMJoXg15Mw95mMvhYYQNY8x8rWUe8maS3IW5jPJdm43ylTEPR8jd5q4IY03LzTYv8rW5m4eiMA95MTeT+H9Zg5seS9PDLMDX/Zyq7vnw2LFNjHBQFIzEmr+FWMAyUghhEbFhy2/gR5AFJEFKhITEgh1SSORYJHbCOONxT1fXOc/N+1F1
Tp2u6okd5bqU0zud96hd7EIiic6J+rApX/zR79ajWMSudvEooR2S0tKirrQIoVZxrcQitFbBzI3cfISpxzvppAhaEpuEuKhNYleKDCpWKQ1iERFqEW2JTdCEhBAXLYmXZZBYdd4z3+FE0CGhtUksog6MGwlO76hFXQuKuJIgNqnGRYh41FqExK6oVSxiMXDDuFVDxKpBLSqqrU1CSahFiScSBFWLTquITVypQVy0YhVntQsSVF1EPKpaRVxJVCkRUlpVQVtJVFFPRTyqVT2VxqoWLalNaxMXjbOWRMSqiBAUcdYWEVRtalFKYhG7qa1VEsQuVjUlUYtYhBbVEpUMm9IRmdVOERJnsQjioupBj8x7enJRu0Fi0+nrVb7849+tR0EGqrWoWNSiCLlB6aS1iYvWJrGLTSxCJ3FWpeXmI7n5mNPRPL2VnkSsWhLUoqhHFRFiEVpUE2IREY1NrIJYtSQuQhOxSGxK1SqJZzI86ryj78RqIGRSu3gQ3JDqPIoQ11oEw5W40vgaQUQIbTE9FdFExivyigxnLWrV1qqdkog4i0WsqpJYtSSov7OKiE2CidqUIhkIqiqeComqVQSxm9qKp0pdS1Bn9YJqLWoVT00XcS0kCEEtgpJQi6KIVVurJAS1qLNO6okQJDZzEmqRiEVrVaQlsWpLhlWUIkFpySBBPKpapVPnO3qU0NauFAkGQadVRK3qfbn7H/++nspw1qIuggMtnahNYhebuNaSeFGKIoxb5pGeiMUBsWlpUdRFEBJqUZuE0A5pORwYg57sBrGLF4S4aL0oIcPZ6Y7e2fTAGDjaJK7Ug9ok1EU8iIuQIM6C1pU6qyASJPRkk1CLouog4xPGwSZBUVdaiqClRQgS5iQhobVJSBDaBkSmAAAgAElEQVTUL6VBCGpRUpsWQUioxUTtQkKC2KR2pZMGBxRBUb+c0jprbRJaenJt2IXYJSS0GCTUopgkNq1HtYv3tNS1ICHhNFEyCIoWJYOWIChFPGgRRmhJSFwryunIvMPJWULtMmyCumgxXURLPv/D36nErmTYdCJ2RUhoXBRFnc1hEyQ2LYndRG2KxHMlQWj9Ymo3SFQlgzF0TgzJcCWhk8RZqIuIl1RFiEUIOukdnWpRJKgPiUXiqboWRTyq4dqUxC6ItlZJnCU2nTa1CKJ2ORx0vBY3dtOuWpLQaZNQi9rUpmoVsQkSZy2Js9YmoSUhqF1Q1xKbltgVCa223pdxwwhKS4tSFwkJqq0kJDqnJCTM0pKQ2LSoTrvWJhaRnjyqSgaNKwlxkUGnVVuPIiiJth6lkwy7EtqKYNgVUxvpJEHtop1iMGJTxK5obTIYsWltht1EBo4c7+gJRRGEhAxKO1FPRVAUQVD56r//br2oCLGIXRHErrQuSuOZxFlqU0+UIiGoB0WcxQuCUiQ2pcqIjINVJzEYcVG7uIjGWeJrxKOq9Mh8p6YkqJYkvlbiLPFMXWucBalN7TLsYldFrMIhVm2tItSuuHlNbhEUcaWltYlF7EpdJDatK3EtoWhJiEVoiUV0TkIyUJsiSGxainhPbBIyUJSiRV3EJohFUJt6rq4FRYsiKJ2oXahdkCAEtQiJXREU9aizMkIRD0pDQssYNi0Z1KKYCHPSaROLaKckCErtgiJBERLEJkFoUVrtW+ZJrEpKg5CQoZ1i0ljVLkqRULtB3vzP/1j3P2MeiV1tqiKuhXii1K4lcVEERZzFInZF7EoR1KKIrxXULqizhoxgqEWHjHguzkITXyfiqYYopzvmPbGIX1gsYhOLuJKgzuq5WIRaxC6kqKc6DlbxoLVJaBkH8oociEVcqU1bq3gqtDaxCEqRoBSxCK1N4qwlIXalKmPYzBK0dnEWu7qWEIwDSsucXlQEiU2LELQkxCJ02oWWhJaidTFVKfGgdgkjCGozDrRq0UpcK4JZgoQEwfBMBrWYBJ20tHTSkti0JCiJs2LERYhFEISe6GQemXeqkpCgCIKo0kkrqCGhJV6W
L/7XfyqTu89591M6XQutswzXilIPpmcSF3ER1CahFiWhFvVMEbsiQbU2sYizJjbjIFYhdkFDBkoRGh8QVMRF1Ooop3fMIxkkCK1r9cyIDwoyUK1N6utluChxUZphFbFJrYq0hOZGDq/J8MxEQqlKQmtTHxa7Inb19RJUkYRaFKG1CYrY1TNFxmAMlFZnUcmwK6Uq4ixoEZu4qAe1i7aiNrVLbVpV8VTIIEFcCW2tglqUqE1LBolNBoKBqlUkFkFJmSWlpROTWkQ7ScUqzjJIUARxUbvJvNd5FCe7kGhIQ4Jq0SlKqSFj6CymiE1QtPL2f/yHevVN+fgfYPDmR9x9blOMYRebDPWgU1rUptVMqUWsiljEg+G5oDYpibPWteFFtahN4mKQ2IUgLoIMm9YmQRHiBSEehGDecTpqK4mz1pValMRZhrOU2o1QBHWROEto6KSTIMNZ4kotgiAo6kqKweEV45ZQEUWZYYR6UOprBHFR1EUQX6cuYlW7EIu4KEUs4kpQqmIVglpUS2LTVoRglviA2ARFi9pkoHQSlKrUg5JB4koGQiazLmrT0GmTGzIUyUCsOhGLiJKBapFKqCktJnOidvXMuEXokZTGrlqE9MjpHT1pXGQgtDIO2immK7lhvNLTvThiWLUVReXz//pblSHjwOvv8Nmvc3rHz/6M+y/IsIlFyEBtalFUazdPtIgkqsQikli1JM7aSmxaZwlNrBI0CIJ6rjaN2iXDldBELEJDEptSpYhFJHGWWNVTQ0zmHU7UIiQorXqinihC4lEsQltJtCSh1C5xkahVpTaNTRK7eKooRg4Una6kVrMHY9wybjBUCVHERaldvCCIi6IuQuJD6j0lLhovikXs6qIUsYhdXNSmJXFRi9rFc6Ue1C6ots5KPEgZw6Z2CYlVVVrqmbYoDhiaSKIlGYKplHiQqBoZpHRqK4rqPEkxPJcbEuaRIKGlJXROne+kJzHVRUUbY8RqdhpILKKtZsi41XkSJ/WgJDb52R/+dj1qSfj0H/HpP5bjl/z8R5ze0GmTA4IS7ymdFHMiGmexGANFUJtZRRIU8aIEg8Sm9SFtJWEcFEmI5xISFKHVVhLPJFZtnWVIT5y+QkkwbIqUeCI2pSpWcRYf1NZTSZwlKna1Sixi0yJWHaGVMTCYdZbQo1VFciCvGAcyVMT0XCkSz8W1uhZX4pm6iEURZ/VcXFS9L+Lvqla1ikXRktCSoNSmradSi5AyBi2JiyiSUrSuRU0RDBUyEJSGhCLeU8lAcKLTJpM5mWV4wUAwGcUBpaUn5lHnvbSkiItoI2MwTySY2koshhaJVSxql9Cppnz1R79Tm9BKYjNu+fT7vPoVjm94+1ccf05PKAmxKIlNkaAULYrQ2iQotYtFnTVeUkUkw0V8UJCQQYLYxHOJi9rFrhSxiIsQdHK6Z77DxECITVWGiwaxSajFdBES4onalCIeJNQi5IChQe/F14hFEIQMu9JqKh4UGdy8Ytxoh1iVlsSmFkVoXQviLBZ1EcTXil1dS3xY7epKXLQI4izeU52RWIRYTFVqk7rW2tWLGmfDIs4SMuxO1LVaBJMEB3Kwam2SwXRthKJFGZiTlpROlE7iInYdJDipkiFCJ6d75j2xmDorGajdQFBtJcEksQtFaEvjURIUlb/5/X9Tq4SWxEW5ec1H/5BXnzHvefs57/6GeY/YhdiNUE9Mm5YEA8WkRayqCCXeV6siYtMqYpE4G4MMgtIRsQqKIM5iEbsixK7I1FaEDKsmZKByuuP0jtaVWIQRlIQWQRCb1iYhN5j0SJxVRMjQTjFdhAYhsel0JaEWIbVqYpOIA0FLijqrXYaOWxmvGaGTekE9U88lfimxqwexi+eK2sQiNC5K6kpQKjQS14pYBFXTlUZqUVrUrnahtUlQZ7EIQizCCK0PahEyMHDQhNYmaD2K0Ngk9ISiBEUnJp2aUpLYxCIuJqGNnO7pvSu164kMDFJtSVAXEaUWg8SmdVFEvvqj
f9fY1aIlsUqK2tx8wuvvcPsthLvP9e1fMe8p4kEIEZtUW7tIBimK0qjaFK2gnop4IrTVopUMuRkoQuJRi0QSm0RrURIRu2oRkqhSEpSGZFg1QcRJj29lnohdoyoWYxCLIs5ql9CSAx99j9ffZk5Ob3j7Y+ZbQgWRDO2k00WoXWKVutLEo1gkalXGkIZiWFRLlAybVlVzY9x8pLm1SmhJJ0Fi07rS+qUlzloS1+LD6kpCPVHva+waBPVUMlCrKupKSdGixLWitQuppypWsYhFVK0izoJalBzUQMhAELtatSVEqCeKikVCi4mTnk7EJomzxFkwq73ndCcpYheboiVBUOI9savdIBZRtSslIj/7b79dInZtnYWkrtx8wu1nvP4u45Y3f8nbv6Inu6iLWAVVxFCTIKU2iV2LqCIURWIV1KK1SQgZg6C1SRDPZFi1CHHRIgR1kSCDhCIh5Xinp3cIIQm1qFUtgkTErtQuodXbz+Qbv8G7zznd8epb3HyT+/+nX/1f5jsEsYrprBZxpa5l2NWVMbSlSCShURNFJKFFNUNyy81HCKlNER/W+uWE+PuTeKb1VGMRxEsSi2otKq7NMixaWhK7WrWldrGJXa3iLKGVhERbsatFa5VxUAcVDEERi8SqrVViEavOSotaJUHViZ7olEQt4kqE2HQeOd7hRCJCbDqLMIbUYlrVqhSJeBAPYlpFlDhLyU9//7eaBKWITTKIxfRcyA2vvsmnv47w5Z/x7q/VsCslKCIEDYqSau1iE7ErQuNR0VZCEsSjKomIqkfJoLULQTFCQotShAqCkghaMgYZqqj0pPdvBVNFJKhdbGoRkoHazCIkVv34V7n9Fj/7E3EiB24+5pNf4/ZbvP0LffsXzHs6xPRMPRFXUsTFsKpqKwMNIjlY1aS1CVpJdMa8eWUcbmREBZH6oLbel3giriS0zhJaLyuC2sVZYldau9gknmpsYiCuVU1nrbSemmEIRauuxQeE1jNJbEqV2iQoTWQctANBrOYsYRwGoq1NK4kk2jIn6lGsJiamBvVMYtN5Mk9vjXnEQCRIaLVlHMigk54Q8ZKQEpspVgMtWqsk8u6P/22tilBBGUMERe2iiPd8/D0++wHzni/+N/df0ulFDUJcq0U1tUosQqJildCS2NWmLbWJB7EICeKiNsMiKGLTItQihMYiYpGQkKnHt8x76dCQxKbTU7UaVolFVCRhDBo++T4Z/Oz/ELuE4PYzPvknHD7iqx/y7q/pkcSm5eRa6qIEOaiDdGotapUgoS7GDSYmSmNT2kqiueH2NRkyUQRBEddKkfjFJDYtCbUoQuKsSGkRYhFnrV0RYtc6SzBIcINoKoqTzpMUcdH6hRVBXaldhl1daUlsWhK01CJkMA502ISmWsQuocWgJFFESZm1itKJaicqQmhtEovqPHE6Su+1RUUQRigOA4OeaCmNX1hiEW09SiI//S+/WYl4kKhVbFJPJVGlrmXw8ffk4+9zesPbn3D/JfMe9ai1iMQinkldC0Ji07qS0JpzkkjREiRWGcMzsYiqZNi0rlVFRdQqifao8510UiQyhrZiEQ9Ko+IiqCQUOfDpr2Pql39KgoogqpLBq1/hk++TcPcTTl9yfEsn9TVCbhi3GJh6urOrVRKP2pIhIzorQetKoqVjyOFW3GhJola1SuJRW6skHrX1IUm0tUriuTgLWpuEFrFpiYuEFkWcZZADhgqptDhpp7SuBHWttUlo/a1iERLPtF7SWZSSHBgHBFFUbUIRQWkwiEVVBbEoMVHt0a6IRxWrzHdyeoeqRYuKRQaJIuNAq51SapFqKwmJoLWoliQ+KETk53/4W/WoJbHJIKHTlUHrg5IbPv5VXv0KPen9F9x9zryzihBPxLWqRT2IiCuxCPHcLC0JI7Q2iU3iLB6EhNauzuo9Zd5r70UQQi1aEvFEBgn1XMu45dPfYN7x5odWba2SWDXEIgdefZvX32XccP8z7n/K8UtaL8rQvOLwSjJo
fFfnvX39E331l3Z+bZurNm/LvGEhrzWcbsa8rfyKkCfzplwdZq46njnF+VJSdxibjM3pDbmYX9XX/8P/c8pNUR6MYfOWMldDyJMwV4lcRGxjbnJRbspbOhAbGyIPymubq3lSIUQo5E0rJJmxdORmyFvWHQ42czVXx5HXhnxrbjqQuRpCRObBbKMDsWxu8mQS5qJ49oK7d+Xi/kvuX2JuSthGz/n4b7v55F/YXMyjHUlEsrGhlIvZ5iqZb41yMVsIeZTx7D19+Ae881375lP76q/4+hPOb7xtNGEejYUwWypX+RW5yNtmc1Px4jt89z9x3L1jX/+Er35qLz/T+YpczM7pvOe8t73CJJMHp6uKojDKNjpsc5VcVR6d51QeTI3lauLd7/HBj+wXf8zXv0CejGNey4NnH/LiO3r/r6vn9sWf88sfs1c4vS2TR83bYmHz79O8rby2eVu2aDQ6zJ02jCaxsZOhPBl52+gX//gfzVVeqwhjw1zEEZvKmt9kJ0ZHknmwnSoPwiiGkkdRrrZY8q0QdlLIys3I3AxF4eDw2spVWHnT0Ng8CB0SHbYxFA455bTYOR2H5GpIlosT4fAow2wuosOTXG3IaxtCd3RHOF9ps81NHO4oO+7swz9wHHf8/J85N5VtrirKo8pOlm/NNhwShoPGXAy5yUV06Dt/yHt/3b76GV/8Bd98zvmSvCFP5tEMk7maIRxylQcH5t9reP6xfuvv8O73+OxP7Jc/1vmSnebqpNR0f7KZ08572+koVxsaUbmacCi2k+OgvDY35WYnys3GMY0Z0jvf48Mf8dmf2NefupknzZN5EGXHC975nuOj38dhv/hXfPljzJP8ZkMe5MHc5CKMeTIXUZ7MrzswDBF5NPOtuQijexYOjHmQmz75f/2jJds8SHnStOgZHeyVbZhiR/Jgm3a42qYjZJttKtscx2Gbykbl0UwdHhzM2/KkWG5yEca8ITqcTqLuJDflNG9qLjIPKg9CzpOOO8dxZ+e9nfeOWAnnqBTbCLmZVMzF6SpXIdssN5uLPKpsQ1YqFZs25zlXFQ7F2R0f/4HjeIef/0+2qbwlTg8qzOam8iAVss3VNo8qjud88Df13b9vX/4ln/5zvvnMNock5zk2p4tSB2abytvmteY/aCFXGz173/Hdv6cP/obzsz/hZ//cdlJ0pyNkG07t1E7l5jxn50lsU6m56jgwK3VnG06Ow4M8qmyzTflWmO2UB3Pxznf14R/wxZ/y1c8k22xYdLoqF3m0zVWY9OHv6rt/3159bT/7J3z9Ceaq8mibN+Wi/KptKttcVR6EXG1T8+sOT4Z50zaEJJq5l+TOdppJNBt9/f/5R6vDXM2DMA/GDvWMYve2E6MkN+Wmg7kY8ujchOEoM8mQJ5MKIa+FudnmqnKTbx2Um83NRjkdlEoeZUfktVyMxZCrbBg61DM67P7eeb5yV9YhmenIdtoo1tSdOlzN1SlTsZmLzUTZXMRQig1D2abjcMg22zDmtd0908d/yN17/OyfIjbHEXMzc7oomTrcbOZbUQnbVDbm4Nl7ev+39f7ftPuv7ZN/ppe/QK6GRpjZZg7kUZurytXKm7ZTUdlmThmGEMdznr3P+7/r+PBHfPUTPvkXvPylGcMR3XkyjE1OhNnmLTFXCTPkOO6cm+2VSmWjYm6GXITNNpVf887H+ujv8Mt/y5f/ztU2m4swzKMacm6OvK2DD3+kj/62ffML+/xf880vdL7EXG1TuZqrSX7VNgoxKtuUiyEP5tfl0YxN5dE2hFxVnsyMzYMZ+vl/+1+sUvl12cIwVzlwYuYqD1JxZPNaQra5Go7yJL9mKIS5yYPlaiYZKjdlojxqczUHMuRBochrQ1iR17axKDpYODlPyaSiXG1uKsNcdKgMmTZP5moyUwcyk6swZC7OEcmQXG3D4WrHoY/+lp5/4PzZP0XsdMijmeUiV0mbIQ9W3tKhdz7ixfd453uc39gXf87XP2OjQzJjQ96UA5nZ
5vBgyEXMk41cRbO5mKuOZ7z4jt77AS++w6tf8sWf6etP2MyTOkzmYsNJk2Q2D3KRmWSGJDcNqTvMztPMVd6UOdXhwWxDKjY3xd1zffy37bzXZ/+GnbbZXCQnxtws6rCdHoQhwsbxTB/8Hu/9kFdf8dVP+Prnuv/SORwezK8rNrkoZGK+NVeVB6dfd3htp5nKzWauQq4qubOd5h7zZEif/3f/xZKboiTkyUmnm92ZuZkHYS6iiDqYb82QzFWuchVlm6swUwfC3GyUOWTkIuYtk4oOD8ZmDkQkM0TeMlNRHmQehHlQudlsCMV8KysJmeigcGrY6bUh5lsd3pQ82uaqMmNUyAw5N3Xn+M7f5vlHzp/+E8dmThzeVJgnczNDXive/b7e/23u3uPlF/bVX+mbn3O+MnOVgzzYPMhrZXKVq9lcDCkPNnMV8yAPuuPd7+u93+buBd/8gi//km8+xXB6EqPCzMVYpzzYyIO5Sg2Hq+WmYiRDRr41V3OxCJvKGnMzYyFX5cG7P9BHf8g3n9rnf86rLwy5mjfNkNc2V7nKmtfu3tN7P9SL79Jh33zC539m50u/SfKWXGRIhjZPRsw8SsibZh41bxlydbDRzFwlzFWf/3f/5czFkBzW5KIojFyMHcrvaZcAAB9ySURBVLZ5U3FuDoe5Sh0enF4rZJurClHMxTzJJGNzUyY3TcVCbjY3HRSy5k05TLYhyhEbM4lytUXMVIwZUh5sSEcWztmorEMulnXY8Uyl3XOOnTLzZGPltZIwJMxF5ME2ShiWi0Md+uhv8fxj50//Rwd2nnYcjMpVI8wYis0QNrz7PX30I+7e5auf2lc/45tP2b1cNBtGjahcbWMjklMWifKmEBpzsZk3dPDO9/jw9+gZX/2Ur3/GN5+yqTAzCXO1pZjZxtAIw5gnofJoR0hhLmLDKEIJc7FRrsJMcW6Sic2WcjE69OK7+uD3ePaBffnv7PM/Y9/Igw2xTeg42NgkNnNRltcy3b3Hi+/x/l/T3Xv2yx/zxV/Y/dcYhhTJXM3VZFLMxaYhGouVt+TJxjwZbR4Nx5GNnafKTbnZKH36j/8fMzcbOcxpuQgHpqhsU9nmV1UehPwmi8rV5iKJ8pa5CPOmSZhTwuG1ojEXIWdzlYtyVQcyMfJgLkoy2TmnizgKmdPNUI7QoRi2YSq6czWZZ+q5jsPOV+ye8x7DPJqLDlfneaqQnZOLclWpMJvXKme+lePj3+f5d5w//SeSB4eZPDhczeZiOFxt9OIjPv5DvfjIPv8L+/Lf6v4bO195VAdjrqZcjKE82uZR5dFiHoQQknnD84/5rb/H3Qs+/zP78ifcf8lObyq2IcxG5cHpyciDZZtHeTBUVirlIpZhu/dgyKNKxyGc50mzjWghj7aZSW7uXujFd/Xhj3jxkX32J/bFn+v8BqlsYzMP2lSutll5U6hc7XjBs4/00e/r+Qf2iz9x/vLPbS9dVRp5MpxI3pSLYjNvCOW1Ta5y7mRTuRraJNtUbJQwDyq9+v/+l+s4MA8yd9w9d3P/ynZimMpbIrkpjzbMTeXR8iAMO2zTkcQRwzkWIW/Ig7na8qgiNBaivFbW2NRBB+dhm6vKXMQ2ZDKpVK622aaSi3K1KIpJDg/urOcc7+o4OF/a/TfsFbu3vCHkaps67JzfpLJN5Ul2HDIrffQHev6R86f/PzmsO4e52iYXYzt1xNhyPH9f3/2/ON/76/zij/nFv+T82jYcHlWY7VTZhkMdttNV5WqbyqNtKmfekouNeXD3vn7wD/T+79jP/if7xb9kJ2FUftXmYghTYbbTVUVztbFzKq9tlKuOLE/K4c42zDZXlTdVtrlptrkZQ0gejLENcRwYzz/SD/5Tnn3s/OSf6/M/ZS/lYpznqXLVkWSbM7+mkmwTEu98j+/93+144fzZH/HVj9k08mTNudPR4WomuZoh5Mkwj5I3nTtVHiUHtnk0Y16r9Ol//Z/PcehIDttpz97XOx9bz/nmC15+
Jq+ccrjKkzGSSQdzMSpzMQ9K3hDbXCXLxSExJlflZmEehCE2LJWrmmGSiw0RCwc5CDvsnFyUIdnYcByuKi0rcxWGyEWY7eTg6KBwqDvrjl6ow3Y677/R+VKbFebJPMnkwRBzU4dtHqRcRLFZ6aPfd7z40H72R+ZirKlc5WJs0/Hcjhd8/PuO93/XvvpL56f/km8+lzcdripzsbmqzFjKzeZitqkQeTC/WaNnevEhH/we7/5QX/7Y+em/4v4ly025mrnJTS7mZoY4CNs9IxchbHaOTWWu8qiiyFuSYnLudFUHZkiYq7Cw2QgzyVxsNirMMAejDr33XX30hxzP7Ys/55f/TufXnPeuZpiKucibFvMkycVGB+//jj7+Q86X9tmf6JtP7f5rnG6WobC5KYbmZnk08ygPhpkkF3kw5kEeVLbZRskoff7f/sMpjhDD8/d557ecx3N9/ZlefspeOTschmzJo3FmKBzRQZgHczHk181NIYTczINYJG+JbdqB3IQy5GLzIAtHlFzssE0uyqRis3MUEbasw0pyVZkw2xgVMSSVHYd6Toedp+2VY6/c7HCTi7F5NKyQchFjcxHCVLa5qtx0x0d/S8/etZ/9kattFsVGpbsXevFbvPtDnn/Mq0/5/C/s5Rd4xcbcFFsqjzY35TfaXAx5kMo2D2abm+O53v2e3v2hXnzIN584P/8zvfqlbbZJHs1VbspVpmZj8yAqjM1NI2ycs7mZ4XBVLpKIchEbucjVubmqXK0huUqbGUMuwlxtaK6ODjM72cLkoJMO3vmePvw9He/w9c/tq5/YN5+yl0Qyc5zesqI8ypPNxXQ84/3f5YO/wfmKr37C1z/l5ec2N5lzUy5iyIPNo7nYKOVmG/IoD+ZbG6VyNWNuijV9+d//wxGiA3ccd3b3LsczXn3J/ZdqtqxpB0IYZg42DaW7w81Gudo55deVR0OusiUP5iKSBxFyszOVq22UxeEw5KLMbFQKOxgLRYfKdsppy5MM8yhKHczFbMlBbHNV44jjmRy2k510r2VylVzNPMnktUjm4ky5Oc/TVR2uKo5nfPSHHHf2s/9ZZZiLWM8c7/01vfsDjhd6+bn98n/Xy09tDKGyzU0ucpUHm2+NksxcJY8mV2HesNEd7/013v2B7t6xl5/xyx/z8jM77zFX+VZs8yAKYx5EQ7mayUWzuclF5GJsczXfKmEuRuWmhG1uNnKRm1zMUNlo82geJA9GdMRcjB22IZuLk0iId3+g93+b4107v7Gvfqyvf8b50tDmbZE3xLwtF9Fzvf/Xeff7dPD1z+3Lv+TVF5jtdJXMJFfb2ChvqszF5ibMkzA3lW1eC0OIPvlv/rPNt3pudy90fq2dOMxFJ9JyNpYciIaQq3aYGcJMKBeZ+VUVYrOGEEuYB4uWsFEI5VyOYsyIiQ4kF4XZXMzVsVjOA8dBz1i4dzTblIsYJ+bRcLA7eTBTbk7Tcog7dIdsw1wVxsxvllOuypO5yIN5EMViOJ7rO3+I8cn/Yqic4v3f0ft/Q8ZXP7WvfqpXv7SdZsrF1CEPtnkwjyrm5twwdecqzKM8GBLmoju990M++D0zvvopX/2El5/jnmFsLlLMiORqJvNom0moXNVszDyIyEUcy8zVxiIPtuFQYZJH28wkr8XMVb41NzNXFXNTbFRmzGubi2xTUyGZiRff4b0f8s73cc8Xf2Zf/qXO0/+5zDxKdCA3x3Pe/R7vfl/P3revfs4Xf2b3X2KM5GobeTJP8oYw21wl8lod7LQxk4ih6Kv//h9OCSsks51sdHiU7IgdboZcJIer5WYjF2WbXORbyVVm5GKuKjpsYy7ytlztPCkPknQcriZXKzlZONSdNdvMVVqu5iKSYcaRY3MUB8M5FwfGqDhdzNVEuSk2NZO6k4NmOyUKmYuNIjcbk/yKML/RjKVw946+83ft/ms++VdOHO//Dt/7++yV89N/w1c/ddx/zU6PhkJU5sE25HAyD0Kx2camDnIzF0uy
TUUuDt77vj7+u3TYL/6Yr39ur77GvcxcTbINYQh5cppJmAfR3CxhxkYelKudkzBKZaYx3yrFNpXKubFRbIxFktlyFKLZOXM1yaOZPJmLjY2hzAyVo1zNxUZ3dveO3v9tffwHGD/7I778qUfzIA9mft2h49DunLs39yqO5zz/SB/9R7zzA774c/v0f7XzFZInMzcjbyhXYS428mAjNhcpF9nmJhdz1Wf/+D9focPN5mqbR9s86rijQ0KutrGQdbqqhMl5DlMhV5WZ4TgOFYZYthOpkEfbXG1TqZBtbmLLxJHDEO4QuZmLhbxtlosM4QjRwSnnySEz2zSOIzanTMpFjiPbaUbPtINONpWrSWWbiXwrW5g3Vf5Dtqn07F0+/rt8/Qte/ZLv/l+5e24//SO+/HeYR3W42qayzaPFPDmQqzDb6apynqerylW5OFS2uXn+sb73f+PZ+85P/4Xjl//Wzle2eTDbVK4qm4shFfJoOzGPNoptNEmyTfnWbMzFOea1jojKo+Vmm3KRq0rl6jxPxc7JXNWBXG3zaPPaNhVmm8RGXqvclKttflVFd3zwu/r+f8qrL+yn/5Sv/sqbtqlss81xHIpt6jCH7ZR5MOTm+Yf6wT/g+Xfs5//M+dmfsNNvtCkXeVS52oZ5kqttripX2zwq+uS/+c+mbBO2qdwUm3NzlKt7OY5DYlEeZKNOig15dG4SHW42QtSdZGObypBHeTJzsamQwpiL3MxhxbAhHXmUwyT5VctFVo7jzszOU4bczMXMxRAV5TxPV5XjOMjNumPZ7j2qkJvcbHmQjeZtZUOERbHNzah48bF+8A+4+4CXn/H5n9kXf67dG/Z/VAZnPZ8ehnmfr9/zcjgcURQpS7Jsx0sdxEsRJEBQJHVQ9KBn/aZt0aTJV2gPmwPXaJsEcZo4zmLZojaS4v4+d//LDGdGi9Ne18ZGhyJsqQxh48xrjs1NCTM2lXNzFWbI1fHwpj35Ol//Hb31Lfvoz+zDP+fLz1zN1dzFIpKrchFm85qZpCgXmbFhSK7mhZmNnVPJS9sIcRSyA/MLFcM2R4dt7MQkwkKYu1xtczdXczHKTWWbIRE2czEqL+VVvfM7fON3+eJj++jf8dmPOT9ns80L29TkopCrzNwNeSHe+pbe+0Me3rIP/y2f/CVffsJOhNlczEthiDC/xMxzc1Mups/+6X8zQ+7m58xdYSETRblLMY+SmRyuNnJXD4ZtGAc5cLBsJ2EoyatmXpXclLtRlDm0hDnNJMakDjpMXijOzVx0OB6eEDtP7XS1zd0YSu6GYTjKUHRED5Kdj6624VAP5LnIzcZG85q5yk0uRtmm8PA2T9/T139L7/6B/fD/5Ef/gvNzlTNsbnayMIZcHJKhcpqbYtMmzM9LrpabHt7izfd49l2evscn3+fDf8Pjp7YxtqkIQ5jXzFWYvG6bmxLmLqNJwjYzr9pIirnYbCNC5Wq5KzYvlJsNuYiRZGaYq2SGEJshzMVmngujcrO5KTY3kZCfNXcdT/j6b/Ls1zi/sI//ks9+xBcfyYGZiw3DKFdtXjUkMzfHGzz7Lu/8LuHj79mn7/P5T+RuLjZK2Oam5HVDmItNYvPS9ME/+ofbXKTN1VyFLM9N0jg3pFJMhqPDudMLlauNIiEzryn1YMuMzVXCgblpyFUYcjHmuZiLDhxmHo7DNjO5GOYijjecc1NUxLmpQ8cDczF2uprnxjblImLLUKmcO3E6jgc6VHY+ugvRodwNsbF5Lj9rSK5mOuJ4xte+q6ffso3Hz/TsV+0Hf8znP6EHOiwXj9rYaaefUwcbsrJ8pXNemp/TwcObevareus79MDnP+Gj/8jjJ5hhYzsxlW3CkCg3c7NNXhW5mM1Nniu5aJibMXOVuZoDB43FHm2nRCk3c1eem1zE5iJXc5fczS+Wba4KY9gm0cxdrpKLuZiFYslBw+mFzesenurZd/XWt+nBPvsRn3zfvvgQY9PczFwl21TEXAy5mbnpib72
XX3t1+gN++InfPw9Pv8xmyEXZRvy0iTMhtzNz2nTT//JP5y5yN1YDCEXqdjkcO5Eirk6EItmXhhD1CEXYb6yTT3wcLDDjI25C3NxqMzJKJTm5jQvTHTgoHlVKBch5NwUCuFwVS7CkC3Mq7apXG2jw6Ry9Xg+Opo6OA5hm7tU5hXLzzv8MnPx5Jm+/ls8/Za+/MT5yff59Ic8PHV86+/YD/6ELz7meOA4TOyR85GdbDY3Hbk6yuZiJobcbX6xmUPv/Bd6+29wfs6n3+fT9/nyp8xF7uZqm6vtxNRhGxE2cjXmroOFqTCbu2JzN3IxlW1eKM+9gcOMxb40j5KrchfGXE3N6/Lz8v/HXMxzs41iKCFXpxkOhJSL2dyU12zuHp7y5nt69m3efIcvfmof/Tmf/YQNYW6KDXHkZsyEbV4oHG/y9Js8+46eftM+/5AP/8w+/4kM2aZys5m7YvPX2/TTf/IPZy4itslhm21yV7mqBy/MC7k6z7nqCLHZ5qojV3VgbsZcpYdDHhCbbcLZqYXDXM02laOD2EbMVXQgHDg5T1+Jig4WuYsVy1HkdTtsboZcxHmeKuZmZYtcxE6FDjowdtqow6RyVbna5nV5VRjmcHzjb/L13+KLD+2jP7cvPtDjF2w8/RW994e8/3/Y+Sk9oUxy2uOjdtIMO0dUjuOwjTGnVx3lauaFndM3fpd3f58vP7EP/h99/iMeP8OJEPJCsbk4neepXIS5GUPmdTHKXdk5IgwJIxdDXgpz94Z6wMFmvmRfYnI3F3lNruZmnF4KuSgvDLmb14X5eTsRxnBEmEnmrhg2f628oic8eZuv/bq+9mt89mP74E/54qeYmzCGjpCZbYz8Er3Bk3f09m/oa79un37ffvynPH7sqjDmYu7ynzf6wf/0D5aLDjNzsVQ22mkb4zgO26FyE+YiZOM0V5VfpHK1zUvpCA8UG5vKORfDaOyQg6hsM9NxGIYcVsixsflK2JA6nA7H8WDm3FRqtqkQkpkhOVzNbGxTuZrMS5VkrsKEuUoy/3nlIl9557d572/r8w/sg39pn/3YztPNUgdf+66+/tu8/8d2nhwPHAc72Ti/ZCcd6nCep8pV5WobTtvpZ1V08PTb+s7fY+z9P7ZPv6+NSOYqpLK5mGJjO80pmUnuhlgqL51mkhdmrpKZu7lLclVh5rQNh3qiDjuHRzkxDDPDg5vIxU6Vq23mdXndULna5i7lZhtSbrZczVwNh+TqNIedU1Rm5pdL8tKcGOKNZ/rG39Lbv2kf/yf70b/k/EwO29yUF2b+v0lP3ta7v8fX/oZ98G/tx/8Cp2TmlxvmVZM+/ad/tAoz2ahDrjIX59iQqwqZlJstd6erXPRgMq8am5syqUMdOBHLhlJutslFuRkOwjBXYe6S8GCbo4i5yFcmdSCbi3mhOBdHGHPTcWDOc4rKhrDZOI7DTA7mIiu5GJPkal7a8prQ1MHDUz37Lu/+vnbaj/4v++SvELnbWIh3fkfPftW+/7/jkAc67PzS3dx06HjT9qjzM3pAinOznex0U5ge3tJb7+mdv8nDM+cH/9o++vdqjG1yNyRkm3aomdPVWUKu5u6gg5220xwejifq0ePjlzYKJQy52yYXudsYFXMzY6fNxUEPjk5XcxIizFVuys2mPBc7XSVXczV3mcndsKVytVE05iK2MSoMkYsweUTMReZn5VVtzFfWSeQu8fA13v09vvE7fPQf+Mm/sS8+5PwSmcwYIrERys2Y0eS58Oa7+ubf1lvfsh//K/vpf+CLz9ijq20cFJuLyVVuNld9+I//aC52nhypB85hlC151WxYOqKQjQqzzVVlDobcbV6Yi5I3yMVJmItsHOVqCBPmqlgucpeZu7RwsKkoy3OzuTgcx2Gy+Uo4zZbjiCE3i21sKsNR7nJTGjM7p+OBcpXDxnaqbC5C5mcUb35db31bz37V1fnhn/HJX7oZdbiJMFMH7/6ePTzj/T+m2IFk5mKIHNaD7dT5uXrD
UGG2YYSHZ/bkXcfbv84bz+zjv9DH/9HOz9koNjexuUjY5qplhqkH6w3K4bSNTR1yOD2yR0QP7NF2Go5ytY3yQsjIRWxsKsw5dg6jqRjlLhdRbK4qL8zF5oUKM+Ruc1N5YdhmG0Lu5ioRYWM4yt2QcjMncxFC5KXNazavm5kXGqFmD8947w95+k0+/QEff88+/8DOL9hpQyRhXpoxysXcjVJ4812+8Qc8vMknf2WfvM/nH9j5hXKzTWUod6NNH/zPfzRXYVTENoalcByYq51zl8rVlo4szMUQYshzsbnLlhwUnZihDhvNS4XMhMKROYS5ms1zsdhU5CIrd8PhOA4bxlwUYWzkKsWK2OYqnOcpEUodvrKxISuVynBukhyuJhszV735nt7+Nd78hs7P7eO/tE/ft/MLc5ckydWMTvWgb/89Pv+J/eRf+0rJ3eZuEVsOU9mYU6Ls4RnPfpW3fkWyT3/AJ9/j8VNyMeZmm4owlFzEznkp9QbHE5XttMdH9ihXBx7N6WYHna6GijFD7ubqQGVmO2UqGxvmYjrIzGwhuYjkK/kF5itlG3MxV7koV5PkasbczMVmJik3c5W8akhlG6KYX2B+3rw027zQEjJz4uDNb/C139Cb7zr3yGc/5OPv2Zef2qZS2eZmXorK1TZMjbHizW/q7V/nydt8+TE//R6f/ZDzxCjLaxp9/I//aKRyVWwzFxtLsaLUmItc5So35ZS7UWxeSrK5SSY7D8VxZGaSDG2+Uq7maiprOFRe2CaZi6FJholCrirE6SKK2CgsN2UbsdxstAnbKIoOX9np2OjBRJ5LBxsTQgxPv6mv/zZvfp3Pf2Kfvs9nP7LHz10V25DC8qqZ9eDhN/5b+9E/55O/QuS5uRsjEZbO7JirbXrjGW//Jm99m8dP+fR9PvsRX35spiK2YW6G3CSKDZkTYTh0POF4UOnkPL+08wvtEYcZhlg0zByOwmzI3VxMuQnbMJWbzcZMLhrizISpqYghw3xliLk40SEvzEwYKpPkasjVzMWwuQlDqcxcNWYS5XSVZKMmL828bl6zMV9JGDttoySTnrxjz76tZ9/heGIff98+/DPtSxvlbkOYqzpcnZtMTnM1Gx1v8PQ9PfsOT79lj5/ah3/Gpz9QedVMJ/3of/j7c5OOiHNjJC+sw0pOV5WrRq4yrIO8YpibuTiYiyhnHDsYyXIRclVzl6tt7lJotiRX81yYm6HQYe4mOYSNbUQOpDIXuYgNmay52RiHw7CNcLibu8WRo2yem0TZ4e54qnd/n6ff0Sffs4//wr742M4vXZXnMq+ar4QzHp44/sZ/5/yP/yvnZ17YprwUeSFXQz3w9m/p67+pLz52fvQf+PwnnJ+zEUqF2WYoNhdzVcnBODcao3JVB8cDHTJ7fLTzS2EuNtskhLlayUUYM1d5LmxCsdKGmYuxzZDchIUpOnJ1bjRJvcGyPZpHuQpjVGa2qWyTdGQjbCjMJFezkYshF3Fg2AxF2DIUW2ry0swvsrkpzF3uxhBmwuYiHHrylp6+a2//pp68w0d/7vzo3/P4mcNFbMxVGsXMsNPFo47sPJGOzIOevM2z7+rrv8UXP+Unf2qf/9jNRjH66f/yD2YkpCObm9zNRQdHzvOUu9BxCNtUTjnKNsxVHbbThJjnctPh6LCTeZTUoXIuInM1udncFLmpwwvbbGOzXBzqgWao5MGQbLMNMY7joChX26kOhNmY2UnywkIuwgxHhzowL2zuwpEe3tI3/w4Pb/GDP7EvP7adjHlVxLwUKlfbbOnpr+hbf9f5n/43NTZzsVF+kULx7Ncc3/q7PH7Gj/6585P3tRE2yjaVimaboRK2mbuEbFND7mauHlTsZJOrmec2m9flK7nIK1IPMns8rakJ28lm2JiLuZjjOFRuInczNnpTD19XT+z82OMXP9XxyGaS19VhxqYON8UmP6PD1c4xZhaVOmynq1yUeW5sqZFfKnOzmZdyd25sjuMwbKMk5GYnZcvx
9Bv6lb/Lm+/Yj/8VH/4Z+8I63IyMXIyxpWY7bScOzFUdODgOvfO3ePf3+OSv7Mf/XF/+VGWmj/7R319lYmO5Oh6yMVOHYdI5X4mOQ2XnqfKquatsUzmXm+UuOiiHg4ONjSRh5lTDA2ObGUUhLw2xsXE8aDlzcUoIB6UjxrmTJQexKM+FJMzGeZ4MHZJioWwuhoiOgw25GpKr9Ybjvd/njbf40b/g/MzN5lUbQ/LCXJRys2H07h/w9BvOv/xnKjdhM+SFkB1v6K1v6b0/4HjDfvh/88n32FyVi9yMeS40L5SXxpDcDDGTu7lrGOU1GyK/2DYi89Kh44Gddp6uatpsJxtliyFOcxyHCrNzmLskZyfHoeOJHjnPz9WjxpZEboZF2KgoXzm9piNk58nmahg6UvlF5mK5qvnrjc2QvGqbmYfjwc2ymLG5ykUxF6eVnn5L7/2XPDy1D/4Nn3yf83Pbl9pJY2yjQ07nxqYyJJXzfFQ5Nz08c/zKH/Ls1/j4L/TTf8cXH+uH/+N/NSVR9jgblZvcnEKOuRhSVMhKYfPCXOQixeZmy10sig6OBx0PdNhmm2PjfDSnOtmDV81FudoZpsNFrnLY6DicLnbKVYwTHYfc5TDM1chdIY2dUxnmYhEVoWwIZSa5i2UmGXr6K3r3b9kH/5ZPv48wSl6au7xuhdjMxXJ897/m0x86P/hTyQsz21Q4HE++xtNv8uxX6Yl99OfOn/4nR7kaMkNydcTGvDDyug2Ru7lIB9sYctPcZJgJeSHDvDCvyt0I81wYG5Fps43N3WFDODwXO+WlbToOnHaODgonmxxsNjpyNQyVbThcVW6KuZirMDRsthlWKkWYl3KVSc1fb6622abys4bk6ujBxjrZiVS2uSq20zYdb+itb/PO7+rhqX3yffv0fT7/MecX7DTUgdnGRrnaJswcZe7qsDfe0Tu/o6fv2Wc/1kf/5B8sVK4a22xUyFdKy+tytSOk5mdVNheZyWGuDnbIrOzhieN4gw7KNp1fcD7ilDHmYjMXOyjmIkMxYw88ZJvKTWm52iYXc9ORHCbnRuzwXEgn26myotjMGHXoOGwUyrww5HA4x4bjcLz9Wzz7rv3gTzi/sE25K8IYDlfzywzr8PCb/719/5/ZZ++zDHnFk7f17Nf19D3baZ++z8d/oX1pw1CuMsxcbCrbkKt51SScfl6EeU35BfKVsPlFKgxzldnmbl6aqw2bTpRtjIrjcLVcTMVmhkNe2k4zRpGxUe6yuRvroDDmIrmaFyo7XcxiqEPhnMwLycJQ5HVlmxcybZhhG+YqEeeoJOtgyM02xUblKrNR6MADz77D27/B8ZTPf2KfvG+f/UCPn6vZZudclZuNuZqjbAjF5qqn3+Tt3/b/AmoE1wSugIfmAAAAAElFTkSuQmCC" | 87,388 | 87,388 | 0.961242 | 3,070 | 87,388 | 27.361889 | 0.933876 | 0.000048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153961 | 0.000023 | 87,388 | 1 | 87,388 | 87,388 | 0.807303 | 0 | 0 | 0 | 0 | 1 | 0.999897 | 0.999897 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
61605bae528032a21ad40a338659fe87a2b33b72 | 80 | py | Python | Task/Quine/Python/quine-10.py | LaudateCorpus1/RosettaCodeData | 9ad63ea473a958506c041077f1d810c0c7c8c18d | [
"Info-ZIP"
] | 5 | 2021-01-29T20:08:05.000Z | 2022-03-22T06:16:05.000Z | Task/Quine/Python/quine-10.py | seanwallawalla-forks/RosettaCodeData | 9ad63ea473a958506c041077f1d810c0c7c8c18d | [
"Info-ZIP"
] | null | null | null | Task/Quine/Python/quine-10.py | seanwallawalla-forks/RosettaCodeData | 9ad63ea473a958506c041077f1d810c0c7c8c18d | [
"Info-ZIP"
] | 1 | 2021-04-13T04:19:31.000Z | 2021-04-13T04:19:31.000Z | x = """x = {0}{1}{0}
print x.format(chr(34)*3,x)"""
print x.format(chr(34)*3,x)
| 20 | 30 | 0.5375 | 19 | 80 | 2.263158 | 0.421053 | 0.27907 | 0.55814 | 0.697674 | 0.883721 | 0.883721 | 0.883721 | 0 | 0 | 0 | 0 | 0.126761 | 0.1125 | 80 | 3 | 31 | 26.666667 | 0.478873 | 0 | 0 | 0 | 0 | 0 | 0.5125 | 0.2625 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
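The record above stores a Python 2 quine (bare `print` statement). As a minimal sketch, the same triple-quote/`chr(34)*3` trick ported to Python 3 looks like this — the block is deliberately comment-free, since any extra line would break the self-reproduction:

```python
x = """x = {0}{1}{0}
print(x.format(chr(34)*3, x))"""
print(x.format(chr(34)*3, x))
```

Running it prints its own three-line source exactly, with `chr(34)*3` reinserting the three double quotes.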
61664b56edcd3248e3a900afe8b932d74526268c | 468 | py | Python | figlets/FNE.py | LeverImmy/SmojSubmit | 7b18812e8b9726184880d0016fc0d19679e50a8a | [
"MIT"
] | 12 | 2018-08-13T14:47:39.000Z | 2022-03-06T13:13:08.000Z | figlets/FNE.py | LeverImmy/SmojSubmit | 7b18812e8b9726184880d0016fc0d19679e50a8a | [
"MIT"
] | 3 | 2019-08-19T16:22:30.000Z | 2020-09-14T21:38:01.000Z | figlets/FNE.py | LeverImmy/SmojSubmit | 7b18812e8b9726184880d0016fc0d19679e50a8a | [
"MIT"
] | 2 | 2019-04-02T02:31:20.000Z | 2020-05-18T04:21:39.000Z | figlet = (
r" _____ _ _ _ _ _____ " + '\n'
r"| ___(_) | ___ | \ | | __ _ _ __ ___ ___ | ____|_ __ _ __ ___ _ __ " + '\n'
r"| |_ | | |/ _ \ | \| |/ _` | '_ ` _ \ / _ \ | _| | '__| '__/ _ \| '__|" + '\n'
r"| _| | | | __/ | |\ | (_| | | | | | | __/ | |___| | | | | (_) | | " + '\n'
r"|_| |_|_|\___| |_| \_|\__,_|_| |_| |_|\___| |_____|_| |_| \___/|_| " + '\n'
r"" + '\n'
)
| 52 | 87 | 0.254274 | 13 | 468 | 1.384615 | 0.230769 | 0.666667 | 0.833333 | 1.111111 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0 | 0 | 0.448718 | 468 | 8 | 88 | 58.5 | 0.069767 | 0 | 0 | 0 | 0 | 0.5 | 0.805556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
61af30c61ca2a346541e5748d33b4b1110e6fa25 | 2,957 | py | Python | investment_report/migrations/0008_lastpage.py | uktrade/pir-api | 79747ceab042c42c287e2b7471f6dade70f68693 | [
"MIT"
] | 1 | 2021-02-02T19:08:55.000Z | 2021-02-02T19:08:55.000Z | investment_report/migrations/0008_lastpage.py | uktrade/invest-pir-api | be56efddf9dfdf81c8557441a9a54d9a4dd4bab1 | [
"MIT"
] | 21 | 2018-07-10T10:20:47.000Z | 2022-03-24T09:36:29.000Z | investment_report/migrations/0008_lastpage.py | uktrade/pir-api | 79747ceab042c42c287e2b7471f6dade70f68693 | [
"MIT"
] | 1 | 2021-02-04T11:28:37.000Z | 2021-02-04T11:28:37.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-05-10 15:28
from __future__ import unicode_literals
from django.db import migrations, models
import markdownx.models
class Migration(migrations.Migration):
dependencies = [
('investment_report', '0007_auto_20180427_1417'),
]
operations = [
migrations.CreateModel(
name='LastPage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('content', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor')),
                ('content_en', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_de', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_es', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_fr', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_pt', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_ar', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_ja', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
                ('content_zh_cn', markdownx.models.MarkdownxField(help_text='Markdown input. Please refer to https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet for instructions. Images may be dragged and dropped into the editor', null=True)),
],
options={
'verbose_name': '16 - Last Page',
'verbose_name_plural': '16 - Last Page',
},
),
]
| 82.138889 | 254 | 0.709503 | 385 | 2,957 | 5.363636 | 0.231169 | 0.072639 | 0.126392 | 0.143826 | 0.795642 | 0.795642 | 0.795642 | 0.795642 | 0.795642 | 0.795642 | 0 | 0.015625 | 0.177545 | 2,957 | 35 | 255 | 84.485714 | 0.83347 | 0.023334 | 0 | 0 | 1 | 0.321429 | 0.575043 | 0.007972 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.107143 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
61b1b4ca71920fb06e3fba781548a99c3de3484d | 3,078 | py | Python | n_fc/ocr_model.py | mengzhu0308/Tencent-Verification-Code-Recognition | 3afd20dc2ab754ee1f9746bd32d225b09241d694 | [
"Apache-2.0"
] | null | null | null | n_fc/ocr_model.py | mengzhu0308/Tencent-Verification-Code-Recognition | 3afd20dc2ab754ee1f9746bd32d225b09241d694 | [
"Apache-2.0"
] | 1 | 2021-06-25T20:32:49.000Z | 2021-06-27T13:24:09.000Z | n_fc/ocr_model.py | mengzhu0308/Tencent-Verification-Code-Recognition | 3afd20dc2ab754ee1f9746bd32d225b09241d694 | [
"Apache-2.0"
] | null | null | null | #! -*- coding:utf-8 -*-
'''
@Author: ZM
@Date and Time: 2021/1/4 17:57
@File: ocr_model.py
'''
from keras.layers import *
def OCR_Model(x, num_classes=26, seq_len=4):
x = Conv2D(16, 3, use_bias=False, padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(32, 3, use_bias=False, padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
residual = Conv2D(64, 1, strides=2, use_bias=False)(x)
residual = BatchNormalization()(residual)
x = SeparableConv2D(64, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(64, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=3, strides=2, padding='same')(x)
x = add([x, residual])
residual = Conv2D(128, 1, strides=2, use_bias=False)(x)
residual = BatchNormalization()(residual)
x = Activation('relu')(x)
x = SeparableConv2D(128, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(128, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=3, strides=2, padding='same')(x)
x = add([x, residual])
residual = Conv2D(364, 1, strides=2, use_bias=False)(x)
residual = BatchNormalization()(residual)
x = Activation('relu')(x)
x = SeparableConv2D(364, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(364, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=3, strides=2, padding='same')(x)
x = add([x, residual])
for i in range(8):
residual = x
x = Activation('relu')(x)
x = SeparableConv2D(364, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(364, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(364, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = add([residual, x])
residual = Conv2D(512, 1, strides=2, use_bias=False)(x)
residual = BatchNormalization()(residual)
x = SeparableConv2D(364, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(512, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=3, strides=2, padding='same')(x)
x = add([x, residual])
x = SeparableConv2D(768, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SeparableConv2D(1024, 3, padding='same', use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = GlobalAveragePooling2D()(x)
xs = [Dense(num_classes)(x) for _ in range(seq_len)]
x = concatenate(xs)
return x
| 34.977273 | 70 | 0.619233 | 415 | 3,078 | 4.520482 | 0.151807 | 0.050107 | 0.121535 | 0.117804 | 0.841684 | 0.841684 | 0.841684 | 0.841684 | 0.841684 | 0.841151 | 0 | 0.050286 | 0.205328 | 3,078 | 87 | 71 | 35.37931 | 0.71668 | 0.032814 | 0 | 0.735294 | 0 | 0 | 0.043112 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014706 | false | 0 | 0.014706 | 0 | 0.044118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
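The model in the record above ends with `seq_len` independent `Dense(num_classes)` heads whose logits are concatenated into one flat vector (4 × 26 values here). A library-free sketch of decoding such a prediction back into text; the function name and the a–z alphabet are illustrative assumptions, not part of the original code:

```python
def decode_prediction(logits, num_classes=26, seq_len=4):
    """Split a flat seq_len*num_classes score vector into per-position
    argmaxes and map each winning index to a lowercase letter."""
    assert len(logits) == num_classes * seq_len
    chars = []
    for i in range(seq_len):
        # Slice out this position's num_classes scores and take the argmax.
        segment = logits[i * num_classes:(i + 1) * num_classes]
        best = max(range(num_classes), key=lambda k: segment[k])
        chars.append(chr(ord('a') + best))
    return ''.join(chars)
```

Each captcha position is decoded independently, which is exactly what the per-head `Dense(num_classes)` design implies.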
f60154904dea48d62ec5de7d219fde9d9ee36e12 | 18,582 | py | Python | sdk/python/pulumi_akamai/app_sec_waf_mode.py | pulumi/pulumi-akamai | 85f933ccf2f61738b3074a13fa718132280f8364 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2021-01-21T15:22:12.000Z | 2021-08-25T14:15:29.000Z | sdk/python/pulumi_akamai/app_sec_waf_mode.py | pulumi/pulumi-akamai | 85f933ccf2f61738b3074a13fa718132280f8364 | [
"ECL-2.0",
"Apache-2.0"
] | 59 | 2020-08-13T14:39:36.000Z | 2022-03-31T15:19:48.000Z | sdk/python/pulumi_akamai/app_sec_waf_mode.py | pulumi/pulumi-akamai | 85f933ccf2f61738b3074a13fa718132280f8364 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['AppSecWafModeArgs', 'AppSecWafMode']
@pulumi.input_type
class AppSecWafModeArgs:
def __init__(__self__, *,
config_id: pulumi.Input[int],
mode: pulumi.Input[str],
security_policy_id: pulumi.Input[str]):
"""
The set of arguments for constructing a AppSecWafMode resource.
:param pulumi.Input[int] config_id: . Unique identifier of the security configuration associated with the WAF mode settings being modified.
:param pulumi.Input[str] mode: . Specifies how Kona Rule Set rules are upgraded. Allowed values are:
:param pulumi.Input[str] security_policy_id: . Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
pulumi.set(__self__, "config_id", config_id)
pulumi.set(__self__, "mode", mode)
pulumi.set(__self__, "security_policy_id", security_policy_id)
@property
@pulumi.getter(name="configId")
def config_id(self) -> pulumi.Input[int]:
"""
. Unique identifier of the security configuration associated with the WAF mode settings being modified.
"""
return pulumi.get(self, "config_id")
@config_id.setter
def config_id(self, value: pulumi.Input[int]):
pulumi.set(self, "config_id", value)
@property
@pulumi.getter
def mode(self) -> pulumi.Input[str]:
"""
. Specifies how Kona Rule Set rules are upgraded. Allowed values are:
"""
return pulumi.get(self, "mode")
@mode.setter
def mode(self, value: pulumi.Input[str]):
pulumi.set(self, "mode", value)
@property
@pulumi.getter(name="securityPolicyId")
def security_policy_id(self) -> pulumi.Input[str]:
"""
. Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
return pulumi.get(self, "security_policy_id")
@security_policy_id.setter
def security_policy_id(self, value: pulumi.Input[str]):
pulumi.set(self, "security_policy_id", value)
@pulumi.input_type
class _AppSecWafModeState:
def __init__(__self__, *,
config_id: Optional[pulumi.Input[int]] = None,
current_ruleset: Optional[pulumi.Input[str]] = None,
eval_expiration_date: Optional[pulumi.Input[str]] = None,
eval_ruleset: Optional[pulumi.Input[str]] = None,
eval_status: Optional[pulumi.Input[str]] = None,
mode: Optional[pulumi.Input[str]] = None,
output_text: Optional[pulumi.Input[str]] = None,
security_policy_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering AppSecWafMode resources.
:param pulumi.Input[int] config_id: . Unique identifier of the security configuration associated with the WAF mode settings being modified.
:param pulumi.Input[str] mode: . Specifies how Kona Rule Set rules are upgraded. Allowed values are:
:param pulumi.Input[str] output_text: Text Export representation
:param pulumi.Input[str] security_policy_id: . Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
if config_id is not None:
pulumi.set(__self__, "config_id", config_id)
if current_ruleset is not None:
pulumi.set(__self__, "current_ruleset", current_ruleset)
if eval_expiration_date is not None:
pulumi.set(__self__, "eval_expiration_date", eval_expiration_date)
if eval_ruleset is not None:
pulumi.set(__self__, "eval_ruleset", eval_ruleset)
if eval_status is not None:
pulumi.set(__self__, "eval_status", eval_status)
if mode is not None:
pulumi.set(__self__, "mode", mode)
if output_text is not None:
pulumi.set(__self__, "output_text", output_text)
if security_policy_id is not None:
pulumi.set(__self__, "security_policy_id", security_policy_id)
@property
@pulumi.getter(name="configId")
def config_id(self) -> Optional[pulumi.Input[int]]:
"""
. Unique identifier of the security configuration associated with the WAF mode settings being modified.
"""
return pulumi.get(self, "config_id")
@config_id.setter
def config_id(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "config_id", value)
@property
@pulumi.getter(name="currentRuleset")
def current_ruleset(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "current_ruleset")
@current_ruleset.setter
def current_ruleset(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "current_ruleset", value)
@property
@pulumi.getter(name="evalExpirationDate")
def eval_expiration_date(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "eval_expiration_date")
@eval_expiration_date.setter
def eval_expiration_date(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eval_expiration_date", value)
@property
@pulumi.getter(name="evalRuleset")
def eval_ruleset(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "eval_ruleset")
@eval_ruleset.setter
def eval_ruleset(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eval_ruleset", value)
@property
@pulumi.getter(name="evalStatus")
def eval_status(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "eval_status")
@eval_status.setter
def eval_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eval_status", value)
@property
@pulumi.getter
def mode(self) -> Optional[pulumi.Input[str]]:
"""
. Specifies how Kona Rule Set rules are upgraded. Allowed values are:
"""
return pulumi.get(self, "mode")
@mode.setter
def mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mode", value)
@property
@pulumi.getter(name="outputText")
def output_text(self) -> Optional[pulumi.Input[str]]:
"""
Text Export representation
"""
return pulumi.get(self, "output_text")
@output_text.setter
def output_text(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "output_text", value)
@property
@pulumi.getter(name="securityPolicyId")
def security_policy_id(self) -> Optional[pulumi.Input[str]]:
"""
Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
return pulumi.get(self, "security_policy_id")
@security_policy_id.setter
def security_policy_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "security_policy_id", value)
class AppSecWafMode(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
config_id: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
security_policy_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
**Scopes**: Security policy
Modifies the way your Kona Rule Set rules are updated.
Use **KRS** mode to update the rule sets manually or **AAG** to have those rule sets automatically updated.
**Related API Endpoint**: [/appsec/v1/configs/{configId}/versions/{versionNumber}/security-policies/{policyId}/mode](https://developer.akamai.com/api/cloud_security/application_security/v1.html#putmode)
## Example Usage
Basic usage:
```python
import pulumi
import pulumi_akamai as akamai
configuration = akamai.get_app_sec_configuration(name="Documentation")
waf_mode = akamai.AppSecWafMode("wafMode",
config_id=configuration.config_id,
security_policy_id="gms1_134637",
mode="KRS")
pulumi.export("wafModeMode", waf_mode.mode)
pulumi.export("wafModeCurrentRuleset", waf_mode.current_ruleset)
pulumi.export("wafModeEvalStatus", waf_mode.eval_status)
pulumi.export("wafModeEvalRuleset", waf_mode.eval_ruleset)
pulumi.export("wafModeEvalExpirationDate", waf_mode.eval_expiration_date)
```
## Output Options
The following options can be used to determine the information returned, and how that returned information is formatted:
- `current_ruleset` – Versioning information for the current Kona Rule Set.
- `eval_ruleset` – Versioning information for the Kona Rule Set being evaluated (if applicable).
- `eval_status` – Returns **enabled** if an evaluation is currently in progress; otherwise returns **disabled**.
- `eval_expiration_date` – Date on which the evaluation period ends (if applicable).
- `output_text` – Tabular report showing the current rule set, WAF mode, and evaluation status.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[int] config_id: Unique identifier of the security configuration associated with the WAF mode settings being modified.
:param pulumi.Input[str] mode: Specifies how Kona Rule Set rules are upgraded. Allowed values are **KRS** (update the rule sets manually) and **AAG** (have the rule sets updated automatically).
:param pulumi.Input[str] security_policy_id: Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: AppSecWafModeArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
**Scopes**: Security policy
Modifies the way your Kona Rule Set rules are updated.
Use **KRS** mode to update the rule sets manually or **AAG** to have those rule sets automatically updated.
**Related API Endpoint**: [/appsec/v1/configs/{configId}/versions/{versionNumber}/security-policies/{policyId}/mode](https://developer.akamai.com/api/cloud_security/application_security/v1.html#putmode)
## Example Usage
Basic usage:
```python
import pulumi
import pulumi_akamai as akamai
configuration = akamai.get_app_sec_configuration(name="Documentation")
waf_mode = akamai.AppSecWafMode("wafMode",
config_id=configuration.config_id,
security_policy_id="gms1_134637",
mode="KRS")
pulumi.export("wafModeMode", waf_mode.mode)
pulumi.export("wafModeCurrentRuleset", waf_mode.current_ruleset)
pulumi.export("wafModeEvalStatus", waf_mode.eval_status)
pulumi.export("wafModeEvalRuleset", waf_mode.eval_ruleset)
pulumi.export("wafModeEvalExpirationDate", waf_mode.eval_expiration_date)
```
## Output Options
The following options can be used to determine the information returned, and how that returned information is formatted:
- `current_ruleset` – Versioning information for the current Kona Rule Set.
- `eval_ruleset` – Versioning information for the Kona Rule Set being evaluated (if applicable).
- `eval_status` – Returns **enabled** if an evaluation is currently in progress; otherwise returns **disabled**.
- `eval_expiration_date` – Date on which the evaluation period ends (if applicable).
- `output_text` – Tabular report showing the current rule set, WAF mode, and evaluation status.
:param str resource_name: The name of the resource.
:param AppSecWafModeArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(AppSecWafModeArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
config_id: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
security_policy_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = AppSecWafModeArgs.__new__(AppSecWafModeArgs)
if config_id is None and not opts.urn:
raise TypeError("Missing required property 'config_id'")
__props__.__dict__["config_id"] = config_id
if mode is None and not opts.urn:
raise TypeError("Missing required property 'mode'")
__props__.__dict__["mode"] = mode
if security_policy_id is None and not opts.urn:
raise TypeError("Missing required property 'security_policy_id'")
__props__.__dict__["security_policy_id"] = security_policy_id
__props__.__dict__["current_ruleset"] = None
__props__.__dict__["eval_expiration_date"] = None
__props__.__dict__["eval_ruleset"] = None
__props__.__dict__["eval_status"] = None
__props__.__dict__["output_text"] = None
super(AppSecWafMode, __self__).__init__(
'akamai:index/appSecWafMode:AppSecWafMode',
resource_name,
__props__,
opts)
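The `_internal_init` method above enforces that `config_id`, `mode` and `security_policy_id` are present unless the resource is being adopted through an existing URN. A minimal, self-contained sketch of that guard (the helper name is illustrative, not part of the Pulumi SDK):

```python
def check_required_props(props, required, urn=None):
    """Raise TypeError for any missing required property, unless the
    resource is adopted via an existing URN (illustrative sketch)."""
    for name in required:
        if props.get(name) is None and not urn:
            raise TypeError("Missing required property '%s'" % name)
    return props
```

When `urn` is supplied the checks are skipped, matching the `not opts.urn` branches above.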
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
config_id: Optional[pulumi.Input[int]] = None,
current_ruleset: Optional[pulumi.Input[str]] = None,
eval_expiration_date: Optional[pulumi.Input[str]] = None,
eval_ruleset: Optional[pulumi.Input[str]] = None,
eval_status: Optional[pulumi.Input[str]] = None,
mode: Optional[pulumi.Input[str]] = None,
output_text: Optional[pulumi.Input[str]] = None,
security_policy_id: Optional[pulumi.Input[str]] = None) -> 'AppSecWafMode':
"""
Get an existing AppSecWafMode resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to look up.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[int] config_id: Unique identifier of the security configuration associated with the WAF mode settings being modified.
:param pulumi.Input[str] mode: Specifies how Kona Rule Set rules are upgraded. Allowed values are **KRS** (update the rule sets manually) and **AAG** (have the rule sets updated automatically).
:param pulumi.Input[str] output_text: Text Export representation
:param pulumi.Input[str] security_policy_id: Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _AppSecWafModeState.__new__(_AppSecWafModeState)
__props__.__dict__["config_id"] = config_id
__props__.__dict__["current_ruleset"] = current_ruleset
__props__.__dict__["eval_expiration_date"] = eval_expiration_date
__props__.__dict__["eval_ruleset"] = eval_ruleset
__props__.__dict__["eval_status"] = eval_status
__props__.__dict__["mode"] = mode
__props__.__dict__["output_text"] = output_text
__props__.__dict__["security_policy_id"] = security_policy_id
return AppSecWafMode(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="configId")
def config_id(self) -> pulumi.Output[int]:
"""
Unique identifier of the security configuration associated with the WAF mode settings being modified.
"""
return pulumi.get(self, "config_id")
@property
@pulumi.getter(name="currentRuleset")
def current_ruleset(self) -> pulumi.Output[str]:
return pulumi.get(self, "current_ruleset")
@property
@pulumi.getter(name="evalExpirationDate")
def eval_expiration_date(self) -> pulumi.Output[str]:
return pulumi.get(self, "eval_expiration_date")
@property
@pulumi.getter(name="evalRuleset")
def eval_ruleset(self) -> pulumi.Output[str]:
return pulumi.get(self, "eval_ruleset")
@property
@pulumi.getter(name="evalStatus")
def eval_status(self) -> pulumi.Output[str]:
return pulumi.get(self, "eval_status")
@property
@pulumi.getter
def mode(self) -> pulumi.Output[str]:
"""
Specifies how Kona Rule Set rules are upgraded. Allowed values are **KRS** (update the rule sets manually) and **AAG** (have the rule sets updated automatically).
"""
return pulumi.get(self, "mode")
@property
@pulumi.getter(name="outputText")
def output_text(self) -> pulumi.Output[str]:
"""
Text Export representation
"""
return pulumi.get(self, "output_text")
@property
@pulumi.getter(name="securityPolicyId")
def security_policy_id(self) -> pulumi.Output[str]:
"""
Unique identifier of the security policy associated with the WAF mode settings being modified.
"""
return pulumi.get(self, "security_policy_id")
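Each `@pulumi.getter(name=...)` decorator above maps a snake_case Python attribute to the provider's camelCase key (`config_id` to `configId`, and so on). The mapping can be sketched as a small standalone helper (illustrative, not part of the SDK):

```python
def snake_to_camel(name):
    """Convert a snake_case property name to the camelCase form used in
    the @pulumi.getter(name=...) decorators above (illustrative helper)."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)
```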
# File: swagger_client/apis/messaging_api.py | repo: bmostafa87/messagemedia-rest-api-python-sdk | license: Apache-2.0
"""
MessageMedia REST API
Australia’s Leading Messaging Solutions for Business and Enterprise.
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class MessagingApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def messages_message_id_get(self, message_id, **kwargs):
"""
Retrieve the status and details of a submitted message.
Get the status and details of a previously submitted message
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.messages_message_id_get(message_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str message_id: Unique identifier representing message that has been submitted (required)
:param str account: The account to use for this request. This account will be used for the request instead of the account assigned to the API key used to sign the request, allowing one API key to be used to perform requests on behalf of other accounts.
:return: Message
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.messages_message_id_get_with_http_info(message_id, **kwargs)
else:
(data) = self.messages_message_id_get_with_http_info(message_id, **kwargs)
return data
def messages_message_id_get_with_http_info(self, message_id, **kwargs):
"""
Retrieve the status and details of a submitted message.
Get the status and details of a previously submitted message
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.messages_message_id_get_with_http_info(message_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str message_id: Unique identifier representing message that has been submitted (required)
:param str account: The account to use for this request. This account will be used for the request instead of the account assigned to the API key used to sign the request, allowing one API key to be used to perform requests on behalf of other accounts.
:return: Message
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['message_id', 'account']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method messages_message_id_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'message_id' is set
if ('message_id' not in params) or (params['message_id'] is None):
raise ValueError("Missing the required parameter `message_id` when calling `messages_message_id_get`")
resource_path = '/messages/{messageId}'.replace('{format}', 'json')
path_params = {}
if 'message_id' in params:
path_params['messageId'] = params['message_id']
query_params = {}
header_params = {}
if 'account' in params:
header_params['Account'] = params['account']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['hmac']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Message',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
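The docstrings above describe the client's calling convention: synchronous by default, asynchronous when a `callback` is supplied, in which case the request thread is returned. That convention can be sketched independently of the generated client:

```python
import threading

def dispatch(fn, *args, callback=None):
    """Run fn synchronously and return its result, or, when a callback is
    given, run it on a worker thread and return the thread (a sketch of
    the convention described above, not the generated client code)."""
    if callback is None:
        return fn(*args)
    thread = threading.Thread(target=lambda: callback(fn(*args)))
    thread.start()
    return thread
```

Joining the returned thread waits for the callback to fire, mirroring the "returns the request thread" behaviour documented above.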
def messages_message_id_put(self, message_id, status, **kwargs):
"""
Update the status of a submitted message.
Update the status of a scheduled message to cancelled to prevent the message from being sent
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.messages_message_id_put(message_id, status, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str message_id: Unique identifier representing message to be updated. (required)
:param Status status: New status for the message. (required)
:param str account: The account to use for this request. This account will be used for the request instead of the account assigned to the API key used to sign the request, allowing one API key to be used to perform requests on behalf of other accounts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.messages_message_id_put_with_http_info(message_id, status, **kwargs)
else:
(data) = self.messages_message_id_put_with_http_info(message_id, status, **kwargs)
return data
def messages_message_id_put_with_http_info(self, message_id, status, **kwargs):
"""
Update the status of a submitted message.
Update the status of a scheduled message to cancelled to prevent the message from being sent
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.messages_message_id_put_with_http_info(message_id, status, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str message_id: Unique identifier representing message to be updated. (required)
:param Status status: New status for the message. (required)
:param str account: The account to use for this request. This account will be used for the request instead of the account assigned to the API key used to sign the request, allowing one API key to be used to perform requests on behalf of other accounts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['message_id', 'status', 'account']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method messages_message_id_put" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'message_id' is set
if ('message_id' not in params) or (params['message_id'] is None):
raise ValueError("Missing the required parameter `message_id` when calling `messages_message_id_put`")
# verify the required parameter 'status' is set
if ('status' not in params) or (params['status'] is None):
raise ValueError("Missing the required parameter `status` when calling `messages_message_id_put`")
resource_path = '/messages/{messageId}'.replace('{format}', 'json')
path_params = {}
if 'message_id' in params:
path_params['messageId'] = params['message_id']
query_params = {}
header_params = {}
if 'account' in params:
header_params['Account'] = params['account']
form_params = []
local_var_files = {}
body_params = None
if 'status' in params:
body_params = params['status']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['hmac']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
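Each `*_with_http_info` method begins by rejecting unexpected keyword arguments with the `all_params` loop shown above. The same check as a compact standalone function (names are illustrative):

```python
def validate_kwargs(kwargs, allowed, method_name):
    """Reject unexpected keyword arguments, mirroring the parameter check
    performed by the generated *_with_http_info methods (standalone sketch)."""
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s' to method %s"
                % (key, method_name))
    return dict(kwargs)
```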
def messages_post(self, messages, **kwargs):
"""
Send one or more SMS or text to voice messages.
Submit one or more (up to 100 per request) SMS or text to voice messages to be sent to the destination address.
- A callback URL can be included with each message, to which MO and DR notifications will be pushed via a HTTP POST request.
- The content of the message can be a unicode string, up to 5000 characters long.
- A delivery report can be requested with the message, which will be pushed to a HTTP endpoint if specified, or is available via the Check Reports endpoint.
- The destination number should be specified in E.164 international format. For information on E.164, please refer to http://en.wikipedia.org/wiki/E.164
- The format specifies which format the message will be sent as, SMS or VOICE.
- If specified, the source number included in the request will be shown as the source number for the message; this feature is not enabled by default, please contact MessageMedia for more information.
- If a source number is specified, the type of source number may also be specified.
- The message will be scheduled to be delivered in the future if the scheduled parameter is specified.
- A message expiry timestamp can be provided; if the message is not delivered by this time, it will be discarded.
- Metadata can be included with the message. Up to 5 key value pair strings can be included with each message. This metadata will be available in delivery reports and replies.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.messages_post(messages, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Messages messages: A list of messages to be sent. (required)
:param str account: The account to use for this request. This account will be used for the request instead of the account assigned to the API key used to sign the request, allowing one API key to be used to perform requests on behalf of other accounts.
:return: MessageList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.messages_post_with_http_info(messages, **kwargs)
else:
(data) = self.messages_post_with_http_info(messages, **kwargs)
return data
def messages_post_with_http_info(self, messages, **kwargs):
"""
Send one or more SMS or text to voice messages.
Submit one or more (up to 100 per request) SMS or text to voice messages to be sent to the destination address.
- A callback URL can be included with each message, to which MO and DR notifications will be pushed via a HTTP POST request.
- The content of the message can be a unicode string, up to 5000 characters long.
- A delivery report can be requested with the message, which will be pushed to a HTTP endpoint if specified, or is available via the Check Reports endpoint.
- The destination number should be specified in E.164 international format. For information on E.164, please refer to http://en.wikipedia.org/wiki/E.164
- The format specifies which format the message will be sent as, SMS or VOICE.
- If specified, the source number included in the request will be shown as the source number for the message; this feature is not enabled by default, please contact MessageMedia for more information.
- If a source number is specified, the type of source number may also be specified.
- The message will be scheduled to be delivered in the future if the scheduled parameter is specified.
- A message expiry timestamp can be provided; if the message is not delivered by this time, it will be discarded.
- Metadata can be included with the message. Up to 5 key value pair strings can be included with each message. This metadata will be available in delivery reports and replies.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.messages_post_with_http_info(messages, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Messages messages: A list of messages to be sent. (required)
:param str account: The account to use for this request. This account will be used for the request instead of the account assigned to the API key used to sign the request, allowing one API key to be used to perform requests on behalf of other accounts.
:return: MessageList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['messages', 'account']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method messages_post" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'messages' is set
if ('messages' not in params) or (params['messages'] is None):
raise ValueError("Missing the required parameter `messages` when calling `messages_post`")
resource_path = '/messages'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
if 'account' in params:
header_params['Account'] = params['account']
form_params = []
local_var_files = {}
body_params = None
if 'messages' in params:
body_params = params['messages']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['hmac']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MessageList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
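`messages_post` requires destination numbers in E.164 international format. A quick client-side sanity check can be written with a regular expression; the pattern below follows the E.164 shape (a leading `+`, a country code starting with 1-9, at most 15 digits in total) and is an assumption of this sketch, not part of the generated client:

```python
import re

_E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def looks_like_e164(number):
    """Return True when the number matches the +[country][subscriber]
    shape prescribed by E.164 (illustrative check, not part of this client)."""
    return bool(_E164.match(number))
```

Validating numbers before calling `messages_post` avoids submitting messages the gateway would reject.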
# File: signal_block/three/tests/test_block.py | repo: spectrum-dev/django-block-monolith | license: MIT
from blocks.event import event_ingestor
class TestAndRun(TestCase):
def setUp(self):
self.payload = {"blockType": "SIGNAL_BLOCK", "blockId": 3}
def test_simple_two_param_and(self):
payload = {
**self.payload,
"inputs": {},
"outputs": {
"SIGNAL_BLOCK-1-1": [
{"timestamp": "01/02/2020", "order": "BUY"},
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/31/2020", "order": "SELL"},
],
"SIGNAL_BLOCK-1-2": [
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/31/2020", "order": "SELL"},
],
},
}
response = event_ingestor(payload)
self.assertEqual(
response,
[
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/31/2020", "order": "SELL"},
],
)
def test_simple_three_param_and(self):
payload = {
**self.payload,
"inputs": {},
"outputs": {
"SIGNAL_BLOCK-1-1": [
{"timestamp": "01/02/2020", "order": "BUY"},
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/21/2020", "order": "BUY"},
],
"SIGNAL_BLOCK-1-2": [
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/14/2020", "order": "BUY"},
],
"SIGNAL_BLOCK-1-3": [{"timestamp": "01/07/2020", "order": "BUY"}],
},
}
response = event_ingestor(payload)
self.assertEqual(response, [{"timestamp": "01/07/2020", "order": "BUY"}])
def test_one_input_empty(self):
payload = {
**self.payload,
"inputs": {},
"outputs": {
"SIGNAL_BLOCK-1-1": [
{"timestamp": "01/02/2020", "order": "BUY"},
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/21/2020", "order": "BUY"},
],
"SIGNAL_BLOCK-1-2": [
{"timestamp": "01/07/2020", "order": "BUY"},
{"timestamp": "01/14/2020", "order": "BUY"},
],
"SIGNAL_BLOCK-1-3": [],
},
}
response = event_ingestor(payload)
self.assertEqual(response, [])
# TODO: Oscar
def test_buy_sell_same_timestamp_does_not_trigger_intersect(self):
# payload = {
# **self.payload,
# "input": {},
# "output": {
# "SIGNAL_BLOCK-1-1": [
# {"timestamp": "01/02/2020", "order": "BUY"},
# {"timestamp": "01/07/2020", "order": "BUY"},
# {"timestamp": "01/21/2020", "order": "BUY"},
# ],
# "SIGNAL_BLOCK-1-2": [
# {"timestamp": "01/07/2020", "order": "SELL"},
# {"timestamp": "01/14/2020", "order": "BUY"},
# ],
# "SIGNAL_BLOCK-1-3": [{"timestamp": "01/07/2020", "order": "BUY"}],
# },
# }
# response = event_ingestor(payload)
# self.assertEqual(response, [])
pass
def test_multiple_timestamp_trigger_multiple_intersect(self):
payload = {
**self.payload,
"inputs": {},
"outputs": {
"SIGNAL_BLOCK-1-1": [
{"timestamp": "01/02/2020", "order": "BUY"},
{"timestamp": "01/07/2020", "order": "SELL"},
{"timestamp": "01/21/2020", "order": "BUY"},
{"timestamp": "01/31/2020", "order": "BUY"},
],
"SIGNAL_BLOCK-1-2": [
{"timestamp": "01/07/2020", "order": "SELL"},
{"timestamp": "01/14/2020", "order": "BUY"},
{"timestamp": "01/31/2020", "order": "BUY"},
],
"SIGNAL_BLOCK-1-3": [
{"timestamp": "01/02/2020", "order": "BUY"},
{"timestamp": "01/07/2020", "order": "SELL"},
{"timestamp": "01/23/2020", "order": "BUY"},
{"timestamp": "01/31/2020", "order": "BUY"},
],
},
}
response = event_ingestor(payload)
self.assertEqual(
response,
[
{"timestamp": "01/07/2020", "order": "SELL"},
{"timestamp": "01/31/2020", "order": "BUY"},
],
)
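The AND block exercised by these tests keeps only the events present in every input list, matched on the (timestamp, order) pair, which is why a BUY and a SELL at the same timestamp must not intersect. A reference sketch consistent with the expectations above (the helper is illustrative, not the block's actual implementation):

```python
def and_signals(*signal_lists):
    """Keep only events whose (timestamp, order) pair appears in every
    input list, preserving the order of the first list (reference sketch)."""
    if not signal_lists:
        return []
    common = set.intersection(
        *({(s["timestamp"], s["order"]) for s in lst} for lst in signal_lists))
    return [s for s in signal_lists[0]
            if (s["timestamp"], s["order"]) in common]
```

An empty input list empties the intersection, matching `test_one_input_empty`.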
f648cdc1007e41163a534cbfaf231ad7ec1c1e9f | 22,870 | py | Python | tests/test_database/test_fluxes.py | ajstewart/tkp | 2aab1d021d10e3d1f3c4c8a836aea96ac6ae413f | [
"BSD-2-Clause"
] | 9 | 2015-04-30T22:10:14.000Z | 2020-06-09T01:24:20.000Z | tests/test_database/test_fluxes.py | ajstewart/tkp | 2aab1d021d10e3d1f3c4c8a836aea96ac6ae413f | [
"BSD-2-Clause"
] | 218 | 2015-01-08T11:10:57.000Z | 2021-11-25T05:52:42.000Z | tests/test_database/test_fluxes.py | ajstewart/tkp | 2aab1d021d10e3d1f3c4c8a836aea96ac6ae413f | [
"BSD-2-Clause"
] | 14 | 2015-03-11T11:21:58.000Z | 2020-06-16T09:15:57.000Z | import unittest
import tkp.db
import tkp.db.general as dbgen
from tkp.db.generic import get_db_rows_as_dicts, columns_from_table
from tkp.db.associations import associate_extracted_sources
from tkp.db.general import insert_extracted_sources
from tkp.testutil import db_subs
from tkp.testutil import db_queries
from tkp.testutil.decorators import requires_database
class TestOne2OneFlux(unittest.TestCase):
"""
These tests will check the statistical fluxes of 1-to-1 source associations
"""
@requires_database()
def setUp(self):
self.database = tkp.db.Database()
def tearDown(self):
"""remove all stuff after the test has been run"""
self.database.close()
def test_one2oneflux(self):
dataset = tkp.db.DataSet(database=self.database, data={'description': 'flux test set: 1-1'})
n_images = 3
im_params = db_subs.generate_timespaced_dbimages_data(n_images)
src_list = []
src = db_subs.example_extractedsource_tuple()
src0 = src._replace(flux=2.0)
src_list.append(src0)
src1 = src._replace(flux=2.5)
src_list.append(src1)
src2 = src._replace(flux=2.4)
src_list.append(src2)
for idx, im in enumerate(im_params):
image = tkp.db.Image(database=self.database, dataset=dataset, data=im)
insert_extracted_sources(image._id, [src_list[idx]])
associate_extracted_sources(image.id, deRuiter_r=3.717)
query = """\
SELECT rf.avg_f_int
FROM runningcatalog r
,runningcatalog_flux rf
WHERE r.dataset = %(dataset)s
AND r.id = rf.runcat
"""
self.database.cursor.execute(query, {'dataset': dataset.id})
result = list(zip(*self.database.cursor.fetchall()))  # zip() is lazy on Python 3
avg_f_int = result[0]
self.assertEqual(len(avg_f_int), 1)
py_metrics = db_subs.lightcurve_metrics(src_list)
self.assertAlmostEqual(avg_f_int[0], py_metrics[-1]['avg_f_int'])
runcat_id = columns_from_table('runningcatalog',
where={'dataset':dataset.id})
self.assertEqual(len(runcat_id),1)
runcat_id = runcat_id[0]['id']
# Check evolution of variability indices
db_metrics = db_queries.get_assoc_entries(self.database,
runcat_id)
self.assertEqual(len(db_metrics), n_images)
# Compare the python- and db-calculated values
for i in range(len(db_metrics)):
for key in ('v_int','eta_int'):
self.assertAlmostEqual(db_metrics[i][key], py_metrics[i][key])
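# For reference, the v_int / eta_int indices compared above can be computed in
# plain Python roughly as below. This is a sketch following the usual TraP-style
# definitions (std/mean for v, reduced weighted chi-square for eta), not tkp's
# actual lightcurve_metrics code.

```python
import math

def variability_indices(fluxes, flux_errs):
    # v: sample standard deviation of the fluxes over their mean.
    n = len(fluxes)
    mean = sum(fluxes) / n
    var = sum((f - mean) ** 2 for f in fluxes) / (n - 1)
    v = math.sqrt(var) / mean
    # eta: reduced chi-square of the fluxes about their weighted mean,
    # with inverse-variance weights.
    weights = [1.0 / err ** 2 for err in flux_errs]
    wmean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    eta = sum(w * (f - wmean) ** 2
              for w, f in zip(weights, fluxes)) / (n - 1)
    return v, eta
```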
class TestOne2ManyFlux(unittest.TestCase):
"""
These tests will check the 1-to-many source associations, i.e. two extractedsources
share a single counterpart in the runningcatalog
"""
@requires_database()
def setUp(self):
self.database = tkp.db.Database()
def tearDown(self):
"""remove all stuff after the test has been run"""
self.database.close()
def test_one2manyflux(self):
dataset = tkp.db.DataSet(database=self.database,
data={'description': 'flux test set: 1-n'})
n_images = 2
im_params = db_subs.generate_timespaced_dbimages_data(n_images)
central_ra, central_dec = 123.1235, 10.55
position_offset_deg = 100. / 3600  # 100 arcsec, approx 0.03 deg
# image 1
image = tkp.db.Image(database=self.database, dataset=dataset, data=im_params[0])
imageid1 = image.id
img1_srclist = []
# 1 source
img1_srclist.append(db_subs.example_extractedsource_tuple(central_ra, central_dec,
peak = 1.5, peak_err = 5e-1,
flux = 3.0, flux_err = 5e-1,
))
dbgen.insert_extracted_sources(imageid1, img1_srclist, 'blind')
associate_extracted_sources(imageid1, deRuiter_r=3.717)
# image 2
image = tkp.db.Image(database=self.database, dataset=dataset, data=im_params[1])
imageid2 = image.id
img2_srclist = []
# 2 sources (both close to source 1, catching the 1-to-many case)
img2_srclist.append(db_subs.example_extractedsource_tuple(
central_ra,
central_dec,
peak = 1.6, peak_err = 5e-1,
flux = 3.2, flux_err = 5e-1,
))
img2_srclist.append(db_subs.example_extractedsource_tuple(
central_ra + position_offset_deg,
central_dec,
peak = 1.9, peak_err = 5e-1,
flux = 3.4, flux_err = 5e-1,
))
dbgen.insert_extracted_sources(imageid2, img2_srclist, 'blind')
associate_extracted_sources(imageid2, deRuiter_r=3.717)
# Manually compose the lists of sources we expect to see associated
# into runningcatalog entries:
# NB img2_srclist[1] has larger RA value.
lightcurves_sorted_by_ra =[]
lightcurves_sorted_by_ra.append( [img1_srclist[0], img2_srclist[0]])
lightcurves_sorted_by_ra.append( [img1_srclist[0], img2_srclist[1]])
#Check the summary statistics (avg flux, etc)
query = """\
SELECT rf.avg_f_int
,rf.avg_f_int_sq
,avg_weighted_f_int
,avg_f_int_weight
FROM runningcatalog r
,runningcatalog_flux rf
WHERE r.dataset = %(dataset)s
AND r.id = rf.runcat
ORDER BY r.wm_ra
"""
self.database.cursor.execute(query, {'dataset': dataset.id})
runcat_flux_entries = get_db_rows_as_dicts(self.database.cursor)
self.assertEqual(len(runcat_flux_entries), 2)
for idx, flux_summary in enumerate(runcat_flux_entries):
py_results = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra[idx])
for key in flux_summary.keys():
self.assertAlmostEqual(flux_summary[key], py_results[-1][key])
#Now check the per-timestep statistics (variability indices)
sorted_runcat_ids = columns_from_table('runningcatalog',
where={'dataset':dataset.id},
order='wm_ra')
sorted_runcat_ids = [entry['id'] for entry in sorted_runcat_ids]
for idx, rcid in enumerate(sorted_runcat_ids):
db_indices = db_queries.get_assoc_entries(self.database,
rcid)
py_indices = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra[idx])
self.assertEqual(len(db_indices), len(py_indices))
for nstep in range(len(db_indices)):
for key in ('v_int', 'eta_int', 'f_datapoints'):
self.assertAlmostEqual(db_indices[nstep][key],
py_indices[nstep][key])
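# The deRuiter_r=3.717 cutoff used throughout these tests is a dimensionless
# positional distance. A sketch of the textbook de Ruiter radius is given below
# (coordinates and 1-sigma errors in degrees); this mirrors the standard
# definition, not tkp's implementation.

```python
import math

def de_ruiter_radius(ra1, dec1, ra2, dec2,
                     ra_err1, dec_err1, ra_err2, dec_err2):
    # Weight the RA offset by cos(dec) so it is a true angular offset.
    dec_mean = math.radians((dec1 + dec2) / 2.0)
    dra = (ra1 - ra2) * math.cos(dec_mean)
    ddec = dec1 - dec2
    # Normalise each axis by the combined positional uncertainty.
    r_squared = (dra ** 2 / (ra_err1 ** 2 + ra_err2 ** 2)
                 + ddec ** 2 / (dec_err1 ** 2 + dec_err2 ** 2))
    return math.sqrt(r_squared)
```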
class TestMany2OneFlux(unittest.TestCase):
"""
These tests will check the many-to-1 source associations, i.e. one extractedsource
has two (or more) counterparts in the runningcatalog.
"""
@requires_database()
def setUp(self):
self.database = tkp.db.Database()
def tearDown(self):
"""remove all stuff after the test has been run"""
self.database.close()
def test_many2oneflux(self):
dataset = tkp.db.DataSet(database=self.database, data={'description': 'flux test set: n-1'})
n_images = 2
central_ra, central_dec = 123.1235, 10.55
position_offset_deg = 100. / 3600  # 100 arcsec, approx 0.03 deg
im_params = db_subs.generate_timespaced_dbimages_data(n_images)
# image 1
image1 = tkp.db.Image(database=self.database, dataset=dataset, data=im_params[0])
img1_srclist = []
# 2 sources (located close together, catching the many-to-1 case in the next image)
img1_srclist.append(db_subs.example_extractedsource_tuple(
ra=central_ra,
dec=central_dec,
peak = 1.6, peak_err = 5e-1,
flux = 3.2, flux_err = 5e-1,
))
img1_srclist.append(db_subs.example_extractedsource_tuple(
ra=central_ra + position_offset_deg,
dec=central_dec,
peak = 1.9, peak_err = 5e-1,
flux = 3.4, flux_err = 5e-1,
))
dbgen.insert_extracted_sources(image1.id, img1_srclist, 'blind')
associate_extracted_sources(image1.id, deRuiter_r=3.717)
# image 2
image2 = tkp.db.Image(database=self.database, dataset=dataset, data=im_params[1])
img2_srclist = []
# 1 source
img2_srclist.append(db_subs.example_extractedsource_tuple(
ra=central_ra,
dec=central_dec,
peak = 1.5, peak_err = 5e-1,
flux = 3.0, flux_err = 5e-1,
))
dbgen.insert_extracted_sources(image2.id, img2_srclist, 'blind')
associate_extracted_sources(image2.id, deRuiter_r=3.717)
# Manually compose the lists of sources we expect to see associated
# into runningcatalog entries:
# NB img1_srclist[1] has larger RA value.
lightcurves_sorted_by_ra =[]
lightcurves_sorted_by_ra.append( [img1_srclist[0], img2_srclist[0]])
lightcurves_sorted_by_ra.append( [img1_srclist[1], img2_srclist[0]])
#Check the summary statistics (avg flux, etc)
query = """\
SELECT rf.avg_f_int
,rf.avg_f_int_sq
,avg_weighted_f_int
,avg_f_int_weight
FROM runningcatalog r
,runningcatalog_flux rf
WHERE r.dataset = %(dataset)s
AND r.id = rf.runcat
ORDER BY r.wm_ra
"""
self.database.cursor.execute(query, {'dataset': dataset.id})
runcat_flux_entries = get_db_rows_as_dicts(self.database.cursor)
self.assertEqual(len(runcat_flux_entries), 2)
for idx, flux_summary in enumerate(runcat_flux_entries):
py_results = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra[idx])
for key in flux_summary.keys():
self.assertAlmostEqual(flux_summary[key], py_results[-1][key])
#Now check the per-timestep statistics (variability indices)
sorted_runcat_ids = columns_from_table('runningcatalog',
where={'dataset':dataset.id},
order='wm_ra')
sorted_runcat_ids = [entry['id'] for entry in sorted_runcat_ids]
for idx, rcid in enumerate(sorted_runcat_ids):
db_indices = db_queries.get_assoc_entries(self.database,
rcid)
py_indices = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra[idx])
self.assertEqual(len(db_indices), len(py_indices))
for nstep in range(len(db_indices)):
for key in ('v_int', 'eta_int', 'f_datapoints'):
self.assertAlmostEqual(db_indices[nstep][key],
py_indices[nstep][key])
class TestMany2Many(unittest.TestCase):
"""
These tests will check the many-to-many source associations, i.e. many extractedsources
that each have two (or more) counterparts in the runningcatalog.
"""
@requires_database()
def setUp(self):
self.database = tkp.db.Database()
def tearDown(self):
"""remove all stuff after the test has been run"""
self.database.close()
def test_many2manyflux_reduced_to_two_1_to_many_one_1to1(self):
"""
(See also assoc. test test_many2many_reduced_to_two_1_to_many_one_1to1 )
In this test-case we cross-associate between a rhombus of sources spread
about a central position, east-west in the first image,
north-south in the second.
The latter, north-south pair are both slightly offset towards positive RA.
The result is that they are both linked with the high-RA (eastern) source
in one-to-many associations.
"""
dataset = tkp.db.DataSet(database=self.database, data={'description': 'flux test set: n-m, ' + self._testMethodName})
n_images = 2
im_params = db_subs.generate_timespaced_dbimages_data(n_images)
centre_ra, centre_dec = 123., 10.5
offset_deg = 20 / 3600.  # 20 arcsec
tiny_offset_deg = 1 / 3600.  # 1 arcsec
eastern_src = db_subs.example_extractedsource_tuple(
ra=centre_ra + offset_deg,
dec=centre_dec,
peak = 1.5, peak_err = 1e-1,
flux = 3.0, flux_err = 1e-1,)
western_src = db_subs.example_extractedsource_tuple(
ra=centre_ra - offset_deg,
dec=centre_dec,
peak = 1.7, peak_err = 1e-1,
flux = 3.2, flux_err = 1e-1,)
northern_source = db_subs.example_extractedsource_tuple(
ra=centre_ra + tiny_offset_deg,
dec=centre_dec + offset_deg,
peak = 1.8, peak_err = 1e-1,
flux = 3.3, flux_err = 1e-1,
)
southern_source = db_subs.example_extractedsource_tuple(
ra=centre_ra + tiny_offset_deg,
dec=centre_dec - offset_deg,
peak = 1.4, peak_err = 1e-1,
flux = 2.9, flux_err = 1e-1,)
# image 1
image1 = tkp.db.Image(database=self.database, dataset=dataset,
data=im_params[0])
dbgen.insert_extracted_sources(
image1.id, [eastern_src,western_src], 'blind')
associate_extracted_sources(image1.id, deRuiter_r=3.717)
# image 2
image2 = tkp.db.Image(database=self.database, dataset=dataset,
data=im_params[1])
dbgen.insert_extracted_sources(
image2.id, [northern_source, southern_source], 'blind')
associate_extracted_sources(image2.id, deRuiter_r=3.717)
# Manually compose the lists of sources we expect to see associated
# into runningcatalog entries:
# NB the eastern source has the larger RA value.
lightcurves_sorted_by_ra_dec =[]
lightcurves_sorted_by_ra_dec.append( [western_src])
lightcurves_sorted_by_ra_dec.append( [eastern_src, southern_source])
lightcurves_sorted_by_ra_dec.append( [eastern_src, northern_source])
#Check the summary statistics (avg flux, etc)
query = """\
SELECT rf.avg_f_int
,rf.avg_f_int_sq
,avg_weighted_f_int
,avg_f_int_weight
FROM runningcatalog r
,runningcatalog_flux rf
WHERE r.dataset = %(dataset)s
AND r.id = rf.runcat
ORDER BY r.wm_ra, r.wm_decl
"""
self.database.cursor.execute(query, {'dataset': dataset.id})
runcat_flux_entries = get_db_rows_as_dicts(self.database.cursor)
self.assertEqual(len(runcat_flux_entries), len(lightcurves_sorted_by_ra_dec))
for idx, flux_summary in enumerate(runcat_flux_entries):
py_results = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra_dec[idx])
for key in flux_summary.keys():
self.assertAlmostEqual(flux_summary[key], py_results[-1][key])
#Now check the per-timestep statistics (variability indices)
sorted_runcat_ids = columns_from_table('runningcatalog',
where={'dataset':dataset.id},
order='wm_ra,wm_decl')
sorted_runcat_ids = [entry['id'] for entry in sorted_runcat_ids]
for idx, rcid in enumerate(sorted_runcat_ids):
db_indices = db_queries.get_assoc_entries(self.database,
rcid)
py_indices = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra_dec[idx])
self.assertEqual(len(db_indices), len(py_indices))
for nstep in range(len(db_indices)):
for key in ('v_int', 'eta_int', 'f_datapoints'):
self.assertAlmostEqual(db_indices[nstep][key],
py_indices[nstep][key])
def test_many2manyflux_reduced_to_two_1to1(self):
"""
(See also assoc. test test_many2many_reduced_to_two_1to1 )
In this test-case we cross-associate between a rhombus of sources spread
about a central position, east-west in the first image,
north-south in the second.
The latter, north-south pair are slightly offset towards positive RA
and negative RA respectively.
The result is that the candidate associations are pruned down to
two one-to-one pairings.
"""
dataset = tkp.db.DataSet(database=self.database, data={'description': 'flux test set: n-m, ' + self._testMethodName})
n_images = 2
im_params = db_subs.generate_timespaced_dbimages_data(n_images)
centre_ra, centre_dec = 123., 10.5
offset_deg = 20 / 3600.  # 20 arcsec
tiny_offset_deg = 1 / 3600.  # 1 arcsec
eastern_src = db_subs.example_extractedsource_tuple(
ra=centre_ra + offset_deg,
dec=centre_dec,
peak = 1.5, peak_err = 1e-1,
flux = 3.0, flux_err = 1e-1,)
western_src = db_subs.example_extractedsource_tuple(
ra=centre_ra - offset_deg,
dec=centre_dec,
peak = 1.7, peak_err = 1e-1,
flux = 3.2, flux_err = 1e-1,)
northern_source = db_subs.example_extractedsource_tuple(
ra=centre_ra + tiny_offset_deg,
dec=centre_dec + offset_deg,
peak = 1.8, peak_err = 1e-1,
flux = 3.3, flux_err = 1e-1,
)
southern_source = db_subs.example_extractedsource_tuple(
ra=centre_ra - tiny_offset_deg,
dec=centre_dec - offset_deg,
peak = 1.4, peak_err = 1e-1,
flux = 2.9, flux_err = 1e-1,)
# image 1
image1 = tkp.db.Image(database=self.database, dataset=dataset,
data=im_params[0])
dbgen.insert_extracted_sources(
image1.id, [eastern_src,western_src], 'blind')
associate_extracted_sources(image1.id, deRuiter_r=3.717)
# image 2
image2 = tkp.db.Image(database=self.database, dataset=dataset,
data=im_params[1])
dbgen.insert_extracted_sources(
image2.id, [northern_source, southern_source], 'blind')
associate_extracted_sources(image2.id, deRuiter_r=3.717)
# Manually compose the lists of sources we expect to see associated
# into runningcatalog entries:
# NB the eastern source has the larger RA value.
lightcurves_sorted_by_ra =[]
lightcurves_sorted_by_ra.append( [western_src, southern_source])
lightcurves_sorted_by_ra.append( [eastern_src, northern_source])
#Check the summary statistics (avg flux, etc)
query = """\
SELECT rf.avg_f_int
,rf.avg_f_int_sq
,avg_weighted_f_int
,avg_f_int_weight
FROM runningcatalog r
,runningcatalog_flux rf
WHERE r.dataset = %(dataset)s
AND r.id = rf.runcat
ORDER BY r.wm_ra, r.wm_decl
"""
self.database.cursor.execute(query, {'dataset': dataset.id})
runcat_flux_entries = get_db_rows_as_dicts(self.database.cursor)
self.assertEqual(len(runcat_flux_entries), len(lightcurves_sorted_by_ra))
for idx, flux_summary in enumerate(runcat_flux_entries):
py_results = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra[idx])
for key in flux_summary.keys():
self.assertAlmostEqual(flux_summary[key], py_results[-1][key])
#Now check the per-timestep statistics (variability indices)
sorted_runcat_ids = columns_from_table('runningcatalog',
where={'dataset':dataset.id},
order='wm_ra,wm_decl')
sorted_runcat_ids = [entry['id'] for entry in sorted_runcat_ids]
for idx, rcid in enumerate(sorted_runcat_ids):
db_indices = db_queries.get_assoc_entries(self.database,
rcid)
py_indices = db_subs.lightcurve_metrics(lightcurves_sorted_by_ra[idx])
self.assertEqual(len(db_indices), len(py_indices))
for nstep in range(len(db_indices)):
for key in ('v_int', 'eta_int', 'f_datapoints'):
self.assertAlmostEqual(db_indices[nstep][key],
py_indices[nstep][key])
@requires_database()
class TestInfiniteErrorFluxes(unittest.TestCase):
"""
What happens if we perform a forced fit, it doesn't converge, and we
get an 'infinite' error-bar? Do we handle this correctly?
"""
@requires_database()
def setUp(self):
self.database = tkp.db.Database()
def tearDown(self):
"""remove all stuff after the test has been run"""
self.database.close()
def test_one2one_flux_infinite_error(self):
dataset = tkp.db.DataSet(database=self.database, data={'description': 'flux test set: 1-1'})
n_images = 3
im_params = db_subs.generate_timespaced_dbimages_data(n_images)
src_list = []
src = db_subs.example_extractedsource_tuple()
src0 = src._replace(flux=2.0)
src_list.append(src0)
src1 = src._replace(flux=2.5)
src_list.append(src1)
src2 = src._replace(flux=0.0001, flux_err=float('inf'),
peak=0.0001, peak_err=float('inf'))
src_list.append(src2)
for idx, im in enumerate(im_params):
image = tkp.db.Image(database=self.database, dataset=dataset, data=im)
insert_extracted_sources(image._id, [src_list[idx]])
associate_extracted_sources(image.id, deRuiter_r=3.717)
query = """\
SELECT rf.avg_f_int
,rf.f_datapoints
FROM runningcatalog r
,runningcatalog_flux rf
WHERE r.dataset = %(dataset)s
AND r.id = rf.runcat
"""
cursor = tkp.db.execute(query, {'dataset': dataset.id})
results = db_subs.get_db_rows_as_dicts(cursor)
self.assertEqual(len(results),1)
self.assertEqual(results[0]['f_datapoints'],2)
self.assertAlmostEqual(results[0]['avg_f_int'],
(src0.flux + src1.flux)/2.0 )
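# The behaviour asserted above (the infinite-error datapoint contributes
# nothing, leaving f_datapoints == 2) follows naturally from inverse-variance
# weighting: 1.0 / float('inf') ** 2 is 0.0 in IEEE floats, so such a point
# carries zero weight. A hedged sketch of that idea, not tkp's actual code:

```python
import math

def weighted_avg_flux(fluxes, flux_errs):
    # Drop datapoints whose error bar is not finite; under
    # inverse-variance weighting they would carry zero weight anyway.
    kept = [(f, 1.0 / e ** 2) for f, e in zip(fluxes, flux_errs)
            if math.isfinite(e)]
    total_weight = sum(w for _, w in kept)
    avg = sum(f * w for f, w in kept) / total_weight
    return avg, len(kept)
```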
# Packs/CovalenceForSecurityProviders/Integrations/CovalenceForSecurityProviders/CovalenceForSecurityProviders_test.py
# (repo: diCagri/content, license: MIT)
"""Base Integration for Cortex XSOAR - Unit Tests file
Pytest Unit Tests: all function names must start with "test_"
More details: https://xsoar.pan.dev/docs/integrations/unit-testing
MAKE SURE YOU REVIEW/REPLACE ALL THE COMMENTS MARKED AS "TODO"
You must add at least a Unit Test function for every XSOAR command
you are implementing with your integration
"""
import pytest
import os
import requests
import demistomock as demisto
import json
import io
def util_load_json(path):
with io.open(path, mode='r', encoding='utf-8') as f:
return json.loads(f.read())
@pytest.fixture(autouse=True)
def init_tests(mocker):
mocker.patch.object(demisto, 'params', return_value={
'host': 'foo.bar',
'broker': False,
'credentials': {'identifier': 'foo', 'password': 'bar'},
'verify_ssl': False,
'timeout': '100',
'first_run_time_range': '2',
'fetch_limit': '250',
'proxy': False
})
mocker.patch.dict(os.environ, {
'HTTP_PROXY': '',
'HTTPS_PROXY': '',
'http_proxy': '',
'https_proxy': ''
})
def test_find_covs(mocker):
'''
Making sure correct Cov ids are returned when provided with an org name
'''
text = '''<html>
<head><title>Select Covalence</title></head><body><h1>Select Covalence</h1><p>
<a href="/index/2016-001-AA">Capsule Corp</a><p>
<a href="/index/2016-001-AB">Acme Inc.</a><p>
<a href="/index/2016-001-AC">Acme Inc.</a><p>
</body><html>'''
r = requests.Response()
r.status_code = 200
type(r).text = mocker.PropertyMock(return_value=text)
mocker.patch.object(requests, 'get', return_value=r)
from CovalenceForSecurityProviders import find_covs
assert find_covs('Capsule Corp') == ['2016-001-AA']
assert find_covs('Acme Inc.') == ['2016-001-AB', '2016-001-AC']
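# For orientation, the scraping that find_covs presumably performs on that
# selection page can be sketched with a regex. This is an illustrative
# stand-in, not the integration's actual parser.

```python
import re

def find_covs_sketch(html, org_name):
    # Match <a href="/index/COV-ID">Org Name</a> links and return the
    # Covalence IDs whose link text equals the requested org name.
    pattern = r'<a href="/index/([^"]+)">([^<]+)</a>'
    return [cov_id for cov_id, name in re.findall(pattern, html)
            if name == org_name]
```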
def test_build_host():
'''
Making sure the Covalence url is correctly built
'''
from CovalenceForSecurityProviders import build_host
host = build_host('foo.bar')
assert host == 'https://foo.bar/CovalenceWebUI/services'
def test_send_request_direct_dict(mocker):
'''
Making sure dict is returned for dict responses
Direct mode
'''
import CovalenceForSecurityProviders
# direct mode, no need to find cov from org_name
mocker.patch.object(CovalenceForSecurityProviders, 'login', return_value=requests.Session())
mock_get_sensor = util_load_json('test_data/get_sensor.json')
r = requests.Response()
r.status_code = 200
r._content = json.dumps(mock_get_sensor).encode('utf-8')
mocker.patch.object(requests.Session, 'send', return_value=r)
resp = CovalenceForSecurityProviders.send_request('GET', '/rest/v1/sensors/sensor_id', target_org=None)
assert resp == [mock_get_sensor]
def test_send_request_direct_list(mocker):
'''
Making sure list is returned for list responses
Direct mode
'''
import CovalenceForSecurityProviders
# direct mode, no need to find cov from org_name
mocker.patch.object(CovalenceForSecurityProviders, 'login', return_value=requests.Session())
mock_list_sensor = util_load_json('test_data/list_sensor.json')
r = requests.Response()
r.status_code = 200
r._content = json.dumps(mock_list_sensor).encode('utf-8')
mocker.patch.object(requests.Session, 'send', return_value=r)
resp = CovalenceForSecurityProviders.send_request('GET', '/rest/v1/sensors', target_org=None)
assert resp == mock_list_sensor
def test_send_request_broker_dict(mocker):
'''
Broker mode, the org has 2 covalences
Making sure the request is sent to both covalences
Making sure that both responses (dict) get merged in a list and returned
'''
mocker.patch.object(demisto, 'params', return_value={'broker': True})
import CovalenceForSecurityProviders
mocker.patch.object(CovalenceForSecurityProviders, 'find_covs', return_value=['2016-001-AB', '2016-001-AC'])
mocker.patch.object(CovalenceForSecurityProviders, 'login', return_value=requests.Session())
mock_get_sensor = util_load_json('test_data/get_sensor.json')
r = requests.Response()
r.status_code = 200
r._content = json.dumps(mock_get_sensor).encode('utf-8')
mocker.patch.object(requests.Session, 'send', return_value=r)
sensor_list = []
sensor_list.append(mock_get_sensor)
sensor_list.append(mock_get_sensor)
resp = CovalenceForSecurityProviders.send_request('GET', '/rest/v1/sensors/sensor_id', target_org='Acme Inc.')
assert resp == sensor_list
def test_send_request_broker_list(mocker):
'''
Broker mode, the org has 2 covalences
Making sure the request is sent to both covalences
Making sure that both responses (list) get merged in a list and returned
'''
mocker.patch.object(demisto, 'params', return_value={'broker': True})
import CovalenceForSecurityProviders
mocker.patch.object(CovalenceForSecurityProviders, 'find_covs', return_value=['2016-001-AB', '2016-001-AC'])
mocker.patch.object(CovalenceForSecurityProviders, 'login', return_value=requests.Session())
mock_list_sensor = util_load_json('test_data/list_sensor.json')
r = requests.Response()
r.status_code = 200
r._content = json.dumps(mock_list_sensor).encode('utf-8')
mocker.patch.object(requests.Session, 'send', return_value=r)
sensor_list = []
sensor_list = sensor_list + mock_list_sensor
sensor_list = sensor_list + mock_list_sensor
resp = CovalenceForSecurityProviders.send_request('GET', '/rest/v1/sensors/sensor_id', target_org='Acme Inc.')
assert resp == sensor_list
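# Taken together, the four send_request tests pin down a simple merge rule in
# broker mode: dict responses are appended, list responses are concatenated,
# so callers always receive a flat list. A minimal sketch of that rule
# (hypothetical helper, not the integration's code):

```python
def merge_broker_responses(responses):
    # Flatten per-Covalence responses into one list: lists are
    # concatenated, single dicts are appended as-is.
    merged = []
    for resp in responses:
        if isinstance(resp, list):
            merged.extend(resp)
        else:
            merged.append(resp)
    return merged
```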
def test_list_alerts(mocker):
mock_list_alerts = util_load_json('test_data/list_alerts.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_alerts)
r = CovalenceForSecurityProviders.list_alerts()
assert len(r[0].keys()) == 8
assert 'acknowledgedStatus' in r[0]
assert 'analystDescription' in r[0]
assert 'destIp' in r[0]
assert 'sourceIp' in r[0]
assert 'subType' in r[0]
assert 'title' in r[0]
assert 'type' in r[0]
def test_list_alerts_details(mocker):
mock_list_alerts = util_load_json('test_data/list_alerts.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_alerts)
r = CovalenceForSecurityProviders.list_alerts()
assert len(r[0].keys()) == 48
assert 'id' in r[0]
assert 'sensorId' in r[0]
assert 'type' in r[0]
assert 'organizationId' in r[0]
assert 'subType' in r[0]
assert 'severity' in r[0]
assert 'facility' in r[0]
assert 'priority' in r[0]
assert 'createdTime' in r[0]
assert 'lastAlertedTime' in r[0]
assert 'title' in r[0]
assert 'notes' in r[0]
assert 'alertHash' in r[0]
assert 'assignee' in r[0]
assert 'analystTitle' in r[0]
assert 'analystDescription' in r[0]
assert 'endpointAgentUuid' in r[0]
assert 'pcapResourceUuid' in r[0]
assert 'sourceCityName' in r[0]
assert 'sourceCountryName' in r[0]
assert 'destCityName' in r[0]
assert 'destCountryName' in r[0]
assert 'destIp' in r[0]
assert 'destPort' in r[0]
assert 'protocol' in r[0]
assert 'destDomainName' in r[0]
assert 'destCiscoUmbrellaRanking' in r[0]
assert 'destMajesticMillionRanking' in r[0]
assert 'destCiscoUmbrellaTopLevelDomainRanking' in r[0]
assert 'destMajesticMillionTopLevelDomainRanking' in r[0]
assert 'sourceIp' in r[0]
assert 'sourcePort' in r[0]
assert 'sourceDomainName' in r[0]
assert 'sourceCiscoUmbrellaRanking' in r[0]
assert 'sourceMajesticMillionRanking' in r[0]
assert 'sourceCiscoUmbrellaTopLevelDomainRanking' in r[0]
assert 'sourceMajesticMillionTopLevelDomainRanking' in r[0]
assert 'sourceGeoX' in r[0]
assert 'sourceGeoY' in r[0]
assert 'destGeoX' in r[0]
assert 'destGeoY' in r[0]
assert 'isFavorite' in r[0]
assert 'alertCount' in r[0]
assert 'acknowledgedStatus' in r[0]
assert 'blacklistDetails' in r[0]
assert 'sigEvalDetails' in r[0]
assert 'sourceIpAttributes' in r[0]
assert 'destIpAttributes' in r[0]
def test_list_sensors(mocker):
mock_list_sensor = util_load_json('test_data/list_sensor.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_sensor)
r = CovalenceForSecurityProviders.list_sensors()
assert len(r[0].keys()) == 3
assert 'isAuthorized' in r[0]
assert 'isNetflowGenerator' in r[0]
assert 'name' in r[0]
def test_list_sensors_details(mocker):
mock_list_sensor = util_load_json('test_data/list_sensor.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_sensor)
r = CovalenceForSecurityProviders.list_sensors()
assert len(r[0].keys()) == 7
assert 'id' in r[0]
assert 'name' in r[0]
assert 'isAuthorized' in r[0]
assert 'listeningInterfaces' in r[0]
assert 'isNetflowGenerator' in r[0]
assert 'bytesIn' in r[0]
assert 'bytesOut' in r[0]
assert 'lastActive' not in r[0]
def test_get_sensor(mocker):
mock_get_sensor = [util_load_json('test_data/get_sensor.json')]
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'sensor_id': 'id'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_get_sensor)
r = CovalenceForSecurityProviders.get_sensor()
assert isinstance(r, list)
assert len(r[0].keys()) == 7
assert 'id' in r[0]
assert 'name' in r[0]
assert 'isAuthorized' in r[0]
assert 'listeningInterfaces' in r[0]
assert 'isNetflowGenerator' in r[0]
assert 'bytesIn' in r[0]
assert 'bytesOut' in r[0]
assert 'lastActive' not in r[0]
def test_connections_summary_by_ip(mocker):
mock_connections_summary_by_ip = util_load_json('test_data/connections_summary_by_ip.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_connections_summary_by_ip)
r = CovalenceForSecurityProviders.connections_summary_by_ip()
assert len(r[0].keys()) == 9
assert 'averageDuration' in r[0]
assert 'bytesIn' in r[0]
assert 'clientServerRelationship' in r[0]
assert 'destinationIpAddress' in r[0]
assert 'dstDomainName' in r[0]
assert 'serverPorts' in r[0]
assert 'sourceDomainName' in r[0]
assert 'sourceIpAddress' in r[0]
def test_connections_summary_by_ip_details(mocker):
mock_connections_summary_by_ip = util_load_json('test_data/connections_summary_by_ip.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_connections_summary_by_ip)
r = CovalenceForSecurityProviders.connections_summary_by_ip()
assert len(r[0].keys()) == 24
assert 'id' in r[0]
assert 'sourceId' in r[0]
assert 'sourceIpAddress' in r[0]
assert 'sourceMacAddress' in r[0]
assert 'destinationId' in r[0]
assert 'destinationIpAddress' in r[0]
assert 'destinationMacAddress' in r[0]
assert 'serverPortCount' in r[0]
assert 'serverPorts' in r[0]
assert 'bytesIn' in r[0]
assert 'bytesOut' in r[0]
assert 'packetsIn' in r[0]
assert 'packetsOut' in r[0]
assert 'continuingConnectionCount' in r[0]
assert 'terminatedConnectionCount' in r[0]
assert 'totalDuration' in r[0]
assert 'averageDuration' in r[0]
assert 'sourceCity' in r[0]
assert 'sourceCountry' in r[0]
assert 'destinationCity' in r[0]
assert 'destinationCountry' in r[0]
assert 'sourceDomainName' in r[0]
assert 'dstDomainName' in r[0]
assert 'clientServerRelationship' in r[0]
def test_connections_summary_by_port(mocker):
mock_connections_summary_by_port = util_load_json('test_data/connections_summary_by_port.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_connections_summary_by_port)
r = CovalenceForSecurityProviders.connections_summary_by_port()
assert len(r[0].keys()) == 8
assert 'averageDuration' in r[0]
assert 'bytesIn' in r[0]
assert 'bytesOut' in r[0]
assert 'destinationIpAddress' in r[0]
assert 'dstDomainName' in r[0]
assert 'serverPort' in r[0]
assert 'sourceDomainName' in r[0]
assert 'sourceIpAddress' in r[0]
def test_connections_summary_by_port_details(mocker):
mock_connections_summary_by_port = util_load_json('test_data/connections_summary_by_port.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_connections_summary_by_port)
r = CovalenceForSecurityProviders.connections_summary_by_port()
assert len(r[0].keys()) == 25
assert 'id' in r[0]
assert 'sourceId' in r[0]
assert 'sourceIpAddress' in r[0]
assert 'sourceMacAddress' in r[0]
assert 'destinationId' in r[0]
assert 'destinationIpAddress' in r[0]
assert 'destinationMacAddress' in r[0]
assert 'serverPort' in r[0]
assert 'protocol' in r[0]
assert 'continuingConnectionCount' in r[0]
assert 'terminatedConnectionCount' in r[0]
assert 'bytesIn' in r[0]
assert 'bytesOut' in r[0]
assert 'packetsIn' in r[0]
assert 'packetsOut' in r[0]
assert 'totalDuration' in r[0]
assert 'averageDuration' in r[0]
assert 'sourceCity' in r[0]
assert 'sourceCountry' in r[0]
assert 'destinationCity' in r[0]
assert 'destinationCountry' in r[0]
assert 'sourceDomainName' in r[0]
assert 'dstDomainName' in r[0]
assert 'startTime' in r[0]
assert 'endTime' in r[0]


def test_list_dns_resolutions(mocker):
mock_list_dns_resolutions = util_load_json('test_data/list_dns_resolutions.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_dns_resolutions)
r = CovalenceForSecurityProviders.list_dns_resolutions()
assert len(r[0].keys()) == 4
assert 'domainName' in r[0]
assert 'requestOriginIp' in r[0]
assert 'requestTime' in r[0]
assert 'resolvedIp' in r[0]


def test_list_dns_resolutions_details(mocker):
mock_list_dns_resolutions = util_load_json('test_data/list_dns_resolutions.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_dns_resolutions)
r = CovalenceForSecurityProviders.list_dns_resolutions()
assert len(r[0].keys()) == 9
assert 'id' in r[0]
assert 'domainName' in r[0]
assert 'resolvedIp' in r[0]
assert 'requestOriginIp' in r[0]
assert 'nameserverIp' in r[0]
assert 'nodeLabel' in r[0]
assert 'requestTime' in r[0]
assert 'byteCount' in r[0]
assert 'pktCount' in r[0]


def test_list_internal_networks(mocker):
mock_list_internal_networks = util_load_json('test_data/list_internal_networks.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_internal_networks)
r = CovalenceForSecurityProviders.list_internal_networks()
assert len(r[0].keys()) == 2
assert 'notes' in r[0]
assert 'cidr' in r[0]


def test_list_endpoint_agents(mocker):
mock_list_endpoint_agents = util_load_json('test_data/list_endpoint_agents.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_endpoint_agents)
r = CovalenceForSecurityProviders.list_endpoint_agents()
assert len(r[0].keys()) == 7
assert 'hardwareVendor' in r[0]
assert 'hostName' in r[0]
assert 'ipAddress' in r[0]
assert 'isConnected' in r[0]
assert 'lastSessionUser' in r[0]
assert 'operatingSystem' in r[0]
assert 'serialNumber' in r[0]


def test_list_endpoint_agents_details(mocker):
mock_list_endpoint_agents = util_load_json('test_data/list_endpoint_agents.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_list_endpoint_agents)
r = CovalenceForSecurityProviders.list_endpoint_agents()
assert len(r[0].keys()) == 25
assert 'agentUuid' in r[0]
assert 'agentVersion' in r[0]
assert 'firstSeenTime' in r[0]
assert 'lastSeenTime' in r[0]
assert 'lastSessionUser' in r[0]
assert 'isMobile' in r[0]
assert 'isConnected' in r[0]
assert 'coreVersion' in r[0]
assert 'coreArchitecture' in r[0]
assert 'coreOs' in r[0]
assert 'operatingSystem' in r[0]
assert 'hostName' in r[0]
assert 'hardwareVendor' in r[0]
assert 'hardwareModel' in r[0]
assert 'arch' in r[0]
assert 'osDistro' in r[0]
assert 'osVersion' in r[0]
assert 'kernelVersion' in r[0]
assert 'operatingSystemReleaseId' in r[0]
assert 'ipAddress' in r[0]
assert 'secondaryIpAddress' in r[0]
assert 'ipAddresses' in r[0]
assert 'serialNumber' in r[0]
assert 'deviceIdentifier' in r[0]
assert 'cpuArchitectureEnum' in r[0]


def test_search_endpoint_process(mocker):
mock_search_endpoint_process = util_load_json('test_data/search_endpoint_process.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_search_endpoint_process)
r = CovalenceForSecurityProviders.search_endpoint_process()
assert len(r[0].keys()) == 5
assert 'commandLine' in r[0]
assert 'firstSeenTime' in r[0]
assert 'lastSeenTime' in r[0]
assert 'processPath' in r[0]
assert 'username' in r[0]


def test_search_endpoint_process_details(mocker):
mock_search_endpoint_process = util_load_json('test_data/search_endpoint_process.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_search_endpoint_process)
r = CovalenceForSecurityProviders.search_endpoint_process()
assert len(r[0].keys()) == 13
assert 'id' in r[0]
assert 'agentUuid' in r[0]
assert 'processName' in r[0]
assert 'processPath' in r[0]
assert 'parentProcessName' in r[0]
assert 'parentProcessPath' in r[0]
assert 'commandLine' in r[0]
assert 'username' in r[0]
assert 'firstSeenTime' in r[0]
assert 'lastSeenTime' in r[0]
assert 'lastEndTime' in r[0]
assert 'seenCount' in r[0]
assert 'activeCount' in r[0]


def test_search_endpoint_installed_software(mocker):
mock_search_endpoint_installed_software = util_load_json('test_data/search_endpoint_installed_software.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'false'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_search_endpoint_installed_software)
r = CovalenceForSecurityProviders.search_endpoint_installed_software()
assert len(r[0].keys()) == 5
assert 'installTimestamp' in r[0]
assert 'name' in r[0]
assert 'uninstallTimestamp' in r[0]
assert 'vendor' in r[0]
assert 'version' in r[0]


def test_search_endpoint_installed_software_details(mocker):
mock_search_endpoint_installed_software = util_load_json('test_data/search_endpoint_installed_software.json')
import CovalenceForSecurityProviders
mocker.patch.object(demisto, 'args', return_value={
'target_org': None,
'details': 'true'
})
mocker.patch.object(CovalenceForSecurityProviders, 'send_request', return_value=mock_search_endpoint_installed_software)
r = CovalenceForSecurityProviders.search_endpoint_installed_software()
assert len(r[0].keys()) == 16
assert 'arch' in r[0]
assert 'type' in r[0]
assert 'packageManager' in r[0]
assert 'installTimestamp' in r[0]
assert 'uninstallTimestamp' in r[0]
assert 'name' in r[0]
assert 'version' in r[0]
assert 'vendor' in r[0]
assert 'installPath' in r[0]
assert 'appDataPath' in r[0]
assert 'sharedDataPath' in r[0]
assert 'installedForUser' in r[0]
assert 'installSource' in r[0]
assert 'id' in r[0]
assert 'agentUuid' in r[0]
assert 'softwareNotifyAction' in r[0]


def test_list_org(mocker):
mocker.patch.object(demisto, 'params', return_value={'broker': True})
text = '''<html>
<head><title>Select Covalence</title></head><body><h1>Select Covalence</h1><p>
<a href="/index/2016-001-AA">Capsule Corp</a><p>
<a href="/index/2016-001-AB">Acme Inc.</a><p>
<a href="/index/2016-001-AC">Acme Inc.</a><p>
</body></html>'''
r = requests.Response()
r.status_code = 200
type(r).text = mocker.PropertyMock(return_value=text)
mocker.patch.object(requests, 'get', return_value=r)
import CovalenceForSecurityProviders
org_names = CovalenceForSecurityProviders.list_org()
assert len(org_names) == 2
assert {'org_name': 'Capsule Corp'} in org_names
assert {'org_name': 'Acme Inc.'} in org_names
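The tests above repeat one per-key `in` assertion for every expected field, preceded by a length check. The same exact-key check can be collapsed into a single comparison; a minimal sketch follows, where the helper name `assert_exact_keys` and the sample record are hypothetical, not part of the integration under test:

```python
def assert_exact_keys(record, expected):
    # One assertion covering both the key count and the membership checks:
    # on failure it reports missing and unexpected keys together.
    missing = set(expected) - set(record)
    extra = set(record) - set(expected)
    assert not missing and not extra, f'missing={sorted(missing)}, extra={sorted(extra)}'


# Usage with the key set checked by test_list_dns_resolutions (sample values
# are made up for illustration):
resolution = {'domainName': 'example.com', 'requestOriginIp': '10.0.0.5',
              'requestTime': '2021-01-01T00:00:00Z', 'resolvedIp': '93.184.216.34'}
assert_exact_keys(resolution, {'domainName', 'requestOriginIp', 'requestTime', 'resolvedIp'})
```

Using set equality instead of a counted list of `in` checks also keeps the expected key set in one place, so adding a field to the API response means editing a single literal.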
import array


#20002 ?
def D2S():
a = array.array("H")
a.fromlist([0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 3, 7, 10, 13, 17,
20, 23, 27, 30, 33, 36, 40, 43, 46, 49,
52, 55, 59, 62, 65, 68, 71, 74, 77, 80,
83, 86, 88, 91, 94, 97, 100, 103, 106, 108,
111, 114, 117, 119, 122, 125, 128, 130, 133, 136,
138, 141, 143, 146, 149, 151, 154, 156, 159, 161,
164, 166, 169, 171, 174, 176, 178, 181, 183, 186,
188, 190, 193, 195, 197, 200, 202, 204, 206, 209,
211, 213, 215, 218, 220, 222, 224, 226, 229, 231,
233, 235, 237, 239, 241, 243, 245, 247, 249, 252,
254, 256, 258, 260, 262, 264, 266, 267, 269, 271,
273, 275, 277, 279, 281, 283, 285, 287, 288, 290,
292, 294, 296, 298, 299, 301, 303, 305, 307, 308,
310, 312, 314, 315, 317, 319, 321, 322, 324, 326,
327, 329, 331, 332, 334, 336, 337, 339, 341, 342,
344, 345, 347, 349, 350, 352, 353, 355, 357, 358,
360, 361, 363, 364, 366, 367, 369, 370, 372, 373,
375, 376, 378, 379, 381, 382, 383, 385, 386, 388,
389, 391, 392, 393, 395, 396, 398, 399, 400, 402,
403, 404, 406, 407, 409, 410, 411, 413, 414, 415,
416, 418, 419, 420, 422, 423, 424, 426, 427, 428,
429, 431, 432, 433, 434, 436, 437, 438, 439, 441,
442, 443, 444, 445, 447, 448, 449, 450, 451, 452,
454, 455, 456, 457, 458, 459, 461, 462, 463, 464,
465, 466, 467, 468, 470, 471, 472, 473, 474, 475,
476, 477, 478, 479, 480, 482, 483, 484, 485, 486,
487, 488, 489, 490, 491, 492, 493, 494, 495, 496,
497, 498, 499, 500, 501, 502, 503, 504, 505, 506,
507, 508, 509, 510, 511, 512, 513, 514, 515, 516,
517, 518, 519, 520, 521, 521, 522, 523, 524, 525,
526, 527, 528, 529, 530, 531, 532, 532, 533, 534,
535, 536, 537, 538, 539, 540, 540, 541, 542, 543,
544, 545, 546, 546, 547, 548, 549, 550, 551, 551,
552, 553, 554, 555, 556, 556, 557, 558, 559, 560,
561, 561, 562, 563, 564, 565, 565, 566, 567, 568,
568, 569, 570, 571, 572, 572, 573, 574, 575, 575,
576, 577, 578, 578, 579, 580, 581, 581, 582, 583,
584, 584, 585, 586, 587, 587, 588, 589, 590, 590,
591, 592, 592, 593, 594, 594, 595, 596, 597, 597,
598, 599, 599, 600, 601, 601, 602, 603, 604, 604,
605, 606, 606, 607, 608, 608, 609, 610, 610, 611,
612, 612, 613, 614, 614, 615, 615, 616, 617, 617,
618, 619, 619, 620, 621, 621, 622, 622, 623, 624,
624, 625, 626, 626, 627, 627, 628, 629, 629, 630,
631, 631, 632, 632, 633, 634, 634, 635, 635, 636,
636, 637, 638, 638, 639, 639, 640, 641, 641, 642,
642, 643, 643, 644, 645, 645, 646, 646, 647, 647,
648, 649, 649, 650, 650, 651, 651, 652, 652, 653,
654, 654, 655, 655, 656, 656, 657, 657, 658, 658,
659, 659, 660, 661, 661, 662, 662, 663, 663, 664,
664, 665, 665, 666, 666, 667, 667, 668, 668, 669,
669, 670, 670, 671, 671, 672, 672, 673, 673, 674,
674, 675, 675, 676, 676, 677, 677, 678, 678, 679,
679, 680, 680, 681, 681, 682, 682, 683, 683, 684,
684, 685, 685, 685, 686, 686, 687, 687, 688, 688,
689, 689, 690, 690, 691, 691, 691, 692, 692, 693,
693, 694, 694, 695, 695, 696, 696, 696, 697, 697,
698, 698, 699, 699, 699, 700, 700, 701, 701, 702,
702, 703, 703, 703, 704, 704, 705, 705, 706, 706,
706, 707, 707, 708, 708, 708, 709, 709, 710, 710,
711, 711, 711, 712, 712, 713, 713, 713, 714, 714,
715, 715, 715, 716, 716, 717, 717, 717, 718, 718,
719, 719, 719, 720, 720, 721, 721, 721, 722, 722,
723, 723, 723, 724, 724, 724, 725, 725, 726, 726,
726, 727, 727, 727, 728, 728, 729, 729, 729, 730,
730, 730, 731, 731, 732, 732, 732, 733, 733, 733,
734, 734, 734, 735, 735, 736, 736, 736, 737, 737,
737, 738, 738, 738, 739, 739, 739, 740, 740, 741,
741, 741, 742, 742, 742, 743, 743, 743, 744, 744,
744, 745, 745, 745, 746, 746, 746, 747, 747, 747,
748, 748, 748, 749, 749, 749, 750, 750, 750, 751,
751, 751, 752, 752, 752, 753, 753, 753, 754, 754,
754, 755, 755, 755, 756, 756, 756, 756, 757, 757,
757, 758, 758, 758, 759, 759, 759, 760, 760, 760,
761, 761, 761, 761, 762, 762, 762, 763, 763, 763,
764, 764, 764, 765, 765, 765, 765, 766, 766, 766,
767, 767, 767, 768, 768, 768, 768, 769, 769, 769,
770, 770, 770, 770, 771, 771, 771, 772, 772, 772,
773, 773, 773, 773, 774, 774, 774, 775, 775, 775,
775, 776, 776, 776, 776, 777, 777, 777, 778, 778,
778, 778, 779, 779, 779, 780, 780, 780, 780, 781,
781, 781, 781, 782, 782, 782, 783, 783, 783, 783,
784, 784, 784, 784, 785, 785, 785, 785, 786, 786,
786, 787, 787, 787, 787, 788, 788, 788, 788, 789,
789, 789, 789, 790, 790, 790, 790, 791, 791, 791,
791, 792, 792, 792, 792, 793, 793, 793, 793, 794,
794, 794, 794, 795, 795, 795, 795, 796, 796, 796,
796, 797, 797, 797, 797, 798, 798, 798, 798, 799,
799, 799, 799, 800, 800, 800, 800, 801, 801, 801,
801, 801, 802, 802, 802, 802, 803, 803, 803, 803,
804, 804, 804, 804, 805, 805, 805, 805, 805, 806,
806, 806, 806, 807, 807, 807, 807, 808, 808, 808,
808, 808, 809, 809, 809, 809, 810, 810, 810, 810,
810, 811, 811, 811, 811, 812, 812, 812, 812, 812,
813, 813, 813, 813, 813, 814, 814, 814, 814, 815,
815, 815, 815, 815, 816, 816, 816, 816, 817, 817,
817, 817, 817, 818, 818, 818, 818, 818, 819, 819,
819, 819, 819, 820, 820, 820, 820, 820, 821, 821,
821, 821, 822, 822, 822, 822, 822, 823, 823, 823,
823, 823, 824, 824, 824, 824, 824, 825, 825, 825,
825, 825, 826, 826, 826, 826, 826, 827, 827, 827,
827, 827, 828, 828, 828, 828, 828, 828, 829, 829,
829, 829, 829, 830, 830, 830, 830, 830, 831, 831,
831, 831, 831, 832, 832, 832, 832, 832, 832, 833,
833, 833, 833, 833, 834, 834, 834, 834, 834, 835,
835, 835, 835, 835, 835, 836, 836, 836, 836, 836,
837, 837, 837, 837, 837, 837, 838, 838, 838, 838,
838, 839, 839, 839, 839, 839, 839, 840, 840, 840,
840, 840, 841, 841, 841, 841, 841, 841, 842, 842,
842, 842, 842, 842, 843, 843, 843, 843, 843, 843,
844, 844, 844, 844, 844, 845, 845, 845, 845, 845,
845, 846, 846, 846, 846, 846, 846, 847, 847, 847,
847, 847, 847, 848, 848, 848, 848, 848, 848, 849,
849, 849, 849, 849, 849, 850, 850, 850, 850, 850,
850, 850, 851, 851, 851, 851, 851, 851, 852, 852,
852, 852, 852, 852, 853, 853, 853, 853, 853, 853,
854, 854, 854, 854, 854, 854, 854, 855, 855, 855,
855, 855, 855, 856, 856, 856, 856, 856, 856, 857,
857, 857, 857, 857, 857, 857, 858, 858, 858, 858,
858, 858, 858, 859, 859, 859, 859, 859, 859, 860,
860, 860, 860, 860, 860, 860, 861, 861, 861, 861,
861, 861, 861, 862, 862, 862, 862, 862, 862, 863,
863, 863, 863, 863, 863, 863, 864, 864, 864, 864,
864, 864, 864, 865, 865, 865, 865, 865, 865, 865,
866, 866, 866, 866, 866, 866, 866, 867, 867, 867,
867, 867, 867, 867, 868, 868, 868, 868, 868, 868,
868, 868, 869, 869, 869, 869, 869, 869, 869, 870,
870, 870, 870, 870, 870, 870, 871, 871, 871, 871,
871, 871, 871, 871, 872, 872, 872, 872, 872, 872,
872, 873, 873, 873, 873, 873, 873, 873, 873, 874,
874, 874, 874, 874, 874, 874, 875, 875, 875, 875,
875, 875, 875, 875, 876, 876, 876, 876, 876, 876,
876, 876, 877, 877, 877, 877, 877, 877, 877, 878,
878, 878, 878, 878, 878, 878, 878, 879, 879, 879,
879, 879, 879, 879, 879, 880, 880, 880, 880, 880,
880, 880, 880, 881, 881, 881, 881, 881, 881, 881,
881, 882, 882, 882, 882, 882, 882, 882, 882, 882,
883, 883, 883, 883, 883, 883, 883, 883, 884, 884,
884, 884, 884, 884, 884, 884, 885, 885, 885, 885,
885, 885, 885, 885, 885, 886, 886, 886, 886, 886,
886, 886, 886, 887, 887, 887, 887, 887, 887, 887,
887, 887, 888, 888, 888, 888, 888, 888, 888, 888,
888, 889, 889, 889, 889, 889, 889, 889, 889, 890,
890, 890, 890, 890, 890, 890, 890, 890, 891, 891,
891, 891, 891, 891, 891, 891, 891, 892, 892, 892,
892, 892, 892, 892, 892, 892, 893, 893, 893, 893,
893, 893, 893, 893, 893, 893, 894, 894, 894, 894,
894, 894, 894, 894, 894, 895, 895, 895, 895, 895,
895, 895, 895, 895, 896, 896, 896, 896, 896, 896,
896, 896, 896, 896, 897, 897, 897, 897, 897, 897,
897, 897, 897, 898, 898, 898, 898, 898, 898, 898,
898, 898, 898, 899, 899, 899, 899, 899, 899, 899,
899, 899, 899, 900, 900, 900, 900, 900, 900, 900,
900, 900, 900, 901, 901, 901, 901, 901, 901, 901,
901, 901, 901, 902, 902, 902, 902, 902, 902, 902,
902, 902, 902, 903, 903, 903, 903, 903, 903, 903,
903, 903, 903, 904, 904, 904, 904, 904, 904, 904,
904, 904, 904, 905, 905, 905, 905, 905, 905, 905,
905, 905, 905, 905, 906, 906, 906, 906, 906, 906,
906, 906, 906, 906, 907, 907, 907, 907, 907, 907,
907, 907, 907, 907, 907, 908, 908, 908, 908, 908,
908, 908, 908, 908, 908, 908, 909, 909, 909, 909,
909, 909, 909, 909, 909, 909, 910, 910, 910, 910,
910, 910, 910, 910, 910, 910, 910, 911, 911, 911,
911, 911, 911, 911, 911, 911, 911, 911, 911, 912,
912, 912, 912, 912, 912, 912, 912, 912, 912, 912,
913, 913, 913, 913, 913, 913, 913, 913, 913, 913,
913, 914, 914, 914, 914, 914, 914, 914, 914, 914,
914, 914, 914, 915, 915, 915, 915, 915, 915, 915,
915, 915, 915, 915, 915, 916, 916, 916, 916, 916,
916, 916, 916, 916, 916, 916, 917, 917, 917, 917,
917, 917, 917, 917, 917, 917, 917, 917, 918, 918,
918, 918, 918, 918, 918, 918, 918, 918, 918, 918,
919, 919, 919, 919, 919, 919, 919, 919, 919, 919,
919, 919, 919, 920, 920, 920, 920, 920, 920, 920,
920, 920, 920, 920, 920, 921, 921, 921, 921, 921,
921, 921, 921, 921, 921, 921, 921, 921, 922, 922,
922, 922, 922, 922, 922, 922, 922, 922, 922, 922,
923, 923, 923, 923, 923, 923, 923, 923, 923, 923,
923, 923, 923, 924, 924, 924, 924, 924, 924, 924,
924, 924, 924, 924, 924, 924, 925, 925, 925, 925,
925, 925, 925, 925, 925, 925, 925, 925, 925, 926,
926, 926, 926, 926, 926, 926, 926, 926, 926, 926,
926, 926, 926, 927, 927, 927, 927, 927, 927, 927,
927, 927, 927, 927, 927, 927, 928, 928, 928, 928,
928, 928, 928, 928, 928, 928, 928, 928, 928, 928,
929, 929, 929, 929, 929, 929, 929, 929, 929, 929,
929, 929, 929, 929, 930, 930, 930, 930, 930, 930,
930, 930, 930, 930, 930, 930, 930, 930, 931, 931,
931, 931, 931, 931, 931, 931, 931, 931, 931, 931,
931, 931, 932, 932, 932, 932, 932, 932, 932, 932,
932, 932, 932, 932, 932, 932, 933, 933, 933, 933,
933, 933, 933, 933, 933, 933, 933, 933, 933, 933,
933, 934, 934, 934, 934, 934, 934, 934, 934, 934,
934, 934, 934, 934, 934, 934, 935, 935, 935, 935,
935, 935, 935, 935, 935, 935, 935, 935, 935, 935,
935, 936, 936, 936, 936, 936, 936, 936, 936, 936,
936, 936, 936, 936, 936, 936, 937, 937, 937, 937,
937, 937, 937, 937, 937, 937, 937, 937, 937, 937,
937, 938, 938, 938, 938, 938, 938, 938, 938, 938,
938, 938, 938, 938, 938, 938, 938, 939, 939, 939,
939, 939, 939, 939, 939, 939, 939, 939, 939, 939,
939, 939, 939, 940, 940, 940, 940, 940, 940, 940,
940, 940, 940, 940, 940, 940, 940, 940, 940, 941,
941, 941, 941, 941, 941, 941, 941, 941, 941, 941,
941, 941, 941, 941, 941, 942, 942, 942, 942, 942,
942, 942, 942, 942, 942, 942, 942, 942, 942, 942,
942, 943, 943, 943, 943, 943, 943, 943, 943, 943,
943, 943, 943, 943, 943, 943, 943, 943, 944, 944,
944, 944, 944, 944, 944, 944, 944, 944, 944, 944,
944, 944, 944, 944, 944, 945, 945, 945, 945, 945,
945, 945, 945, 945, 945, 945, 945, 945, 945, 945,
945, 945, 946, 946, 946, 946, 946, 946, 946, 946,
946, 946, 946, 946, 946, 946, 946, 946, 946, 946,
947, 947, 947, 947, 947, 947, 947, 947, 947, 947,
947, 947, 947, 947, 947, 947, 947, 948, 948, 948,
948, 948, 948, 948, 948, 948, 948, 948, 948, 948,
948, 948, 948, 948, 948, 949, 949, 949, 949, 949,
949, 949, 949, 949, 949, 949, 949, 949, 949, 949,
949, 949, 949, 950, 950, 950, 950, 950, 950, 950,
950, 950, 950, 950, 950, 950, 950, 950, 950, 950,
950, 950, 951, 951, 951, 951, 951, 951, 951, 951,
951, 951, 951, 951, 951, 951, 951, 951, 951, 951,
951, 952, 952, 952, 952, 952, 952, 952, 952, 952,
952, 952, 952, 952, 952, 952, 952, 952, 952, 952,
953, 953, 953, 953, 953, 953, 953, 953, 953, 953,
953, 953, 953, 953, 953, 953, 953, 953, 953, 954,
954, 954, 954, 954, 954, 954, 954, 954, 954, 954,
954, 954, 954, 954, 954, 954, 954, 954, 955, 955,
955, 955, 955, 955, 955, 955, 955, 955, 955, 955,
955, 955, 955, 955, 955, 955, 955, 955, 956, 956,
956, 956, 956, 956, 956, 956, 956, 956, 956, 956,
956, 956, 956, 956, 956, 956, 956, 956, 956, 957,
957, 957, 957, 957, 957, 957, 957, 957, 957, 957,
957, 957, 957, 957, 957, 957, 957, 957, 957, 958,
958, 958, 958, 958, 958, 958, 958, 958, 958, 958,
958, 958, 958, 958, 958, 958, 958, 958, 958, 958,
959, 959, 959, 959, 959, 959, 959, 959, 959, 959,
959, 959, 959, 959, 959, 959, 959, 959, 959, 959,
959, 960, 960, 960, 960, 960, 960, 960, 960, 960,
960, 960, 960, 960, 960, 960, 960, 960, 960, 960,
960, 960, 960, 961, 961, 961, 961, 961, 961, 961,
961, 961, 961, 961, 961, 961, 961, 961, 961, 961,
961, 961, 961, 961, 962, 962, 962, 962, 962, 962,
962, 962, 962, 962, 962, 962, 962, 962, 962, 962,
962, 962, 962, 962, 962, 962, 962, 963, 963, 963,
963, 963, 963, 963, 963, 963, 963, 963, 963, 963,
963, 963, 963, 963, 963, 963, 963, 963, 963, 964,
964, 964, 964, 964, 964, 964, 964, 964, 964, 964,
964, 964, 964, 964, 964, 964, 964, 964, 964, 964,
964, 964, 965, 965, 965, 965, 965, 965, 965, 965,
965, 965, 965, 965, 965, 965, 965, 965, 965, 965,
965, 965, 965, 965, 965, 966, 966, 966, 966, 966,
966, 966, 966, 966, 966, 966, 966, 966, 966, 966,
966, 966, 966, 966, 966, 966, 966, 966, 966, 967,
967, 967, 967, 967, 967, 967, 967, 967, 967, 967,
967, 967, 967, 967, 967, 967, 967, 967, 967, 967,
967, 967, 967, 968, 968, 968, 968, 968, 968, 968,
968, 968, 968, 968, 968, 968, 968, 968, 968, 968,
968, 968, 968, 968, 968, 968, 968, 968, 969, 969,
969, 969, 969, 969, 969, 969, 969, 969, 969, 969,
969, 969, 969, 969, 969, 969, 969, 969, 969, 969,
969, 969, 969, 970, 970, 970, 970, 970, 970, 970,
970, 970, 970, 970, 970, 970, 970, 970, 970, 970,
970, 970, 970, 970, 970, 970, 970, 970, 971, 971,
971, 971, 971, 971, 971, 971, 971, 971, 971, 971,
971, 971, 971, 971, 971, 971, 971, 971, 971, 971,
971, 971, 971, 971, 972, 972, 972, 972, 972, 972,
972, 972, 972, 972, 972, 972, 972, 972, 972, 972,
972, 972, 972, 972, 972, 972, 972, 972, 972, 972,
973, 973, 973, 973, 973, 973, 973, 973, 973, 973,
973, 973, 973, 973, 973, 973, 973, 973, 973, 973,
973, 973, 973, 973, 973, 973, 973, 974, 974, 974,
974, 974, 974, 974, 974, 974, 974, 974, 974, 974,
974, 974, 974, 974, 974, 974, 974, 974, 974, 974,
974, 974, 974, 974, 975, 975, 975, 975, 975, 975,
975, 975, 975, 975, 975, 975, 975, 975, 975, 975,
975, 975, 975, 975, 975, 975, 975, 975, 975, 975,
975, 975, 976, 976, 976, 976, 976, 976, 976, 976,
976, 976, 976, 976, 976, 976, 976, 976, 976, 976,
976, 976, 976, 976, 976, 976, 976, 976, 976, 976,
977, 977, 977, 977, 977, 977, 977, 977, 977, 977,
977, 977, 977, 977, 977, 977, 977, 977, 977, 977,
977, 977, 977, 977, 977, 977, 977, 977, 977, 978,
978, 978, 978, 978, 978, 978, 978, 978, 978, 978,
978, 978, 978, 978, 978, 978, 978, 978, 978, 978,
978, 978, 978, 978, 978, 978, 978, 978, 979, 979,
979, 979, 979, 979, 979, 979, 979, 979, 979, 979,
979, 979, 979, 979, 979, 979, 979, 979, 979, 979,
979, 979, 979, 979, 979, 979, 979, 979, 980, 980,
980, 980, 980, 980, 980, 980, 980, 980, 980, 980,
980, 980, 980, 980, 980, 980, 980, 980, 980, 980,
980, 980, 980, 980, 980, 980, 980, 980, 981, 981,
981, 981, 981, 981, 981, 981, 981, 981, 981, 981,
981, 981, 981, 981, 981, 981, 981, 981, 981, 981,
981, 981, 981, 981, 981, 981, 981, 981, 981, 982,
982, 982, 982, 982, 982, 982, 982, 982, 982, 982,
982, 982, 982, 982, 982, 982, 982, 982, 982, 982,
982, 982, 982, 982, 982, 982, 982, 982, 982, 982,
983, 983, 983, 983, 983, 983, 983, 983, 983, 983,
983, 983, 983, 983, 983, 983, 983, 983, 983, 983,
983, 983, 983, 983, 983, 983, 983, 983, 983, 983,
983, 983, 983, 984, 984, 984, 984, 984, 984, 984,
984, 984, 984, 984, 984, 984, 984, 984, 984, 984,
984, 984, 984, 984, 984, 984, 984, 984, 984, 984,
984, 984, 984, 984, 984, 985, 985, 985, 985, 985,
985, 985, 985, 985, 985, 985, 985, 985, 985, 985,
985, 985, 985, 985, 985, 985, 985, 985, 985, 985,
985, 985, 985, 985, 985, 985, 985, 985, 985, 986,
986, 986, 986, 986, 986, 986, 986, 986, 986, 986,
986, 986, 986, 986, 986, 986, 986, 986, 986, 986,
986, 986, 986, 986, 986, 986, 986, 986, 986, 986,
986, 986, 986, 987, 987, 987, 987, 987, 987, 987,
987, 987, 987, 987, 987, 987, 987, 987, 987, 987,
987, 987, 987, 987, 987, 987, 987, 987, 987, 987,
987, 987, 987, 987, 987, 987, 987, 987, 988, 988,
988, 988, 988, 988, 988, 988, 988, 988, 988, 988,
988, 988, 988, 988, 988, 988, 988, 988, 988, 988,
988, 988, 988, 988, 988, 988, 988, 988, 988, 988,
988, 988, 988, 989, 989, 989, 989, 989, 989, 989,
989, 989, 989, 989, 989, 989, 989, 989, 989, 989,
989, 989, 989, 989, 989, 989, 989, 989, 989, 989,
989, 989, 989, 989, 989, 989, 989, 989, 989, 990,
990, 990, 990, 990, 990, 990, 990, 990, 990, 990,
990, 990, 990, 990, 990, 990, 990, 990, 990, 990,
990, 990, 990, 990, 990, 990, 990, 990, 990, 990,
990, 990, 990, 990, 990, 990, 991, 991, 991, 991,
991, 991, 991, 991, 991, 991, 991, 991, 991, 991,
991, 991, 991, 991, 991, 991, 991, 991, 991, 991,
991, 991, 991, 991, 991, 991, 991, 991, 991, 991,
991, 991, 991, 991, 992, 992, 992, 992, 992, 992,
992, 992, 992, 992, 992, 992, 992, 992, 992, 992,
992, 992, 992, 992, 992, 992, 992, 992, 992, 992,
992, 992, 992, 992, 992, 992, 992, 992, 992, 992,
992, 992, 992, 993, 993, 993, 993, 993, 993, 993,
993, 993, 993, 993, 993, 993, 993, 993, 993, 993,
993, 993, 993, 993, 993, 993, 993, 993, 993, 993,
993, 993, 993, 993, 993, 993, 993, 993, 993, 993,
993, 993, 994, 994, 994, 994, 994, 994, 994, 994,
994, 994, 994, 994, 994, 994, 994, 994, 994, 994,
994, 994, 994, 994, 994, 994, 994, 994, 994, 994,
994, 994, 994, 994, 994, 994, 994, 994, 994, 994,
994, 994, 995, 995, 995, 995, 995, 995, 995, 995,
995, 995, 995, 995, 995, 995, 995, 995, 995, 995,
995, 995, 995, 995, 995, 995, 995, 995, 995, 995,
995, 995, 995, 995, 995, 995, 995, 995, 995, 995,
995, 995, 995, 995, 996, 996, 996, 996, 996, 996,
996, 996, 996, 996, 996, 996, 996, 996, 996, 996,
996, 996, 996, 996, 996, 996, 996, 996, 996, 996,
996, 996, 996, 996, 996, 996, 996, 996, 996, 996,
996, 996, 996, 996, 996, 996, 997, 997, 997, 997,
997, 997, 997, 997, 997, 997, 997, 997, 997, 997,
997, 997, 997, 997, 997, 997, 997, 997, 997, 997,
997, 997, 997, 997, 997, 997, 997, 997, 997, 997,
997, 997, 997, 997, 997, 997, 997, 997, 997, 998,
998, 998, 998, 998, 998, 998, 998, 998, 998, 998,
998, 998, 998, 998, 998, 998, 998, 998, 998, 998,
998, 998, 998, 998, 998, 998, 998, 998, 998, 998,
998, 998, 998, 998, 998, 998, 998, 998, 998, 998,
998, 998, 998, 999, 999, 999, 999, 999, 999, 999,
999, 999, 999, 999, 999, 999, 999, 999, 999, 999,
999, 999, 999, 999, 999, 999, 999, 999, 999, 999,
999, 999, 999, 999, 999, 999, 999, 999, 999, 999,
999, 999, 999, 999, 999, 999, 999, 999, 1000, 1000,
1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
1000, 1000, 1000, 1000, 1001, 1001, 1001, 1001, 1001, 1001,
1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001,
1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001,
1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001,
1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001, 1001,
1001, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002,
1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002,
1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002,
1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002,
1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1002, 1003,
1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003,
1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003,
1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003,
1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003,
1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1003, 1004,
1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004,
1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004,
1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004,
1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004,
1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004, 1004,
1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005,
1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005,
1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005,
1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005,
1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005, 1005,
1005, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006,
1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006,
1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006,
1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006,
1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006, 1006,
1006, 1006, 1006, 1006, 1006, 1007, 1007, 1007, 1007, 1007,
1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007,
1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007,
1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007,
1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007,
1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1007, 1008,
1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008,
1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008,
1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008,
1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008,
1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008,
1008, 1008, 1008, 1008, 1008, 1009, 1009, 1009, 1009, 1009,
1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009,
1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009,
1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009,
1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009,
1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009, 1009,
1009, 1009, 1009, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
1010, 1010, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011,
1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011,
1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011,
1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011,
1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011,
1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011, 1011,
1011, 1011, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012,
1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012,
1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012,
1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012,
1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012,
1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012, 1012,
1012, 1012, 1012, 1012, 1012, 1013, 1013, 1013, 1013, 1013,
1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013,
1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013,
1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013,
1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013,
1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013,
1013, 1013, 1013, 1013, 1013, 1013, 1013, 1013, 1014, 1014,
1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014,
1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014,
1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014,
1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014,
1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014,
1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014, 1014,
1014, 1014, 1014, 1014, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015, 1015,
1015, 1015, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016, 1016,
1016, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017, 1017,
1017, 1017, 1017, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018, 1018,
1018, 1018, 1018, 1018, 1018, 1018, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019, 1019,
1019, 1019, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020, 1020,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021, 1021,
1021, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022, 1022,
1022, 1022, 1022, 1022, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023, 1023,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024,
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025, 1025,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026, 1026,
1026, 1026, 1026, 1026, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027, 1027,
1027, 1027, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028, 1028,
1028, 1028, 1028, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029,
1029, 1029, 1029, 1029, 1029, 1029, 1029, 1029, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030, 1030,
1030, 1030, 1030, 1030, 1030, 1030, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031,
1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1031, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032, 1032,
1032, 1032, 1032, 1032, 1032, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033, 1033,
1033, 1033, 1033, 1033, 1033, 1033, 1033, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034, 1034,
1034, 1034, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035, 1035,
1035, 1035, 1035, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036,
1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1036, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037, 1037,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038,
1038, 1038, 1038, 1038, 1038, 1038, 1038, 1038, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039, 1039,
1039, 1039, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040, 1040,
1040, 1040, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041, 1041,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042, 1042,
1042, 1042, 1042, 1042, 1042, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043,
1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1043, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044, 1044,
1044, 1044, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045, 1045,
1045, 1045, 1045, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046, 1046,
1046, 1046, 1046, 1046, 1046, 1046, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047,
1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1047, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048, 1048,
1048, 1048, 1048, 1048, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049, 1049,
1049, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050, 1050,
1050, 1050, 1050, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051, 1051,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052, 1052,
1052])
return a
#low-level depth value to depth_to_rgb
#zero plane used to compute this
#+S2D_PIXEL_CONST DEPTH_SENSOR_X_RES DEPTH_X_RES
def S2D():
a = array.array("H")
a.fromlist([0, 315, 315, 315, 316, 316, 316, 316, 317, 317,
317, 318, 318, 318, 319, 319, 319, 319, 320, 320,
320, 321, 321, 321, 322, 322, 322, 322, 323, 323,
323, 324, 324, 324, 325, 325, 325, 326, 326, 326,
326, 327, 327, 327, 328, 328, 328, 329, 329, 329,
330, 330, 330, 331, 331, 331, 332, 332, 332, 332,
333, 333, 333, 334, 334, 334, 335, 335, 335, 336,
336, 336, 337, 337, 337, 338, 338, 338, 339, 339,
339, 340, 340, 340, 341, 341, 341, 342, 342, 343,
343, 343, 344, 344, 344, 345, 345, 345, 346, 346,
346, 347, 347, 347, 348, 348, 348, 349, 349, 350,
350, 350, 351, 351, 351, 352, 352, 352, 353, 353,
354, 354, 354, 355, 355, 355, 356, 356, 356, 357,
357, 358, 358, 358, 359, 359, 359, 360, 360, 361,
361, 361, 362, 362, 363, 363, 363, 364, 364, 364,
365, 365, 366, 366, 366, 367, 367, 368, 368, 368,
369, 369, 370, 370, 370, 371, 371, 372, 372, 372,
373, 373, 374, 374, 374, 375, 375, 376, 376, 377,
377, 377, 378, 378, 379, 379, 379, 380, 380, 381,
381, 382, 382, 382, 383, 383, 384, 384, 385, 385,
385, 386, 386, 387, 387, 388, 388, 389, 389, 389,
390, 390, 391, 391, 392, 392, 393, 393, 393, 394,
394, 395, 395, 396, 396, 397, 397, 398, 398, 398,
399, 399, 400, 400, 401, 401, 402, 402, 403, 403,
404, 404, 405, 405, 406, 406, 407, 407, 408, 408,
409, 409, 409, 410, 410, 411, 411, 412, 412, 413,
413, 414, 414, 415, 415, 416, 416, 417, 418, 418,
419, 419, 420, 420, 421, 421, 422, 422, 423, 423,
424, 424, 425, 425, 426, 426, 427, 427, 428, 429,
429, 430, 430, 431, 431, 432, 432, 433, 433, 434,
435, 435, 436, 436, 437, 437, 438, 438, 439, 440,
440, 441, 441, 442, 442, 443, 444, 444, 445, 445,
446, 446, 447, 448, 448, 449, 449, 450, 451, 451,
452, 452, 453, 454, 454, 455, 455, 456, 457, 457,
458, 458, 459, 460, 460, 461, 462, 462, 463, 463,
464, 465, 465, 466, 467, 467, 468, 468, 469, 470,
470, 471, 472, 472, 473, 474, 474, 475, 476, 476,
477, 478, 478, 479, 480, 480, 481, 482, 482, 483,
484, 484, 485, 486, 487, 487, 488, 489, 489, 490,
491, 491, 492, 493, 494, 494, 495, 496, 496, 497,
498, 499, 499, 500, 501, 502, 502, 503, 504, 504,
505, 506, 507, 507, 508, 509, 510, 511, 511, 512,
513, 514, 514, 515, 516, 517, 517, 518, 519, 520,
521, 521, 522, 523, 524, 525, 525, 526, 527, 528,
529, 529, 530, 531, 532, 533, 534, 534, 535, 536,
537, 538, 539, 540, 540, 541, 542, 543, 544, 545,
546, 546, 547, 548, 549, 550, 551, 552, 553, 554,
554, 555, 556, 557, 558, 559, 560, 561, 562, 563,
564, 565, 565, 566, 567, 568, 569, 570, 571, 572,
573, 574, 575, 576, 577, 578, 579, 580, 581, 582,
583, 584, 585, 586, 587, 588, 589, 590, 591, 592,
593, 594, 595, 596, 597, 598, 599, 600, 601, 602,
603, 604, 606, 607, 608, 609, 610, 611, 612, 613,
614, 615, 616, 618, 619, 620, 621, 622, 623, 624,
625, 627, 628, 629, 630, 631, 632, 634, 635, 636,
637, 638, 640, 641, 642, 643, 644, 646, 647, 648,
649, 650, 652, 653, 654, 655, 657, 658, 659, 661,
662, 663, 664, 666, 667, 668, 670, 671, 672, 674,
675, 676, 678, 679, 680, 682, 683, 684, 686, 687,
688, 690, 691, 693, 694, 696, 697, 698, 700, 701,
703, 704, 706, 707, 708, 710, 711, 713, 714, 716,
717, 719, 720, 722, 723, 725, 727, 728, 730, 731,
733, 734, 736, 738, 739, 741, 742, 744, 746, 747,
749, 750, 752, 754, 755, 757, 759, 761, 762, 764,
766, 767, 769, 771, 773, 774, 776, 778, 780, 781,
783, 785, 787, 789, 790, 792, 794, 796, 798, 800,
802, 803, 805, 807, 809, 811, 813, 815, 817, 819,
821, 823, 825, 827, 829, 831, 833, 835, 837, 839,
841, 843, 845, 847, 849, 851, 854, 856, 858, 860,
862, 864, 867, 869, 871, 873, 875, 878, 880, 882,
885, 887, 889, 891, 894, 896, 898, 901, 903, 906,
908, 910, 913, 915, 918, 920, 923, 925, 928, 930,
933, 935, 938, 940, 943, 946, 948, 951, 954, 956,
959, 962, 964, 967, 970, 973, 975, 978, 981, 984,
987, 989, 992, 995, 998, 1001, 1004, 1007, 1010, 1013,
1016, 1019, 1022, 1025, 1028, 1031, 1034, 1038, 1041, 1044,
1047, 1050, 1054, 1057, 1060, 1063, 1067, 1070, 1073, 1077,
1080, 1084, 1087, 1090, 1094, 1097, 1101, 1105, 1108, 1112,
1115, 1119, 1123, 1126, 1130, 1134, 1138, 1141, 1145, 1149,
1153, 1157, 1161, 1165, 1169, 1173, 1177, 1181, 1185, 1189,
1193, 1197, 1202, 1206, 1210, 1214, 1219, 1223, 1227, 1232,
1236, 1241, 1245, 1250, 1255, 1259, 1264, 1268, 1273, 1278,
1283, 1288, 1292, 1297, 1302, 1307, 1312, 1317, 1322, 1328,
1333, 1338, 1343, 1349, 1354, 1359, 1365, 1370, 1376, 1381,
1387, 1392, 1398, 1404, 1410, 1415, 1421, 1427, 1433, 1439,
1445, 1452, 1458, 1464, 1470, 1477, 1483, 1489, 1496, 1503,
1509, 1516, 1523, 1529, 1536, 1543, 1550, 1557, 1564, 1572,
1579, 1586, 1594, 1601, 1609, 1616, 1624, 1632, 1639, 1647,
1655, 1663, 1671, 1680, 1688, 1696, 1705, 1713, 1722, 1731,
1739, 1748, 1757, 1766, 1776, 1785, 1794, 1804, 1813, 1823,
1833, 1843, 1853, 1863, 1873, 1883, 1894, 1904, 1915, 1926,
1936, 1947, 1959, 1970, 1981, 1993, 2005, 2016, 2028, 2040,
2053, 2065, 2078, 2090, 2103, 2116, 2129, 2143, 2156, 2170,
2184, 2198, 2212, 2226, 2241, 2256, 2271, 2286, 2301, 2317,
2333, 2349, 2365, 2381, 2398, 2415, 2432, 2450, 2467, 2485,
2503, 2522, 2541, 2560, 2579, 2598, 2618, 2639, 2659, 2680,
2701, 2723, 2744, 2767, 2789, 2812, 2835, 2859, 2883, 2908,
2933, 2958, 2984, 3010, 3037, 3064, 3092, 3120, 3149, 3178,
3208, 3238, 3269, 3300, 3333, 3365, 3399, 3433, 3468, 3503,
3539, 3576, 3614, 3653, 3692, 3732, 3774, 3816, 3859, 3903,
3948, 3994, 4041, 4089, 4139, 4190, 4241, 4295, 4349, 4405,
4463, 4522, 4582, 4645, 4708, 4774, 4842, 4911, 4983, 5056,
5132, 5210, 5291, 5374, 5460, 5548, 5640, 5734, 5832, 5933,
6038, 6146, 6259, 6375, 6497, 6622, 6753, 6889, 7030, 7178,
7332, 7492, 7660, 7835, 8019, 8212, 8413, 8626, 8849, 9084,
9331, 9593, 9870, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0])
return a
##define S2D_PIXEL_CONST 10
##define S2D_CONST_OFFSET 0.375
#define DEPTH_SENSOR_X_RES 1280
#define DEPTH_MIRROR_X 0
#define DEPTH_X_OFFSET 1
#define DEPTH_Y_OFFSET 1
#define DEPTH_X_RES 640
#define DEPTH_Y_RES 480
##define REG_X_VAL_SCALE 256 // "fixed-point" precision for double -> int32_t conversion
#
#Registration:
# reg_index = (y * DEPTH_X_RES + x)  # 0..307199
# nx = (registration_table[reg_index][0] + depth) / REG_X_VAL_SCALE
# ny = registration_table[reg_index][1]
# if (nx, ny) is outside the image, skip this pixel
# target_index = (ny * DEPTH_X_RES + nx) - target_offset
# current_depth = output_mm[target_index]
# if (current_depth == DEPTH_NO_MM_VALUE) || (current_depth > metric_depth)
# ...
#
#Table: depth_to_rgb maps every depth value (0..4096) to 0..255
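The registration pseudocode in the comments above can be turned into a runnable loop. This is an illustrative sketch only, not the production driver; `registration_table`, `output_mm`, and `target_offset` are assumed names taken from the comments, and integer division stands in for the C fixed-point math.

```python
# Illustrative sketch of the registration loop described in the comments
# above. registration_table, output_mm, and target_offset are assumed
# names from the pseudocode; the real driver may differ.
DEPTH_X_RES = 640
DEPTH_Y_RES = 480
REG_X_VAL_SCALE = 256
DEPTH_NO_MM_VALUE = 0

def register_depth(depth_mm, registration_table, target_offset=0):
    """Remap a raw depth frame (row-major, in mm) onto the RGB image plane."""
    output_mm = [DEPTH_NO_MM_VALUE] * (DEPTH_X_RES * DEPTH_Y_RES)
    for y in range(DEPTH_Y_RES):
        for x in range(DEPTH_X_RES):
            reg_index = y * DEPTH_X_RES + x
            metric_depth = depth_mm[reg_index]
            if metric_depth == DEPTH_NO_MM_VALUE:
                continue
            nx = (registration_table[reg_index][0] + metric_depth) // REG_X_VAL_SCALE
            ny = registration_table[reg_index][1]
            if not (0 <= nx < DEPTH_X_RES and 0 <= ny < DEPTH_Y_RES):
                continue  # registered pixel falls outside the image
            target_index = ny * DEPTH_X_RES + nx - target_offset
            if target_index < 0:
                continue
            current_depth = output_mm[target_index]
            # keep the nearest depth when several pixels map to the same target
            if current_depth == DEPTH_NO_MM_VALUE or current_depth > metric_depth:
                output_mm[target_index] = metric_depth
    return output_mm
```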
def niteprops():
    return dict(
        ParamCoeff=4, RegistrationType=0, ConstShift=200, ShiftScale=10,
        MaxShift=2047, ZPD=120, ZPPS=0.10520000010728836, D2S="", S2D="",
        LDDIS=7.5, Gain=42, MaxDepthValue=10000,
    )
#Real ONI from Windows Differences:
#LDDIS 0
#Gain 30
#ZPPS -3.6893488147419103e+19
#RegistrationType 2 (!)
#DCRCDIS 3.6893488147419103e+19
if __name__ == '__main__':
    print(len(D2S().tobytes()))  # tostring() was removed in Python 3.9
    print(len(S2D().tobytes()))
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
from unittest import mock
from mocksey import tweaksey # SUT
class TweakseyTweaksMockCase(unittest.TestCase):
def setUp(self):
self.ncm_acow = mock.NonCallableMock.assert_called_once_with
self.ncm_acw = mock.NonCallableMock.assert_called_with
self.ncm_aac = mock.NonCallableMock.assert_any_call
tweaksey.tweak_mock(mock)
self.some_mock = mock.MagicMock()
def tearDown(self):
mock.NonCallableMock.assert_called_with = self.ncm_acw
mock.NonCallableMock.assert_called_once_with = self.ncm_acow
mock.NonCallableMock.assert_any_call = self.ncm_aac
def test_monkey_patching(self):
"""mocksey.tweaksey.tweak_mock: tweaksey monkey patches / duck punches mock's methods"""
self.assertIn("tweaksey", str(mock.NonCallableMock.assert_any_call.__code__))
self.assertIn("tweaksey", str(mock.NonCallableMock.assert_called_once_with.__code__))
self.assertIn("tweaksey", str(mock.NonCallableMock.assert_called_with.__code__))
def test_assert_called_once_with_override_assert_any_message_with_bad_arg(self):
"""mocksey.tweaksey.tweak_mock.assert_called_once_with: Override message for arg match failure"""
self.some_mock()
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_called_once_with("pants")
actual = str(con_man.exception)
expected = "Suffered the following call issues (expected != actual):\n"
self.assertIn(expected, actual)
expected = "Args: Tuples differ: ('pants',) != ()\n\nFirst tuple contains 1 additional elements.\nFirst extra element 0:\n'pants'\n\n- ('pants',)\n+ ()\n"
self.assertIn(expected, actual)
expected = "Kwargs: Nothing!"
self.assertIn(expected, actual)
def test_assert_called_once_with_override_assert_any_message_with_bad_kwarg(self):
"""mocksey.tweaksey.tweak_mock.assert_called_once_with: Override message for kwarg match failure"""
self.some_mock()
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_called_once_with(clothing="pants")
actual = str(con_man.exception)
expected = "Suffered the following call issues (expected != actual):\n"
self.assertIn(expected, actual)
expected = "Args: Nothing!\n"
self.assertIn(expected, actual)
expected = "Kwargs: {'clothing': 'pants'} != {}\n- {'clothing': 'pants'}\n+ {}"
self.assertIn(expected, actual)
def test_assert_called_once_with_dont_interfere_when_call_was_good(self):
"""mocksey.tweaksey.tweak_mock.assert_called_once_with: Don't raise without a failure"""
self.some_mock("pants")
try:
self.some_mock.assert_called_once_with("pants") # SUT
except AssertionError:
self.fail("I know this test looks funny, but I need to make sure we don't accidentally raise")
def test_assert_called_once_with_missing_call(self):
"""mocksey.tweaksey.tweak_mock.assert_called_once_with: Don't raise without a failure"""
self.some_mock.__str__.return_value = "lalalalala"
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_called_once_with(clothing="pants")
actual = str(con_man.exception)
expected = "Mock 'lalalalala' expected to be called once. Called 0 times with []."
self.assertIn(expected, actual)
#######################################
def test_assert_called_with_override_assert_any_message_with_bad_arg(self):
"""mocksey.tweaksey.tweak_mock.assert_called_with: Override message for arg match failure"""
self.some_mock()
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_called_with("pants")
actual = str(con_man.exception)
expected = "Suffered the following call issues (expected != actual):\n"
self.assertIn(expected, actual)
expected = (
"Args: Tuples differ: ('pants',) != ()\n\nFirst tuple contains 1 additional elements."
"\nFirst extra element 0:\n'pants'\n\n- ('pants',)\n+ ()\n"
)
self.assertIn(expected, actual)
expected = "Kwargs: Nothing!"
self.assertIn(expected, actual)
def test_assert_called_with_override_assert_any_message_with_bad_kwarg(self):
"""mocksey.tweaksey.tweak_mock.assert_called_with: Override message for kwarg match failure"""
self.some_mock()
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_called_with(clothing="pants")
actual = str(con_man.exception)
expected = "Suffered the following call issues (expected != actual):\n"
self.assertIn(expected, actual)
expected = "Args: Nothing!\n"
self.assertIn(expected, actual)
expected = "Kwargs: {'clothing': 'pants'} != {}\n- {'clothing': 'pants'}\n+ {}"
self.assertIn(expected, actual)
def test_assert_called_with_dont_interfere_when_call_was_good(self):
"""mocksey.tweaksey.tweak_mock.assert_called_with: Don't raise without a failure"""
self.some_mock("pants")
try:
self.some_mock.assert_called_with("pants") # SUT
except AssertionError:
self.fail("I know this test looks funny, but I need to make sure we don't accidentally raise")
def test_assert_called_with_missing_call(self):
"""mocksey.tweaksey.tweak_mock.assert_called_with: Don't raise without a failure"""
self.some_mock.__str__.return_value = "dadadada"
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_called_with(clothing="pants")
actual = str(con_man.exception)
expected = "Mock 'dadadada' was never called."
self.assertEqual(expected, actual)
#######################################
# test single calls fall through
    def test_assert_any_call_override_assert_any_message_with_bad_arg_single_call(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Override message for arg match failure"""
self.some_mock()
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_any_call("pants")
actual = str(con_man.exception)
expected = "Suffered the following call issues (expected != actual):\n"
self.assertIn(expected, actual)
expected = (
"Args: Tuples differ: ('pants',) != ()\n\nFirst tuple contains 1 additional elements."
"\nFirst extra element 0:\npants\n\n- ('pants',)\n+ ()\n"
)
self.assertIn(expected, actual)
expected = "Kwargs: Nothing!"
self.assertIn(expected, actual)
    def test_assert_any_call_override_assert_any_message_with_bad_kwarg_single_call(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Override message for kwarg match failure"""
self.some_mock()
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_any_call(clothing="pants")
actual = str(con_man.exception)
expected = "Suffered the following call issues (expected != actual):\n"
self.assertIn(expected, actual)
expected = "Args: Nothing!\n"
self.assertIn(expected, actual)
expected = "Kwargs: {'clothing': 'pants'} != {}\n- {'clothing': 'pants'}\n+ {}"
self.assertIn(expected, actual)
# test multiple calls
    def test_assert_any_call_override_assert_any_message_with_bad_arg_multiple_calls(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Override message for arg match failure"""
self.some_mock()
self.some_mock("yes again")
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_any_call("pants")
actual = str(con_man.exception)
expected = "mock('pants') call not found among:\n\t[call(), call('yes again')]"
self.assertEqual(expected, actual)
    def test_assert_any_call_override_assert_any_message_with_bad_kwarg_multiple_calls(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Override message for kwarg match failure"""
self.some_mock()
self.some_mock(and_again=True)
self.some_mock("once", "more")
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_any_call(clothing="pants")
actual = str(con_man.exception)
expected = (
"mock(clothing='pants') call not found among:\n\t[call(), call(and_again=True), call('once', 'more')]"
)
self.assertEqual(expected, actual)
# test multiple calls with named mocks
    def test_assert_any_call_override_assert_any_message_with_bad_arg_named_mock(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Override message for arg match failure"""
local_mock = mock.MagicMock(name="the_doctor")
local_mock()
local_mock("yes again")
with self.assertRaises(AssertionError) as con_man:
local_mock.assert_any_call("pants")
actual = str(con_man.exception)
expected = "the_doctor('pants') call not found among:\n\t[the_doctor(), the_doctor('yes again')]"
self.assertEqual(expected, actual)
    def test_assert_any_call_override_assert_any_message_with_bad_kwarg_named_mock(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Override message for kwarg match failure"""
local_mock = mock.MagicMock(name="the_doctor")
local_mock()
local_mock(and_again=True)
local_mock("once", "more")
with self.assertRaises(AssertionError) as con_man:
local_mock.assert_any_call(clothing="pants")
actual = str(con_man.exception)
expected = (
"the_doctor(clothing='pants') call not found among:"
"\n\t[the_doctor(), the_doctor(and_again=True), the_doctor('once', 'more')]"
)
self.assertEqual(expected, actual)
def test_assert_any_call_dont_interfere_when_call_was_good(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Don't raise without a failure"""
self.some_mock("slippers")
self.some_mock("pants")
self.some_mock(1234)
try:
self.some_mock.assert_any_call("pants") # SUT
except AssertionError:
self.fail("I know this test looks funny, but I need to make sure we don't accidentally raise")
def test_assert_any_call_missing_call(self):
"""mocksey.tweaksey.tweak_mock.assert_any_call: Don't raise without a failure"""
self.some_mock.__str__.return_value = "dadadada"
with self.assertRaises(AssertionError) as con_man:
self.some_mock.assert_any_call(clothing="pants")
actual = str(con_man.exception)
expected = "Mock 'dadadada' was never called."
self.assertEqual(expected, actual)
if __name__ == "__main__":
unittest.main()
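For readers unfamiliar with the duck-punching pattern the suite above exercises, here is a minimal standalone sketch of the idea behind `tweak_mock`: wrap a stock `unittest.mock` assertion so that failures re-raise with a richer message. `wrap_assert` and the `"tweaked:"` prefix are illustrative assumptions, not mocksey's actual implementation.

```python
# Minimal sketch of the duck punching these tests exercise: wrap a stock
# mock assertion so failures re-raise with a richer message. wrap_assert
# and the "tweaked:" prefix are illustrative, not mocksey's real code.
from unittest import mock

def wrap_assert(original):
    def wrapper(self, *args, **kwargs):
        try:
            return original(self, *args, **kwargs)
        except AssertionError as err:
            raise AssertionError("tweaked: %s" % err) from None
    return wrapper

mock.NonCallableMock.assert_called_with = wrap_assert(
    mock.NonCallableMock.assert_called_with
)
```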
import gym
from .verifier import *
from .levelgen import *
from .objects import DoorWID, KeyWID, BoxWID
import random
import copy
from gym_minigrid.envs import Key, Ball, Box
class Oracle(Ball):
def __init__(self, color):
super().__init__(color)
def can_overlap(self):
return True
class Level_MultiKeys(RoomGridLevel):
def __init__(self, seed=None):
room_size = 6
super().__init__(
num_rows=1,
num_cols=3,
room_size=room_size,
seed=seed
)
self.real_key_color = None
def gen_mission(self):
#colors = self._rand_subset(COLOR_NAMES, 2)
colors = COLOR_NAMES[0:3]
n_keys = 3
# Add a door of color A connecting left and middle room
locked_door_id = self._rand_int(0, n_keys)
self.locked_door, _ = self.add_id_door(0, 0, id=locked_door_id, door_idx=0, color=colors[0], locked=True)
# Add a door of color B connecting middle and right room
self.add_id_door(1, 0, id=1, door_idx=0, color=colors[1], locked=False)
for i in range(n_keys):
if self._rand_int(0, 2) == 0:
key_col = 1
else:
key_col = 2
obj, _ = self.add_id_object(key_col, 0, id=i, kind="key", color=colors[i])
if i == locked_door_id:
self.real_key_color = colors[i]
self.place_agent(1, 0)
self.instrs = OpenInstr(ObjDesc(self.locked_door.type, color=self.locked_door.color))
def add_id_door(self, i, j, id, door_idx=None, color=None, locked=None):
"""
Add an id door to a room, connecting it to a neighbor
"""
room = self.get_room(i, j)
if door_idx == None:
# Need to make sure that there is a neighbor along this wall
# and that there is not already a door
while True:
door_idx = self._rand_int(0, 4)
if room.neighbors[door_idx] and room.doors[door_idx] is None:
break
if color == None:
color = self._rand_color()
if locked is None:
locked = self._rand_bool()
assert room.doors[door_idx] is None, "door already exists"
room.locked = locked
door = DoorWID(color, id=id, is_locked=locked)
pos = room.door_pos[door_idx]
self.grid.set(*pos, door)
door.cur_pos = pos
neighbor = room.neighbors[door_idx]
room.doors[door_idx] = door
neighbor.doors[(door_idx+2) % 4] = door
return door, pos
def add_id_object(self, i, j, id, kind=None, color=None):
"""
Add a new id object to room (i, j)
"""
if kind == None:
kind = self._rand_elem(['key', 'ball', 'box'])
if color == None:
color = self._rand_color()
# TODO: we probably want to add an Object.make helper function
assert kind in ['key', 'ball', 'box']
if kind == 'key':
obj = KeyWID(id, color)
elif kind == 'ball':
raise NotImplementedError
elif kind == 'box':
raise NotImplementedError
return self.place_in_room(i, j, obj)
class Level_MultiKeysGTAns(Level_MultiKeys):
def __init__(self, seed=None):
super(Level_MultiKeysGTAns, self).__init__(
seed=seed
)
def gen_mission(self):
super().gen_mission()
self.instrs = OpenInstrGTAns(ObjDesc(self.locked_door.type, color=self.locked_door.color), self.real_key_color + ' key')
class Level_GoToBall(RoomGridLevel):
"""
    Go to a ball placed in either the left or the right room.
"""
def __init__(self, seed=None):
room_size = 6
max_steps = 8
super().__init__(
num_rows=1,
num_cols=3,
room_size=room_size,
seed=seed,
#max_steps=max_steps
)
def gen_mission(self):
#colors = self._rand_subset(COLOR_NAMES, 2)
colors = COLOR_NAMES[0:2]
# Add a door of color A connecting left and middle room
self.add_door(0, 0, door_idx=0, color=colors[0], locked=False)
# Add a door of color B connecting middle and right room
self.add_door(1, 0, door_idx=0, color=colors[0], locked=False)
if self._rand_int(0, 2) == 0:
ball_col = 0
else:
ball_col = 2
obj, _ = self.add_object(ball_col, 0, kind="ball")
self.place_agent(1, 0)
self.instrs = GoToInstr(ObjDesc(obj.type))
class Level_GoToBallGTAns(RoomGridLevel):
"""
    Go to a ball in the left or right room; the instruction carries a ground-truth answer naming the ball's room.
"""
def __init__(self, seed=None):
room_size = 6
super().__init__(
num_rows=1,
num_cols=3,
room_size=room_size,
seed=seed
)
def gen_mission(self):
#colors = self._rand_subset(COLOR_NAMES, 2)
colors = COLOR_NAMES[0:2]
# Add a door of color A connecting left and middle room
self.add_door(0, 0, door_idx=0, color=colors[0], locked=False)
# Add a door of color B connecting middle and right room
self.add_door(1, 0, door_idx=0, color=colors[0], locked=False)
if self._rand_int(0, 2) == 0:
ball_col = 0
else:
ball_col = 2
obj, _ = self.add_object(ball_col, 0, kind="ball")
self.place_agent(1, 0)
#self.instrs = PickupInstr(ObjDesc(obj.type))
#self.instrs = GoToGTAns(ObjDesc(obj.type), 'Room{} Room{} Room{} Room{}'.format(ball_col, ball_col, ball_col, ball_col))
self.instrs = GoToGTAns(ObjDesc(obj.type), 'Ball in Room{}'.format(ball_col))
class Level_GoToTwoBall(RoomGridLevel):
"""
    Go to the ball of the specified color; two differently colored balls are placed in the side rooms.
"""
def __init__(self, seed=None):
room_size = 6
super().__init__(
num_rows=1,
num_cols=3,
room_size=room_size,
seed=seed
)
def gen_mission(self):
colors = COLOR_NAMES[0:2]
# Add a door of color A connecting left and middle room
self.add_door(0, 0, door_idx=0, color=colors[0], locked=False)
# Add a door of color B connecting middle and right room
self.add_door(1, 0, door_idx=0, color=colors[0], locked=False)
if self._rand_int(0, 2) == 0:
c1, c2 = colors[0], colors[1]
else:
c1, c2 = colors[1], colors[0]
if self._rand_int(0, 2) == 0:
ball1_col = 0
else:
ball1_col = 2
if self._rand_int(0, 2) == 0:
ball2_col = 0
else:
ball2_col = 2
ball1, _ = self.add_object(ball1_col, 0, kind="ball", color=c1)
ball2, _ = self.add_object(ball2_col, 0, kind="ball", color=c2)
target_ball = random.choice([ball1, ball2])
self.place_agent(1, 0)
self.instrs = GoToInstr(ObjDesc(target_ball.type, color=target_ball.color))
class Level_General(RoomGridLevel):
def __init__(self, seed=None):
room_size = 6
super().__init__(
num_rows=1,
num_cols=3,
room_size=room_size,
seed=seed
)
self.real_key_color = None
def gen_mission(self):
if self._rand_int(0, 2) == 0:
task = 'key'
else:
task = 'ball'
colors = COLOR_NAMES[0:3]
n_keys = 3
# Add a door of color A connecting left and middle room
locked_door_id = self._rand_int(0, n_keys)
self.locked_door, _ = self.add_id_door(0, 0, id=locked_door_id, door_idx=0, color=colors[0], locked=True if task == 'key' else False)
# Add a door of color B connecting middle and right room
self.add_id_door(1, 0, id=1, door_idx=0, color=colors[1], locked=False)
for i in range(n_keys):
if self._rand_int(0, 2) == 0:
key_col = 1
else:
key_col = 2
obj, _ = self.add_id_object(key_col, 0, id=i, kind="key", color=colors[i])
if i == locked_door_id:
self.real_key_color = colors[i]
if self._rand_int(0, 2) == 0:
ball_col = 0
else:
ball_col = 2
obj, _ = self.add_object(ball_col, 0, kind="ball")
if ball_col == 0:
loc = 'left'
else:
loc = 'right'
self.place_agent(1, 0)
if task == 'ball':
self.instrs = GoToGTAns(ObjDesc(obj.type), loc + ' Room')
else:
self.instrs = OpenInstrGTAns(ObjDesc(self.locked_door.type, color=self.locked_door.color), self.real_key_color + ' key')
def add_id_door(self, i, j, id, door_idx=None, color=None, locked=None):
"""
Add an id door to a room, connecting it to a neighbor
"""
room = self.get_room(i, j)
if door_idx == None:
# Need to make sure that there is a neighbor along this wall
# and that there is not already a door
while True:
door_idx = self._rand_int(0, 4)
if room.neighbors[door_idx] and room.doors[door_idx] is None:
break
if color == None:
color = self._rand_color()
if locked is None:
locked = self._rand_bool()
assert room.doors[door_idx] is None, "door already exists"
room.locked = locked
door = DoorWID(color, id=id, is_locked=locked)
pos = room.door_pos[door_idx]
self.grid.set(*pos, door)
door.cur_pos = pos
neighbor = room.neighbors[door_idx]
room.doors[door_idx] = door
neighbor.doors[(door_idx+2) % 4] = door
return door, pos
def add_id_object(self, i, j, id, kind=None, color=None):
"""
Add a new id object to room (i, j)
"""
if kind == None:
kind = self._rand_elem(['key', 'ball', 'box'])
if color == None:
color = self._rand_color()
# TODO: we probably want to add an Object.make helper function
assert kind in ['key', 'ball', 'box']
if kind == 'key':
obj = KeyWID(id, color)
elif kind == 'ball':
raise NotImplementedError
elif kind == 'box':
raise NotImplementedError
return self.place_in_room(i, j, obj)
class Level_GoToBallMaze(RoomGridLevel):
"""
Go to an object, the object may be in another room. Many distractors.
"""
def __init__(
self,
room_size=5,
num_rows=3,
num_cols=3,
num_dists=1,
doors_open=True,
seed=None,
all_doors=True
):
self.num_dists = num_dists
self.doors_open = doors_open
self.num_rows = num_rows
self.num_cols = num_cols
self.all_doors = all_doors
super().__init__(
num_rows=num_rows,
num_cols=num_cols,
room_size=room_size,
seed=seed
)
def gen_mission(self):
self.place_agent()
locked = False
if self.all_doors:
self.add_door(0, 0, door_idx=0, locked=locked)
self.add_door(1, 0, door_idx=0, locked=locked)
self.add_door(0, 1, door_idx=0, locked=locked)
self.add_door(1, 1, door_idx=0, locked=locked)
self.add_door(0, 2, door_idx=0, locked=locked)
self.add_door(1, 2, door_idx=0, locked=locked)
self.add_door(0, 0, door_idx=1, locked=locked)
self.add_door(1, 0, door_idx=1, locked=locked)
self.add_door(2, 0, door_idx=1, locked=locked)
self.add_door(0, 1, door_idx=1, locked=locked)
self.add_door(1, 1, door_idx=1, locked=locked)
self.add_door(2, 1, door_idx=1, locked=locked)
else:
self.connect_all()
num_colors = 2
obj, _ = self.add_object(self._rand_int(0, self.num_cols), self._rand_int(0, self.num_rows), 'ball', color=self._rand_elem(COLOR_NAMES[:num_colors]))
self.check_objs_reachable()
self.instrs = GoToInstr(ObjDesc(obj.type, obj.color))
if self.doors_open:
self.open_all_doors()


class Level_GoToBallMazeS5N1A0(Level_GoToBallMaze):
    def __init__(self, seed=None):
        super().__init__(room_size=5, num_dists=1, all_doors=False, seed=seed)


class Level_GoToBallMazeS8N1A0(Level_GoToBallMaze):
    def __init__(self, seed=None):
        super().__init__(room_size=8, num_dists=1, all_doors=False, seed=seed)


class Level_GoToBallMazeS5N1A1O0(Level_GoToBallMaze):
    def __init__(self, seed=None):
        super().__init__(room_size=5, num_dists=1, all_doors=True, doors_open=False, seed=seed)


class Level_GoToBallMazeS8N1A1O0(Level_GoToBallMaze):
    def __init__(self, seed=None):
        super().__init__(room_size=8, num_dists=1, all_doors=True, doors_open=False, seed=seed)


class Level_GoToObjMaze2(RoomGridLevel):
    """
    Go to an object, the object may be in another room. Many distractors.
    """

    def __init__(
        self,
        room_size=5,
        num_rows=3,
        num_cols=3,
        num_dists=3,
        doors_open=True,
        seed=None,
        all_doors=True,
        num_colors=2
    ):
        self.num_dists = num_dists
        self.doors_open = doors_open
        self.num_rows = num_rows
        self.num_cols = num_cols
        self.all_doors = all_doors
        self.num_colors = num_colors
        super().__init__(
            num_rows=num_rows,
            num_cols=num_cols,
            room_size=room_size,
            seed=seed
        )

    def gen_mission(self):
        self.place_agent()
        locked = False
        if self.all_doors:
            self.add_door(0, 0, door_idx=0, locked=locked)
            self.add_door(1, 0, door_idx=0, locked=locked)
            self.add_door(0, 1, door_idx=0, locked=locked)
            self.add_door(1, 1, door_idx=0, locked=locked)
            self.add_door(0, 2, door_idx=0, locked=locked)
            self.add_door(1, 2, door_idx=0, locked=locked)
            self.add_door(0, 0, door_idx=1, locked=locked)
            self.add_door(1, 0, door_idx=1, locked=locked)
            self.add_door(2, 0, door_idx=1, locked=locked)
            self.add_door(0, 1, door_idx=1, locked=locked)
            self.add_door(1, 1, door_idx=1, locked=locked)
            self.add_door(2, 1, door_idx=1, locked=locked)
        else:
            self.connect_all()
        objs = self.add_distractors(num_distractors=self.num_dists, all_unique=True, num_colors=self.num_colors)
        obj = self._rand_elem(objs)
        self.check_objs_reachable()
        self.instrs = GoToInstr(ObjDesc(obj.type, obj.color))
        if self.doors_open:
            self.open_all_doors()
    def add_distractors(self, i=None, j=None, num_distractors=10, all_unique=True, num_colors=2):
        """
        Add random objects that can potentially distract/confuse the agent.
        """

        # Collect a list of existing objects
        objs = []
        for row in self.room_grid:
            for room in row:
                for obj in room.objs:
                    objs.append((obj.type, obj.color))

        # List of distractors added
        dists = []

        while len(dists) < num_distractors:
            color = self._rand_elem(COLOR_NAMES[:num_colors])
            type = self._rand_elem(['key', 'ball', 'box'])
            obj = (type, color)

            if all_unique and obj in objs:
                continue

            # Add the object to a random room if no room specified
            room_i = i
            room_j = j
            if room_i is None:
                room_i = self._rand_int(0, self.num_cols)
            if room_j is None:
                room_j = self._rand_int(0, self.num_rows)

            dist, pos = self.add_object(room_i, room_j, *obj)

            objs.append(obj)
            dists.append(dist)

        return dists
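`add_distractors` handles the `all_unique` case by rejection sampling: it keeps redrawing `(type, color)` pairs until enough distinct ones have been collected. A standalone sketch of that loop (the names are ours, not the level's API):

```python
import random

def sample_unique_pairs(types, colors, n, rng):
    # Redraw until n distinct (type, color) pairs are collected; terminates
    # whenever n <= len(types) * len(colors).
    seen, out = set(), []
    while len(out) < n:
        pair = (rng.choice(types), rng.choice(colors))
        if pair in seen:
            continue
        seen.add(pair)
        out.append(pair)
    return out
```

The loop can only stall if more unique pairs are requested than the pool `types x colors` contains, which the level avoids by keeping `num_distractors` small relative to `num_colors`.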


class Level_GoToFavorite(RoomGridLevel):
    """
    Go to an object, the object may be in another room. Many distractors.
    """

    def __init__(
        self,
        room_size=5,
        num_rows=3,
        num_cols=3,
        num_dists=3,
        doors_open=True,
        seed=None,
        all_doors=True,
        num_colors=2,
        oracle_mode='call'
    ):
        self.num_dists = num_dists
        self.doors_open = doors_open
        self.num_rows = num_rows
        self.num_cols = num_cols
        self.all_doors = all_doors
        self.num_colors = num_colors
        self.names = ['jack', 'mary']
        self.others_fav = None
        self.oracle_mode = oracle_mode
        super().__init__(
            num_rows=num_rows,
            num_cols=num_cols,
            room_size=room_size,
            seed=seed
        )

    def gen_mission(self):
        agent_room_i = self._rand_int(0, self.num_cols)
        agent_room_j = self._rand_int(0, self.num_rows)
        self.place_agent(agent_room_i, agent_room_j)
        locked = False
        if self.all_doors:
            self.add_door(0, 0, door_idx=0, locked=locked)
            self.add_door(1, 0, door_idx=0, locked=locked)
            self.add_door(0, 1, door_idx=0, locked=locked)
            self.add_door(1, 1, door_idx=0, locked=locked)
            self.add_door(0, 2, door_idx=0, locked=locked)
            self.add_door(1, 2, door_idx=0, locked=locked)
            self.add_door(0, 0, door_idx=1, locked=locked)
            self.add_door(1, 0, door_idx=1, locked=locked)
            self.add_door(2, 0, door_idx=1, locked=locked)
            self.add_door(0, 1, door_idx=1, locked=locked)
            self.add_door(1, 1, door_idx=1, locked=locked)
            self.add_door(2, 1, door_idx=1, locked=locked)
        else:
            self.connect_all()
        objs = self.add_distractors(num_distractors=self.num_dists, all_unique=True, num_colors=self.num_colors)
        obj = self._rand_elem(objs)
        self.check_objs_reachable()
        # Shuffle with the environment's seeded RNG so missions stay reproducible
        self.np_random.shuffle(self.names)
        self.instrs = FavoriteInstr(ObjDesc(obj.type, obj.color), name=self.names[0])
        room_id = (obj.cur_pos[0] // self.room_size) + (obj.cur_pos[1] // self.room_size) * self.num_cols
        self.useful_answers = ['{} toy is {} {}'.format(self.names[0], obj.color, obj.type),
                               '{} {} in room{}'.format(obj.color, obj.type, room_id)]
        #print('Useful answer:', self.useful_answers)
        self.others_fav = self._rand_elem(objs)
        if self.doors_open:
            self.open_all_doors()
        if self.oracle_mode == 'single_move':
            oracle = Oracle(color='red')
            self.place_in_room(agent_room_i, agent_room_j, oracle)
            self.oracle = oracle
    def add_distractors(self, i=None, j=None, num_distractors=10, all_unique=True, num_colors=2):
        """
        Add random objects that can potentially distract/confuse the agent.
        """

        # Collect a list of existing objects
        objs = []
        for row in self.room_grid:
            for room in row:
                for obj in room.objs:
                    objs.append((obj.type, obj.color))

        # List of distractors added
        dists = []

        while len(dists) < num_distractors:
            color = self._rand_elem(COLOR_NAMES[:num_colors])
            type = self._rand_elem(['ball', 'box'])
            obj = (type, color)

            if all_unique and obj in objs:
                continue

            # Add the object to a random room if no room specified
            room_i = i
            room_j = j
            if room_i is None:
                room_i = self._rand_int(0, self.num_cols)
            if room_j is None:
                room_j = self._rand_int(0, self.num_rows)

            dist, pos = self.add_object(room_i, room_j, *obj)

            objs.append(obj)
            dists.append(dist)

        return dists


class Level_GoToFavoriteSingleMove(Level_GoToFavorite):
    def __init__(self, seed=None):
        super().__init__(seed=seed, oracle_mode='single_move')


class Level_ObjInLockedBox(RoomGridLevel):
    def __init__(self, seed=None, n_keys=3, room_size=9):
        self.target_key_color = None  # set in gen_mission
        self.n_keys = n_keys
        self.instrs = None
        super().__init__(
            num_rows=1,
            num_cols=1,
            room_size=room_size,
            seed=seed
        )

    def gen_mission(self):
        #colors = self._rand_subset(COLOR_NAMES, 2)
        colors = COLOR_NAMES[:self.n_keys]
        shuffled_colors = copy.deepcopy(colors)
        self.np_random.shuffle(shuffled_colors)
        shuffled_colors2 = copy.deepcopy(colors)
        self.np_random.shuffle(shuffled_colors2)
        target_id = self._rand_int(0, self.n_keys)
        for i in range(self.n_keys):
            key = KeyWID(color=colors[i], id=i)
            self.place_in_room(0, 0, key)
            ball = Ball(color=shuffled_colors[i])
            box = BoxWID(color=shuffled_colors2[i], id=i, contains=ball)
            self.place_in_room(0, 0, box)
            if i == target_id:
                self.instrs = FindInstr(ObjDesc(ball.type, ball.color))
                self.target_key_color = key.color
        assert self.instrs
        self.place_agent(0, 0)


class Level_ObjInLockedBoxOne(Level_ObjInLockedBox):
    def __init__(self, seed=None):
        super().__init__(seed=seed, n_keys=1)


class Level_ObjInLockedBoxTwo(Level_ObjInLockedBox):
    def __init__(self, seed=None, room_size=8):
        # Forward room_size; previously it was accepted but silently ignored
        super().__init__(seed=seed, n_keys=2, room_size=room_size)


class Level_ObjInLockedBoxThree(Level_ObjInLockedBox):
    def __init__(self, seed=None):
        super().__init__(seed=seed, n_keys=3)


class Level_ObjInBox(RoomGridLevel):
    def __init__(self, seed=None, n_boxes=2, room_size=9):
        self.instrs = None
        self.n_boxes = n_boxes
        super().__init__(
            num_rows=1,
            num_cols=1,
            room_size=room_size,
            seed=seed
        )

    def gen_mission(self):
        colors = COLOR_NAMES[:self.n_boxes + 1]
        shuffled_colors = copy.deepcopy(colors)
        self.np_random.shuffle(shuffled_colors)
        shuffled_colors2 = copy.deepcopy(colors)
        self.np_random.shuffle(shuffled_colors2)
        target_id = self._rand_int(0, self.n_boxes)
        for i in range(self.n_boxes):
            ball = Ball(color=shuffled_colors[i])
            box = Box(color=shuffled_colors2[i], contains=ball)
            self.place_in_room(0, 0, box)
            if i == target_id:
                self.instrs = FindOrFailInstr(ObjDesc(ball.type, ball.color))
        assert self.instrs
        self.place_agent(0, 0)


class Level_ObjInBoxV2(RoomGridLevel):
    def __init__(self, seed=None, n_boxes=2, room_size=9, enough_info=False, oracle_mode='single_call', failure_neg=False):
        self.instrs = None
        self.n_boxes = n_boxes
        self.names = ['jack', 'mary']
        self.enough_info = enough_info
        self.oracle_mode = oracle_mode  # single_call, multi_call, single_move, multi_move
        super().__init__(
            num_rows=1,
            num_cols=1,
            room_size=room_size,
            seed=seed,
            failure_neg=failure_neg
        )

    def gen_mission(self):
        if self.oracle_mode == 'multi_move':
            oracle1 = Oracle(color='red')
            _, self.mary_pos = self.place_in_room(0, 0, oracle1)
            oracle2 = Oracle(color='yellow')
            _, self.jack_pos = self.place_in_room(0, 0, oracle2)
        elif self.oracle_mode == 'single_move':
            oracle = Oracle(color='red')
            _, self.mary_pos = self.place_in_room(0, 0, oracle)
            self.jack_pos = self.mary_pos
        colors = COLOR_NAMES[:self.n_boxes]
        shuffled_colors = copy.deepcopy(colors)
        self.np_random.shuffle(shuffled_colors)
        shuffled_colors2 = copy.deepcopy(colors)
        self.np_random.shuffle(shuffled_colors2)
        target_id = self._rand_int(0, self.n_boxes)
        toy_owner = self._rand_int(0, 2)
        box_owner = 1 - toy_owner
        #self.np_random.shuffle(self.names)
        #print(self.names)
        for i in range(self.n_boxes):
            ball = Ball(color=shuffled_colors[i])
            box = Box(color=shuffled_colors2[i], contains=ball)
            self.place_in_room(0, 0, box)
            if i == target_id:
                self.box_owner = self.names[box_owner]
                self.target_box = box
                if self.enough_info:
                    self.instrs = FindOrFailInstr(
                        ObjDesc(ball.type, ball.color),
                        surface='find ' + self.names[toy_owner] + ' toy, which is in the {} box'.format(self.target_box.color)
                    )
                else:
                    self.instrs = FindOrFailInstr(
                        ObjDesc(ball.type, ball.color),
                        surface='find ' + self.names[toy_owner] + ' toy'
                    )
                self.useful_answers = ['{} toy is {} ball'.format(self.names[toy_owner], ball.color),
                                       '{} ball in {} suitcase'.format(ball.color, self.names[box_owner]),
                                       '{} suitcase is {} box'.format(self.names[box_owner], self.target_box.color)]
        assert self.instrs
        self.place_agent(0, 0)
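The `useful_answers` built above form a three-hop referential chain — owner to toy color, toy to suitcase owner, suitcase to box color — that an agent has to traverse to resolve the instruction. With hypothetical fixed values standing in for the level's random draws, the chain looks like:

```python
names = ['jack', 'mary']
toy_owner = 0                # index into names; drawn with _rand_int in the level
box_owner = 1 - toy_owner    # the other person owns the box ("suitcase")
ball_color, box_color = 'red', 'green'   # made-up colors for illustration
useful_answers = [
    '{} toy is {} ball'.format(names[toy_owner], ball_color),
    '{} ball in {} suitcase'.format(ball_color, names[box_owner]),
    '{} suitcase is {} box'.format(names[box_owner], box_color),
]
```

Only the conjunction of all three answers identifies the target box; each answer alone leaves the instruction ambiguous.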


class Level_ObjInBoxV2SingleMove(Level_ObjInBoxV2):
    def __init__(self, seed=None, n_boxes=2, room_size=9):
        super().__init__(seed=seed, n_boxes=n_boxes, room_size=room_size, enough_info=False, oracle_mode='single_move')


class Level_ObjInBoxV2MultiMove(Level_ObjInBoxV2):
    def __init__(self, seed=None, n_boxes=2, room_size=9):
        super().__init__(seed=seed, n_boxes=n_boxes, room_size=room_size, enough_info=False, oracle_mode='multi_move')


class Level_ObjInBoxMultiSufficient(Level_ObjInBoxV2):
    def __init__(self, seed=None, n_boxes=2, room_size=9):
        super().__init__(seed=seed, n_boxes=n_boxes, room_size=room_size, enough_info=True)


class Level_ObjInBoxMultiMultiOracle(Level_ObjInBoxV2):
    def __init__(self, seed=None, n_boxes=2, room_size=9):
        super().__init__(seed=seed, n_boxes=n_boxes, room_size=room_size, enough_info=False, oracle_mode='multi_move')


class Level_ObjInBoxMultiMultiOracleSufficient(Level_ObjInBoxV2):
    def __init__(self, seed=None, n_boxes=2, room_size=9):
        super().__init__(seed=seed, n_boxes=n_boxes, room_size=room_size, enough_info=True, oracle_mode='multi_move')


class Level_ObjInBoxV2Neg(Level_ObjInBoxV2):
    def __init__(self, seed=None):
        super().__init__(seed=seed, failure_neg=True)
# Register the levels in this file
register_levels(__name__, globals())
| 33.284483 | 185 | 0.586858 | 3,692 | 27,027 | 4.020314 | 0.065005 | 0.031126 | 0.031126 | 0.042242 | 0.853736 | 0.834804 | 0.804487 | 0.782591 | 0.77161 | 0.759078 | 0 | 0.021486 | 0.306027 | 27,027 | 811 | 186 | 33.325524 | 0.769887 | 0.085359 | 0 | 0.755137 | 0 | 0 | 0.018685 | 0 | 0 | 0 | 0 | 0.002466 | 0.011986 | 1 | 0.078767 | false | 0 | 0.011986 | 0.001712 | 0.148973 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0d297fc0df7e2cc1055967bb9d2cbf948687250e | 9271 | py | Python | HyperAPI/hdp_api/routes/nitro.py | fujianfeng/HyperAPI | 719d7794cc65f448084a7e59abdf73d0e59f3e5d | ["BSD-3-Clause"] | null | null | null | HyperAPI/hdp_api/routes/nitro.py | fujianfeng/HyperAPI | 719d7794cc65f448084a7e59abdf73d0e59f3e5d | ["BSD-3-Clause"] | null | null | null | HyperAPI/hdp_api/routes/nitro.py | fujianfeng/HyperAPI | 719d7794cc65f448084a7e59abdf73d0e59f3e5d | ["BSD-3-Clause"] | null | null | null |
from HyperAPI.hdp_api.base.resource import Resource
from HyperAPI.hdp_api.base.route import Route, SubRoute


class Nitro(Resource):
    name = "nitro"
    available_since = "1.0"
    removed_since = None

    class _getForecasts(Route):
        name = "getForecasts"
        httpMethod = Route.GET
        removed_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastsPost(SubRoute):
        name = "getForecasts"
        httpMethod = Route.POST
        available_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID
        }

    class _postForecasts(Route):
        name = "getForecasts"
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecast(Route):
        name = "getForecast"
        httpMethod = Route.GET
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _insertForecast(Route):
        name = "insertForecast"
        httpMethod = Route.POST
        available_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/add"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID
        }

    class _updateForecast(Route):
        name = "updateForecast"
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _updateForecastCoef(Route):
        name = "updateForecastCoef"
        available_since = '2.0'
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/updatecoef"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _deleteForecast(Route):
        name = "deleteForecast"
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/delete"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunes(Route):
        name = "getForecastTunes"
        httpMethod = Route.GET
        available_since = "1.0"
        removed_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _postForecastTunes(SubRoute):
        name = "getForecastTunes"
        httpMethod = Route.POST
        available_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _updateForecastTunes(Route):
        name = "updateForecastTunes"
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/update"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunesAggregate(Route):
        name = "getForecastTunesAggregateGeo"
        httpMethod = Route.GET
        available_since = "1.0"
        removed_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/aggregate"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunesAggregateGeo(Route):
        name = "getForecastTunesAggregateGeo"
        httpMethod = Route.POST
        available_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/aggregate/geo"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunesAggregateDepot(Route):
        name = "getForecastTunesAggregateDepot"
        httpMethod = Route.POST
        available_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/aggregate/depot"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _exportForecastTunes(Route):
        name = "exportForecastTunes"
        httpMethod = Route.GET
        available_since = "3.0"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/export"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _exportReport(Route):
        name = "exportReport"
        available_since = '2.0'
        httpMethod = Route.GET
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/exportreport"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunesStats(Route):
        name = "getForecastTunesStats"
        available_since = '3.0.2'
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/stats"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunesMetadata(Route):
        name = "getForecastTunesMetadata"
        available_since = '4.2.2'
        removed_since = '4.2.3'
        httpMethod = Route.GET
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/metadata"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastMetadata(Route):
        name = "getForecastMetadata"
        available_since = '4.2.3'
        httpMethod = Route.GET
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/metadata"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID
        }

    class _getForecastTunesModalities(Route):
        name = "getForecastTunesModalities"
        available_since = '4.2.2'
        httpMethod = Route.POST
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes/modalities"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'forecast_ID': Route.VALIDATOR_OBJECTID
        }

    class _getPosRecurrences(Route):
        name = "getPosRecurrences"
        httpMethod = Route.POST
        available_since = "4.2.10"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/pos/recurrences"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID
        }

    class _updatePosRecurrence(Route):
        name = "updatePosRecurrence"
        httpMethod = Route.POST
        available_since = "4.2.10"
        path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/pos/{pos_ID}/recurrences"
        _path_keys = {
            'project_ID': Route.VALIDATOR_OBJECTID,
            'dataset_ID': Route.VALIDATOR_OBJECTID,
            'pos_ID': Route.VALIDATOR_OBJECTID
        }
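Each route pairs a `path` template with `_path_keys` naming the placeholders it expects, so resolving a concrete URL is a straight substitution once each value has been validated. A sketch of that idea (not HyperAPI's own resolution code), with made-up 24-character strings standing in for validated ObjectIDs:

```python
path = "/nitro/projects/{project_ID}/datasets/{dataset_ID}/forecasts/{forecast_ID}/tunes"
values = {
    'project_ID': '0' * 24,   # placeholder ObjectID-shaped strings
    'dataset_ID': '1' * 24,
    'forecast_ID': '2' * 24,
}
# str.format consumes exactly the placeholders named in _path_keys
url = path.format(**values)
```

Keeping `_path_keys` in sync with the template is what lets the base `Route` class validate every parameter before the request is sent.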
b4912e72722b65643d376f74da6944218f4b3572 | 525 | py | Python | eval_covid19china_timm-regnetx_002_Clahe.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | ["MIT"] | null | null | null | eval_covid19china_timm-regnetx_002_Clahe.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | ["MIT"] | null | null | null | eval_covid19china_timm-regnetx_002_Clahe.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | ["MIT"] | null | null | null |
import os
ls = ["python main.py --configs configs/eval_covid19china_unetplusplus_timm-regnetx_002_0_Clahe.yml",
      "python main.py --configs configs/eval_covid19china_unetplusplus_timm-regnetx_002_1_Clahe.yml",
      "python main.py --configs configs/eval_covid19china_unetplusplus_timm-regnetx_002_2_Clahe.yml",
      "python main.py --configs configs/eval_covid19china_unetplusplus_timm-regnetx_002_3_Clahe.yml",
      "python main.py --configs configs/eval_covid19china_unetplusplus_timm-regnetx_002_4_Clahe.yml",
      ]

for l in ls:
    os.system(l)
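The five commands differ only in the fold index, so the same list could be generated from one template — a sketch of an equivalent formulation, not a required change to the script:

```python
base = ("python main.py --configs "
        "configs/eval_covid19china_unetplusplus_timm-regnetx_002_{}_Clahe.yml")
# One command per cross-validation fold 0..4
cmds = [base.format(fold) for fold in range(5)]
```

This keeps the fold count in a single place if more folds are added later.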
b49b4426af2972ecb7a028b3c500acc9a736064a | 4312 | py | Python | tests/locals/overload/test_otf.py | beloglazov/openstack-neat | a5a853ae2affb0cdc582e3ab641737f5ebd3d0a7 | ["Apache-2.0"] | 34 | 2015-01-04T08:02:37.000Z | 2022-02-19T14:43:47.000Z | tests/locals/overload/test_otf.py | MisterPup/OpenStack-Neat-Ceilometer | 4e6685ea1a9deb75d1186e60097a357251eaed8d | ["Apache-2.0"] | 3 | 2015-01-23T07:45:15.000Z | 2019-07-03T11:16:27.000Z | tests/locals/overload/test_otf.py | MisterPup/OpenStack-Neat-Ceilometer | 4e6685ea1a9deb75d1186e60097a357251eaed8d | ["Apache-2.0"] | 22 | 2015-01-14T17:54:46.000Z | 2021-08-09T06:09:17.000Z |
# Copyright 2012 Anton Beloglazov
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mocktest import *
from pyqcy import *
import neat.locals.overload.otf as otf
import logging
logging.disable(logging.CRITICAL)
class Otf(TestCase):

    def test_otf(self):
        state = {'overload': 0, 'total': 0}

        decision, state = otf.otf(0.5, 1.0, 4, 1.,
                                  [0.9], state)
        self.assertEqual(state, {'overload': 0, 'total': 1})
        self.assertFalse(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 1.,
                                  [0.9, 1.3], state)
        self.assertEqual(state, {'overload': 1, 'total': 2})
        self.assertFalse(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 1.,
                                  [0.9, 1.3, 1.1], state)
        self.assertEqual(state, {'overload': 2, 'total': 3})
        self.assertFalse(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 1.,
                                  [0.9, 1.3, 1.1, 1.2], state)
        self.assertEqual(state, {'overload': 3, 'total': 4})
        self.assertTrue(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 100.,
                                  [0.9, 1.3, 1.1, 1.2, 0.3], state)
        self.assertEqual(state, {'overload': 3, 'total': 5})
        self.assertFalse(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 1.,
                                  [0.9, 1.3, 1.1, 1.2, 1.3], state)
        self.assertEqual(state, {'overload': 4, 'total': 6})
        self.assertTrue(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 1.,
                                  [0.9, 1.3, 1.1, 1.2, 0.3, 0.2], state)
        self.assertEqual(state, {'overload': 4, 'total': 7})
        self.assertFalse(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 0.,
                                  [0.9, 1.3, 1.1, 1.2, 0.3, 0.2, 0.1], state)
        self.assertEqual(state, {'overload': 4, 'total': 8})
        self.assertFalse(decision)

        decision, state = otf.otf(0.5, 1.0, 4, 0.,
                                  [0.9, 1.3, 1.1, 1.2, 0.3, 0.2, 0.1, 0.1], state)
        self.assertEqual(state, {'overload': 4, 'total': 9})
        self.assertFalse(decision)

    def test_otf_factory(self):
        alg = otf.otf_factory(30, 0.,
                              {'otf': 0.5, 'threshold': 1.0, 'limit': 4})

        decision, state = alg([0.9], None)
        self.assertEqual(state, {'overload': 0, 'total': 1})
        self.assertFalse(decision)

        decision, state = alg([0.9, 1.3], state)
        self.assertEqual(state, {'overload': 1, 'total': 2})
        self.assertFalse(decision)

        decision, state = alg([0.9, 1.3, 1.1], state)
        self.assertEqual(state, {'overload': 2, 'total': 3})
        self.assertFalse(decision)

        decision, state = alg([0.9, 1.3, 1.1, 1.2], state)
        self.assertEqual(state, {'overload': 3, 'total': 4})
        self.assertTrue(decision)

        decision, state = alg([0.9, 1.3, 1.1, 1.2, 0.3], state)
        self.assertEqual(state, {'overload': 3, 'total': 5})
        self.assertFalse(decision)

        decision, state = alg([0.9, 1.3, 1.1, 1.2, 1.3], state)
        self.assertEqual(state, {'overload': 4, 'total': 6})
        self.assertTrue(decision)

        decision, state = alg([0.9, 1.3, 1.1, 1.2, 0.3, 0.2], state)
        self.assertEqual(state, {'overload': 4, 'total': 7})
        self.assertFalse(decision)

        decision, state = alg([0.9, 1.3, 1.1, 1.2, 0.3, 0.2, 0.1], state)
        self.assertEqual(state, {'overload': 4, 'total': 8})
        self.assertFalse(decision)

        decision, state = alg([0.9, 1.3, 1.1, 1.2, 0.3, 0.2, 0.1, 0.1], state)
        self.assertEqual(state, {'overload': 4, 'total': 9})
        self.assertFalse(decision)
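The assertions pin down the OTF (overload time fraction) behaviour: `state` counts observed points (`total`) and points at or above `threshold` (`overload`), and an overload is only declared once at least `limit` points have been seen, the latest utilization itself crosses the threshold, and the overload fraction reaches the `otf` parameter. One decision rule consistent with every assertion above — a sketch, not the `neat.locals.overload.otf` implementation, and it ignores the migration-time argument:

```python
def otf_sketch(otf_param, threshold, limit, utilization, state):
    # Count the new point, then count it as overloaded if it crosses threshold.
    state = dict(state, total=state['total'] + 1)
    if utilization[-1] >= threshold:
        state['overload'] += 1
    # Flag overload only with enough history, a currently-overloaded point,
    # and an overall overload fraction of at least otf_param.
    decision = (state['total'] >= limit and
                utilization[-1] >= threshold and
                state['overload'] / state['total'] >= otf_param)
    return decision, state
```

Requiring the latest point to be overloaded is what makes the fifth factory step return False even though the running fraction (3/5) exceeds 0.5.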
| 37.824561 | 82 | 0.540584 | 619 | 4,312 | 3.759289 | 0.161551 | 0.022346 | 0.154706 | 0.216588 | 0.712076 | 0.702192 | 0.702192 | 0.702192 | 0.702192 | 0.702192 | 0 | 0.089836 | 0.292672 | 4,312 | 113 | 83 | 38.159292 | 0.673115 | 0.128247 | 0 | 0.594595 | 0 | 0 | 0.070494 | 0 | 0 | 0 | 0 | 0 | 0.486486 | 1 | 0.027027 | false | 0 | 0.054054 | 0 | 0.094595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4f001e07c863dac7e7ac2cbc99e7b83217b2480 | 16969 | py | Python | tensorflow_checkpoint_reader/pb/tensorflow/core/framework/attr_value_pb2.py | shawwn/tensorflow-checkpoint-reader | f0e65548411e3bd66a07e36bb1850907a05952d0 | ["MIT"] | 1 | 2021-12-02T15:06:09.000Z | 2021-12-02T15:06:09.000Z | tensorflow_checkpoint_reader/pb/tensorflow/core/framework/attr_value_pb2.py | shawwn/tensorflow-checkpoint-reader | f0e65548411e3bd66a07e36bb1850907a05952d0 | ["MIT"] | null | null | null | tensorflow_checkpoint_reader/pb/tensorflow/core/framework/attr_value_pb2.py | shawwn/tensorflow-checkpoint-reader | f0e65548411e3bd66a07e36bb1850907a05952d0 | ["MIT"] | null | null | null |
'Generated protocol buffer code.'
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
_sym_db = _symbol_database.Default()
from ....tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
from ....tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
from ....tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
DESCRIPTOR = _descriptor.FileDescriptor(name='tensorflow/core/framework/attr_value.proto', package='tensorflow', syntax='proto3', serialized_options=b'\n\x18org.tensorflow.frameworkB\x0fAttrValueProtosP\x01ZQgithub.com/tensorflow/tensorflow/tensorflow/go/core/framework/attr_value_go_proto\xf8\x01\x01', create_key=_descriptor._internal_create_key, serialized_pb=b'\n*tensorflow/core/framework/attr_value.proto\x12\ntensorflow\x1a&tensorflow/core/framework/tensor.proto\x1a,tensorflow/core/framework/tensor_shape.proto\x1a%tensorflow/core/framework/types.proto"\xa6\x04\n\tAttrValue\x12\x0b\n\x01s\x18\x02 \x01(\x0cH\x00\x12\x0b\n\x01i\x18\x03 \x01(\x03H\x00\x12\x0b\n\x01f\x18\x04 \x01(\x02H\x00\x12\x0b\n\x01b\x18\x05 \x01(\x08H\x00\x12$\n\x04type\x18\x06 \x01(\x0e2\x14.tensorflow.DataTypeH\x00\x12-\n\x05shape\x18\x07 \x01(\x0b2\x1c.tensorflow.TensorShapeProtoH\x00\x12)\n\x06tensor\x18\x08 \x01(\x0b2\x17.tensorflow.TensorProtoH\x00\x12/\n\x04list\x18\x01 \x01(\x0b2\x1f.tensorflow.AttrValue.ListValueH\x00\x12(\n\x04func\x18\n \x01(\x0b2\x18.tensorflow.NameAttrListH\x00\x12\x15\n\x0bplaceholder\x18\t \x01(\tH\x00\x1a\xe9\x01\n\tListValue\x12\t\n\x01s\x18\x02 \x03(\x0c\x12\r\n\x01i\x18\x03 \x03(\x03B\x02\x10\x01\x12\r\n\x01f\x18\x04 \x03(\x02B\x02\x10\x01\x12\r\n\x01b\x18\x05 \x03(\x08B\x02\x10\x01\x12&\n\x04type\x18\x06 \x03(\x0e2\x14.tensorflow.DataTypeB\x02\x10\x01\x12+\n\x05shape\x18\x07 \x03(\x0b2\x1c.tensorflow.TensorShapeProto\x12\'\n\x06tensor\x18\x08 \x03(\x0b2\x17.tensorflow.TensorProto\x12&\n\x04func\x18\t \x03(\x0b2\x18.tensorflow.NameAttrListB\x07\n\x05value"\x92\x01\n\x0cNameAttrList\x12\x0c\n\x04name\x18\x01 \x01(\t\x120\n\x04attr\x18\x02 \x03(\x0b2".tensorflow.NameAttrList.AttrEntry\x1aB\n\tAttrEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12$\n\x05value\x18\x02 \x01(\x0b2\x15.tensorflow.AttrValue:\x028\x01B\x83\x01\n\x18org.tensorflow.frameworkB\x0fAttrValueProtosP\x01ZQgithub.com/tensorflow/tensorflow/tensorflow/go/core/framework/attr_value_go_proto\xf8\x01\x01b\x06proto3', dependencies=[tensorflow_dot_core_dot_framework_dot_tensor__pb2.DESCRIPTOR, tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2.DESCRIPTOR, tensorflow_dot_core_dot_framework_dot_types__pb2.DESCRIPTOR])
_ATTRVALUE_LISTVALUE = _descriptor.Descriptor(name='ListValue', full_name='tensorflow.AttrValue.ListValue', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='s', full_name='tensorflow.AttrValue.ListValue.s', index=0, number=2, type=12, cpp_type=9, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='i', full_name='tensorflow.AttrValue.ListValue.i', index=1, number=3, type=3, cpp_type=2, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\x10\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='f', full_name='tensorflow.AttrValue.ListValue.f', index=2, number=4, type=2, cpp_type=6, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\x10\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='b', full_name='tensorflow.AttrValue.ListValue.b', index=3, number=5, type=8, cpp_type=7, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\x10\x01', file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='type', full_name='tensorflow.AttrValue.ListValue.type', index=4, number=6, type=14, cpp_type=8, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=b'\x10\x01', 
file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='shape', full_name='tensorflow.AttrValue.ListValue.shape', index=5, number=7, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='tensor', full_name='tensorflow.AttrValue.ListValue.tensor', index=6, number=8, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='func', full_name='tensorflow.AttrValue.ListValue.func', index=7, number=9, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=492, serialized_end=725)
_ATTRVALUE = _descriptor.Descriptor(name='AttrValue', full_name='tensorflow.AttrValue', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='s', full_name='tensorflow.AttrValue.s', index=0, number=2, type=12, cpp_type=9, label=1, has_default_value=False, default_value=b'', message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='i', full_name='tensorflow.AttrValue.i', index=1, number=3, type=3, cpp_type=2, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='f', full_name='tensorflow.AttrValue.f', index=2, number=4, type=2, cpp_type=6, label=1, has_default_value=False, default_value=float(0), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='b', full_name='tensorflow.AttrValue.b', index=3, number=5, type=8, cpp_type=7, label=1, has_default_value=False, default_value=False, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='type', full_name='tensorflow.AttrValue.type', index=4, number=6, type=14, cpp_type=8, label=1, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), 
_descriptor.FieldDescriptor(name='shape', full_name='tensorflow.AttrValue.shape', index=5, number=7, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='tensor', full_name='tensorflow.AttrValue.tensor', index=6, number=8, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='list', full_name='tensorflow.AttrValue.list', index=7, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='func', full_name='tensorflow.AttrValue.func', index=8, number=10, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='placeholder', full_name='tensorflow.AttrValue.placeholder', index=9, number=9, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[_ATTRVALUE_LISTVALUE], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], 
oneofs=[_descriptor.OneofDescriptor(name='value', full_name='tensorflow.AttrValue.value', index=0, containing_type=None, create_key=_descriptor._internal_create_key, fields=[])], serialized_start=184, serialized_end=734)
_NAMEATTRLIST_ATTRENTRY = _descriptor.Descriptor(name='AttrEntry', full_name='tensorflow.NameAttrList.AttrEntry', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='key', full_name='tensorflow.NameAttrList.AttrEntry.key', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='value', full_name='tensorflow.NameAttrList.AttrEntry.value', index=1, number=2, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[], enum_types=[], serialized_options=b'8\x01', is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=817, serialized_end=883)
_NAMEATTRLIST = _descriptor.Descriptor(name='NameAttrList', full_name='tensorflow.NameAttrList', filename=None, file=DESCRIPTOR, containing_type=None, create_key=_descriptor._internal_create_key, fields=[_descriptor.FieldDescriptor(name='name', full_name='tensorflow.NameAttrList.name', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=b''.decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key), _descriptor.FieldDescriptor(name='attr', full_name='tensorflow.NameAttrList.attr', index=1, number=2, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key)], extensions=[], nested_types=[_NAMEATTRLIST_ATTRENTRY], enum_types=[], serialized_options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[], serialized_start=737, serialized_end=883)
_ATTRVALUE_LISTVALUE.fields_by_name['type'].enum_type = tensorflow_dot_core_dot_framework_dot_types__pb2._DATATYPE
_ATTRVALUE_LISTVALUE.fields_by_name['shape'].message_type = tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2._TENSORSHAPEPROTO
_ATTRVALUE_LISTVALUE.fields_by_name['tensor'].message_type = tensorflow_dot_core_dot_framework_dot_tensor__pb2._TENSORPROTO
_ATTRVALUE_LISTVALUE.fields_by_name['func'].message_type = _NAMEATTRLIST
_ATTRVALUE_LISTVALUE.containing_type = _ATTRVALUE
_ATTRVALUE.fields_by_name['type'].enum_type = tensorflow_dot_core_dot_framework_dot_types__pb2._DATATYPE
_ATTRVALUE.fields_by_name['shape'].message_type = tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2._TENSORSHAPEPROTO
_ATTRVALUE.fields_by_name['tensor'].message_type = tensorflow_dot_core_dot_framework_dot_tensor__pb2._TENSORPROTO
_ATTRVALUE.fields_by_name['list'].message_type = _ATTRVALUE_LISTVALUE
_ATTRVALUE.fields_by_name['func'].message_type = _NAMEATTRLIST
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['s'])
_ATTRVALUE.fields_by_name['s'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['i'])
_ATTRVALUE.fields_by_name['i'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['f'])
_ATTRVALUE.fields_by_name['f'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['b'])
_ATTRVALUE.fields_by_name['b'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['type'])
_ATTRVALUE.fields_by_name['type'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['shape'])
_ATTRVALUE.fields_by_name['shape'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['tensor'])
_ATTRVALUE.fields_by_name['tensor'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['list'])
_ATTRVALUE.fields_by_name['list'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['func'])
_ATTRVALUE.fields_by_name['func'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_ATTRVALUE.oneofs_by_name['value'].fields.append(_ATTRVALUE.fields_by_name['placeholder'])
_ATTRVALUE.fields_by_name['placeholder'].containing_oneof = _ATTRVALUE.oneofs_by_name['value']
_NAMEATTRLIST_ATTRENTRY.fields_by_name['value'].message_type = _ATTRVALUE
_NAMEATTRLIST_ATTRENTRY.containing_type = _NAMEATTRLIST
_NAMEATTRLIST.fields_by_name['attr'].message_type = _NAMEATTRLIST_ATTRENTRY
DESCRIPTOR.message_types_by_name['AttrValue'] = _ATTRVALUE
DESCRIPTOR.message_types_by_name['NameAttrList'] = _NAMEATTRLIST
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
AttrValue = _reflection.GeneratedProtocolMessageType('AttrValue', (_message.Message,), {'ListValue': _reflection.GeneratedProtocolMessageType('ListValue', (_message.Message,), {'DESCRIPTOR': _ATTRVALUE_LISTVALUE, '__module__': 'tensorflow.core.framework.attr_value_pb2'}), 'DESCRIPTOR': _ATTRVALUE, '__module__': 'tensorflow.core.framework.attr_value_pb2'})
_sym_db.RegisterMessage(AttrValue)
_sym_db.RegisterMessage(AttrValue.ListValue)
NameAttrList = _reflection.GeneratedProtocolMessageType('NameAttrList', (_message.Message,), {'AttrEntry': _reflection.GeneratedProtocolMessageType('AttrEntry', (_message.Message,), {'DESCRIPTOR': _NAMEATTRLIST_ATTRENTRY, '__module__': 'tensorflow.core.framework.attr_value_pb2'}), 'DESCRIPTOR': _NAMEATTRLIST, '__module__': 'tensorflow.core.framework.attr_value_pb2'})
_sym_db.RegisterMessage(NameAttrList)
_sym_db.RegisterMessage(NameAttrList.AttrEntry)
DESCRIPTOR._options = None
_ATTRVALUE_LISTVALUE.fields_by_name['i']._options = None
_ATTRVALUE_LISTVALUE.fields_by_name['f']._options = None
_ATTRVALUE_LISTVALUE.fields_by_name['b']._options = None
_ATTRVALUE_LISTVALUE.fields_by_name['type']._options = None
_NAMEATTRLIST_ATTRENTRY._options = None
| 265.140625 | 4,187 | 0.829572 | 2,385 | 16,969 | 5.538784 | 0.077568 | 0.042998 | 0.066162 | 0.057229 | 0.793717 | 0.726646 | 0.706132 | 0.690613 | 0.666086 | 0.647918 | 0 | 0.035589 | 0.039602 | 16,969 | 63 | 4,188 | 269.349206 | 0.774989 | 0.001827 | 0 | 0 | 1 | 0.548387 | 0.173916 | 0.133604 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.112903 | 0 | 0.112903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4fcbedb86337d3e77e789237397d77a12c64362 | 86 | py | Python | tests/__init__.py | TSFDlib/TSFEL | a4c30acc93dd3717bf93b19e59c3dc927903caf2 | [
"MIT"
] | 1 | 2020-08-02T03:26:32.000Z | 2020-08-02T03:26:32.000Z | tests/__init__.py | TSFDlib/TSFEL | a4c30acc93dd3717bf93b19e59c3dc927903caf2 | [
"MIT"
] | null | null | null | tests/__init__.py | TSFDlib/TSFEL | a4c30acc93dd3717bf93b19e59c3dc927903caf2 | [
"MIT"
] | null | null | null | #from TSFEL.tests.test_features import *
#from TSFEL.tests.test_get_features import *
| 28.666667 | 44 | 0.813953 | 13 | 86 | 5.153846 | 0.538462 | 0.268657 | 0.41791 | 0.537313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 86 | 2 | 45 | 43 | 0.858974 | 0.953488 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2eb62edb918a2367feb95ec39dbc18c6f7ebde50 | 269 | py | Python | helloworld/message.py | thatscotdatasci/aws-lambda-hello-world | 3decd68e4651ce186958e0a020a67584d7357d05 | [
"MIT"
] | null | null | null | helloworld/message.py | thatscotdatasci/aws-lambda-hello-world | 3decd68e4651ce186958e0a020a67584d7357d05 | [
"MIT"
] | null | null | null | helloworld/message.py | thatscotdatasci/aws-lambda-hello-world | 3decd68e4651ce186958e0a020a67584d7357d05 | [
"MIT"
] | null | null | null | class Message(object):
def __init__(self, first_name, last_name):
self._first_name = first_name
self._last_name = last_name
@property
def message(self):
return f'Hello-World! Nice to meet you {self._first_name} {self._last_name}!'
| 26.9 | 85 | 0.672862 | 38 | 269 | 4.342105 | 0.473684 | 0.218182 | 0.236364 | 0.206061 | 0.254545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226766 | 269 | 9 | 86 | 29.888889 | 0.793269 | 0 | 0 | 0 | 0 | 0 | 0.249071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.571429 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
2edbe634151648272d6b08f67cc348532b7040d4 | 17,068 | py | Python | 3GPP Meeting Helper/tests/test_tdocs_by_agenda.py | telekom/3gpp-meeting-tools | 1276a62835fd595487aa817c9500c42c3f5e35f3 | [
"MIT"
] | null | null | null | 3GPP Meeting Helper/tests/test_tdocs_by_agenda.py | telekom/3gpp-meeting-tools | 1276a62835fd595487aa817c9500c42c3f5e35f3 | [
"MIT"
] | 1 | 2020-09-04T06:26:41.000Z | 2020-09-04T06:26:41.000Z | 3GPP Meeting Helper/tests/test_tdocs_by_agenda.py | telekom/3gpp-meeting-tools | 1276a62835fd595487aa817c9500c42c3f5e35f3 | [
"MIT"
] | 3 | 2020-06-12T02:09:48.000Z | 2021-08-30T10:36:37.000Z | import unittest
import parsing.html as html_parser
import os
class Test_test_tdocs_by_agenda(unittest.TestCase):
def test_129Bis(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '129Bis.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '129BIS', 'Expected 129BIS')
self.assertEqual(len(meeting.tdocs), 1529, 'Expected TDoc entries')
self.assertEqual(meeting.tdocs.at['S2-1812368','Revised to'], 'S2-1813194', 'Expected Revised do S2-1813194')
self.assertEqual(meeting.tdocs.at['S2-1813194','Revision of'], 'S2-1812368', 'Expected Revision of S2-1812368')
self.assertEqual(meeting.tdocs.at['S2-1812440','Merged to'], 'S2-1813085', 'Expected Merged to S2-1813085')
self.assertEqual(meeting.tdocs.at['S2-1813085','Merge of'], 'S2-1812440', 'Expected Merge of S2-1812440')
        # Meetings 'from S2#129' or similar (i.e. postponed from a past meeting) are not taken as past references. This is so as to make reporting easier and to make the .htm file self-contained
self.assertEqual(meeting.tdocs.at['S2-1813085','Original TDocs'], 'S2-1812299, S2-1812440', 'Expected Original TDocs S2-18122990,S2-1812440')
self.assertEqual(meeting.tdocs.at['S2-1813085','Final TDocs'], 'S2-1813308', 'Expected Final TDocs S2-1813308')
# Check some contribution mappings
self.assertTrue(meeting.tdocs.at['S2-1811737','Contributed by DT'], 'DT contribution')
self.assertTrue(meeting.tdocs.at['S2-1811737','Contributed by TIM'], 'Telecom Italia contribution')
self.assertTrue(meeting.tdocs.at['S2-1811737','Contributed by Intel'], 'Intel contribution')
self.assertFalse(meeting.tdocs.at['S2-1811737','Contributed by Qualcomm'], 'Qualcomm contribution')
self.assertFalse(meeting.tdocs.at['S2-1811737','Contributed by Nokia'], 'Nokia contribution')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 16, 'Length of the "Others" contributors')
def test_129(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '129.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '129', 'Expected 129')
self.assertEqual(len(meeting.tdocs), 1508, 'Expected TDoc entries')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 19, 'Length of the "Others" contributors')
def test_128Bis(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '128Bis.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '128BIS', 'Expected 128BIS')
self.assertEqual(len(meeting.tdocs), 1401, 'Expected TDoc entries')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 14, 'Length of the "Others" contributors')
def test_128(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '128.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '128', 'Expected 128')
self.assertEqual(len(meeting.tdocs), 1301, 'Expected TDoc entries')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 13, 'Length of the "Others" contributors')
def test_inbox(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', 'inbox.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '129BIS', 'Expected 129BIS')
self.assertEqual(len(meeting.tdocs), 1640, 'Expected TDoc entries')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 16, 'Length of the "Others" contributors')
def test_130_1(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.22 TdocsByAgenda.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '130', 'Expected 130')
self.assertEqual(len(meeting.tdocs), 853, 'Expected TDoc entries')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 19, 'Length of the "Others" contributors')
def test_130_2(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.24 TdocsByAgenda.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '130', 'Expected 130')
self.assertEqual(len(meeting.tdocs), 1026, 'Expected TDoc entries')
# Check the length of the "Others" mapping
self.assertEqual(len(meeting.others_cosigners), 19, 'Length of the "Others" contributors')
def test_130_3(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.31 130.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
        self.assertEqual(meeting.meeting_number, '130', 'Expected 130')
self.assertEqual(len(meeting.tdocs), 1306, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-1901378',:]
revised_to = test_row['Revised to']
revision_of = test_row['Revision of']
merged_to = test_row['Merged to']
merge_of = test_row['Merge of']
self.assertEqual(revised_to, '')
self.assertEqual(revision_of, 'S2-1901260')
self.assertEqual(merged_to, '')
self.assertEqual(merge_of, '')
        # Meetings 'from S2#129' or similar (i.e. postponed from a past meeting) are not taken as past references. This is so as to make reporting easier and to make the .htm file self-contained
original_tdocs = test_row['Original TDocs']
final_tdocs = test_row['Final TDocs']
self.assertEqual(original_tdocs, 'S2-1900142, S2-1900147, S2-1900281, S2-1900585, S2-1900587')
self.assertEqual(final_tdocs, 'S2-1901378')
# Check some contribution mappings
self.assertTrue(test_row['Contributed by DT'], 'DT contribution')
self.assertFalse(test_row['Contributed by TIM'], 'Not a Telecom Italia contribution')
        self.assertFalse(test_row['Contributed by Intel'], 'Not an Intel contribution')
self.assertTrue(test_row['Contributed by ZTE'], 'ZTE contribution')
self.assertTrue(test_row['Contributed by Sprint'], 'Nokia Sprint')
def test_130_4(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.31 130.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
        self.assertEqual(meeting.meeting_number, '130', 'Expected 130')
self.assertEqual(len(meeting.tdocs), 1306, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-1900064',:]
revised_to = test_row['Revised to']
revision_of = test_row['Revision of']
merged_to = test_row['Merged to']
merge_of = test_row['Merge of']
self.assertEqual(revised_to, 'S2-1901105')
self.assertEqual(revision_of, '')
self.assertEqual(merged_to, '')
self.assertEqual(merge_of, 'S2-1900142, S2-1900585, S2-1900281, S2-1900147, S2-1900587')
        # Meetings 'from S2#129' or similar (i.e. postponed from a past meeting) are not taken as past references. This is so as to make reporting easier and to make the .htm file self-contained
original_tdocs = test_row['Original TDocs']
final_tdocs = test_row['Final TDocs']
self.assertEqual(original_tdocs, 'S2-1900142, S2-1900147, S2-1900281, S2-1900585, S2-1900587')
self.assertEqual(final_tdocs, 'S2-1901378')
# Check some contribution mappings
self.assertTrue(test_row['Contributed by DT'], 'DT contribution')
self.assertFalse(test_row['Contributed by TIM'], 'Not a Telecom Italia contribution')
        self.assertFalse(test_row['Contributed by Intel'], 'Not an Intel contribution')
self.assertFalse(test_row['Contributed by ZTE'], 'ZTE contribution')
self.assertFalse(test_row['Contributed by Sprint'], 'Nokia Sprint')
def test_date_130(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.31 130.htm')
datetime = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertEqual(datetime.year, 2019)
self.assertEqual(datetime.month, 1)
self.assertEqual(datetime.day, 25)
self.assertEqual(datetime.hour, 17)
self.assertEqual(datetime.minute, 25)
def test_date_129Bis(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '129Bis.htm')
datetime = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertEqual(datetime.year, 2018)
self.assertEqual(datetime.month, 11)
self.assertEqual(datetime.day, 30)
self.assertEqual(datetime.hour, 7)
self.assertEqual(datetime.minute, 14)
def test_date_129(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '129.htm')
datetime = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertEqual(datetime.year, 2018)
self.assertEqual(datetime.month, 10)
self.assertEqual(datetime.day, 27)
self.assertEqual(datetime.hour, 10)
self.assertEqual(datetime.minute, 18)
def test_date_128Bis(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '128Bis.htm')
datetime = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertEqual(datetime.year, 2018)
self.assertEqual(datetime.month, 8)
self.assertEqual(datetime.day, 31)
self.assertEqual(datetime.hour, 13)
self.assertEqual(datetime.minute, 29)
def test_date_128(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '128.htm')
datetime = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertEqual(datetime.year, 2018)
self.assertEqual(datetime.month, 7)
self.assertEqual(datetime.day, 13)
self.assertEqual(datetime.hour, 14)
self.assertEqual(datetime.minute, 38)
def test_date_comparison_128_128Bis(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '128Bis.htm')
datetime_128Bis = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '128.htm')
datetime_128 = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertGreater(datetime_128Bis, datetime_128)
def test_corrupt_dtdocs_by_agenda(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.07.02 TDocsByAgenda_wrong.html')
datetime = html_parser.tdocs_by_agenda.get_tdoc_by_agenda_date(file_name)
self.assertEqual(datetime.year, 2019)
self.assertEqual(datetime.month, 6)
self.assertEqual(datetime.day, 28)
self.assertEqual(datetime.hour, 16)
self.assertEqual(datetime.minute, 36)
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, 'Unknown', 'Expected Unknown')
self.assertEqual(len(meeting.tdocs), 1, 'Expected TDoc entries')
def test_exported_dtdocs_by_agenda_v1(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.24 TdocsByAgenda.htm')
meeting = html_parser.tdocs_by_agenda(file_name, v=1)
self.assertEqual(meeting.meeting_number, '130', 'Expected 130')
self.assertEqual(len(meeting.tdocs), 1026, 'Expected TDoc entries')
def test_exported_dtdocs_by_agenda_v2(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2019.01.24 TdocsByAgenda.htm')
meeting = html_parser.tdocs_by_agenda(file_name, v=2)
self.assertEqual(meeting.meeting_number, '130', 'Expected 130')
self.assertEqual(len(meeting.tdocs), 1026, 'Expected TDoc entries')
def test_exported_dtdocs_by_agenda_134_v2(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '134.htm')
meeting = html_parser.tdocs_by_agenda(file_name, v=2)
self.assertEqual(meeting.meeting_number, '134', 'Expected 134')
self.assertEqual(len(meeting.tdocs), 1802, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-1908578',:]
self.assertEqual(test_row['Revision of'], 'S2-1908544', 'Expected S2-1908544')
def test_136(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '136.html')
meeting = html_parser.tdocs_by_agenda(file_name, v=2)
self.assertEqual(meeting.meeting_number, '136', 'Expected 136')
self.assertEqual(len(meeting.tdocs), 1201, 'Expected TDoc entries')
def test_136_v2(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '136_v2.html')
meeting = html_parser.tdocs_by_agenda(file_name, v=2)
self.assertEqual(meeting.meeting_number, '136', 'Expected 136')
self.assertEqual(len(meeting.tdocs), 1762, 'Expected TDoc entries')
def test_136_v3(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '136_v3.html')
meeting = html_parser.tdocs_by_agenda(file_name, v=2)
self.assertEqual(meeting.meeting_number, '136', 'Expected 136')
self.assertEqual(len(meeting.tdocs), 1815, 'Expected TDoc entries')
def test_136_missing_ais(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '136_missing_AIs.html')
meeting = html_parser.tdocs_by_agenda(file_name, v=2)
self.assertEqual(meeting.meeting_number, '136', 'Expected 136')
self.assertEqual(len(meeting.tdocs), 1857, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-1910969',:]
self.assertEqual(test_row['Title'], 'LS from SA WG3LI: LS on Enhancing Location Information Reporting with Dual Connectivity')
self.assertEqual(test_row['Result'], 'Noted')
self.assertEqual(test_row['Comments'], 'Noted')
self.assertEqual(test_row['AI'], '', 'Expected empty string')
def test_137e(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2020.02.24 TdocsByAgenda SA2-137E.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '137', 'Expected 137')
self.assertEqual(len(meeting.tdocs), 544, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-2001981',:]
self.assertEqual(test_row['AI'], '5.4', 'Expected AI 5.4')
def test_137e_v2(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '2020.02.29 TdocsByAgenda SA2-137E.htm')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '137', 'Expected 137')
self.assertEqual(len(meeting.tdocs), 713, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-2001981',:]
self.assertEqual(test_row['AI'], '5.4', 'Expected AI 5.4')
def test_138e(self):
file_name = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'tdocs_by_agenda', '138E_final.html')
meeting = html_parser.tdocs_by_agenda(file_name)
self.assertEqual(meeting.meeting_number, '138E', 'Expected 138E')
self.assertEqual(len(meeting.tdocs), 824, 'Expected TDoc entries')
test_row = meeting.tdocs.loc['S2-2002769',:]
self.assertEqual(test_row['AI'], '7.10.3', 'Expected AI 7.10.3')
self.assertEqual(test_row['Title'], "23.502 CR2190 (Rel-16, 'F'): UE radio capability for 5GS and IWK")
self.assertEqual(test_row['Result'], 'Revised')
test_row = meeting.tdocs.loc['S2-2002694',:]
self.assertEqual(test_row['AI'], '6.3', 'Expected AI 6.3')
self.assertEqual(test_row['Title'], "23.501 CR2226 (Rel-16, 'F'): Multiple N6 interfaces per Network Instance for Ethernet traffic")
self.assertEqual(test_row['Comments'], 'Noted')
self.assertEqual(test_row['Result'], 'Noted')
if __name__ == '__main__':
unittest.main()
| 52.036585 | 194 | 0.694399 | 2,311 | 17,068 | 4.907832 | 0.099957 | 0.144154 | 0.064186 | 0.041968 | 0.829219 | 0.804091 | 0.779316 | 0.753923 | 0.738318 | 0.710192 | 0 | 0.071266 | 0.175416 | 17,068 | 327 | 195 | 52.195719 | 0.734617 | 0.054898 | 0 | 0.443966 | 0 | 0.00431 | 0.233524 | 0.002855 | 0 | 0 | 0 | 0 | 0.538793 | 1 | 0.112069 | false | 0 | 0.012931 | 0 | 0.12931 | 0.008621 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2ee7fc27fef266272059ad193fc765c45eaf1edb | 2,444 | py | Python | argsConf/plotArgs.py | andre-cavalheiro/datascience-project | f9e7187faacc2e3e676c7db0e34e354e8c659c3c | [
"Apache-2.0"
] | null | null | null | argsConf/plotArgs.py | andre-cavalheiro/datascience-project | f9e7187faacc2e3e676c7db0e34e354e8c659c3c | [
"Apache-2.0"
] | 14 | 2019-10-31T13:38:55.000Z | 2019-11-22T13:51:37.000Z | argsConf/plotArgs.py | andre-cavalheiro/datascience-project | f9e7187faacc2e3e676c7db0e34e354e8c659c3c | [
"Apache-2.0"
] | null | null | null | from libs.standardPlots import *
argListPlots = [
{
'name': 'singlePlotTypes',
'type': str,
'default': None,
'required': False,
'help': '',
'possibilities': [
('line', plotDemStats, [('yAxes', 'standard'), ('logFile', 'shared'),
('ymin', 'shared'), ('ymax', 'shared')])
]
},
{
'name': 'seqPlotTypes',
'type': str,
'default': None,
'required': False,
'help': '',
'possibilities': [
('line', plotDemStats, [('yAxes', 'standard'), ('pallets', 'shared'), ('logFile', 'shared'), ('yLabelsLine', 'standard'),
('ymin', 'shared'), ('ymax', 'shared'), ('joinYToLabel', 'standard')]),
('scatter', scaterThemPlots, [('ymin', 'shared'), ('ymax', 'shared'), ('logFile', 'shared'),
('yLabelsScatter', 'shared'), ('annotations', 'shared')]),
('scatterNx', scaterThemPlotsNx, [('ymin', 'shared'), ('ymax', 'shared'), ('logFile', 'shared'),
('yLabelsScatter', 'shared'), ('distanceToLabel', 'shared')]),
('box', plotThemBoxes, [('yAxesBox', 'standard'), ('ymin', 'shared'), ('ymax', 'shared'), ('logFile', 'shared'),
('yLabelsBox', 'shared')])
]
},
{
'name': 'onlyPlotTypes',
'type': str,
'default': None,
'required': False,
'help': '',
'possibilities': [
('line', plotDemStats,
[('yAxes', 'standard'), ('pallets', 'shared'), ('logFile', 'shared'), ('yLabelsLine', 'standard'),
('ymin', 'shared'), ('ymax', 'shared'), ('joinYToLabel', 'standard')]),
('scatter', scaterThemPlots, [('ymin', 'shared'), ('ymax', 'shared'), ('logFile', 'shared'),
('yLabelsScatter', 'shared'), ('annotations', 'shared')]),
('scatterNx', scaterThemPlotsNx, [('ymin', 'shared'), ('ymax', 'shared'), ('logFile', 'shared'),
('yLabelsScatter', 'shared'), ('distanceToLabel', 'shared')]),
('box', plotThemBoxes,
[('yAxesBox', 'standard'), ('ymin', 'shared'), ('ymax', 'shared'), ('logFile', 'shared'),
('yLabelsBox', 'shared')])
]
},
] | 47.921569 | 134 | 0.438216 | 149 | 2,444 | 7.187919 | 0.261745 | 0.109244 | 0.117647 | 0.168067 | 0.88422 | 0.88422 | 0.88422 | 0.88422 | 0.88422 | 0.88422 | 0 | 0 | 0.335106 | 2,444 | 51 | 135 | 47.921569 | 0.659077 | 0 | 0 | 0.54 | 0 | 0 | 0.355828 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02 | 0 | 0.02 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2ee99e9ecff85290563f90d75817c138320f4377 | 92 | py | Python | avwx_api/__init__.py | lucasbstn/avwx-api | 1f638dde328e57f80a11466034c6e027d0f25557 | [
"MIT"
] | 30 | 2016-12-16T07:26:13.000Z | 2019-03-01T07:57:07.000Z | avwx_api/__init__.py | lucasbstn/avwx-api | 1f638dde328e57f80a11466034c6e027d0f25557 | [
"MIT"
] | 12 | 2018-02-18T20:41:39.000Z | 2019-04-13T06:10:54.000Z | avwx_api/__init__.py | lucasbstn/avwx-api | 1f638dde328e57f80a11466034c6e027d0f25557 | [
"MIT"
] | 36 | 2019-11-30T15:52:16.000Z | 2022-03-03T04:28:15.000Z | """
AVWX REST API
"""
from avwx_api.app_config import app
from avwx_api import api, views
| 11.5 | 35 | 0.73913 | 16 | 92 | 4.0625 | 0.5 | 0.246154 | 0.338462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 92 | 7 | 36 | 13.142857 | 0.855263 | 0.141304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2c2d99a5eaa07ea55ce215dea68ee15cc9e6c11f | 8,700 | py | Python | Metida.py | Cadovvl/py_hockey | 8288eb88de2169ce9c22a4557165747d93366cdd | [
"Apache-2.0"
] | null | null | null | Metida.py | Cadovvl/py_hockey | 8288eb88de2169ce9c22a4557165747d93366cdd | [
"Apache-2.0"
] | null | null | null | Metida.py | Cadovvl/py_hockey | 8288eb88de2169ce9c22a4557165747d93366cdd | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon May 19 21:33:29 2014
@author: pasukhov
"""
import math
import random
def solve(a, b, c, flag=True):
if abs(a) <= 0.00001 and abs(b) > 0.000001:
return c / b
D = b ** 2 - 4.0 * a * c
if D < 0:
        print(a, b, c)
return None
if flag:
return (-1.0 * b + math.sqrt(D)) / (2.0 * a)
else:
return (-1.0 * b - math.sqrt(D)) / (2.0 * a)
def nextCircleCoordinate(x1, y1, x2, y2, flag=True): # second is ball
x0 = x2 - x1
y0 = y2 - y1
if x0 == 0:
xk = math.sin(math.pi / 2.0 - math.acos(5.0 / abs(y0))) * 10.0
yk = math.cos(math.pi / 2.0 - math.acos(5.0 / abs(y0))) * 10.0
if (y1 <= y2) ^ flag:
xk = -xk
if (x1 >= x2) ^ flag:
yk = -yk
else:
a = x0 ** 2 + y0 ** 2
xk = solve(a, -100 * x0, 2500.0 - 100.0 * y0 ** 2, (y1 > y2) ^ flag)
yk = solve(a, -100 * y0, 2500.0 - 100.0 * x0 ** 2, (x1 < x2) ^ flag)
    if xk is None or yk is None:
        print('None')
        print((x1, y1), (x2, y2), (x0, y0))
        return (x2, y2)
    return (xk + x1, yk + y1)
class P1:
def move(self, board, gate, index, side, balls, your_team, enemy_team):
gate = [gate.top, gate.bottom]
lu = [board.left, board.top]
rb = [board.right, board.bottom]
your_position_x = your_team[index][0]
your_position_y = your_team[index][1]
gy = (gate[0] + gate[1]) / 2.0
gx = rb[0]
if side == 1:
gx = lu[0]
v1x = your_position_x - gx
v1y = your_position_y - gy
v2x = balls[0][0] - gx
v2y = balls[0][1] + 40 - gy
v3x = balls[0][0] - gx
v3y = balls[0][1] - 40 - gy
vty = balls[0][1] - gy
speed = (balls[0][0] - your_position_x, balls[0][1] - your_position_y)
if (( your_position_x - balls[0][0]) ** 2 + ( your_position_y - balls[0][1]) ** 2 ) < 100:
speed = (gx - your_position_x, gy - your_position_y)
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
return speed
if ((v1x * v2y - v2x * v1y) * ((v1x * v3y - v3x * v1y)) < 0 and (balls[0][0] - your_position_x) * (
0.5 - side) > 0) or (
( your_position_x - balls[0][0]) ** 2 + ( your_position_y - balls[0][1]) ** 2 ) > 3500:
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
return speed
if (( your_position_x - balls[0][0]) ** 2 + ( your_position_y - balls[0][1]) ** 2 ) > 400:
target = nextCircleCoordinate(your_position_x, your_position_y, balls[0][0], balls[0][1],
v1x * vty - v1y * v2x > 0)
speed = (target[0] - your_position_x, target[1] - your_position_y)
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
if speed[0] + your_position_x > rb[0] or speed[0] + your_position_x < lu[0]:
speed = (speed[0], (speed[1] / abs(speed[1])) * 10.0)
if speed[1] + your_position_y > rb[1] or speed[1] + your_position_y < lu[1]:
speed = ( (speed[0] / abs(speed[0])) * 10.0, speed[1])
return (speed[0] + random.random() * 2 - 1.0, speed[1] + random.random() * 2 - 1.0)
else:
return (0, 0)
class P2:
def move(self, board, gate, index, side, balls, your_team, enemy_team):
gate = [gate.top, gate.bottom]
lu = [board.left, board.top]
rb = [board.right, board.bottom]
your_position_x = your_team[index][0]
your_position_y = your_team[index][1]
gy = (gate[0] + gate[1]) / 2.0
gx = rb[0]
if side == 1:
gx = lu[0]
v1x = your_position_x - gx
v1y = your_position_y - gy
v2x = balls[0][0] - gx
v2y = balls[0][1] + 30 - gy
v3x = balls[0][0] - gx
v3y = balls[0][1] - 30 - gy
vty = balls[0][1] - gy
speed = (balls[0][0] - your_position_x, balls[0][1] - your_position_y)
if (( your_position_x - balls[0][0]) ** 2 + ( your_position_y - balls[0][1]) ** 2 ) < 100:
speed = (gx - your_position_x, gy - your_position_y)
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
return speed
if ((v1x * v2y - v2x * v1y) * ((v1x * v3y - v3x * v1y)) < 0 and (balls[0][0] - your_position_x) * (
0.5 - side) > 0 ) or (
( your_position_x - balls[0][0]) ** 2 + ( your_position_y - balls[0][1]) ** 2 ) > 30000:
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
return speed
if (( your_position_x - balls[0][0]) ** 2 + ( your_position_y - balls[0][1]) ** 2 ) > 400:
target = nextCircleCoordinate(your_position_x, your_position_y, balls[0][0], balls[0][1],
v1x * vty - v1y * v2x > 0)
speed = (target[0] - your_position_x, target[1] - your_position_y)
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
if speed[0] + your_position_x > rb[0] or speed[0] + your_position_x < lu[0]:
speed = (speed[0], (speed[1] / abs(speed[1])) * 10.0)
if speed[1] + your_position_y > rb[1] or speed[1] + your_position_y < lu[1]:
speed = ( (speed[0] / abs(speed[0])) * 10.0, speed[1])
return (speed[0] + random.random() * 2 - 1.0, speed[1] + random.random() * 2 - 1.0)
else:
return (0, 0)
class P3:
def move(self, board, gate, index, side, balls, your_team, enemy_team):
gate = [gate.top, gate.bottom]
lu = [board.left, board.top]
rb = [board.right, board.bottom]
your_position_x = your_team[index][0]
your_position_y = your_team[index][1]
gy = (gate[0] + gate[1]) / 2.0
gx = rb[0]
if side == 1:
gx = lu[0]
tx, ty = enemy_team[0][0], enemy_team[0][1]
for i in enemy_team:
if (tx - gx) ** 2 + (ty - gy) ** 2 > (i[0] - gx) ** 2 + (i[1] - gy) ** 2:
tx = i[0]
ty = i[1]
if ((tx - your_position_x) ** 2 + (ty - your_position_y) ** 2) > 20000:
speed = ((tx - your_position_x), (ty - your_position_y))
return speed
if (your_position_x - tx) * (your_position_x - gx) < 0:
speed = ((tx - your_position_x), (ty - your_position_y))
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
return speed
if (tx - your_position_x) ** 2 + (ty - your_position_y) ** 2 > 200:
target = nextCircleCoordinate(your_position_x, your_position_y, tx, ty,
(tx - gx) * (your_position_y - gy) - (your_position_x - gx) * (ty - gy) > 0)
speed = (target[0] - your_position_x, target[1] - your_position_y)
alpha = 10.0 / max(abs(speed[0]), abs(speed[1]))
speed = (alpha * speed[0], alpha * speed[1])
if speed[0] + your_position_x > rb[0] or speed[0] + your_position_x < lu[0]:
speed = (speed[0], (speed[1] / abs(speed[1])) * 10.0)
if speed[1] + your_position_y > rb[1] or speed[1] + your_position_y < lu[1]:
speed = ((speed[0] / abs(speed[0])) * 10.0, speed[1])
return (speed[0] + random.random() * 2 - 1.0, speed[1] + random.random() * 2 - 1.0)
else:
return (0, 0)
class P4:
def move(self, board, gate, index, side, balls, your_team, enemy_team):
gate = [gate.top, gate.bottom]
lu = [board.left, board.top]
rb = [board.right, board.bottom]
your_position_x = your_team[index][0]
your_position_y = your_team[index][1]
gy = (gate[0] + gate[1]) / 2.0
gx = lu[0] + 10
if side == 1:
gx = rb[0] - 10
target = ( (balls[0][0] + gx * 9.0) / 10.0,
(balls[0][1] + gy * 9.0) / 10.0 )
b = (target[0] - gx) ** 2 + (target[1] - gy) ** 2
if b < 10000 and b > 2:
alpha = 100.0 / math.sqrt(b + 0.1)
target = ((target[0] - gx) * alpha + gx, (target[1] - gy) * alpha + gy)
return ( target[0] - your_position_x, target[1] - your_position_y)
| 41.037736 | 118 | 0.486437 | 1,295 | 8,700 | 3.142085 | 0.085714 | 0.212337 | 0.121406 | 0.048169 | 0.819612 | 0.817646 | 0.811256 | 0.811256 | 0.798476 | 0.784468 | 0 | 0.091626 | 0.343908 | 8,700 | 211 | 119 | 41.232227 | 0.621233 | 0.004138 | 0 | 0.65896 | 0 | 0 | 0.000465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.011561 | null | null | 0.017341 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2c2f7f5e0dd541f194455f77238fcea00268fdcb | 297 | py | Python | dump/callback_compile.py | conductiveIT/pymunk-1 | 61de8b2e652503356ac14a2d648cc11aa6a8070f | [
"MIT"
] | 670 | 2015-01-01T19:10:15.000Z | 2022-03-29T23:05:47.000Z | dump/callback_compile.py | reter695/pymunk | 9e9d3bf14cd57f61006588b1c56fefc21b453733 | [
"MIT"
] | 122 | 2015-01-02T19:06:19.000Z | 2022-03-20T19:44:25.000Z | dump/callback_compile.py | reter695/pymunk | 9e9d3bf14cd57f61006588b1c56fefc21b453733 | [
"MIT"
] | 222 | 2015-01-28T03:34:52.000Z | 2022-03-27T06:44:52.000Z | import os
os.system("gcc -mrtd -O3 -shared -fno-omit-frame-pointer -std=gnu99 -fPIC -ffast-math -m32 -c callback.c")
#os.system("gcc -O1 -std=gnu99 -fPIC -ffast-math -c callback.c")
#os.system("gcc -dynamiclib ex.o -o libex.dylib")
os.system("gcc -shared -mrtd callback.o -o callback.dll")
| 42.428571 | 107 | 0.690236 | 52 | 297 | 3.942308 | 0.5 | 0.156098 | 0.214634 | 0.165854 | 0.409756 | 0.204878 | 0 | 0 | 0 | 0 | 0 | 0.030888 | 0.127946 | 297 | 6 | 108 | 49.5 | 0.760618 | 0.373737 | 0 | 0 | 0 | 0.333333 | 0.769663 | 0.129213 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
d32b6dae5976cc4972d6d03f19e82bc8b80a13f5 | 7,630 | py | Python | src/Utils/Python/Tests/test_IO.py | DockBio/utilities | 213ed5ac2a64886b16d0fee1fcecb34d36eea9e9 | [
"BSD-3-Clause"
] | null | null | null | src/Utils/Python/Tests/test_IO.py | DockBio/utilities | 213ed5ac2a64886b16d0fee1fcecb34d36eea9e9 | [
"BSD-3-Clause"
] | null | null | null | src/Utils/Python/Tests/test_IO.py | DockBio/utilities | 213ed5ac2a64886b16d0fee1fcecb34d36eea9e9 | [
"BSD-3-Clause"
] | null | null | null | import pytest
import scine_utilities as scine
import os
def test_IO():
# Write a sample XYZ file
sample_XYZ = """50
Fe -4.492570 -0.096320 0.847820
Fe -2.054660 -0.157130 -1.789610
C -1.813660 -1.875500 -1.459220
C -4.969040 2.999280 0.925470
C -6.249510 -0.257510 0.804750
C -4.408740 0.041750 2.840480
C -4.294330 -1.844000 1.009110
C -2.107250 -0.591260 -3.740720
C -0.313290 0.130170 -1.777170
C -4.142080 1.410610 -2.161450
S -4.408720 -0.238460 -1.600010
S -2.183280 0.352810 0.608390
O -1.664640 -3.009190 -1.274240
O -4.155250 -2.985570 1.147960
O 0.835680 0.261970 -1.698400
O -7.392040 -0.434060 0.719920
O -3.105390 -0.364160 -4.402920
C -0.869780 -1.168610 -4.413420
H -0.133600 -0.361770 -4.517270
H -1.116210 -1.559320 -5.405540
H -0.413360 -1.946010 -3.794520
O -3.501840 0.645880 3.386530
C -5.517750 -0.577970 3.678110
H -5.758910 -1.583080 3.321080
H -6.419720 0.034770 3.556400
H -5.237840 -0.599780 4.735860
N -4.211750 1.881960 0.818590
C -2.865400 1.974350 0.714200
C -2.193970 3.191310 0.700110
C -2.961220 4.348520 0.792630
C -4.349030 4.249390 0.905800
C -6.451570 2.842560 1.058480
H -6.694570 2.226970 1.931260
H -6.867270 2.346530 0.175060
H -6.933400 3.815360 1.169820
H -1.113360 3.219300 0.616820
H -2.483250 5.323370 0.778410
H -4.961470 5.141290 0.982050
N -2.811460 1.626370 -2.282410
C -2.348280 2.822830 -2.714460
C -3.256430 3.839770 -3.013160
C -4.629240 3.622250 -2.883990
C -5.091920 2.382130 -2.454860
C -0.868790 3.003810 -2.851970
H -0.636440 4.001570 -3.228380
H -0.455030 2.261230 -3.542570
H -0.373260 2.871710 -1.884410
H -2.879290 4.799710 -3.348950
H -5.331970 4.415900 -3.118460
H -6.148750 2.165060 -2.346420"""
sample_MOL = """
OpenBabel02081817133D
50 54 0 0 0 0 0 0 0 0999 V2000
-4.4926 -0.0963 0.8478 Fe 0 0 0 0 0 0 0 0 0 0 0 0
-2.0547 -0.1571 -1.7896 Fe 0 0 0 0 0 0 0 0 0 0 0 0
-1.8137 -1.8755 -1.4592 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.9690 2.9993 0.9255 C 0 0 0 0 0 0 0 0 0 0 0 0
-6.2495 -0.2575 0.8047 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.4087 0.0418 2.8405 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.2943 -1.8440 1.0091 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.1073 -0.5913 -3.7407 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.3133 0.1302 -1.7772 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.1421 1.4106 -2.1614 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.4087 -0.2385 -1.6000 S 0 0 2 0 0 0 0 0 0 0 0 0
-2.1833 0.3528 0.6084 S 0 0 1 0 0 0 0 0 0 0 0 0
-1.6646 -3.0092 -1.2742 O 0 0 0 0 0 0 0 0 0 0 0 0
-4.1552 -2.9856 1.1480 O 0 0 0 0 0 0 0 0 0 0 0 0
0.8357 0.2620 -1.6984 O 0 0 0 0 0 0 0 0 0 0 0 0
-7.3920 -0.4341 0.7199 O 0 0 0 0 0 0 0 0 0 0 0 0
-3.1054 -0.3642 -4.4029 O 0 0 0 0 0 0 0 0 0 0 0 0
-0.8698 -1.1686 -4.4134 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.1336 -0.3618 -4.5173 H 0 0 0 0 0 0 0 0 0 0 0 0
-1.1162 -1.5593 -5.4055 H 0 0 0 0 0 0 0 0 0 0 0 0
-0.4134 -1.9460 -3.7945 H 0 0 0 0 0 0 0 0 0 0 0 0
-3.5018 0.6459 3.3865 O 0 0 0 0 0 0 0 0 0 0 0 0
-5.5178 -0.5780 3.6781 C 0 0 0 0 0 0 0 0 0 0 0 0
-5.7589 -1.5831 3.3211 H 0 0 0 0 0 0 0 0 0 0 0 0
-6.4197 0.0348 3.5564 H 0 0 0 0 0 0 0 0 0 0 0 0
-5.2378 -0.5998 4.7359 H 0 0 0 0 0 0 0 0 0 0 0 0
-4.2118 1.8820 0.8186 N 0 0 0 0 0 0 0 0 0 0 0 0
-2.8654 1.9744 0.7142 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.1940 3.1913 0.7001 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.9612 4.3485 0.7926 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.3490 4.2494 0.9058 C 0 0 0 0 0 0 0 0 0 0 0 0
-6.4516 2.8426 1.0585 C 0 0 0 0 0 0 0 0 0 0 0 0
-6.6946 2.2270 1.9313 H 0 0 0 0 0 0 0 0 0 0 0 0
-6.8673 2.3465 0.1751 H 0 0 0 0 0 0 0 0 0 0 0 0
-6.9334 3.8154 1.1698 H 0 0 0 0 0 0 0 0 0 0 0 0
-1.1134 3.2193 0.6168 H 0 0 0 0 0 0 0 0 0 0 0 0
-2.4832 5.3234 0.7784 H 0 0 0 0 0 0 0 0 0 0 0 0
-4.9615 5.1413 0.9820 H 0 0 0 0 0 0 0 0 0 0 0 0
-2.8115 1.6264 -2.2824 N 0 0 0 0 0 0 0 0 0 0 0 0
-2.3483 2.8228 -2.7145 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.2564 3.8398 -3.0132 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.6292 3.6223 -2.8840 C 0 0 0 0 0 0 0 0 0 0 0 0
-5.0919 2.3821 -2.4549 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.8688 3.0038 -2.8520 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.6364 4.0016 -3.2284 H 0 0 0 0 0 0 0 0 0 0 0 0
-0.4550 2.2612 -3.5426 H 0 0 0 0 0 0 0 0 0 0 0 0
-0.3733 2.8717 -1.8844 H 0 0 0 0 0 0 0 0 0 0 0 0
-2.8793 4.7997 -3.3489 H 0 0 0 0 0 0 0 0 0 0 0 0
-5.3320 4.4159 -3.1185 H 0 0 0 0 0 0 0 0 0 0 0 0
-6.1487 2.1651 -2.3464 H 0 0 0 0 0 0 0 0 0 0 0 0
1 7 1 0 0 0 0
1 6 1 0 0 0 0
2 9 1 0 0 0 0
2 11 1 0 0 0 0
2 3 1 0 0 0 0
3 13 1 0 0 0 0
4 32 1 0 0 0 0
5 1 1 0 0 0 0
6 22 2 0 0 0 0
6 23 1 0 0 0 0
7 14 1 0 0 0 0
8 2 1 0 0 0 0
9 15 1 0 0 0 0
11 10 1 6 0 0 0
11 1 1 0 0 0 0
12 2 1 6 0 0 0
12 28 1 0 0 0 0
12 1 1 0 0 0 0
16 5 1 0 0 0 0
17 8 2 0 0 0 0
18 21 1 0 0 0 0
18 8 1 0 0 0 0
19 18 1 0 0 0 0
20 18 1 0 0 0 0
23 26 1 0 0 0 0
24 23 1 0 0 0 0
25 23 1 0 0 0 0
27 1 1 0 0 0 0
27 4 1 0 0 0 0
28 27 1 0 0 0 0
29 28 2 0 0 0 0
29 30 1 0 0 0 0
30 31 1 0 0 0 0
31 4 2 0 0 0 0
31 38 1 0 0 0 0
32 35 1 0 0 0 0
32 33 1 0 0 0 0
34 32 1 0 0 0 0
36 29 1 0 0 0 0
37 30 1 0 0 0 0
39 10 1 0 0 0 0
39 2 1 0 0 0 0
40 39 1 0 0 0 0
41 42 1 0 0 0 0
41 40 2 0 0 0 0
42 43 1 0 0 0 0
43 50 1 0 0 0 0
43 10 2 0 0 0 0
44 40 1 0 0 0 0
44 47 1 0 0 0 0
45 44 1 0 0 0 0
46 44 1 0 0 0 0
48 41 1 0 0 0 0
49 42 1 0 0 0 0
M END"""
# Test reading an XYZ file
with open("sample.xyz", "w") as xyz_file:
xyz_file.write(sample_XYZ)
(xyz_atoms, xyz_BO) = scine.IO.read("sample.xyz")
assert xyz_BO.empty()
assert xyz_atoms.size() == 50
assert xyz_atoms.elements[0] == scine.ElementType.Fe
# Write the read atoms into a file and re-read
scine.IO.write("out.xyz", xyz_atoms)
(reread_xyz_atoms, reread_xyz_BO) = scine.IO.read("out.xyz")
assert xyz_atoms == reread_xyz_atoms
# Clean up
os.remove("sample.xyz")
os.remove("out.xyz")
# Test reading a MOL file
with open("sample.mol", "w") as mol_file:
mol_file.write(sample_MOL)
(mol_atoms, mol_BO) = scine.IO.read("sample.mol")
assert mol_atoms.size() == 50
assert mol_atoms.elements[0] == scine.ElementType.Fe
assert mol_BO.get_system_size() == 50
# Write the read info into a file and re-read
scine.IO.write_topology("out.mol", mol_atoms, mol_BO)
(reread_mol_atoms, reread_mol_BO) = scine.IO.read("out.mol")
assert mol_atoms == reread_mol_atoms
assert mol_BO == reread_mol_BO
# Clean up
os.remove("sample.mol")
os.remove("out.mol")
| 36.859903 | 69 | 0.50865 | 1,902 | 7,630 | 2.017876 | 0.211356 | 0.375717 | 0.479937 | 0.530485 | 0.360604 | 0.270453 | 0.200625 | 0.198801 | 0.198801 | 0.183168 | 0 | 0.638387 | 0.405242 | 7,630 | 206 | 70 | 37.038835 | 0.20736 | 0.02346 | 0 | 0 | 0 | 0.268817 | 0.861212 | 0.002821 | 0 | 0 | 0 | 0 | 0.048387 | 1 | 0.005376 | false | 0 | 0.016129 | 0 | 0.021505 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d3419dd736e2778912066066e07c9e67a8f211e6 | 752 | py | Python | tests/cli/list/test_strengths.py | alimohammad1995/python-hdwallet | eb49d2d6a19cb13a785adefb0c2ca4d56ad29965 | [
"ISC"
] | 188 | 2020-10-29T14:26:16.000Z | 2022-03-29T12:18:42.000Z | tests/cli/list/test_strengths.py | meherett/hdwallet | 944b07c9894a6b0d476c9ebe546b7d6eb4b47356 | [
"0BSD"
] | 31 | 2020-12-22T16:22:27.000Z | 2022-03-28T08:29:10.000Z | tests/cli/list/test_strengths.py | meherett/hdwallet | 944b07c9894a6b0d476c9ebe546b7d6eb4b47356 | [
"0BSD"
] | 55 | 2021-03-08T04:35:24.000Z | 2022-03-17T18:40:12.000Z | #!/usr/bin/env python3
from hdwallet.cli.__main__ import main as cli_main
def test_strengths(cli_tester):
hdwallet = cli_tester.invoke(
cli_main, [
"list", "strengths"
]
)
assert hdwallet.exit_code == 0
assert hdwallet.output
hdwallet = cli_tester.invoke(
cli_main, [
"l", "strengths"
]
)
assert hdwallet.exit_code == 0
assert hdwallet.output
hdwallet = cli_tester.invoke(
cli_main, [
"list", "s"
]
)
assert hdwallet.exit_code == 0
assert hdwallet.output
hdwallet = cli_tester.invoke(
cli_main, [
"l", "s"
]
)
assert hdwallet.exit_code == 0
assert hdwallet.output
| 17.090909 | 50 | 0.558511 | 82 | 752 | 4.890244 | 0.280488 | 0.279302 | 0.169576 | 0.229426 | 0.802993 | 0.802993 | 0.802993 | 0.708229 | 0.708229 | 0.59601 | 0 | 0.010121 | 0.343085 | 752 | 43 | 51 | 17.488372 | 0.801619 | 0.027926 | 0 | 0.533333 | 0 | 0 | 0.041096 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.033333 | false | 0 | 0.033333 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d39cf43e66e19273eaefcce302cfc182612dcce8 | 107 | py | Python | app/app/calc.py | aniket463/recipe-app-api | 64d390bea67f840f0d056f6fdb36eb30236be64b | [
"MIT"
] | null | null | null | app/app/calc.py | aniket463/recipe-app-api | 64d390bea67f840f0d056f6fdb36eb30236be64b | [
"MIT"
] | null | null | null | app/app/calc.py | aniket463/recipe-app-api | 64d390bea67f840f0d056f6fdb36eb30236be64b | [
"MIT"
] | null | null | null | def add(x,y):
#add two number
return x+y
def substraction(x,y):
#sub two number
return x-y | 15.285714 | 22 | 0.607477 | 20 | 107 | 3.25 | 0.45 | 0.123077 | 0.461538 | 0.492308 | 0.523077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.280374 | 107 | 7 | 23 | 15.285714 | 0.844156 | 0.261682 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
6caf958aacf72286d3e6f5ebcf867b55bd322892 | 167 | py | Python | python/common/perforce/__init__.py | CountZer0/PipelineConstructionSet | 0aa73a8a63c72989b2d1c677efd78dad4388d335 | [
"BSD-3-Clause"
] | 21 | 2015-04-27T05:01:36.000Z | 2021-11-22T13:45:14.000Z | python/common/perforce/__init__.py | 0xb1dd1e/PipelineConstructionSet | 621349da1b6d1437e95d0c9e48ee9f36d59f19fd | [
"BSD-3-Clause"
] | null | null | null | python/common/perforce/__init__.py | 0xb1dd1e/PipelineConstructionSet | 621349da1b6d1437e95d0c9e48ee9f36d59f19fd | [
"BSD-3-Clause"
] | 7 | 2015-04-11T11:37:19.000Z | 2020-05-22T09:49:04.000Z | '''
Author: Jason.Parks
Created: April 19, 2012
Module: common.perforce.__init__
Purpose: to import perforce
'''
print("common.perforce.__init__ imported") | 18.555556 | 42 | 0.718563 | 20 | 167 | 5.6 | 0.8 | 0.25 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.173653 | 167 | 9 | 42 | 18.555556 | 0.768116 | 0 | 0 | 0 | 0 | 0 | 0.6875 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 1 | null | null | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 9 |
6cb8d4186020d1320c7f4a621398aedf3a404bc9 | 71,916 | py | Python | test.py | triple-anonymity/PAGB | db190dcdcc88edbefb468c835e04803280885c50 | [
"MIT"
] | null | null | null | test.py | triple-anonymity/PAGB | db190dcdcc88edbefb468c835e04803280885c50 | [
"MIT"
] | null | null | null | test.py | triple-anonymity/PAGB | db190dcdcc88edbefb468c835e04803280885c50 | [
"MIT"
] | 1 | 2021-07-28T03:33:26.000Z | 2021-07-28T03:33:26.000Z | import secrets
import math
import hashlib
from helpfunctions import hash_to_prime, is_prime, shamir_trick, concat, bezoute_coefficients, mul_inv
from main import setup, add, prove_membership, \
batch_add, batch_add_test,\
prove_non_membership, verify_non_membership,\
prove_exponentiation_test, verify_exponentiation_test, prove_knowledge_exponent, verify_knowledge_exponent
from unittest import TestCase
import unittest
import datetime
import numpy as np
import pandas as pd
import pickle
import sys
import json
import random
def create_list(size):
    res = []
    for i in range(size):
        x = secrets.randbelow(pow(2, 256))
        res.append(x)
    return res

def listProduct(lst):
    # product of all list elements (used by AccumulatorTest.test)
    res = 1
    for x in lst:
        res *= x
    return res
class AccumulatorTest(TestCase):
def test_hash_to_prime(self):
x = secrets.randbelow(pow(2, 256))
#print(x)
h, nonce = hash_to_prime(x, 128)
#print(h)
#print(nonce)
self.assertTrue(is_prime(h))
self.assertTrue(h, math.log2(h) < 128)
def test_batch_add_euler(self):
p = 252533614457563255817176556954479732787
q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
# S = dict()
elements_list = []
counter = 0
start_time = datetime.datetime.now()
for line in open("CA-CondMat.txt"):
# print(line)
if line.split(' ', 1)[0] == '#':
# print(1)
continue
# print(line.split('\t', 1))
if counter >= 3:
break
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# print(element)
elements_list.append(element_int)
counter = counter + 1
# print(counter)
A_post_add = batch_add_test(A0, elements_list, n, p, q)
end_time = datetime.datetime.now()
print((end_time - start_time))
def test_ca(self):
p = 252533614457563255817176556954479732787
q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
# S = dict()
elements_list = []
counter = 0
for line in open("CA-CondMat.txt"):
if counter >= 100:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
#print(line.split('\t', 1))
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
#print(element)
elements_list.append(element_int)
counter = counter + 1
# print(counter)
        A_post_add = batch_add_test(A0, elements_list, n, p, q)
# print(len(S))
# self.assertEqual(len(S), 10)
# nonces_list = list(map(lambda e: hash_to_prime(e)[1], elements_list))
# is_valid = batch_verify_membership_with_NIPoE(nipoe[0], nipoe[1], A0, elements_list, nonces_list, A_post_add, n)
# self.assertTrue(is_valid)
def test_eff(self):
# p = 252533614457563255817176556954479732787
# q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
S = dict()
start_time = datetime.datetime.now()
elements_list = []
counter = 0
product = 1
for line in open("CA-CondMat.txt"):
if counter >= 10000:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
#print(line.split('\t', 1))
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
#print(element)
elements_list.append(element_int)
hash_prime, nonce = hash_to_prime(element_int)
counter = counter + 1
S[element_int] = nonce
product *= hash_prime
print(product)
A_post_add = pow(A0, product, n)
print(A_post_add)
end_time = datetime.datetime.now()
print((end_time - start_time))
def test_ca_non_membership(self):
# p = 252533614457563255817176556954479732787
# q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
S = dict()
elements_list = []
counter = 0
for line in open("CA-CondMat.txt"):
if counter >= 10:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
#print(line.split('\t', 1))
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
#print(element)
elements_list.append(element_int)
counter = counter + 1
# print(counter)
A_post_add = batch_add(A0, S, elements_list, n)
# print(len(S))
start_time = datetime.datetime.now()
proof = prove_non_membership(A0, S, elements_list[0], S[elements_list[0]], n)
# self.assertIsNone(proof)
x = 7
prime, x_nonce = hash_to_prime(x)
print(prime)
proof = prove_non_membership(A0, S, x, x_nonce, n)
is_valid = verify_non_membership(A0, A_post_add, proof[0], proof[1], x, x_nonce, n)
# self.assertTrue(is_valid)
print(is_valid)
end_time = datetime.datetime.now()
print((end_time - start_time))
def test_ca_poe(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
S = dict()
elements_list = []
counter = 0
for line in open("CA-CondMat.txt"):
if counter >= 10:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
#print(line.split('\t', 1))
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
elements_list.append(element_int)
counter = counter + 1
# print(counter)
A_post_add = batch_add(A0, S, elements_list, n)
print(A0)
first = 1
for e in elements_list:
first *= hash_to_prime(x=e, nonce=S[e])[0]
print(first)
print(pow(A0, first, n))
print(A_post_add)
start_time = datetime.datetime.now()
proof = prove_exponentiation_test(A0, first, A_post_add, n)
is_valid = verify_exponentiation_test(proof, A0, first, A_post_add, n)
# self.assertTrue(is_valid)
print(is_valid)
end_time = datetime.datetime.now()
print((end_time - start_time))
def test_ca_poke2(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
S = dict()
elements_list = []
counter = 0
for line in open("CA-CondMat.txt"):
if counter >= 100:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
#print(line.split('\t', 1))
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
elements_list.append(element_int)
counter = counter + 1
# print(counter)
A_post_add = batch_add(A0, S, elements_list, n)
print(A0)
first = 1
for e in elements_list:
first *= hash_to_prime(x=e, nonce=S[e])[0]
print(first)
print(first.bit_length())
start_time = datetime.datetime.now()
print(pow(A0, first, n)==A_post_add)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(A_post_add)
proof = prove_knowledge_exponent(first, A0, A_post_add, n)
start_time = datetime.datetime.now()
is_valid = verify_knowledge_exponent(A0, A_post_add, proof[0], proof[1], proof[2], n)
# self.assertTrue(is_valid)
print(is_valid)
end_time = datetime.datetime.now()
print((end_time - start_time))
def test_batch_delete(self):
# p = 252533614457563255817176556954479732787
# q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
S = dict()
elements_list = []
counter = 0
for line in open("CA-CondMat.txt"):
if counter >= 10:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
#print(line.split('\t', 1))
element = line.split('\t', 1)[0]+'to'+line.split('\t', 1)[1]
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
#print(element)
elements_list.append(element_int)
counter = counter + 1
# print(counter)
A_post_add = batch_add(A0, S, elements_list, n)
elements_to_delete_list = [elements_list[0], elements_list[2], elements_list[4]]
nonces_list = list(map(lambda e: hash_to_prime(e)[1], elements_to_delete_list))
proofs = list(map(lambda x: prove_membership(A0, S, x, n), elements_to_delete_list))
A_post_delete, nipoe = batch_delete_using_membership_proofs(A_post_add, S, elements_to_delete_list, proofs, n)
# is_valid = batch_verify_membership_with_NIPoE(nipoe[0], nipoe[1], A_post_delete, elements_to_delete_list, nonces_list, A_pre_delete, n)
# self.assertTrue(is_valid)
def test(self):
# n, A0, S = setup()
# print(is_prime((p-1)//2))
# print(is_prime((q-1)//2))
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
        with open('wiki_1m_prime.pk', 'rb') as f:
            primes_list = pickle.load(f)
for total in [100000]:
print(total)
primes_list = primes_list[:total]
start_time = datetime.datetime.now()
A_final = A0
for e in primes_list:
A_final = pow(A_final, e, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
start_time = datetime.datetime.now()
product = listProduct(primes_list)
acc = pow(A0,product,n)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(A_final==acc)
# for i in range(73):
# if (pow(2,i,91)) == 1:
# print(i)
# start_time = datetime.datetime.now()
# t = 2**2560000
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
# t//256;
# end_time = datetime.datetime.now()
# print((end_time - start_time))
def test_ca_completeness(self):
p = 252533614457563255817176556954479732787
q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
S = dict()
num = [50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000]
for total in num:
print(total)
C = dict()
elements_list = []
counter = 0
ids = set()
edges = set()
# edgesto = set()
for line in open("CA-CondMat.txt"):
if counter >= total:
break
if line.split(' ', 1)[0] == '#':
#print(1)
continue
# print(line.split('\t', 1))
start = line.split('\t')[0]
end = line.split('\t')[1].split('\n')[0]
ids.update({start, end})
edges.update({start + ',' + end,})
# edgesto.update(start + 'to' + end)
C.setdefault(start+'out', []).append(end)
C.setdefault(end+'in', []).append(start)
counter = counter + 1
print(counter)
for e in set.union(ids, edges):
# print(element)
element_int = int(hashlib.sha256(e.encode("utf-8")).hexdigest(), 16)
elements_list.append(element_int)
for k in C.keys():
res = ""
for i in range(len(C[k])):
if (i>=1):
res += ','
res += str(C[k][i])
element = k+':'+res+';'
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# print(element)
elements_list.append(element_int)
# counter = counter + 1
print(len(ids))
print(len(edges))
print(len(elements_list)-len(ids)-len(edges))
print(len(elements_list))
# # print(len(C))
print('\n')
# start_time = datetime.datetime.now()
# A_post_add = batch_add(A0, S, elements_list, n)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# print(len(S))
def test_cage_completeness(self):
p = 252533614457563255817176556954479732787
q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
S = dict()
num = [100000, 150000, 200000, 250000, 300000, 350000, 400000, 450000, 500000, 550000, 600000]
for total in num:
print(total)
C = dict()
elements_list = []
counter = 0
ids = set()
edges = set()
edgesto = set()
edgestoweight = set()
for line in open("cage15.mtx"):
if line[0] == '%':
continue
if counter == 0:
counter = counter + 1
continue
if counter >= total+1:
break
# print(line)
start = line.split(' ', 2)[0]
end = line.split(' ', 2)[1]
weight = line.split(' ', 2)[2].split('\n')[0]
ids.update({start, end})
edges.update({start + ',' + end,})
# edgesto.update({start + 'to' + end,})
edgestoweight.update({start + ',' + end + ':' + weight,})
# element = start + 'to' + end + ':' + weight + ';'
# print(element)
C.setdefault(start+'out', []).append(end)
C.setdefault(end+'in', []).append(start)
counter = counter + 1
print(counter)
for e in set.union(ids, edges, edgesto, edgestoweight):
# print(element)
element_int = int(hashlib.sha256(e.encode("utf-8")).hexdigest(), 16)
elements_list.append(element_int)
for k in C.keys():
res = ""
for i in range(len(C[k])):
if (i>=1):
res += ','
res += str(C[k][i])
element = k+':'+res+';'
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# print(element)
elements_list.append(element_int)
# counter = counter + 1
print(len(ids))
print(len(edgestoweight))
print(len(elements_list)-len(ids)-len(edgestoweight))
print(len(elements_list))
# print(len(C))
print('\n')
# start_time = datetime.datetime.now()
# A_post_add = batch_add(A0, S, elements_list, n)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# print(len(S))
def test_wiki_completeness(self):
p = 252533614457563255817176556954479732787
q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
S = dict()
# f = open('wiki.pk','rb')
# elements_list = pickle.load(f)
# print(len(elements_list))
num = [100000, 150000, 200000, 250000, 300000, 350000, 400000, 450000, 500000, 550000, 600000]
for total in num:
print(total)
C = dict()
O = dict()
counter = 0
ids = set()
edges = set()
edgesto = set()
edgestoweight = set()
elements_list = []
for line in open("wiki-talk-temporal.txt"):
if counter >= total:
break
# print(line)
start = line.split(' ', 2)[0]
end = line.split(' ', 2)[1]
weight = line.split(' ', 2)[2].split('\n')[0]
# element = start + 'to' + end + ':' + weight + ','
# print(element)
ids.update({start, end})
edges.update({start + ',' + end,})
# edgesto.update({start + 'to' + end,})
# edgestoweight.update({start + 'to' + end + ':' + weight,})
O.setdefault(start + ',' + end, []).append(weight)
C.setdefault(start+'out', []).append(end)
C.setdefault(end+'in', []).append(start)
# element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# # print(element)
# elements_list.append(element_int)
counter = counter + 1
print(counter)
for k in O.keys():
res = ""
for i in range(len(O[k])):
if (i>=1):
res += ','
res += str(O[k][i])
element = k+':'+res+';'
edgestoweight.update({element,})
# element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# # print(element)
# elements_list.append(element_int)
# counter = counter + 1
for e in set.union(ids, edges, edgesto, edgestoweight):
# print(element)
element_int = int(hashlib.sha256(e.encode("utf-8")).hexdigest(), 16)
elements_list.append(element_int)
for k in C.keys():
res = ""
for i in range(len(C[k])):
if (i>=1):
res += ','
res += str(C[k][i])
element = k+':'+res+';'
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# print(element)
elements_list.append(element_int)
# counter = counter + 1
print(len(ids))
# print(len(edges))
# print(len(edgesto))
print(len(edgestoweight))
print(len(elements_list)-len(ids)-len(edgestoweight))
print(len(elements_list))
# # print(len(C))
print('\n')
# start_time = datetime.datetime.now()
# A_post_add = batch_add(A0, S, elements_list, n)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# print(len(S))
def test_wiki_pickle(self):
O = dict()
C = dict()
elements_list = []
counter = 0
for line in open("wiki-talk-temporal.txt"):
# if counter >= 50000:
# break
# print(line)
start = line.split(' ', 2)[0]
end = line.split(' ', 2)[1]
weight = line.split(' ', 2)[2].split('\n')[0]
element = start + 'to' + end + ':' + weight + ','
# print(element)
O.setdefault(start + 'to' + end, []).append(weight)
C.setdefault(start+'out', []).append(end)
C.setdefault(end+'in', []).append(start)
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# if element_int in elements_list:
# print(element)
# else:
# elements_list.append(element_int)
# counter = counter + 1
elements_list.append(element_int)
counter = counter + 1
print(counter)
for k in O.keys():
res = ""
for i in range(len(O[k])):
if (i>=1):
res += ','
res += str(O[k][i])
element = k+':'+res+';'
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# if element_int in elements_list:
# print(element)
# else:
# elements_list.append(element_int)
# counter = counter + 1
elements_list.append(element_int)
counter = counter + 1
for k in C.keys():
res = ""
for i in range(len(C[k])):
if (i>=1):
res += ','
res += str(C[k][i])
element = k+':'+res+';'
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# if element_int in elements_list:
# print(element)
# else:
# elements_list.append(element_int)
# counter = counter + 1
elements_list.append(element_int)
counter = counter + 1
# print(len(C))
print(counter)
print(len(elements_list))
print(len(set(elements_list)))
elements_list = list(set(elements_list))
print(len(elements_list))
with open('wiki.pk', 'wb') as f:
pickle.dump(elements_list, f)
def test_concept_completeness(self):
# p = 252533614457563255817176556954479732787
# q = 144382690536755041477176940054403505719
# n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# # draw random number within range of [0,n-1]
# A0 = 65537
# S = dict()
data = pd.read_csv('chineseconceptnet.csv', delimiter='\t')
data.columns = ['uri', 'relation', 'start', 'end', 'json']
weights = data['json'].apply(lambda row: json.loads(row)['weight'])
datasets = data['json'].apply(lambda row: json.loads(row)['dataset'])
data.pop('json')
data.insert(4,'weights',weights)
data.insert(5,'dataset',datasets)
# print(data[0])
# print(len(data))
num = [100000, 150000, 200000, 250000, 300000, 350000, 400000, 450000, 500000, 550000, 600000]
# num = [300000]
for total in num:
print(total)
elements_list = []
counter = 0
ids = set()
edges = set()
edgesto = set()
edgestoweight = set()
edgesout = set()
edgesin = set()
C = dict()
for i in range(len(data)):
if counter >= total:
break
# print(data.loc[i]['end'])
start = data.loc[i]['start']
end = data.loc[i]['end']
relation = data.loc[i]['relation']
weight = data.loc[i]['weights']
dataset = data.loc[i]['dataset']
ids.update({start, end})
edges.update({start + ',' + end,})
edgesto.update({start +','+ relation +','+ end,})
# if (start +','+ relation +','+ end + ',' + str(weight) in edgestoweight):
# print(counter)
# print(start +','+ relation +','+ end + ',' + str(weight))
edgestoweight.update({start +','+ relation +','+ end + ',weight',})
edgestoweight.update({start +','+ relation +','+ end + ',weight:' + str(weight),})
edgestoweight.update({start +','+ relation +','+ end + ',dataset',})
edgestoweight.update({start +','+ relation +','+ end + ',dataset:' + str(dataset),})
edgesout.update({start +',out,'+ relation,})
edgesin.update({end +',in,'+ relation,})
# element = start + 'to' + end + ':' + weight + ';'
# print(element)
# Note: the original membership tests checked keys without the relation
# suffix, so existing sets were silently overwritten on every edge;
# setdefault on the full key fixes that.
C.setdefault(start+',out,'+relation, set()).add(end)
C.setdefault(end+',in,'+relation, set()).add(start)
C.setdefault(start + ',' + end, set()).add(relation)
counter = counter + 1
print(counter)
for e in set.union(ids, edges, edgesto, edgestoweight, edgesout, edgesin):
# print(element)
element_int = int(hashlib.sha256(e.encode("utf-8")).hexdigest(), 16)
elements_list.append(element_int)
for k in C.keys():
res = ""
# t = len(C[k])
for i in range(len(C[k])):
if (i>=1):
res += ','
res += str(C[k].pop())
element = k+':'+res+';'
# if (t>1):
# print(element)
element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# print(element)
elements_list.append(element_int)
# counter = counter + 1
print(len(ids))
# print(len(edges))
# print(len(edgesto))
print(len(edgesto))
# print(len(edgesout))
# print(len(edgesin))
print(len(elements_list)-len(ids)-len(edgesto))
print(len(elements_list))
# print(len(set(elements_list)))
# # print(len(C))
print('\n')
# # start_time = datetime.datetime.now()
# # A_post_add = batch_add(A0, S, elements_list, n)
# # end_time = datetime.datetime.now()
# # print((end_time - start_time))
# # print(len(S))
def test_prime_prepare(self):
primes_list = []
f = open('wiki.pk','rb')
elements_list = pickle.load(f)[:1000000]
print(len(elements_list))
for e in elements_list:
hash_prime, nonce = hash_to_prime(e)
primes_list.append(hash_prime)
print(len(primes_list))
with open('wiki_1m_prime.pk', 'wb') as f:
pickle.dump(primes_list, f)
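`hash_to_prime` above comes from the accumulator library under test. As a rough illustration of the usual hash-with-nonce scheme behind such a function (an assumption; the library's exact digest size and nonce handling may differ, and the names `hash_to_prime_sketch` / `is_probable_prime` below are mine):

```python
import hashlib
import random

def is_probable_prime(m, rounds=20):
    # Miller-Rabin, good enough for this sketch.
    if m < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if m % small == 0:
            return m == small
    d, r = m - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False
    return True

def hash_to_prime_sketch(x, num_bits=120):
    # Hash (x, nonce) and bump the nonce until the truncated digest is prime;
    # the nonce is returned so a verifier can re-derive the same prime.
    nonce = 0
    while True:
        digest = hashlib.sha256((str(x) + str(nonce)).encode()).hexdigest()
        candidate = int(digest, 16) % (1 << num_bits)
        if candidate % 2 == 1 and is_probable_prime(candidate):
            return candidate, nonce
        nonce += 1

prime, nonce = hash_to_prime_sketch(42)
```

Pickling the primes once, as the test does, amortizes this search across all the later benchmarks.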
def test_original_wit(self):
# p = 252533614457563255817176556954479732787
# q = 144382690536755041477176940054403505719
# n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
# A0 = 65537
# S = dict()
# primes_list = []
# counter = 0
# product = 1
# for line in open("wiki-talk-temporal.txt"):
# if counter >= 1000000:
# break
# start = line.split(' ', 2)[0]
# end = line.split(' ', 2)[1]
# weight = line.split(' ', 2)[2].split('\n')[0]
# element = start + 'to' + end + ':' + weight + ','
# element_int = int(hashlib.sha256(element.encode("utf-8")).hexdigest(), 16)
# #print(element)
# hash_prime, nonce = hash_to_prime(element_int)
# primes_list.append(hash_prime)
# counter = counter + 1
# S[element_int] = nonce
# product *= hash_prime
# print(product)
f = open('wiki_1m_prime.pk','rb')
prime_list = pickle.load(f)
# f = open('wiki.pk','rb')
# elements_list = pickle.load(f)[:1000000]
# print(len(elements_list))
# for e in elements_list:
# hash_prime, nonce = hash_to_prime(element_int)
# primes_list.append(hash_prime)
# print(len(primes_list))
# with open('wiki_1m_prime.pk', 'wb') as f:
# pickle.dump(primes_list, f)
for total in [10000, 40000, 90000, 160000, 250000, 360000, 490000, 640000, 810000, 1000000]:
print(total)
product = 1
primes_list = prime_list[:total]
start_time = datetime.datetime.now()
for e in primes_list:
product *= e
end_time = datetime.datetime.now()
print((end_time - start_time))
# print(len(primes_list))
# print(sys.getsizeof(primes_list))
# print(product.bit_length())
# print(sys.getsizeof(product))
print('\n')
def test_ed_wit(self):
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
for total in [30000, 20000, 10000]:
print(total)
primes_list = primes_list[:total]
lproduct = 1
rproduct = 1
product_list = []
for i in range(total):
product_list.append([0]*2)
for i in range(total):
product_list[i][0] = lproduct
product_list[-1-i][1] = rproduct
lproduct *= primes_list[i]
rproduct *= primes_list[-1-i]
start_time = datetime.datetime.now()
product = product_list[5][0] * product_list[5][1]
# product2 = product_list[51][0] * product_list[51][1]
# print(product1*primes_list[50]==product2*primes_list[51])
end_time = datetime.datetime.now()
print((end_time - start_time))
# print(product_list)
print(len(product_list))
size = 0
for e in product_list:
size += sys.getsizeof(e[0]) + sys.getsizeof(e[1])
print(size)
# print(primes_list)
# print(len(primes_list))
# print(sys.getsizeof(primes_list))
# print(product.bit_length())
# print(sys.getsizeof(product))
print('\n')
def test_acc(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
f = open('wiki.pk','rb')
int_list = pickle.load(f)[:1000000]
for total in [1000000, 900000, 800000, 700000, 600000, 500000, 400000, 300000, 200000, 100000]:
print(total)
int_list = int_list[:total]
prime_list = []
start_time = datetime.datetime.now()
for e in int_list:
hash_prime, nonce = hash_to_prime(e)
prime_list.append(hash_prime)
end_time = datetime.datetime.now()
print((end_time - start_time))
product = 1
start_time = datetime.datetime.now()
for e in prime_list:
product *= e
# product = listProduct(prime_list)
end_time = datetime.datetime.now()
print((end_time - start_time))
start_time = datetime.datetime.now()
A_post_add = pow(A0, product, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(A_post_add)
print('\n')
def test_prove_verify(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
primes_list += primes_list[:50000]  # pad so the K3*K2*K1*K1 grid indexing below never runs past the end
for total in [1000000, 900000, 800000, 700000, 600000, 500000, 400000, 300000, 200000, 100000]:
print(total)
primes_list = primes_list[:total]
K1 = math.ceil(pow(total/8,1/4))
K2 = 2*K1
K3 = 4*K1
product_list1 = []
product_list2 = []
product_list3 = []
for i in range(K3):
product_list2.append([])
product_list3.append([])
for j in range(K2):
product_list3[i].append([])
for i in range(K3):
product1 = 1
for j in range(K2):
product2 = 1
for k in range(K1):
product3 = 1
for l in range(K1):
product3 *= primes_list[i*K1*K1*K2 + j*K1*K1 + k*K1 + l]
product_list3[i][j].append(product3)
product2 *= product3
product_list2[i].append(product2)
product1 *= product2
product_list1.append(product1)
product = 1
for e in product_list1:
product *= e
A_final = pow(A0, product, n)
start_time = datetime.datetime.now()
mwp = 1
for i in range(K3-1):
mwp *= product_list1[i]
for i in range(K2-1):
mwp *= product_list2[-1][i]
for i in range(K1-1):
mwp *= product_list3[-1][-1][i]
for i in range(K1-1):
mwp *= primes_list[i-K1]
mw = pow(A0, mwp, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
start_time = datetime.datetime.now()
verified = pow(mw, primes_list[-1], n) == A_final
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
a, b = bezoute_coefficients(252533614457563255817176556954479732787, product)
d = pow(A0, a, n)
# return d, b
end_time = datetime.datetime.now()
print(d,b)
print((end_time - start_time))
start_time = datetime.datetime.now()
verified = (pow(d, 252533614457563255817176556954479732787, n) * pow(A_final, b, n)) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
print('\n')
def test_concept_out(self):
data = pd.read_csv('chineseconceptnet.csv', delimiter='\t')
data.columns = ['uri', 'relation', 'start', 'end', 'json']
weights = data['json'].apply(lambda row: json.loads(row)['weight'])
data.pop('json')
data.insert(4,'weights',weights)
edgetype = set()
ids = set()
edges = set()
edgesto = set()
edgestoweight = set()
edgesout = set()
edgesin = set()
C = dict()
for i in range(len(data)):
start = data.loc[i]['start']
end = data.loc[i]['end']
relation = data.loc[i]['relation']
edgetype.update({relation,})
ids.update({start, end})
# edges.update({start + ',' + end,})
# edgesto.update({start +','+ relation +','+ end,})
# edgestoweight.update({start +','+ relation +','+ end + ',' + str(weight),})
edgesout.update({start +',out,'+ relation,})
edgesin.update({end +',in,'+ relation,})
# element = start + 'to' + end + ':' + weight + ';'
# print(element)
# The membership checks here omitted the relation suffix, so the lists were
# re-created on every edge; setdefault on the full key fixes that.
C.setdefault(start+',out,'+relation, []).append(end)
C.setdefault(end+',in,'+relation, []).append(start)
print(len(ids))
print(len(edgetype))
print(edgetype)
for K in range(1,11):
print(K)
maList = []
nmaList = []
for i in range(1,11):
mtotal = 0
nmtotal = 0
time = 50000
for j in range(time):
node = random.sample(list(ids), 1)[0]  # random.sample needs a sequence, not a set, on Python 3.11+
types = random.sample(list(edgetype), i)
# print(types)
m, nm = outbound(node, types, K, C)
mtotal += m
nmtotal += nm
ma = mtotal / time
nma = nmtotal / time
maList.append(ma)
nmaList.append(nma)
print(maList)
print(nmaList)
def test_batch(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
# primes_list += primes_list[:50000]
elementTotal = primes_list[:100000]
product = listProduct(elementTotal)
A_final = pow(A0, product, n)
for total in [1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]:
print(total)
result = primes_list[:total]
start_time = datetime.datetime.now()
p = listProduct(result)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
mp = product // p
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
A_post = pow(A0, mp, n)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
proof = prove_exponentiation_test(A_post, p, A_final, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
size = 0
size += sys.getsizeof(A_post)
print(size)
size = 0
size += sys.getsizeof(A_post) + sys.getsizeof(proof)
print(size)
start_time = datetime.datetime.now()
p = listProduct(result)
verified = A_final == pow(A_post, p, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
p = listProduct(result)
is_valid = verify_exponentiation_test(proof, A_post, p, A_final, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(is_valid)
result = primes_list[-total:]
start_time = datetime.datetime.now()
p = listProduct(result)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
a, b = bezoute_coefficients(p, product)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
d = pow(A0, a, n)
pre = pow(A_final, b, n)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# start_time = datetime.datetime.now()
proof = prove_knowledge_exponent(b, A_final, pre, n)
end_time = datetime.datetime.now()
# print(d,b)
print((end_time - start_time))
size = 0
size += sys.getsizeof(d) + sys.getsizeof(b)
print(size)
size = 0
size += sys.getsizeof(d) + sys.getsizeof(pre)
for e in proof:
size += sys.getsizeof(e)
print(size)
start_time = datetime.datetime.now()
p = listProduct(result)
verified = (pow(d, p, n) * pow(A_final, b, n)) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
p = listProduct(result)
is_valid = verify_knowledge_exponent(A_final, pre, proof[0], proof[1], proof[2], n)
verified = (pow(d, p, n) * pre) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(is_valid)
print(verified)
print('\n')
def test_product(self):
f = open('wiki_1m_prime.pk','rb')
wiki_list = pickle.load(f)
# primes_list += primes_list[:50000]
for total in [1+(1<<17), 1+(1<<18), 1+(1<<19), 600000, 700000, 800000, 900000, 1000000]:
print(total)
primes_list = wiki_list[:total]
start_time = datetime.datetime.now()
p = listProduct(primes_list)
end_time = datetime.datetime.now()
print((end_time - start_time))
start_time = datetime.datetime.now()
q = listProductSpaced(primes_list)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(p==q)
print('\n')
def test_single_witness_binary(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
for total in [1000000, 900000, 800000, 700000, 600000, 500000, 400000, 300000, 200000, 100000]:
print(total)
primes_list = primes_list[:total]
p = primes_list[:]
if len(p)&(len(p)-1)==0:
t=len(p).bit_length()-1
else:
t=len(p).bit_length()
for i in range((1<<t)-len(p)):
p.append(1)
proResult = [p]
while len(p)!=2:
q=[]
for i in range(len(p)//2):
q.append(p[2*i]*p[2*i+1])
proResult.append(q)
p=q
product = p[0]*p[1]
A_final = pow(A0, product, n)
mwp = 1
sample = 333
k=3
p = primes_list[:]
p[sample] = 1
start_time = datetime.datetime.now()
if len(p)&(len(p)-1)==0:
t=len(p).bit_length()-1
else:
t=len(p).bit_length()
for i in range((1<<t)-len(p)):
p.append(1)
s = sample>>(t-k)
mwp = listProduct(p[s<<(t-k):(s+1)<<(t-k)])
for r in range(k):
if (s>>r)%2==1:
mwp *= proResult[-k+r][((s>>(r+1))<<1)]
else:
mwp *= proResult[-k+r][((s>>(r+1))<<1)+1]
mw = pow(A0, mwp, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
start_time = datetime.datetime.now()
verified = pow(mw, primes_list[sample], n) == A_final
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
a, b = bezoute_coefficients(252533614457563255817176556954479732787, product)
d = pow(A0, a, n)
# return d, b
end_time = datetime.datetime.now()
print(d,b)
print((end_time - start_time))
start_time = datetime.datetime.now()
verified = (pow(d, 252533614457563255817176556954479732787, n) * pow(A_final, b, n)) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
print('\n')
def test_single_witness_division(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
for total in [100000]:
print(total)
primes_list = primes_list[:total]
start_time = datetime.datetime.now()
product = listProduct(primes_list)
end_time = datetime.datetime.now()
print((end_time - start_time))
A_final = pow(A0, product, n)
sample = 333
start_time = datetime.datetime.now()
mwp = product//primes_list[sample]
mw = pow(A0, mwp, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
start_time = datetime.datetime.now()
verified = False
# for i in range(10000):
verified = pow(mw, primes_list[sample], n) == A_final
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
a, b = bezoute_coefficients(252533614457563255817176556954479732787, product)
d = pow(A0, a, n)
# return d, b
end_time = datetime.datetime.now()
# print(d,b)
print((end_time - start_time))
start_time = datetime.datetime.now()
verified = False
# for i in range(10000):
verified = (pow(d, 252533614457563255817176556954479732787, n) * pow(A_final, b, n)) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
print('\n')
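The two verifications timed above are the standard RSA-accumulator membership and non-membership checks. In miniature, with small illustrative numbers and the Bezout coefficients computed inline via extended GCD (negative-exponent `pow` needs Python 3.8+):

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

n, g0 = 3233, 2            # toy modulus 61*53 and base, illustrative only
product = 3 * 5 * 7        # member primes
A = pow(g0, product, n)    # accumulator over the members

# Membership: the witness for 5 is g0 raised to the product of the others.
w = pow(g0, product // 5, n)
assert pow(w, 5, n) == A

# Non-membership for prime 11: Bezout gives a*11 + b*product == 1,
# so d = g0^a satisfies d^11 * A^b == g0 (mod n).
_, a, b = ext_gcd(11, product)
d = pow(g0, a, n)          # pow with a negative exponent: Python 3.8+
assert (pow(d, 11, n) * pow(A, b, n)) % n == g0
```

This is why the benchmarks repeatedly time `bezoute_coefficients` against the full product: the non-membership witness is dominated by that extended-GCD step.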
def test_batch_division(self):
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
A0 = 65537
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
# primes_list += primes_list[:50000]
elementTotal = primes_list[:100000]
product = listProduct(elementTotal)
A_final = pow(A0, product, n)
for total in [1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]:
print(total)
result = primes_list[:total]
p = listProduct(result)
# start_time = datetime.datetime.now()
# mw_list = []
# for e in result:
# mp = product // e
# mw_list.append(pow(A0, mp, n))
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# size = 0
# for e in mw_list:
# size += sys.getsizeof(e)
# print(size)
start_time = datetime.datetime.now()
p = listProduct(result)
mp = product // p
A_post = pow(A0, mp, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
size = 0
size += sys.getsizeof(A_post)
print(size)
start_time = datetime.datetime.now()
p = listProduct(result)
mp = product // p
A_post = pow(A0, mp, n)
Q = prove_exponentiation_test(A_post, p, A_final, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
size = 0
size += sys.getsizeof(A_post) + sys.getsizeof(Q)
print(size)
# start_time = datetime.datetime.now()
# verified = True
# for i in range(len(mw_list)):
# if verified:
# verified = A_final == pow(mw_list[i], result[i], n)
# end_time = datetime.datetime.now()
# print((end_time - start_time))
# print(verified)
start_time = datetime.datetime.now()
p = listProduct(result)
verified = A_final == pow(A_post, p, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
p = listProduct(result)
is_valid = verify_exponentiation_test(Q, A_post, p, A_final, n)
end_time = datetime.datetime.now()
print((end_time - start_time))
print(is_valid)
print('\n')
result = primes_list[-total:]
p = listProduct(result)
start_time = datetime.datetime.now()
p = listProduct(result)
a, b = bezoute_coefficients(p, product)
d = pow(A0, a, n)
end_time = datetime.datetime.now()
# print(d,b)
print((end_time - start_time))
size = 0
size += sys.getsizeof(d) + sys.getsizeof(b)
print(size)
result = primes_list[-total:]
start_time = datetime.datetime.now()
p = listProduct(result)
a, b = bezoute_coefficients(p, product)
d = pow(A0, a, n)
pre = pow(A_final, b, n)
pre_inv = mul_inv(pre, n)
# print(pre_inv)
proof = prove_knowledge_exponent(b, A_final, pre, n)
Q = prove_exponentiation_test(d, p, (A0*pre_inv)%n, n)
# print(pow(d,p,n)==(A0*pre_inv)%n)
end_time = datetime.datetime.now()
# print(d,b)
print((end_time - start_time))
size = 0
size += sys.getsizeof(d) + sys.getsizeof(pre)
for e in proof:
size += sys.getsizeof(e)
size += sys.getsizeof(Q)
print(size)
start_time = datetime.datetime.now()
p = listProduct(result)
verified = (pow(d, p, n) * pow(A_final, b, n)) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(verified)
start_time = datetime.datetime.now()
p = listProduct(result)
is_valid = verify_knowledge_exponent(A_final, pre, proof[0], proof[1], proof[2], n)
pre_inv = mul_inv(pre, n)
verified = verify_exponentiation_test(Q, d, p, (A0*pre_inv)%n, n)
# verified = (pow(d, p, n) * pre) % n == A0
end_time = datetime.datetime.now()
print((end_time - start_time))
print(is_valid)
print(verified)
print('\n')
def test_deletion(self):
p = 252533614457563255817176556954479732787
q = 144382690536755041477176940054403505719
n = 36461482706354564422592875042006590908268153693683612285024099145347146308853
# draw random number within range of [0,n-1]
A0 = 65537
f = open('wiki_1m_prime.pk','rb')
primes_list = pickle.load(f)
samples = primes_list[:10]
product = listProduct(samples)
A_final = pow(A0, product, n)
sample = 5
# print(primes_list[sample])
sam_inv = mul_inv(primes_list[sample], (p-1)*(q-1))
# print(sam_inv)
A_del = pow(A_final, sam_inv, n)
# print(pow(A_del, primes_list[sample], n) == A_final)
mwp = product//primes_list[sample]
mw = pow(A0, mwp, n)
print(A_del==mw)
def outbound(id, tList, K, C):
m = 0
nm = 0
for t in tList:
if (id+',out,'+t) in C:
m += 1
nextList = C[id+',out,'+t]
if K>1:
for node in nextList:
mr, nmr = outbound(node, tList, K-1, C)
m += mr
nm += nmr
else:
nm += 1
return m, nm
def listProduct(p):
# Balanced product tree: pad with 1s up to the next power of two, then
# repeatedly multiply adjacent pairs. Note that the padding mutates the
# caller's list in place.
if len(p)&(len(p)-1)==0:
t = len(p).bit_length()-1
else:
t = len(p).bit_length()
for i in range((1<<t)-len(p)):
p.append(1)
while len(p)!=1:
q = []
for i in range(len(p)//2):
q.append(p[2*i]*p[2*i+1])
p = q
return p[0]
def listProductSpaced(p):
# Same balanced product tree, but the padding 1s are interleaved at even
# positions instead of appended, so the original factors pair differently.
if len(p)&(len(p)-1)==0:
t = len(p).bit_length()-1
else:
t = len(p).bit_length()
for i in range((1<<t)-len(p)):
p.insert(2*i, 1)
while len(p)!=1:
q = []
for i in range(len(p)//2):
q.append(p[2*i]*p[2*i+1])
p = q
return p[0]
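Both helpers compute the same product via a balanced multiplication tree, which keeps the operands of each multiplication similar in size (quasi-linear in total bit length with fast integer multiplication, versus quadratic for a left-to-right fold). A self-contained check against `math.prod`, using a non-mutating re-implementation (`tree_product` is my name, mirroring `listProduct` above without the in-place padding):

```python
import math
import random

def tree_product(xs):
    # Balanced product tree over a copy of the input; pads an odd level
    # with a single 1 instead of padding to a power of two up front.
    xs = list(xs)
    while len(xs) > 1:
        if len(xs) % 2:
            xs.append(1)
        xs = [xs[2*i] * xs[2*i+1] for i in range(len(xs)//2)]
    return xs[0] if xs else 1

values = [random.randrange(1 << 64, 1 << 65) for _ in range(37)]  # odd, non-power-of-two length
assert tree_product(values) == math.prod(values)
```

The `test_product` benchmark above compares the two padding strategies; either way the padding 1s are multiplicatively inert, so the result is unchanged.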
#print(hash_to_prime(10))
s = unittest.TestSuite()
testname = 'test'
print(testname)
s.addTest(AccumulatorTest(testname))
#s.addTests([Test_Myclass1("test_sub"),Test_Myclass1("test_sum")])
fs = open(testname+'.txt',"w")
run = unittest.TextTestRunner(fs)
run.run(s)
# === tests/test_happy_path.py (mbaechtold/django-onetimesecret, MIT) ===
class HappyPathTest(WebTest):
def test_happy_path_without_custom_passphrase(self):
"""
Test the happy path from the start to the end where Bob creates a secret
and Alice retrieves the secret.
"""
# Alice visits the root of our application.
response = self.app.get("/")
assert response.status == "200 OK"
# Alice creates a secret.
form = response.form
form["content"] = "A secret message for Bob"
response = form.submit().follow()
assert response.status == "200 OK"
# Alice copies the url to the secret and sends it to Bob.
secret_url = response.html.select("#url-to-secret")[0].attrs["value"]
# Bob opens the link he got from Alice.
response = self.app.get(secret_url)
assert response.status == "200 OK"
# Bob clicks the confirmation button.
response = response.form.submit()
# Bob can read the secret.
the_secret = response.html.select("#the-secret")[0].text
assert the_secret == "A secret message for Bob"
def test_happy_path_with_custom_passphrase(self):
"""
Test the happy path from the start to the end where Bob creates a secret,
encrypted with a custom passphrase, and Alice retrieves the secret.
"""
# Alice visits the root of our application.
response = self.app.get("/")
assert response.status == "200 OK"
# Alice creates a secret encrypted with a custom passphrase.
form = response.form
form["content"] = "A secret message for Bob"
form["passphrase"] = "not-for-mallory"
response = form.submit().follow()
assert response.status == "200 OK"
# Alice copies the url to the secret and sends it to Bob.
secret_url = response.html.select("#url-to-secret")[0].attrs["value"]
# Bob opens the link he got from Alice.
response = self.app.get(secret_url)
assert response.status == "200 OK"
# Bob enters the passphrase and clicks the confirmation button.
form = response.form
form["passphrase"] = "not-for-mallory"
response = form.submit()
# Bob can read the secret.
the_secret = response.html.select("#the-secret")[0].text
assert the_secret == "A secret message for Bob"
| 35.895522 | 81 | 0.624116 | 314 | 2,405 | 4.719745 | 0.216561 | 0.072874 | 0.080972 | 0.093117 | 0.866397 | 0.866397 | 0.866397 | 0.866397 | 0.789474 | 0.789474 | 0 | 0.012695 | 0.279418 | 2,405 | 66 | 82 | 36.439394 | 0.84247 | 0.31185 | 0 | 0.806452 | 0 | 0 | 0.163395 | 0 | 0 | 0 | 0 | 0 | 0.258065 | 1 | 0.064516 | false | 0.129032 | 0.032258 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
6cf38e8998b6f640eb2b441ba5659eb0d3e14da7 | 22,746 | py | Python | tests/test_tokenizer.py | Aplopio/plasticparser | 829dd9cf392a42d5f923ebab59b7803f7d2d0c60 | [
"MIT"
] | 22 | 2015-03-05T08:25:27.000Z | 2021-08-22T00:39:44.000Z | tests/test_tokenizer.py | Aplopio/plasticparser | 829dd9cf392a42d5f923ebab59b7803f7d2d0c60 | [
"MIT"
] | 1 | 2016-05-03T14:03:19.000Z | 2016-05-05T05:24:00.000Z | tests/test_tokenizer.py | Aplopio/plasticparser | 829dd9cf392a42d5f923ebab59b7803f7d2d0c60 | [
"MIT"
] | 5 | 2015-06-14T19:31:23.000Z | 2016-03-03T18:26:51.000Z | import unittest
from plasticparser import tokenizer, grammar_parsers
class TokenizerTest(unittest.TestCase):
def test_should_sanitize_value(self):
for char in grammar_parsers.RESERVED_CHARS:
if char not in '(':
sanitized_value = grammar_parsers.sanitize_value(
"abc{}".format(char))
self.assertEqual(sanitized_value, 'abc\\{}'.format(char))
def get_query_string(self, parsed_dict):
return parsed_dict['query']['filtered'][
'query']['query_string']['query']
def test_should_tokenize_and_parse_logical_expression(self):
query_string = "abc:>def"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(self.get_query_string(parsed_string), "abc:>def")
query_string = "abc:>def and mms:>asd"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
"abc:>def AND mms:>asd")
query_string = "abc:>def mms:>asd"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
"abc:>def mms:>asd")
query_string = "(abc:>def mms:>asd)"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
"(abc:>def mms:>asd)")
query_string = "abc:>def mms:>asd (abc:def or pqe:123) and blab:blab"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
"abc:>def mms:>asd (abc:def OR pqe:123) AND blab:blab")
query_string = "( abc:>def mms:>asd ) (abc:>def mms:>asd) "
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
"(abc:>def mms:>asd) (abc:>def mms:>asd)")
query_string = "( abc:>def mms:>asd ) and (abc:>def mms:>asd) "
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
"(abc:>def mms:>asd) AND (abc:>def mms:>asd)")
query_string = "abc def"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(self.get_query_string(parsed_string), "abc def")
query_string = 'abc (python or london) (abc:def dd:ff) [fgdgdfg]'
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
self.get_query_string(parsed_string),
u'abc (python OR london) (abc:def dd:ff) \[fgdgdfg\]')
def test_should_parse_logical_expression_with_type_and_facets(self):
query_string = "type:def facets: [ aaa(abc:def) ] (abc:>def mms:>asd)"
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [], 'must_not': [], 'must': [
{
'type': {
'value': 'def'}}]}}, 'query': {
'query_string': {
'query': u'(abc:>def mms:>asd)',
"default_operator": "and"}}}},
'facets': {
'aaa': {
'facet_filter': {
'query': {
'query_string': {
'query': u'abc:def', "default_operator": "and"}}}, 'terms': {
'field': 'aaa_nonngram', 'size': 20}}}}
self.assertEqual(parsed_string, expected_query_string)
def test_should_parse_logical_expression_with_type_and_aggs(self):
self.maxDiff = None
query_string = "type:def aggregations: [ aaa(abc:def) ] (abc:>def mms:>asd)"
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [], 'must_not': [], 'must': [
{
'type': {
'value': 'def'
}
}
]
}
},
'query': {
'query_string': {
'query': u'(abc:>def mms:>asd)',
"default_operator": "and"
}
}
}
},
"size": 0,
'aggregations': {
'aaa': {
'aggregations': {
'aaa': {
'terms': {
'field': 'aaa_nonngram', 'size': 20
},
'aggregations': {
'aaa': {
'filter': {
'query': {
'query_string': {
'query': u'abc:def',
"default_operator": "and"
}
}
}
}
}
},
}
}
}
}
self.assertEqual(parsed_string, expected_query_string)
def test_should_not_misinterpret_words_starting_with_logical_ops(self):
query_string = "type:candidates AND nested:[metadata_facets(field_name:(location) AND field_value:(orlando))] "
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': [
{
'type': {
'value': u'candidates'
}
},
{
'nested': {
'path': u'metadata_facets',
'query': {
'query_string': {
'query': u'field_name:(location) AND field_value:(orlando)',
'default_operator': 'and'
}
}
}
}
]
}
}
}
}
}
self.assertEqual(parsed_string, expected_query_string)
def test_should_parse_logical_expression_with_type(self):
query_string = "type:def (abc:>def mms:>asd)"
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': [
{
'type': {
'value': 'def'}}]}},
'query': {
'query_string': {
'query': u'(abc:>def mms:>asd)',
"default_operator": "and"}}}},
}
self.assertEqual(parsed_string, expected_query_string)
def test_should_parse_logical_expression_with_type_multi_facets(self):
query_string = "type:def (abc:>def mms:>asd) facets: [ aaa.bb(abc:def) bbb(cc:ddd) ] "
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [], 'must_not': [], 'must': [
{
'type': {
'value': 'def'}}]}}, 'query': {
'query_string': {
'query': u'(abc:>def mms:>asd)',
"default_operator": "and"}}}},
'facets': {
'aaa.bb': {
'facet_filter': {
'query': {
'query_string': {
'query': u'abc:def', "default_operator": "and"}}}, 'terms': {
'field': 'aaa.bb_nonngram', 'size': 20}, 'nested': u'aaa'},
'bbb': {
'facet_filter': {
'query': {
'query_string': {
'query': u'cc:ddd', "default_operator": "and"}}}, 'terms': {
'field': 'bbb_nonngram', 'size': 20}}}}
self.assertEqual(parsed_string, expected_query_string)
def test_should_parse_logical_expression_with_type_multi_aggs(self):
query_string = "type:def (abc:>def mms:>asd) aggregations: [ aaa.bb(abc:def) bbb(cc:ddd) ] "
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
"query": {
"filtered": {
"filter": {
"bool": {
"should": [],
"must_not": [],
"must": [
{
"type": {
"value": "def"
}
}
]
}
},
"query": {
"query_string": {
"query": "(abc:>def mms:>asd)",
"default_operator": "and"
}
}
}
},
"size": 0,
"aggregations": {
"aaa.bb": {
"aggregations": {
"aaa.bb": {
"terms": {
"field": "aaa.bb_nonngram",
"size": 20
},
"aggregations": {
"aaa.bb": {
"filter": {
"query": {
"query_string": {
"query": "abc:def",
"default_operator": "and"
}
}
}
}
}
}
},
"nested": {
"path": "aaa"
}
},
"bbb": {
"aggregations": {
"bbb": {
"terms": {
"field": "bbb_nonngram",
"size": 20
},
"aggregations": {
"bbb": {
"filter": {
"query": {
"query_string": {
"query": "cc:ddd",
"default_operator": "and"
}
}
}
}
}
}
}
}
}
}
self.assertEqual(parsed_string, expected_query_string)
def test_should_parse_basic_logical_expression(self):
query_string = 'title:hello OR description:"world"'
parsed_string = tokenizer.tokenize(query_string)
expected_query_string = {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': []}},
'query': {
'query_string': {
'query': u'title:hello OR description:"world"',
"default_operator": "and"}}}},
}
self.assertEqual(parsed_string, expected_query_string)
def test_should_parse_basic_logical_expression_facets_with_no_facet_filters(
self):
query_string = "type:def (abc:>def mms:>asd) facets: [ aaa.bb ]"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
parsed_string, {'query': {
'filtered': {'filter': {'bool': {'should': [], 'must_not': [], 'must': [{'type': {'value': 'def'}}]}},
'query': {'query_string': {'query': u'(abc:>def mms:>asd)', 'default_operator': 'and'}}}},
'facets': {'aaa.bb': {'terms': {'field': 'aaa.bb_nonngram', 'size': 20}}}})
def test_should_parse_basic_logical_expression_aggs_with_no_filters(
self):
query_string = "type:def (abc:>def mms:>asd) aggregations: [ aaa.bb ]"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
parsed_string, {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': [
{
'type': {
'value': 'def'
}
}
]
}
},
'query': {
'query_string': {
'query': u'(abc:>def mms:>asd)',
'default_operator': 'and'
}
}
}
},
"size": 0,
'aggregations': {
'aaa.bb': {
'aggregations': {
'aaa.bb': {
'terms': {
'field': 'aaa.bb_nonngram', 'size': 20
}
}
}
}
}
}
)
def test_should_parse_basic_logical_expression_facets_with_simple_field(
self):
query_string = "type:def (abc:>def mms:>asd) facets: [ aaa ]"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(
parsed_string, {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [], 'must_not': [], 'must': [
{
'type': {
'value': 'def'}}]}}, 'query': {
'query_string': {
'query': u'(abc:>def mms:>asd)',
"default_operator": "and"}}}},
'facets': {
'aaa': {
'terms': {
'field': 'aaa_nonngram', 'size': 20}}}})
def test_should_parse_multiword_field_value(self):
query_string = "name:(krace OR kumar) abc:>def"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string['query']['filtered']['query']['query_string']['query'],
u'name:(krace OR kumar) abc:>def')
def test_should_parse_logical_expression_with_type_and_facets_2(self):
query_string = "facets: [aaa(a:b abc:(def fff) c:d e:(f))]"
parsed_string = tokenizer.tokenize(query_string)
expected_parse_string = {
'query': {'filtered': {'filter': {'bool': {'should': [], 'must_not': [], 'must': []}}}}, 'facets': {
'aaa': {'facet_filter': {
'query': {'query_string': {'query': u'a:b abc:(def fff) c:d e:(f)', "default_operator": "and"}}},
'terms': {'field': 'aaa_nonngram', 'size': 20}}}}
self.assertEqual(parsed_string, expected_parse_string)
def test_should_parse_nested_expression(self):
query_string = "nested:[aaa(a:(bb) abc:(def fff))]"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string, {'query': {'filtered': {'filter': {'bool': {'should': [], 'must_not': [],
'must': [{'nested': {'path': 'aaa',
'query': {
'query_string': {
'query': u'a:(bb) abc:(def fff)',
'default_operator': 'and'}}}}]}}}},
})
def test_should_escape_value_with_colon(self):
query_string = "tags:dev:ops"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string['query']['filtered']['query']['query_string']['query'],
u'tags:dev\:ops')
def test_should_add_highlight_fields(self):
query_string = "highlight:[aaa] asdasd"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string, {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': []
}
},
'query': {
'query_string': {
'query': u'asdasd',
'default_operator': 'and'
}
}
}
},
'highlight': {
'fields': {
u'aaa': {}
}
}
})
def test_should_add_sort_fields(self):
query_string = 'sort:[-sfield(missing:_last)] asdasd'
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string, {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': []
}
},
'query': {
'query_string': {
'query': u'asdasd',
'default_operator': 'and'
}
}
}
},
'sort': {
'sfield': {"order": "desc", "missing": "_last"}
}
})
def test_should_add_sort_fields_with_options(self):
query_string = 'sort:[inbox_sort.user(missing:_last,nested_path:inbox_sort,nested_filter:(term:(inbox_sort.user:3)))] asdasd'
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string, {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': []
}
},
'query': {
'query_string': {
'query': u'asdasd',
'default_operator': 'and'
}
}
}
},
'sort': {
"inbox_sort.user": {
"order": "asc",
"nested_path": "inbox_sort",
"nested_filter": {
"term": {"inbox_sort.user": '3'}
},
"missing": "_last"
}
}
})
def test_should_add_sort_fields_descending(self):
query_string = "sort:[-aaa] asdasd"
parsed_string = tokenizer.tokenize(query_string)
self.assertEqual(parsed_string, {
'query': {
'filtered': {
'filter': {
'bool': {
'should': [],
'must_not': [],
'must': []
}
},
'query': {
'query_string': {
'query': u'asdasd',
'default_operator': 'and'
}
}
}
},
'sort': {
'aaa': {"order": "desc"}
}
})
| 41.206522 | 140 | 0.350743 | 1,514 | 22,746 | 5.007926 | 0.084544 | 0.149433 | 0.067528 | 0.047481 | 0.856634 | 0.821023 | 0.791348 | 0.758903 | 0.748483 | 0.716434 | 0 | 0.003018 | 0.533852 | 22,746 | 551 | 141 | 41.281307 | 0.712063 | 0 | 0 | 0.527938 | 0 | 0.009634 | 0.191154 | 0.010551 | 0 | 0 | 0 | 0 | 0.05395 | 1 | 0.040462 | false | 0 | 0.003854 | 0.001927 | 0.04817 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6cfc66edc22d592891acb7822d3df931a62530a7 | 1,837 | py | Python | tests/test_extract_filename.py | pyrates/multifruits | bdb7c45a16e702be2e4faf271981068fa56482ef | [
"MIT"
] | 3 | 2017-12-11T22:37:25.000Z | 2018-05-08T15:28:10.000Z | tests/test_extract_filename.py | pyrates/multifruits | bdb7c45a16e702be2e4faf271981068fa56482ef | [
"MIT"
] | 3 | 2018-03-20T00:15:51.000Z | 2018-12-03T10:06:46.000Z | tests/test_extract_filename.py | pyrates/multifruits | bdb7c45a16e702be2e4faf271981068fa56482ef | [
"MIT"
] | null | null | null | from multifruits import extract_filename
def test_extract_filename():
assert extract_filename({b'filename': b'foo.bar'}) == 'foo.bar'
def test_extract_filename_without_filename():
assert extract_filename({}) == ''
def test_extract_filename_star():
assert extract_filename({b'filename*': b"utf-8''foo.bar"}) == 'foo.bar'
def test_extract_filename_and_star():
assert extract_filename({
b'filename*': b"utf-8''foo.bar",
b'filename': b"baz.quux"
}) == 'foo.bar'
def test_extract_filename_and_star_wrong_encoding():
assert extract_filename({
b'filename*': b"unknown''foo.bar",
b'filename': b"baz.quux"
}) == 'baz.quux'
def test_extract_filename_star_wrong_encoding():
assert extract_filename({b'filename*': b"unknow''foo.bar"}) == 'foo.bar'
def test_extract_filename_star_latin1_with_accent():
assert extract_filename({
b'filename*': "iso-8859-1''foo-ä.html".encode('latin1')
}) == 'foo-ä.html'
def test_extract_filename_star_utf8_with_accent():
assert extract_filename({
b'filename*': "UTF-8''foo-ä-€.html".encode()
}) == 'foo-ä-€.html'
def test_extract_filename_star_no_encoding():
assert extract_filename({
b'filename*': "''foo-ä-€.html".encode()
}) == "foo-ä-€.html"
def test_extract_filename_star_no_encoding_but_filename():
assert extract_filename({
b'filename*': "''foo-ä-€.html".encode(),
b'filename': b"baz.quux"
}) == "baz.quux"
def test_extract_filename_star_without_quotes():
assert extract_filename({
b'filename*': "foo-ä-€.html".encode()
}) == "foo-ä-€.html"
def test_extract_filename_star_without_quotes_but_filename():
assert extract_filename({
b'filename*': "foo-ä-€.html".encode(),
b'filename': b"baz.quux"
}) == "baz.quux"
| 26.242857 | 76 | 0.659771 | 252 | 1,837 | 4.551587 | 0.142857 | 0.32694 | 0.146469 | 0.230166 | 0.911944 | 0.908457 | 0.790759 | 0.678291 | 0.564952 | 0.484743 | 0 | 0.007208 | 0.169298 | 1,837 | 69 | 77 | 26.623188 | 0.739187 | 0 | 0 | 0.355556 | 0 | 0 | 0.231355 | 0.011976 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.266667 | true | 0 | 0.022222 | 0 | 0.288889 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9f1a9e8904cca76eb867442954ecbaae4c1859dd | 178 | py | Python | tests/unit/exceptions/test_dice_exception.py | is-gabs/rpgtk | 11f657ed52374b0d9a106f5e0f9b433441141f6d | [
"MIT"
] | 2 | 2022-02-18T01:22:11.000Z | 2022-03-02T02:32:19.000Z | tests/unit/exceptions/test_dice_exception.py | is-gabs/rpgtk | 11f657ed52374b0d9a106f5e0f9b433441141f6d | [
"MIT"
] | null | null | null | tests/unit/exceptions/test_dice_exception.py | is-gabs/rpgtk | 11f657ed52374b0d9a106f5e0f9b433441141f6d | [
"MIT"
] | null | null | null | from rpgtk.exceptions import DiceException, RPGTKBaseException
def test_should_extends_rpgtk_base_exception():
assert issubclass(DiceException, RPGTKBaseException) is True
| 29.666667 | 64 | 0.853933 | 19 | 178 | 7.736842 | 0.842105 | 0.421769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101124 | 178 | 5 | 65 | 35.6 | 0.91875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9f504ec69a8c54dbf49e60106cb7062b1d625fc2 | 780 | py | Python | jangl_webleads_inbound/verticals/auto_warranty.py | jangl-platform/jangl-webleads-inbound | 7ba7734c0614c946f52af23829b7c61ba4fa9460 | [
"MIT"
] | 1 | 2021-01-14T21:55:27.000Z | 2021-01-14T21:55:27.000Z | jangl_webleads_inbound/verticals/auto_warranty.py | jangl-platform/jangl-webleads-inbound | 7ba7734c0614c946f52af23829b7c61ba4fa9460 | [
"MIT"
] | 1 | 2021-06-10T21:22:36.000Z | 2021-06-10T21:22:36.000Z | jangl_webleads_inbound/verticals/auto_warranty.py | jangl-platform/jangl-webleads-inbound | 7ba7734c0614c946f52af23829b7c61ba4fa9460 | [
"MIT"
] | null | null | null | from rest_framework import serializers
allow_blank = {'default': '', 'initial': '', 'allow_blank': True}
allow_null = {'default': None, 'initial': None, 'allow_null': True}
class PingDataSerializer(serializers.Serializer):
birth_date = serializers.DateField(**allow_null)
year = serializers.IntegerField(**allow_null)
make = serializers.CharField(**allow_blank)
model = serializers.CharField(**allow_blank)
mileage = serializers.CharField(**allow_blank)
class PostDataSerializer(serializers.Serializer):
birth_date = serializers.DateField(**allow_null)
year = serializers.IntegerField(**allow_null)
make = serializers.CharField(**allow_blank)
model = serializers.CharField(**allow_blank)
mileage = serializers.CharField(**allow_blank)
| 37.142857 | 67 | 0.744872 | 81 | 780 | 6.962963 | 0.308642 | 0.141844 | 0.265957 | 0.319149 | 0.712766 | 0.712766 | 0.712766 | 0.712766 | 0.712766 | 0.712766 | 0 | 0 | 0.126923 | 780 | 20 | 68 | 39 | 0.828194 | 0 | 0 | 0.666667 | 0 | 0 | 0.062821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.866667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
9f6025e97408278dd5634912db4f65a2273f5531 | 92 | py | Python | randomNumber.py | phutkins/python-meetup-random-number | 0918f355896adc38a9c0c927387f72cac1ab3145 | [
"MIT"
] | null | null | null | randomNumber.py | phutkins/python-meetup-random-number | 0918f355896adc38a9c0c927387f72cac1ab3145 | [
"MIT"
] | null | null | null | randomNumber.py | phutkins/python-meetup-random-number | 0918f355896adc38a9c0c927387f72cac1ab3145 | [
"MIT"
] | null | null | null |
import random
def chooseNumber (start, end=None):
return random.randint(start, end)
| 11.5 | 37 | 0.717391 | 12 | 92 | 5.5 | 0.75 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184783 | 92 | 7 | 38 | 13.142857 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
4c8d5e50f3d8a22554320a9e5bdae8a0ce0c9d2e | 688 | py | Python | stan/proc_functions/to_csv.py | chappers/Stan | 61c189ab12ea50214390804cff5694ac51f8df35 | [
"MIT"
] | 1 | 2015-01-06T11:10:24.000Z | 2015-01-06T11:10:24.000Z | stan/proc_functions/to_csv.py | chappers/Stan | 61c189ab12ea50214390804cff5694ac51f8df35 | [
"MIT"
] | null | null | null | stan/proc_functions/to_csv.py | chappers/Stan | 61c189ab12ea50214390804cff5694ac51f8df35 | [
"MIT"
] | null | null | null | """
The :mod:`stan.proc_functions.to_csv` module converts a dataframe object to csv. See Pandas documentation for more information.
"""
import pandas as pd
def to_csv(data, path_or_buf, sep=',', na_rep='', float_format=None, cols=None, header=True, index=True, index_label=None, mode='w', nanRep=None, encoding=None, quoting=None, line_terminator='\n', chunksize=None, tupleize_cols=False, date_format=None, **kwds):
    return data.to_csv(path_or_buf, sep=sep, na_rep=na_rep, float_format=float_format, cols=cols, header=header, index=index, index_label=index_label, mode=mode, nanRep=nanRep, encoding=encoding, quoting=quoting, line_terminator=line_terminator, chunksize=chunksize, tupleize_cols=tupleize_cols, date_format=date_format, **kwds)
| 68.8 | 264 | 0.747093 | 108 | 688 | 4.574074 | 0.472222 | 0.040486 | 0.036437 | 0.048583 | 0.704453 | 0.704453 | 0.704453 | 0.704453 | 0.704453 | 0.704453 | 0 | 0 | 0.100291 | 688 | 9 | 265 | 76.444444 | 0.798061 | 0.184593 | 0 | 0 | 0 | 0 | 0.01085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 10 |
4cb45c83532610b99f3fc7730ac48cc52498cc38 | 17,282 | py | Python | old/dropbox_scripts/helpers/remove_special_characters/unit_tests_remove_special_characters.py | jolitp/automation_scripts | ba0c94e5212f0069b89f75a48fe2e2aafb5c921c | [
"MIT"
] | null | null | null | old/dropbox_scripts/helpers/remove_special_characters/unit_tests_remove_special_characters.py | jolitp/automation_scripts | ba0c94e5212f0069b89f75a48fe2e2aafb5c921c | [
"MIT"
] | null | null | null | old/dropbox_scripts/helpers/remove_special_characters/unit_tests_remove_special_characters.py | jolitp/automation_scripts | ba0c94e5212f0069b89f75a48fe2e2aafb5c921c | [
"MIT"
] | null | null | null | #! /usr/bin/python3
"""
tests remove_special_characters(...) function from helpers module
"""
import unittest
import remove_special_characters as script
class UnitTestRemoveSpecialCharacters(unittest.TestCase):
"""
tests for remove_special_characters(string) function
"""
# region test class TestRemoveSpecialCharacters(unittest.TestCase):
# region def test_string_does_not_contain_special_characters(self):
def test_string_does_not_contain_special_characters(self):
"""
test that the resulting string is the same as the original string if
the original string does not have any special characters
"""
original_string : str = "does not contain special characters"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(original_string, resulting_string)
...
# endregion def test_string_does_not_contain_special_characters(self):
# region def test_string_does_contain_1_backslash_character(self):
def test_string_does_contain_1_backslash_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 \\ character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_backslash_character(self):
# region def test_string_does_contain_1_double_quote_character(self):
def test_string_does_contain_1_double_quote_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 \" character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_double_quote_character(self):
# region def test_string_does_contain_1_single_quote_character(self):
def test_string_does_contain_1_single_quote_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 \' character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_single_quote_character(self):
# region def test_string_does_contain_1_pipe_character(self):
def test_string_does_contain_1_pipe_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 | character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_pipe_character(self):
# region def test_string_does_contain_1_open_parenthesis_character(self):
def test_string_does_contain_1_open_parenthesis_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 ( character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_open_parenthesis_character(self):
# region def test_string_does_contain_1_close_parenthesis_character(self):
def test_string_does_contain_1_close_parenthesis_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 ) character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_close_parenthesis_character(self):
# region def test_string_does_contain_1_tilde_character(self):
def test_string_does_contain_1_tilde_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 ~ character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_tilde_character(self):
# region def test_string_does_contain_1_backtick_character(self):
def test_string_does_contain_1_backtick_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 ` character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_backtick_character(self):
# region def test_string_does_contain_1_pound_character(self):
def test_string_does_contain_1_pound_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 # character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_pound_character(self):
# region def test_string_does_contain_1_at_character(self):
def test_string_does_contain_1_at_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 @ character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_at_character(self):
# region def test_string_does_contain_1_exclamation_point_character(self):
def test_string_does_contain_1_exclamation_point_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 ! character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_exclamation_point_character(self):
# region def test_string_does_contain_1_dollar_sign_character(self):
def test_string_does_contain_1_dollar_sign_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 $ character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_dollar_sign_character(self):
# region def test_string_does_contain_1_percentage_sign_character(self):
def test_string_does_contain_1_percentage_sign_character(self):
"""
        test that the resulting string has one less character
"""
        original_string : str = "contains 1 % character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_percentage_sign_character(self):
# region def test_string_does_contain_1_caret_sign_character(self):
def test_string_does_contain_1_caret_sign_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 ^ character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_caret_sign_character(self):
# region def test_string_does_contain_1_ampersand_character(self):
def test_string_does_contain_1_ampersand_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 & character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_ampersand_character(self):
# region def test_string_does_contain_1_asterisk_character(self):
def test_string_does_contain_1_asterisk_character(self):
"""
        test that the resulting string has one less character
"""
original_string : str = "contains 1 * character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_asterisk_character(self):
# region def test_string_does_contain_1_open_bracket_character(self):
def test_string_does_contain_1_open_bracket_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 [ character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_open_bracket_character(self):
# region def test_string_does_contain_1_close_bracket_character(self):
def test_string_does_contain_1_close_bracket_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 ] character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_close_bracket_character(self):
# region def test_string_does_contain_1_open_curly_bracket_character(self):
def test_string_does_contain_1_open_curly_bracket_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 { character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_open_curly_bracket_character(self):
# region def test_string_does_contain_1_close_curly_bracket_character(self):
def test_string_does_contain_1_close_curly_bracket_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 } character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_close_curly_bracket_character(self):
# region def test_string_does_contain_1_semicolon_character(self):
def test_string_does_contain_1_semicolon_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 ; character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_semicolon_character(self):
# region def test_string_does_contain_1_colon_character(self):
def test_string_does_contain_1_colon_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 : character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_colon_character(self):
# region def test_string_does_contain_1_open_angle_bracket_character(self):
def test_string_does_contain_1_open_angle_bracket_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 < character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_open_angle_bracket_character(self):
# region def test_string_does_contain_1_close_angle_bracket_character(self):
def test_string_does_contain_1_close_angle_bracket_character(self):
"""
        test that the special character is removed from the resulting string
        """
        original_string : str = "contains 1 > character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_close_angle_bracket_character(self):
# region def test_string_does_contain_1_interrogation_point_character(self):
def test_string_does_contain_1_interrogation_point_character(self):
"""
        test that the special character is removed from the resulting string
"""
original_string : str = "contains 1 ? character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
# endregion def test_string_does_contain_1_interrogation_point_character(self):
    # region def test_string_does_contain_2_special_characters(self):
def test_string_does_contain_2_special_characters(self):
"""
        test that both special characters are removed from the resulting string
"""
original_string : str = "contains$ 1 ? character"
expected_result : str = "contains 1 character"
resulting_string : str = \
script.remove_special_characters(original_string, debug_function = True)
self.assertEqual(expected_result, resulting_string)
...
    # endregion def test_string_does_contain_2_special_characters(self):
...
# endregion class TestRemoveSpecialCharacters(unittest.TestCase):
if __name__ == "__main__":
unittest.main()
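The `script.remove_special_characters` implementation these tests exercise is not included in this chunk. A minimal sketch consistent with the visible cases (a hypothetical re-implementation inferred only from the assertions above, not the project's actual code) is:

```python
import re

def remove_special_characters(text: str, debug_function: bool = False) -> str:
    """Drop characters such as $ % ^ & * [ ] { } ; : < > ? and tidy spacing.

    Hypothetical sketch inferred from the unit tests; the real ``script``
    module may handle a wider character set.
    """
    stripped = re.sub(r"[$%^&*\[\]{};:<>?]", "", text)
    # Removing a character can leave a double space; collapse runs of spaces.
    return re.sub(r" {2,}", " ", stripped).strip()
```

Note the tests expect two characters to disappear per removal (the special character and one of the surrounding spaces), which the space-collapsing step provides.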
# --- app/migrations/0034_auto_20171013_0855.py (minerva22/mf-dataentry, MIT) ---
# -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-10-13 08:55
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('app', '0033_auto_20171013_0745'),
]
operations = [
migrations.AlterField(
model_name='currency',
name='code_dub',
field=models.CharField(blank=True, max_length=10, null=True, unique=True),
),
migrations.AlterField(
model_name='currency',
name='code_leb',
field=models.CharField(blank=True, max_length=10, null=True, unique=True),
),
migrations.AlterField(
model_name='securitybond',
name='circular',
field=models.CharField(max_length=4),
),
migrations.AlterField(
model_name='securitybond',
name='code',
field=models.CharField(max_length=10, unique=True),
),
migrations.AlterField(
model_name='securitybond',
name='symbol',
field=models.CharField(max_length=10),
),
migrations.AlterField(
model_name='securityfutures',
name='circular',
field=models.CharField(max_length=4),
),
migrations.AlterField(
model_name='securityfutures',
name='code',
field=models.CharField(max_length=10, unique=True),
),
migrations.AlterField(
model_name='securityfutures',
name='symbol',
field=models.CharField(max_length=10),
),
migrations.AlterField(
model_name='securityoption',
name='circular',
field=models.CharField(max_length=4),
),
migrations.AlterField(
model_name='securityoption',
name='code',
field=models.CharField(max_length=10, unique=True),
),
migrations.AlterField(
model_name='securityoption',
name='symbol',
field=models.CharField(max_length=10),
),
migrations.AlterField(
model_name='securityshare',
name='circular',
field=models.CharField(max_length=4),
),
migrations.AlterField(
model_name='securityshare',
name='code',
field=models.CharField(max_length=10, unique=True),
),
migrations.AlterField(
model_name='securityshare',
name='symbol',
field=models.CharField(max_length=10),
),
]
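The `unique=True` alterations above amount to unique constraints at the database level. A standalone SQLite sketch of the effect (hypothetical table and index names, illustrative only, not the SQL Django actually emits for this app):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE securitybond (code VARCHAR(10))")
# SQLite lacks ALTER TABLE ... ADD CONSTRAINT, so uniqueness is enforced
# with a unique index instead.
con.execute("CREATE UNIQUE INDEX securitybond_code_uniq ON securitybond (code)")
con.execute("INSERT INTO securitybond (code) VALUES ('ABC')")
try:
    con.execute("INSERT INTO securitybond (code) VALUES ('ABC')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

After the constraint is in place, inserting a duplicate `code` raises `IntegrityError`.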
# --- BetterPrinting/system.py (Butter-mit-Brot/BetterPrinting, MIT) ---
import datetime
import platform
import os
import getpass
import socket
from .color import color
class system:
@staticmethod
def sys_info(**show):
kw = ['all', 'user', 'name', 'os', 'version', 'processor', 'architecture', 'ip']
sysminf = platform.uname()
        keys = list(show.keys())  # the keyword names directly; no string surgery needed
        if show:
for i in keys:
if i in kw:
if "user" in i:
print("|" + color("USER: ", "green", print_val=False) + f"{getpass.getuser()}")
if "name" in i:
print("|" + color("NAME: ", "green", print_val=False) + f"{sysminf.node}")
if "os" in i:
print("|" + color("OS: ", "green", print_val=False) + f"{sysminf.system}")
if "version" in i:
print("|" + color("VERSION: ", "green", print_val=False) + f"{sysminf.version}")
if "processor" in i:
print("|" + color("PROCESSOR: ", "green", print_val=False) + f"{sysminf.processor}")
if "architecture" in i:
print("|" + color("ARCHITECTURE: ", "green", print_val=False) + f"{sysminf.machine}")
if "ip" in i:
print("|" + color("IP-ADDRESS: ", "green", print_val=False) + f"{socket.gethostbyname(socket.gethostname())}")
if "all" in i:
print("|" + color("USER: ", "green", print_val=False) + f"{getpass.getuser()}")
print("|" + color("NAME: ", "green", print_val=False) + f"{sysminf.node}")
print("|" + color("OS: ", "green", print_val=False) + f"{sysminf.system}")
print("|" + color("VERSION: ", "green", print_val=False) + f"{sysminf.version}")
print("|" + color("PROCESSOR: ", "green", print_val=False) + f"{sysminf.processor}")
print("|" + color("ARCHITECTURE: ", "green", print_val=False) + f"{sysminf.machine}")
print("|" + color("IP-ADDRESS: ", "green", print_val=False) + f"{socket.gethostbyname(socket.gethostname())}")
else:
pass
        else:
print("|" + color("USER: ", "green", print_val=False) + f"{getpass.getuser()}")
print("|" + color("NAME: ", "green", print_val=False) + f"{sysminf.node}")
print("|" + color("OS: ", "green", print_val=False) + f"{sysminf.system}")
print("|" + color("VERSION: ", "green", print_val=False) + f"{sysminf.version}")
print("|" + color("PROCESSOR: ", "green", print_val=False) + f"{sysminf.processor}")
print("|" + color("ARCHITECTURE: ", "green", print_val=False) + f"{sysminf.machine}")
print("|" + color("IP-ADDRESS: ", "green", print_val=False) + f"{socket.gethostbyname(socket.gethostname())}")
@staticmethod
def time(print_val: bool = True):
now_all = datetime.datetime.now()
now = now_all.strftime("%H:%M:%S")
        if print_val:
            print(now)
        else:
            return now
@staticmethod
def date(print_val: bool = True):
now_all = datetime.datetime.now()
        now = now_all.strftime("%d.%m.%Y")  # "%G" (ISO week-based year) was likely a typo for "%Y"
        if print_val:
            print(now)
        else:
            return now
@staticmethod
def tasks():
plat = platform.system()
if "Windows" in plat:
os.system("tasklist")
elif "Linux" in plat:
os.system("ps aux")
else:
print("os not detected")
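`sys_info` above is built entirely from standard-library lookups. A stripped-down sketch of the same calls (standard library only, without the `color` helper; the hostname lookup is guarded because `gethostbyname` can fail offline):

```python
import getpass
import platform
import socket

info = platform.uname()  # fields: system, node, release, version, machine, processor
print("USER:", getpass.getuser())
print("NAME:", info.node)
print("OS:", info.system)
print("VERSION:", info.version)
print("ARCHITECTURE:", info.machine)
try:
    print("IP-ADDRESS:", socket.gethostbyname(socket.gethostname()))
except socket.gaierror:
    print("IP-ADDRESS: unavailable")
```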
# --- regreg/smooth/glm.py (vishalbelsare/regreg, BSD-2-Clause) ---
import warnings
from copy import copy
import numpy as np
from scipy import sparse
from scipy.stats import norm as normal_dbn
from . import smooth_atom, affine_smooth
from ..affine import (astransform,
linear_transform,
affine_transform,
scaler)
from ..identity_quadratic import identity_quadratic
from ..atoms.seminorms import l1norm
from .cox import cox_loglike
from .binary import (logistic_loglike,
probit_loglike,
cloglog_loglike,
huber_svm)
class glm(smooth_atom):
"""
A general linear model, usually a log-likelihood
for response $Y$ whose mean is modelled through
$X\beta$.
Usual examples are Gaussian (least squares regression),
logistic regression and Poisson log-linear regression.
Huber loss is also implemented as an example.
"""
def __init__(self,
X,
Y,
loss,
quadratic=None,
initial=None,
offset=None,
case_weights=None):
"""
Parameters
----------
X : ndarray
The design matrix.
Y : ndarray
The response.
loss : `regreg.smooth.smooth_atom`
The loss function that takes arguments the same
size as `Y`. So, for Gaussian regression
the loss is just the map $\mu \mapsto \|\mu - Y\|^2_2/2$.
quadratic : `regreg.identity_quadratic.identity_quadratic`
Optional quadratic part added to objective.
initial : ndarray
An initial guess at the minimizer.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
case_weights : ndarray
Non-negative case weights
"""
self.saturated_loss = loss
self.data = X, Y
self.affine_atom = affine_smooth(loss, X)
if case_weights is None:
case_weights = np.ones(X.shape[0])
self.case_weights = case_weights
if self.case_weights is not None:
if not np.all(case_weights >= 0):
raise ValueError('case_weights should be non-negative')
self.case_weights = np.asarray(case_weights)
if self.case_weights.shape != loss.shape[:1]:
raise ValueError('case_weights should have same shape as loss.shape[:1]')
smooth_atom.__init__(self,
X.shape[1],
coef=1.,
offset=offset,
quadratic=quadratic,
initial=initial)
def smooth_objective(self,
beta,
mode='func',
check_feasibility=False):
"""
Parameters
----------
beta : ndarray
The current parameter values.
mode : str
One of ['func', 'grad', 'both'].
check_feasibility : bool
If True, return `np.inf` when
point is not feasible, i.e. when `beta` is not
in the domain.
Returns
-------
If `mode` is 'func' returns just the objective value
at `beta`, else if `mode` is 'grad' returns the gradient
else returns both.
"""
beta = self.apply_offset(beta)
linear_pred = self.affine_atom.affine_transform.dot(beta)
value = self.saturated_loss.smooth_objective(linear_pred,
mode=mode,
check_feasibility=check_feasibility,
case_weights=self.case_weights)
if mode == 'func':
return self.scale(value)
elif mode == 'grad':
return self.scale(self.affine_atom.affine_transform.adjoint_map(value))
else:
return self.scale(value[0]), self.scale(self.affine_atom.affine_transform.adjoint_map(value[1]))
def get_data(self):
return self._X, self.saturated_loss.data
def set_data(self, data):
X, Y = data
self._transform = astransform(X)
self._X = X
self._is_transform = id(self._X) == id(self._transform) # i.e. astransform was a nullop
self.saturated_loss.data = Y
data = property(get_data, set_data, doc="Data for the general linear model.")
def linear_predictor(self, beta):
"""
Compute $X\beta$.
Parameters
----------
beta : ndarray
Parameters.
Returns
-------
linpred : ndarray
"""
# can have an `R`-type offset by using affine_map here
return self._transform.linear_map(beta)
def objective(self, beta):
"""
Compute the loss $\ell(X\beta)$.
Parameters
----------
beta : ndarray
Parameters.
Returns
-------
objective : float
Value of the loss at $\beta$.
"""
return self.smooth_objective(beta, 'func')
def gradient(self, beta):
"""
Compute the gradient of the loss $ \nabla \ell(X\beta)$.
Parameters
----------
beta : ndarray
Parameters.
Returns
-------
grad : ndarray
Gradient of the loss at $\beta$.
"""
return self.smooth_objective(beta, 'grad')
def hessian(self, beta):
"""
Compute the Hessian of the loss $ \nabla^2 \ell(X\beta)$.
Parameters
----------
beta : ndarray
Parameters.
Returns
-------
hess : ndarray
Hessian of the loss at $\beta$, if defined.
"""
linpred = self.linear_predictor(beta)
X = self.data[0]
if self._is_transform:
raise ValueError('refusing to form Hessian for arbitrary affine_transform, use an ndarray or scipy.sparse')
if not hasattr(self.saturated_loss, 'hessian'):
if not hasattr(self.saturated_loss, 'hessian_mult'):
raise ValueError('loss has no hessian or hessian_mult method')
right_mult = np.zeros(X.shape)
for j in range(X.shape[1]):
right_mult[:,j] = self.saturated_loss.hessian_mult(linpred,
X[:,j],
case_weights=self.case_weights)
else:
W = self.saturated_loss.hessian(linpred,
case_weights=self.case_weights)
right_mult = W[:,None] * X
if not sparse.issparse(X): # assuming it is an ndarray
return X.T.dot(right_mult)
else:
return X.T * right_mult
def latexify(self, var=None, idx=''):
return self.affine_atom.latexify(var=var, idx=idx)
def __copy__(self):
klass = self.__class__
X, Y = self.data
return klass(copy(X),
copy(Y),
copy(self.saturated_loss),
quadratic=copy(self.quadratic),
initial=copy(self.coefs),
offset=copy(self.offset),
case_weights=self.case_weights.copy())
def subsample(self, idx):
"""
Create a loss using a subsample of the data.
Makes a copy of the loss and
multiplies case_weights by the indicator for
`idx`.
Parameters
----------
idx : index
Indices of np.arange(n) to keep.
Returns
-------
subsample_loss : `glm`
Loss after discarding all
            cases not in `idx`.
"""
subsample_loss = copy(self)
n = subsample_loss.saturated_loss.shape[0]
        idx_bool = np.zeros(n, dtype=bool)  # np.bool was removed in NumPy 1.24; use the builtin
idx_bool[idx] = 1
subsample_loss.case_weights *= idx_bool
return subsample_loss
@classmethod
def gaussian(klass,
X,
response,
case_weights=None,
coef=1.,
saturated_offset=None,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a Gaussian regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
response : ndarray
Response vector.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in saturated space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = gaussian_loglike(response.shape,
response,
coef=coef,
offset=saturated_offset)
return klass(X,
response,
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def logistic(klass,
X,
successes,
trials=None,
case_weights=None,
coef=1.,
offset=None,
saturated_offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a logistic regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
successes : ndarray
Responses (should be non-negative integers).
trials : ndarray (optional)
Number of trials for each success. If `None`,
defaults to `np.ones_like(successes)`.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = logistic_loglike(successes.shape,
successes,
coef=coef,
offset=saturated_offset,
trials=trials)
return klass(X,
(successes, loss.trials),
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def probit(klass,
X,
successes,
trials=None,
case_weights=None,
coef=1.,
offset=None,
saturated_offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a probit regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
successes : ndarray
Responses (should be non-negative integers).
trials : ndarray (optional)
Number of trials for each success. If `None`,
defaults to `np.ones_like(successes)`.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = probit_loglike(successes.shape,
successes,
coef=coef,
offset=saturated_offset,
trials=trials)
return klass(X,
(successes, loss.trials),
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def cloglog(klass,
X,
successes,
trials=None,
case_weights=None,
coef=1.,
offset=None,
saturated_offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a cloglog regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
successes : ndarray
Responses (should be non-negative integers).
trials : ndarray (optional)
Number of trials for each success. If `None`,
defaults to `np.ones_like(successes)`.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = cloglog_loglike(successes.shape,
successes,
coef=coef,
offset=saturated_offset,
trials=trials)
return klass(X,
(successes, loss.trials),
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def poisson(klass,
X,
counts,
case_weights=None,
coef=1.,
saturated_offset=None,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a Poisson regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
counts : ndarray
Response vector. Should be non-negative integers.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = poisson_loglike(counts.shape,
counts,
offset=saturated_offset,
coef=coef)
return klass(X, counts, loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def huber(klass,
X,
response,
smoothing_parameter,
case_weights=None,
coef=1.,
saturated_offset=None,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a regression model using
Huber loss.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
response : ndarray
Response vector.
smoothing_parameter : float
Smoothing parameter for Huber loss.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = huber_loss(response.shape,
response,
smoothing_parameter,
offset=saturated_offset,
coef=coef)
return klass(X,
response,
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def huber_svm(klass,
X,
successes,
smoothing_parameter,
case_weights=None,
coef=1.,
offset=None,
saturated_offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a binary regression model using
Huber SVM loss.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
successes : ndarray
Response vector.
smoothing_parameter : float
Smoothing parameter for Huber loss.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = huber_svm(successes.shape,
successes,
smoothing_parameter,
offset=saturated_offset,
coef=coef)
return klass(X,
successes,
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def cox(klass,
X,
event_times,
censoring,
case_weights=None,
coef=1.,
saturated_offset=None,
offset=None,
quadratic=None,
initial=None):
"""
        Create a loss for a Cox proportional hazards regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
event_times : ndarray
Observed times for Cox proportional hazard model.
censoring : ndarray
Censoring indicator for Cox proportional hazard model
- 1 indicates observation is a failure, 0 a censored observation.
case_weights : ndarray
Non-negative case weights
coef : float
Scaling to be put in front of loss.
saturated_offset : ndarray (optional)
Offset to be applied in saturated parameter space before
evaluating loss.
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
glm_obj : `regreg.glm.glm`
General linear model loss.
"""
loss = cox_loglike(event_times.shape,
event_times,
censoring,
offset=saturated_offset,
coef=coef)
return klass(X,
np.array([event_times, censoring]).T,
loss,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
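For `glm.gaussian`, `smooth_objective` reduces to the value 0.5 * ||X @ beta - y||**2 with gradient X.T @ (X @ beta - y). A self-contained NumPy check of that gradient against central finite differences (illustrative only; no `regreg` import):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
beta = rng.standard_normal(3)

def value(b):
    resid = X @ b - y
    return 0.5 * np.sum(resid ** 2)

# analytic gradient, as the affine adjoint_map composition computes it
grad = X.T @ (X @ beta - y)

# central finite differences along each coordinate
eps = 1e-6
fd = np.array([(value(beta + eps * e) - value(beta - eps * e)) / (2 * eps)
               for e in np.eye(3)])
assert np.allclose(grad, fd, atol=1e-4)
```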
class gaussian_loglike(smooth_atom):
"""
The Gaussian loss for observations $y$:
.. math::
\mu \mapsto \frac{1}{2} \|y-\mu\|^2_2
"""
objective_template = r"""\ell^{\text{Gauss}}\left(%(var)s\right)"""
def __init__(self,
shape,
response,
coef=1.,
offset=None,
quadratic=None,
initial=None,
case_weights=None):
smooth_atom.__init__(self,
shape,
offset=offset,
quadratic=quadratic,
initial=initial,
coef=coef)
if sparse.issparse(response):
self.response = response.toarray().flatten()
else:
self.response = np.asarray(response)
if case_weights is not None:
if not np.all(case_weights >= 0):
raise ValueError('case_weights should be non-negative')
self.case_weights = np.asarray(case_weights)
if self.case_weights.shape != self.response.shape:
raise ValueError('case_weights should have same shape as response')
else:
self.case_weights = None
def smooth_objective(self,
natural_param,
mode='both',
check_feasibility=False,
case_weights=None):
"""
Evaluate the smooth objective, computing its value, gradient or both.
Parameters
----------
natural_param : ndarray
The current parameter values.
mode : str
One of ['func', 'grad', 'both'].
check_feasibility : bool
If True, return `np.inf` when
point is not feasible, i.e. when `natural_param` is not
in the domain.
case_weights : ndarray
Non-negative case weights
Returns
-------
If `mode` is 'func' returns just the objective value
at `natural_param`, else if `mode` is 'grad' returns the gradient
else returns both.
"""
if case_weights is None:
case_weights = np.ones_like(natural_param)
        cw = case_weights
        if self.case_weights is not None:
            cw = cw * self.case_weights  # new array: in-place `*=` would mutate the caller's weights
natural_param = self.apply_offset(natural_param)
resid = natural_param - self.response
if mode == 'both':
f, g = self.scale(np.sum(cw*resid**2)) / 2., self.scale(cw*resid)
return f, g
elif mode == 'grad':
return self.scale(cw*resid)
elif mode == 'func':
return self.scale(np.sum(cw*resid**2)) / 2.
else:
raise ValueError("mode incorrectly specified")
# Begin loss API
def hessian(self,
natural_param,
case_weights=None):
"""
Hessian of the loss.
Parameters
----------
natural_param : ndarray
Parameters where Hessian will be evaluated.
Returns
-------
hess : ndarray
A 1D-array representing the diagonal of the Hessian
evaluated at `natural_param`.
case_weights : ndarray
Non-negative case weights
"""
if case_weights is None:
case_weights = np.ones_like(natural_param)
        cw = case_weights
        if self.case_weights is not None:
            cw = cw * self.case_weights  # new array: in-place `*=` would mutate the caller's weights
return self.scale(np.ones_like(natural_param) * cw)
def get_data(self):
return self.response
def set_data(self, data):
self.response = data
data = property(get_data, set_data)
def __copy__(self):
return gaussian_loglike(self.shape,
copy(self.response),
coef=self.coef,
offset=copy(self.offset),
quadratic=copy(self.quadratic),
initial=copy(self.coefs),
case_weights=copy(self.case_weights))
def subsample(self, case_idx):
"""
Create a saturated loss using a subsample of the data.
Makes a copy of the loss and
multiplies case_weights by the indicator for
`case_idx`.
Parameters
----------
case_idx : index
Indices of np.arange(n) to keep.
Returns
-------
subsample_loss : `smooth_atom`
Loss after discarding all
cases not in `case_idx`.
"""
loss_cp = copy(self)
if loss_cp.case_weights is None:
case_weights = loss_cp.case_weights = np.ones(self.shape[0])
else:
case_weights = loss_cp.case_weights
idx_bool = np.zeros_like(case_weights, bool)
idx_bool[case_idx] = 1
case_weights *= idx_bool
return loss_cp
# End loss API
def mean_function(self, eta):
return eta
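The Gaussian branch above reduces to a weighted least-squares value with the residual as gradient. A standalone numpy sketch of the same computation (no regreg dependency; coef/scale assumed 1, no offset, unit case weights):

```python
import numpy as np

# Mirrors gaussian_loglike.smooth_objective with coef = 1 and no offset:
# f = sum(cw * (eta - y)**2) / 2, g = cw * (eta - y).
y = np.array([1.0, 2.0, 3.0])        # response
eta = np.array([1.5, 2.0, 2.5])      # natural_param
cw = np.ones_like(eta)               # case weights
resid = eta - y
f = np.sum(cw * resid ** 2) / 2.0    # 'func' value
g = cw * resid                       # 'grad' value
assert abs(f - 0.25) < 1e-12
assert np.allclose(g, [0.5, 0.0, -0.5])
```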
class poisson_loglike(smooth_atom):
"""
A class for combining the Poisson log-likelihood with a general seminorm
"""
objective_template = r"""\ell^{\text{Pois}}\left(%(var)s\right)"""
def __init__(self,
shape,
counts,
coef=1.,
offset=None,
quadratic=None,
initial=None,
case_weights=None):
smooth_atom.__init__(self,
shape,
offset=offset,
quadratic=quadratic,
initial=initial,
coef=coef)
if sparse.issparse(counts):
# Convert sparse counts vector to an array
self.counts = counts.toarray().flatten()
else:
self.counts = counts
if not np.allclose(np.round(self.counts),self.counts):
raise ValueError("Counts vector is not integer valued")
if np.min(self.counts) < 0:
raise ValueError("Counts vector is not non-negative")
saturated = counts
loss_terms = - coef * ((counts - 1) * np.log(counts))
loss_terms[counts == 0] = 0
loss_constant = - coef * loss_terms.sum()
devq = identity_quadratic(0,0,0,-loss_constant)
self.quadratic += devq
if case_weights is not None:
if not np.all(case_weights >= 0):
raise ValueError('case_weights should be non-negative')
self.case_weights = np.asarray(case_weights)
else:
self.case_weights = None
def smooth_objective(self,
natural_param,
mode='both',
check_feasibility=False,
case_weights=None):
"""
Evaluate the smooth objective, computing its value, gradient or both.
Parameters
----------
natural_param : ndarray
The current parameter values.
mode : str
One of ['func', 'grad', 'both'].
check_feasibility : bool
If True, return `np.inf` when
point is not feasible, i.e. when `natural_param` is not
in the domain.
case_weights : ndarray
Non-negative case weights
Returns
-------
If `mode` is 'func' returns just the objective value
at `natural_param`, else if `mode` is 'grad' returns the gradient
else returns both.
"""
if case_weights is None:
case_weights = np.ones_like(natural_param)
cw = case_weights
if self.case_weights is not None:
cw *= self.case_weights
x = natural_param # shorthand
x = self.apply_offset(x)
exp_x = np.exp(x)
if mode == 'both':
f, g = - self.scale(-np.sum(cw * exp_x) + np.dot(cw * self.counts,x)), - self.scale(cw * (self.counts - exp_x))
return f, g
elif mode == 'grad':
return - self.scale(cw * (self.counts - exp_x))
elif mode == 'func':
return - self.scale(-np.sum(cw * exp_x) + np.dot(cw * self.counts, x))
else:
raise ValueError("mode incorrectly specified")
# Begin loss API
def hessian(self, natural_param, case_weights=None):
"""
Hessian of the loss.
Parameters
----------
natural_param : ndarray
Parameters where Hessian will be evaluated.
case_weights : ndarray
Non-negative case weights
Returns
-------
hess : ndarray
A 1D-array representing the diagonal of the Hessian
evaluated at `natural_param`.
"""
x = natural_param # shorthand
if case_weights is None:
case_weights = np.ones_like(natural_param)
cw = case_weights
if self.case_weights is not None:
cw *= self.case_weights
return self.scale(cw * np.exp(x))
def get_data(self):
return self.counts
def set_data(self, data):
self.counts = data
data = property(get_data, set_data)
def __copy__(self):
counts = self.data
return poisson_loglike(self.shape,
copy(counts),
coef=self.coef,
offset=copy(self.offset),
quadratic=copy(self.quadratic),
initial=copy(self.coefs),
case_weights=copy(self.case_weights))
def subsample(self, case_idx):
"""
Create a saturated loss using a subsample of the data.
Makes a copy of the loss and
multiplies case_weights by the indicator for
`case_idx`.
Parameters
----------
case_idx : index
Indices of np.arange(n) to keep.
Returns
-------
subsample_loss : `smooth_atom`
Loss after discarding all
cases not in `case_idx`.
"""
loss_cp = copy(self)
if loss_cp.case_weights is None:
case_weights = loss_cp.case_weights = np.ones(self.shape[0])
else:
case_weights = loss_cp.case_weights
idx_bool = np.zeros_like(case_weights, bool)
idx_bool[case_idx] = 1
case_weights *= idx_bool
return loss_cp
# End loss API
def mean_function(self, eta):
return np.exp(eta)
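A quick standalone check of the Poisson expressions used in smooth_objective above (coef/scale 1, no offset, unit case weights): at the saturated point eta = log(counts), the gradient -(counts - exp(eta)) vanishes.

```python
import numpy as np

# Mirrors the 'func'/'grad' expressions of poisson_loglike.smooth_objective
# with coef = 1, no offset and unit case weights.
counts = np.array([1.0, 2.0, 4.0])
eta = np.log(counts)                  # natural parameter at the saturated fit
exp_eta = np.exp(eta)
f = -(-np.sum(exp_eta) + np.dot(counts, eta))
g = -(counts - exp_eta)
assert np.allclose(g, 0.0)            # gradient vanishes at eta = log(counts)
```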
class huber_loss(smooth_atom):
objective_template = r"""\ell^{\text{Huber}}\left(%(var)s\right)"""
def __init__(self,
shape,
response,
smoothing_parameter,
coef=1.,
offset=None,
quadratic=None,
initial=None,
case_weights=None):
smooth_atom.__init__(self,
shape,
offset=offset,
quadratic=quadratic,
initial=initial,
coef=coef)
self.smoothing_parameter = smoothing_parameter
atom = l1norm(shape, lagrange=1.)
Q = identity_quadratic(smoothing_parameter, 0, 0, 0)
self.smoothed_atom = atom.smoothed(Q)
if sparse.issparse(response):
self.response = response.toarray().flatten()
else:
self.response = np.asarray(response)
if case_weights is not None:
if not np.all(case_weights >= 0):
raise ValueError('case_weights should be non-negative')
self.case_weights = np.asarray(case_weights)
if self.case_weights.shape != self.response.shape:
raise ValueError('case_weights should have same shape as response')
else:
self.case_weights = None
def smooth_objective(self,
param,
mode='both',
check_feasibility=False,
case_weights=None):
"""
Evaluate the smooth objective, computing its value, gradient or both.
Parameters
----------
param : ndarray
The current parameter values.
mode : str
One of ['func', 'grad', 'both'].
check_feasibility : bool
If True, return `np.inf` when
point is not feasible, i.e. when `param` is not
in the domain.
case_weights : ndarray
Non-negative case weights
Returns
-------
If `mode` is 'func' returns just the objective value
at `param`, else if `mode` is 'grad' returns the gradient
else returns both.
"""
if case_weights is None:
case_weights = np.ones_like(param)
cw = case_weights
if self.case_weights is not None:
cw *= self.case_weights
param = self.apply_offset(param)
resid = param - self.response
f, g = _huber_loss(resid, smoothing_parameter=self.smoothing_parameter)
if mode == 'func':
return self.scale((f * cw).sum())
elif mode == 'grad':
return self.scale(g * cw)
elif mode == 'both':
return self.scale((f * cw).sum()), self.scale(g * cw)
else:
raise ValueError("mode incorrectly specified")
# Begin loss API
def hessian(self, param, case_weights=None):
"""
Hessian of the loss.
Parameters
----------
param : ndarray
Parameters where Hessian will be evaluated.
case_weights : ndarray
Non-negative case weights
Returns
-------
hess : ndarray
A 1D-array representing the diagonal of the Hessian
evaluated at `param`.
"""
# it is piecewise C^2 though... maybe use this?
raise NotImplementedError('Huber loss is not twice differentiable')
def get_data(self):
return self.response
def set_data(self, data):
self.response = data
data = property(get_data, set_data)
def subsample(self, case_idx):
"""
Create a saturated loss using a subsample of the data.
Makes a copy of the loss and
multiplies case_weights by the indicator for
`case_idx`.
Parameters
----------
case_idx : index
Indices of np.arange(n) to keep.
Returns
-------
subsample_loss : `smooth_atom`
Loss after discarding all
cases not in `case_idx`.
"""
loss_cp = copy(self)
if loss_cp.case_weights is None:
case_weights = loss_cp.case_weights = np.ones(self.shape[0])
else:
case_weights = loss_cp.case_weights
idx_bool = np.zeros_like(case_weights, bool)
idx_bool[case_idx] = 1
case_weights *= idx_bool
return loss_cp
def __copy__(self):
response = self.data
return huber_loss(self.shape,
copy(response),
self.smoothing_parameter,
coef=self.coef,
offset=copy(self.offset),
quadratic=copy(self.quadratic),
initial=copy(self.coefs))
# End loss API
def _huber_loss(arg, smoothing_parameter):
# returns vector whose sum is total loss as well as gradient vector
eps = smoothing_parameter
proj_arg = np.sign(arg) * np.minimum(np.abs(arg) / eps, 1) # the maximizer is the gradient
# by convex conjugacy
return arg * proj_arg - eps * proj_arg**2 / 2, proj_arg
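The `_huber_loss` helper above comes from convex conjugacy of the smoothed l1 norm: on |arg| <= eps the value is the quadratic arg**2 / (2 * eps), and beyond it the linear |arg| - eps / 2, with the projection as gradient. A direct check:

```python
import numpy as np

def _huber_loss(arg, smoothing_parameter):
    # same formula as above: value vector and gradient of the smoothed l1 norm
    eps = smoothing_parameter
    proj_arg = np.sign(arg) * np.minimum(np.abs(arg) / eps, 1)
    return arg * proj_arg - eps * proj_arg ** 2 / 2, proj_arg

eps = 0.5
small = np.array([0.1, -0.2])         # quadratic region: |arg| <= eps
large = np.array([2.0, -3.0])         # linear region: |arg| > eps
f_small, _ = _huber_loss(small, eps)
f_large, g_large = _huber_loss(large, eps)
assert np.allclose(f_small, small ** 2 / (2 * eps))
assert np.allclose(f_large, np.abs(large) - eps / 2)
assert np.allclose(g_large, np.sign(large))
```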
class stacked_loglike(smooth_atom):
"""
A class for stacking `K=len(losses)` saturated losses with (roughly)
common shapes, summed over the losses.
Roughly speaking, a model of `K` independent measurements per individual.
"""
objective_template = r"""\ell^{\text{stack}}\left(%(var)s\right)"""
def __init__(self,
losses,
coef=1.,
offset=None,
quadratic=None,
initial=None,
case_weights=None):
shape = (np.sum([l.shape[0] for l in losses]),)
smooth_atom.__init__(self,
shape,
offset=offset,
quadratic=quadratic,
initial=initial,
coef=coef)
responses = [l.data for l in losses]
shapes = [r.shape for r in responses]
dims = np.array([len(s) for s in shapes])
if np.all(dims == 1):
self.data = np.hstack(responses)
elif np.all(dims == 2):
self.data = np.vstack(responses)
else:
raise ValueError('expecting either 1 or 2 dimensional data for saturated losses')
self._slices = []
idx = 0
for l in losses:
self._slices.append(slice(idx, idx + l.shape[0], 1))
idx += l.shape[0]
self._losses = losses
self._gradient = np.zeros(self.shape)
if case_weights is not None:
if not np.all(case_weights >= 0):
raise ValueError('case_weights should be non-negative')
self.case_weights = np.asarray(case_weights)
if self.case_weights.shape != self.shape[:1]:
raise ValueError('case_weights should have same shape as response')
else:
self.case_weights = None
def smooth_objective(self,
natural_param,
mode='both',
check_feasibility=False,
case_weights=None):
"""
Evaluate the smooth objective, computing its value, gradient or both.
Parameters
----------
natural_param : ndarray
The current parameter values.
mode : str
One of ['func', 'grad', 'both'].
check_feasibility : bool
If True, return `np.inf` when
point is not feasible, i.e. when `natural_param` is not
in the domain.
case_weights : ndarray
Non-negative case weights
Returns
-------
If `mode` is 'func' returns just the objective value
at `natural_param`, else if `mode` is 'grad' returns the gradient
else returns both.
"""
if case_weights is None:
case_weights = np.ones(natural_param.shape[:1])
cw = case_weights
if self.case_weights is not None:
cw *= self.case_weights
linpred = natural_param # shorthand
linpred = self.apply_offset(linpred)
if mode == 'grad':
for d, slice in enumerate(self._slices):
self._gradient[slice] = self._losses[d].smooth_objective(linpred[slice], 'grad')
return self.scale(self._gradient)
elif mode == 'func':
value = 0
for d, slice in enumerate(self._slices):
value += self._losses[d].smooth_objective(linpred[slice], 'func')
return self.scale(value)
elif mode == 'both':
value = 0
for d, slice in enumerate(self._slices):
f, g = self._losses[d].smooth_objective(linpred[slice], 'both')
self._gradient[slice] = g
value += f
return self.scale(value), self.scale(self._gradient)
else:
raise ValueError("mode incorrectly specified")
def get_data(self):
return self._data
def set_data(self, data):
self._data = data
data = property(get_data, set_data)
def __copy__(self):
return stacked_loglike(copy(self._losses),
coef=self.coef,
offset=copy(self.offset),
quadratic=copy(self.quadratic),
initial=copy(self.coefs),
case_weights=copy(self.case_weights))
def subsample(self, case_idx):
"""
Create a saturated loss using a subsample of the data.
Makes a copy of the loss and
multiplies case_weights by the indicator for
`case_idx`.
Parameters
----------
case_idx : index
Indices of np.arange(n) to keep.
Returns
-------
subsample_loss : `smooth_atom`
Loss after discarding all
cases not in `case_idx`.
"""
loss_cp = copy(self)
if loss_cp.case_weights is None:
case_weights = loss_cp.case_weights = np.ones(self.shape[0])
else:
case_weights = loss_cp.case_weights
idx_bool = np.zeros_like(case_weights, bool)
idx_bool[case_idx] = 1
case_weights *= idx_bool
return loss_cp
@classmethod
def gaussian(klass,
responses,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a Gaussian regression model.
Parameters
----------
responses : ndarray
Response vectors.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [gaussian_loglike(response.shape,
response,
coef=coef)
for response in responses]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def logistic(klass,
successes,
trials=None,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a logistic regression model.
Parameters
----------
successes : ndarray
Responses (should be non-negative integers).
trials : ndarray (optional)
Number of trials for each success. If `None`,
defaults to `np.ones_like(successes)`.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [logistic_loglike(successes[i],
trials[i],
coef=coef)
for i in range(len(successes))]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def probit(klass,
successes,
trials=None,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a probit regression model.
Parameters
----------
successes : ndarray
Responses (should be non-negative integers).
trials : ndarray (optional)
Number of trials for each success. If `None`,
defaults to `np.ones_like(successes)`.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [probit_loglike(successes[i],
trials[i],
coef=coef)
for i in range(len(successes))]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def cloglog(klass,
successes,
trials=None,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a cloglog regression model.
Parameters
----------
successes : ndarray
Responses (should be non-negative integers).
trials : ndarray (optional)
Number of trials for each success. If `None`,
defaults to `np.ones_like(successes)`.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [cloglog_loglike(successes[i],
trials[i],
coef=coef)
for i in range(len(successes))]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def poisson(klass,
counts,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a Poisson regression model.
Parameters
----------
counts : ndarray
Response vector. Should be non-negative integers.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [poisson_loglike(count.shape,
count,
coef=coef)
for count in counts]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def huber(klass,
X,
responses,
smoothing_parameter,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a regression model using
Huber loss.
Parameters
----------
responses : ndarray
Response vectors.
smoothing_parameter : float
Smoothing parameter for Huber loss.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [huber_loss(response.shape,
response,
smoothing_parameter)
for response in responses]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def huber_svm(klass,
X,
successes,
smoothing_parameter,
case_weights=None,
coef=1.,
offset=None,
quadratic=None,
initial=None):
"""
Create a loss for a classification model using
huberized SVM loss.
Parameters
----------
successes : ndarray
Response vectors.
smoothing_parameter : float
Smoothing parameter for Huber loss.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [huber_svm(response.shape,
response,
smoothing_parameter)
for response in successes]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
@classmethod
def cox(klass,
X,
event_times,
censoring,
coef=1.,
offset=None,
quadratic=None,
initial=None,
case_weights=None):
"""
Create a loss for a Cox regression model.
Parameters
----------
X : [ndarray, `regreg.affine.affine_transform`]
Design matrix
event_times : ndarray
Observed times for Cox proportional hazard model.
censoring : ndarray
Censoring indicator for Cox proportional hazard model
- 1 indicates observation is a failure, 0 a censored observation.
case_weights : ndarray
Non-negative case weights
offset : ndarray (optional)
Offset to be applied in parameter space before
evaluating loss.
quadratic : `regreg.identity_quadratic.identity_quadratic` (optional)
Optional quadratic to be added to objective.
initial : ndarray
Initial guess at coefficients.
Returns
-------
mglm_obj : `regreg.mglm.mglm`
General linear model loss.
"""
losses = [cox_loglike(event_times[i].shape,
event_times[i],
censoring[i],
coef=coef)
for i in range(len(event_times))]
return klass(losses,
offset=offset,
quadratic=quadratic,
initial=initial,
case_weights=case_weights)
# Deprecated
def logistic_loss(X, Y, trials=None, coef=1.):
'''
Construct a logistic loss function for successes Y and
affine transform X.
Parameters
----------
X : [affine_transform, ndarray]
Design matrix
Y : ndarray
Successes.
'''
warnings.warn('"logistic_loss" is deprecated; use "regreg.smooth.glm.logistic" instead')
return glm.logistic(X,
Y,
trials=trials,
coef=coef)
| 28.391326 | 123 | 0.506523 | 5,551 | 56,953 | 5.075482 | 0.062511 | 0.087066 | 0.018102 | 0.023958 | 0.810393 | 0.795237 | 0.775147 | 0.754596 | 0.734152 | 0.725137 | 0 | 0.002796 | 0.415887 | 56,953 | 2,005 | 124 | 28.405486 | 0.84411 | 0.342756 | 0 | 0.739348 | 0 | 0 | 0.035896 | 0.005757 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072682 | false | 0 | 0.013784 | 0.012531 | 0.176692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2c523dea64069f095cd192dade2b39194e6e78fa | 2,687 | py | Python | tests/unit/api/test_entity.py | CiscoSecurity/tr-05-api-module | ce0f8d583b2fce3aadcc5a5c174a5b2b23e14d72 | [
"MIT"
] | 10 | 2019-07-16T15:11:05.000Z | 2022-02-07T19:58:55.000Z | tests/unit/api/test_entity.py | CiscoSecurity/tr-05-api-module | ce0f8d583b2fce3aadcc5a5c174a5b2b23e14d72 | [
"MIT"
] | 26 | 2019-07-18T09:31:12.000Z | 2021-11-19T09:52:50.000Z | tests/unit/api/test_entity.py | CiscoSecurity/tr-05-api-module | ce0f8d583b2fce3aadcc5a5c174a5b2b23e14d72 | [
"MIT"
] | 13 | 2019-07-15T12:31:35.000Z | 2021-02-23T16:57:38.000Z | from functools import partial
from threatresponse.api.entity import EntityAPI
from .assertions import *
def entity_api(url):
return partial(EntityAPI, url=url)
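The `entity_api` fixture above pre-binds the `url` keyword with `functools.partial`, so each test can build a fresh `EntityAPI` with only the remaining arguments. A minimal sketch of the same pattern (`FakeAPI` is a stand-in, not the real `EntityAPI` signature):

```python
from functools import partial

class FakeAPI:
    # stand-in with a (request, url) constructor, mimicking the fixture shape
    def __init__(self, request, url):
        self.request = request
        self.url = url

make_api = partial(FakeAPI, url='/x')    # url fixed once
api = make_api(request='req')            # remaining args supplied per test
assert api.url == '/x'
assert api.request == 'req'
```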
def test_get_succeeds():
request = invoke(entity_api('/x'), lambda api: api.get())
request.perform.assert_called_once_with(
'GET',
'/x'
)
request = invoke(entity_api('/x'),
lambda api: api.get(response_type='raw'),
'raw')
request.perform.assert_called_once_with(
'GET',
'/x'
)
def test_get_with_id_succeeds():
request = invoke(entity_api('/x'), lambda api: api.get('42'))
request.perform.assert_called_once_with(
'GET',
'/x/42'
)
request = invoke(entity_api('/x'),
lambda api: api.get('42', response_type='raw'),
'raw')
request.perform.assert_called_once_with(
'GET',
'/x/42'
)
def test_get_with_id_and_fields_succeeds():
params = {'fields': ['schema_version', 'revision']}
request = invoke(entity_api('/x'),
lambda api: api.get('42', params=params))
request.perform.assert_called_once_with(
'GET',
'/x/42',
params=params
)
request = invoke(entity_api('/x'),
lambda api: api.get('42',
params=params,
response_type='raw'),
'raw')
request.perform.assert_called_once_with(
'GET',
'/x/42',
params=params
)
def test_post_succeeds():
request = invoke(entity_api('/x'), lambda api: api.post(payload))
request.perform.assert_called_once_with(
'POST',
'/x',
json=payload
)
request = invoke(entity_api('/x'),
lambda api: api.post(payload, response_type='raw'),
'raw')
request.perform.assert_called_once_with(
'POST',
'/x',
json=payload
)
def test_delete_succeeds():
request = invoke(entity_api('/x'), lambda api: api.delete('42'), 'raw')
request.perform.assert_called_once_with(
'DELETE',
'/x/42'
)
def test_put_succeeds():
request = invoke(entity_api('/x'), lambda api: api.put('12', payload))
request.perform.assert_called_once_with(
'PUT',
'/x/12',
json=payload
)
request = invoke(entity_api('/x'),
lambda api: api.put('12', payload, response_type='raw'),
'raw')
request.perform.assert_called_once_with(
'PUT',
'/x/12',
json=payload
)
| 24.651376 | 77 | 0.540752 | 299 | 2,687 | 4.628763 | 0.137124 | 0.078035 | 0.151012 | 0.174855 | 0.825145 | 0.802023 | 0.802023 | 0.775289 | 0.747832 | 0.648844 | 0 | 0.015351 | 0.321176 | 2,687 | 108 | 78 | 24.87963 | 0.743421 | 0 | 0 | 0.569767 | 0 | 0 | 0.066245 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 1 | 0.081395 | false | 0 | 0.034884 | 0.011628 | 0.127907 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2c85d1e41d4feab535b4c846b6b7d1a9b4d72c16 | 69,455 | py | Python | cpims/cpovc_reports/queries-bak.py | sizler20/cpims_update | 23b86e40ca779b751383e268ad4fbf6a321ab211 | [
"MIT"
] | null | null | null | cpims/cpovc_reports/queries-bak.py | sizler20/cpims_update | 23b86e40ca779b751383e268ad4fbf6a321ab211 | [
"MIT"
] | null | null | null | cpims/cpovc_reports/queries-bak.py | sizler20/cpims_update | 23b86e40ca779b751383e268ad4fbf6a321ab211 | [
"MIT"
] | 1 | 2022-02-27T13:36:47.000Z | 2022-02-27T13:36:47.000Z | QUERIES = {}
# Reports
REPORTS = {}
# Reports listings
REPORTS[1] = 'registration'
REPORTS[2] = 'registration'
REPORTS[3] = 'registration'
REPORTS[4] = 'registration'
REPORTS[5] = 'not_served'
REPORTS[6] = 'pepfar_detailed'
REPORTS[7] = 'registration'
REPORTS[8] = 'registration'
REPORTS[9] = 'registration'
REPORTS[10] = 'registration'
REPORTS[11] = 'form1b_summary'
REPORTS['GOK_1'] = 'org_units'
REPORTS['GOK_2'] = 'institution_register'
REPORTS['GOK_3'] = 'case_load'
REPORTS['GOK_4'] = 'excel_tool_a'
REPORTS['GOK_5'] = 'missing_children'
REPORTS['GOK_6'] = 'vac'
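The REPORTS map above keys a report id to the name of a QUERIES template, so a dispatcher can resolve the SQL in two lookups. A sketch of that resolution (toy dict values standing in for the full templates):

```python
# Toy versions of the two module-level dicts defined above.
REPORTS = {1: 'registration', 'GOK_3': 'case_load'}
QUERIES = {'case_load': "select ... '{start_date}' ... '{end_date}'"}

report_id = 'GOK_3'
query_name = REPORTS[report_id]          # first hop: report id -> template name
assert query_name == 'case_load'
assert query_name in QUERIES             # second hop: name -> SQL template
```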
QUERIES['org_units'] = '''
select
ROW_NUMBER () OVER (ORDER BY rp.date_linked) as SNO,
ou.org_unit_name as "case category", ou.org_unit_type_id as "type name",
concat(pp.first_name,' ',pp.surname,' ',pp.other_names) as Names,
CASE pp.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Sex,
CASE
WHEN date_part('year', age(timestamp '{end_date}', pp.date_of_birth)) < 35 THEN 'a.[< 35 yrs]'
WHEN date_part('year', age(timestamp '{end_date}', pp.date_of_birth)) BETWEEN 35 AND 50 THEN 'b.[35 - 50 yrs]'
ELSE 'c.[50+ yrs]' END AS agerange,
rp.primary_unit,
1 as ovccount
from reg_persons_org_units as rp
inner join reg_org_unit as ou on ou.id=rp.org_unit_id
inner join reg_person as pp on pp.id=rp.person_id
where rp.is_void = False
'''
QUERIES['institution_register'] = '''
SELECT
ROW_NUMBER () OVER (ORDER BY ovc_placement.timestamp_created) as SNO,
concat(pp.first_name,' ',pp.surname,' ',pp.other_names) as Names,
date_part('year', age(admission_date, pp.date_of_birth)) AS Admission_Age,
CASE pp.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Sex,
admission_date as "admission date", df.date_of_discharge as "discharge date",
CASE
WHEN date_part('year', age(admission_date, pp.date_of_birth)) < 6 THEN 'a.[0 - 5 yrs]'
WHEN date_part('year', age(admission_date, pp.date_of_birth)) BETWEEN 6 AND 9 THEN 'b.[6 - 9 yrs]'
WHEN date_part('year', age(admission_date, pp.date_of_birth)) BETWEEN 10 AND 15 THEN 'c.[10 - 15 yrs]'
WHEN date_part('year', age(admission_date, pp.date_of_birth)) BETWEEN 16 AND 18 THEN 'd.[16 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS agerange,
c_cat.item_description as "case category",
CASE ovc_placement.is_active WHEN 'TRUE' THEN 'Active' ELSE 'Discharged' END AS Status,
1 as ovccount
from ovc_placement
inner join reg_person as pp on person_id = pp.id
left outer join ovc_case_record as cr on cr.case_id = ovc_placement.case_record_id
left outer join ovc_case_category as cc on cc.case_id_id = cr.case_id
left outer join list_general c_cat on c_cat.item_id=cc.case_category and c_cat.field_name = 'case_category_id'
left outer join ovc_discharge_followup as df on df.placement_id_id = ovc_placement.placement_id
where ovc_placement.is_void = False and admission_date between '{start_date}' and '{end_date}'
and residential_institution_name = '{org_unit}'
'''
QUERIES['case_load'] = '''
select ovc_case_record.person_id as cpims_id,
TO_CHAR(date_case_opened :: DATE, 'dd-Mon-yyyy') as case_date, case_serial,
CASE risk_level WHEN 'RLHG' THEN 'High' WHEN 'RLMD' THEN 'Medium' ELSE 'Low' END AS risk_level,
CASE perpetrator_status WHEN 'PSSL' THEN 'Self' WHEN 'PKNW' THEN 'Unknown'
WHEN 'PUNK' THEN 'Unknown' ELSE 'Not Available' END AS perpetrator_status,
cr_cat.item_description as case_reporter,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS sex,
date_part('year', age(date_case_opened, reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 6 THEN 'a.[0 - 5 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 6 AND 9 THEN 'b.[6 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 15 THEN 'c.[10 - 15 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 16 AND 18 THEN 'd.[16 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS agerange,
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 5 THEN 'a.[0 - 4 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'b.[5 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'c.[10 - 14 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 15 AND 18 THEN 'd.[15 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS knbs_agerange,
CASE case_stage WHEN 2 THEN 'Closed' WHEN 1 THEN 'Active' ELSE 'Pending' END AS Case_status,
case_status as case_state,
CASE ccat.case_nature WHEN 'OOEV' THEN 'One Off' ELSE 'Chronic' END AS Case_Nature,
ev_cat.item_description as place_of_event, c_cat.item_description as "case category",
cs_cat.item_description as case_sub_category,
reg_org_unit.org_unit_name as org_unit, scou_geo.area_name as sub_county,
cou_geo.area_name as county,
case omed.mental_condition when 'MNRM' THEN 'Normal' else 'Has Condition' End as mental_condition,
case omed.physical_condition when 'PNRM' THEN 'Normal' else 'Has Condition' End as physical_condition,
case omed.other_condition when 'CHNM' THEN 'Normal' else 'Has Condition' End as other_condition,
CASE cen.service_provided WHEN cen.service_provided THEN intv.item_description ELSE 'Case Open' END AS intervention,
TO_CHAR(ovc_case_record.timestamp_created :: DATE, 'dd-Mon-yyyy') as system_date,
1 as ovccount
from ovc_case_record
inner join ovc_case_category as ccat on case_id = ccat.case_id_id
inner join ovc_case_geo as cgeo on cgeo.case_id_id = case_id
inner join ovc_medical as omed on omed.case_id_id = case_id
left outer join reg_person on ovc_case_record.person_id=reg_person.id
left outer join reg_org_unit on reg_org_unit.id=cgeo.report_orgunit_id
left outer join list_geo as scou_geo on scou_geo.area_id=cgeo.report_subcounty_id and scou_geo.area_id > 47
left outer join list_geo as cou_geo on cou_geo.area_id=scou_geo.parent_area_id and cou_geo.area_id < 48
left outer join ovc_case_sub_category cscat on cscat.case_category_id=ccat.case_category_id
left outer join list_general c_cat on c_cat.item_id=ccat.case_category and c_cat.field_name = 'case_category_id'
left outer join list_general ev_cat on ev_cat.item_id=ccat.place_of_event and ev_cat.field_name = 'event_place_id'
left outer join list_general cr_cat on cr_cat.item_id=case_reporter and cr_cat.field_name = 'case_reporter_id'
left outer join list_general cs_cat on cs_cat.item_id=cscat.sub_category_id
left join ovc_case_events as cev on cev.case_id_id = case_id and cev.case_event_type_id = 'CLOSURE' and cev.is_void = false
left join ovc_case_event_encounters as cen on cen.case_event_id_id=cev.case_event_id
left outer join list_general intv on intv.item_id=cen.service_provided and intv.field_name = 'intervention_id'
where date_case_opened between '{start_date}' and '{end_date}' {other_params};
'''
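The '{start_date}', '{end_date}' and '{other_params}' markers in these templates suggest they are filled with str.format before execution. A minimal sketch with a cut-down stand-in template (assumed usage; the parameter values are illustrative):

```python
# Cut-down stand-in for QUERIES['case_load'] with the same placeholders.
template = ("select * from ovc_case_record "
            "where date_case_opened between '{start_date}' and '{end_date}'"
            " {other_params}")
sql = template.format(start_date='2021-01-01',
                      end_date='2021-12-31',
                      other_params="and case_stage = 1")
assert "'2021-01-01'" in sql
assert '{start_date}' not in sql
```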
# Serial SubCounty Date Case Category Sex Age Case Intervention Special Comments ReportingMonth Date of Birth
QUERIES['excel_tool_a'] = '''
select case_serial as "Case Number",
ROW_NUMBER () OVER (ORDER BY ovc_case_record.timestamp_created) as Serial,
cou_geo.area_name as County, scou_geo.area_name as SubCounty,
date_case_opened as Date, c_cat.item_description as "Case Category",
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Sex,
date_part('year', age(date_case_opened, reg_person.date_of_birth)) AS Age,
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 6 THEN 'a.[0 - 5 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 6 AND 9 THEN 'b.[6 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 15 THEN 'c.[10 - 15 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 16 AND 18 THEN 'd.[16 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS AgeRange,
NULL as "Case Intervention", NULL as "Special Comments",
TO_CHAR(reg_person.date_of_birth :: DATE, 'dd-Mon-yyyy') as "Date of Birth",
1 as ovccount
from ovc_case_record
inner join ovc_case_category as ccat on case_id = ccat.case_id_id
inner join ovc_case_geo as cgeo on cgeo.case_id_id = case_id
left outer join reg_person on ovc_case_record.person_id=reg_person.id
left outer join list_geo as scou_geo on scou_geo.area_id=cgeo.report_subcounty_id and scou_geo.area_id > 47
left outer join list_geo as cou_geo on cou_geo.area_id=scou_geo.parent_area_id and cou_geo.area_id < 48
left outer join list_general c_cat on c_cat.item_id=ccat.case_category and c_cat.field_name = 'case_category_id'
where date_case_opened between '{start_date}' and '{end_date}'
ORDER BY ovc_case_record.timestamp_created ASC;
'''
# Violence Against Children (VAC) cases
QUERIES['vac'] = '''
select case_serial as "Case Number",
ROW_NUMBER () OVER (ORDER BY ovc_case_record.timestamp_created) as Serial,
cou_geo.area_name as County, scou_geo.area_name as SubCounty,
date_case_opened as Date, c_cat.item_description as "Case Category",
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Sex,
date_part('year', age(date_case_opened, reg_person.date_of_birth)) AS Age,
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 6 THEN 'a.[0 - 5 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 6 AND 9 THEN 'b.[6 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 15 THEN 'c.[10 - 15 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 16 AND 18 THEN 'd.[16 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS AgeRange,
NULL as "Case Intervention",
TO_CHAR(reg_person.date_of_birth :: DATE, 'dd-Mon-yyyy') as "Date of Birth",
1 as ovccount
from ovc_case_record
inner join ovc_case_category as ccat on case_id = ccat.case_id_id
inner join ovc_case_geo as cgeo on cgeo.case_id_id = case_id
left outer join reg_person on ovc_case_record.person_id=reg_person.id
left outer join list_geo as scou_geo on scou_geo.area_id=cgeo.report_subcounty_id and scou_geo.area_id > 47
left outer join list_geo as cou_geo on cou_geo.area_id=scou_geo.parent_area_id and cou_geo.area_id < 48
left outer join list_general c_cat on c_cat.item_id=ccat.case_category and c_cat.field_name = 'case_category_id'
where ccat.case_category in ('CCCM', 'CCDF', 'CCEA', 'CSCS', 'CSCU', 'CSDF', 'CSNG', 'CSRG', 'CSSO')
and date_case_opened between '{start_date}' and '{end_date}'
ORDER BY ovc_case_record.timestamp_created ASC;
'''
# Missing Children
QUERIES['missing_children'] = '''
select case_serial as "Case Number",
ROW_NUMBER () OVER (ORDER BY ovc_case_record.timestamp_created) as Serial,
cou_geo.area_name as County, scou_geo.area_name as SubCounty,
date_case_opened as Date, c_cat.item_description as "Case Category",
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Sex,
date_part('year', age(date_case_opened, reg_person.date_of_birth)) AS Age,
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 6 THEN 'a.[0 - 5 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 6 AND 9 THEN 'b.[6 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 15 THEN 'c.[10 - 15 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 16 AND 18 THEN 'd.[16 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS AgeRange,
NULL as "Case Intervention",
TO_CHAR(reg_person.date_of_birth :: DATE, 'dd-Mon-yyyy') as "Date of Birth",
1 as ovccount
from ovc_case_record
inner join ovc_case_category as ccat on case_id = ccat.case_id_id
inner join ovc_case_geo as cgeo on cgeo.case_id_id = case_id
left outer join reg_person on ovc_case_record.person_id=reg_person.id
left outer join list_geo as scou_geo on scou_geo.area_id=cgeo.report_subcounty_id and scou_geo.area_id > 47
left outer join list_geo as cou_geo on cou_geo.area_id=scou_geo.parent_area_id and cou_geo.area_id < 48
left outer join list_general c_cat on c_cat.item_id=ccat.case_category and c_cat.field_name = 'case_category_id'
where ccat.case_category in ('CHCP', 'CDSA', 'CLFC')
and date_case_opened between '{start_date}' and '{end_date}'
ORDER BY ovc_case_record.timestamp_created ASC;
'''
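# The AgeRange CASE used by the case-record queries above can be mirrored in
# Python for unit tests. Minimal sketch; the helper name `age_range` is
# illustrative and not part of this module.
def age_range(age_years):
    """Mirror of the SQL AgeRange buckets for case-record reports."""
    if age_years < 6:
        return 'a.[0 - 5 yrs]'
    if age_years <= 9:
        return 'b.[6 - 9 yrs]'
    if age_years <= 15:
        return 'c.[10 - 15 yrs]'
    if age_years <= 18:
        return 'd.[16 - 18 yrs]'
    return 'e.[18+ yrs]'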
# Registration List
QUERIES['registration'] = '''
select reg_org_unit.org_unit_name AS CBO,
reg_person.first_name, reg_person.surname,
reg_person.other_names, reg_person.date_of_birth, registration_date,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
date_part('year', age(ovc_registration.registration_date, reg_person.date_of_birth)) AS age_at_reg,
child_cbo_id as OVCID,
list_geo.area_name as ward, scc.area_name as constituency, cc.area_name as county,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
CASE has_bcert WHEN 'True' THEN 'HAS BIRTHCERT' ELSE 'NO BIRTHCERT' END AS BirthCert,
CASE has_bcert WHEN 'True' THEN 'BCERT' ELSE NULL END AS BCertNumber,
CASE is_disabled WHEN 'True' THEN 'HAS DISABILITY' ELSE 'NO DISABILITY' END AS OVCDisability,
CASE is_disabled WHEN 'True' THEN 'NCPWD' ELSE NULL END AS NCPWDNumber,
CASE
WHEN hiv_status = 'HSTP' THEN 'POSITIVE'
WHEN hiv_status = 'HSTN' THEN 'NEGATIVE'
ELSE 'NOT KNOWN' END AS OVCHIVstatus,
CASE hiv_status WHEN 'HSTP' THEN 'ART' ELSE NULL END AS ARTStatus,
concat(chw.first_name,' ',chw.surname,' ',chw.other_names) as CHW,
concat(cgs.first_name,' ',cgs.surname,' ',cgs.other_names) as parent_names,
CASE is_active WHEN 'True' THEN 'ACTIVE' ELSE 'EXITED' END AS Exit_status,
CASE is_active WHEN 'False' THEN exit_date ELSE NULL END AS Exit_date,
CASE
WHEN school_level = 'SLTV' THEN 'Tertiary'
WHEN school_level = 'SLUN' THEN 'University'
WHEN school_level = 'SLSE' THEN 'Secondary'
WHEN school_level = 'SLPR' THEN 'Primary'
WHEN school_level = 'SLEC' THEN 'ECDE'
ELSE 'Not in School' END AS Schoollevel,
CASE immunization_status
WHEN 'IMFI' THEN 'Fully Immunized'
WHEN 'IMNI' THEN 'Not Immunized'
WHEN 'IMNC' THEN 'Not Completed'
ELSE 'Not Known' END AS immunization
from ovc_registration
left outer join reg_person on person_id=reg_person.id
left outer join reg_person chw on child_chv_id=chw.id
left outer join reg_person cgs on caretaker_id=cgs.id
left outer join reg_org_unit on child_cbo_id=reg_org_unit.id
left outer join reg_persons_geo on ovc_registration.person_id=reg_persons_geo.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False and child_cbo_id in ({cbos})
and ovc_registration.registration_date between '{start_date}' and '{end_date}';'''
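# These templates splice {start_date}, {end_date} and {cbos} into the SQL with
# str.format(), so callers should validate the values first. Hedged sketch of
# such a gatekeeper (the name `validate_params` is illustrative, not part of
# this module): dates must be ISO yyyy-mm-dd and cbos a list of integer ids.
from datetime import datetime

def validate_params(start_date, end_date, cbos):
    """Return a dict safe to pass to QUERIES[name].format(**params)."""
    for d in (start_date, end_date):
        datetime.strptime(d, '%Y-%m-%d')  # raises ValueError if malformed
    return {'start_date': start_date, 'end_date': end_date,
            'cbos': ', '.join(str(int(c)) for c in cbos)}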
# PEPFAR
QUERIES['pepfar'] = '''
select
cast(count(distinct ovc_care_events.person_id) as integer) as OVCCount,
reg_org_unit.org_unit_name AS CBO,
list_geo.area_name as ward, scc.area_name as constituency, cc.area_name as county,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
CASE ovc_care_services.service_provided
WHEN 'HC1S' THEN 'Health' {domains}
ELSE 'Unknown'
END AS Domain
from ovc_care_services
INNER JOIN ovc_care_events ON ovc_care_events.event=ovc_care_services.event_id
INNER JOIN reg_person ON ovc_care_events.person_id=reg_person.id
LEFT OUTER JOIN ovc_registration ON ovc_care_events.person_id=ovc_registration.person_id
LEFT OUTER JOIN reg_org_unit ON reg_org_unit.id=ovc_registration.child_cbo_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
LEFT OUTER JOIN list_geo ON list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
WHERE reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_care_services.is_void = False
and ovc_care_events.event_type_id='FSAM'
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
and ovc_registration.child_cbo_id in ({cbos})
GROUP BY ovc_care_services.service_provided, reg_person.date_of_birth,
reg_person.sex_id, ovc_registration.child_cbo_id,
reg_org_unit.org_unit_name, reg_persons_geo.area_id,
ward, constituency, county;'''
# PEPFAR SUMMARY
QUERIES['pepfar_sum'] = '''
'''
# DATIM
QUERIES['datim'] = '''
select
cast(count(distinct ovc_registration.person_id) as integer) as OVCCount,
reg_org_unit.org_unit_name AS CBO,
list_geo.area_name as ward, scc.area_name as constituency, cc.area_name as county,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
CASE ovc_registration.hiv_status
WHEN 'HSTP' THEN '2a. (i) OVC_HIVSTAT: HIV+'
WHEN 'HSTN' THEN '2b. OVC_HIVSTAT: HIV-'
ELSE '2c. OVC_HIVSTAT: HIV Status NOT Known'
END AS Domain,
0 as Wardactive, 0 as WARDGraduated, 0 as WARDTransferred,
0 as WARDExitedWithoutGraduation
from ovc_registration
INNER JOIN reg_person ON ovc_registration.person_id=reg_person.id
LEFT OUTER JOIN reg_org_unit ON reg_org_unit.id=ovc_registration.child_cbo_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
LEFT OUTER JOIN list_geo ON list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
WHERE reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.child_cbo_id in ({cbos})
and ((ovc_registration.is_active = True and ovc_registration.registration_date <= '{end_date}')
or (ovc_registration.is_active = False
and (ovc_registration.registration_date between '{start_date}' and '{end_date}' ))
or (ovc_registration.is_active = False and ovc_registration.registration_date <= '{end_date}'
and ovc_registration.exit_date > '{end_date}' )
or (ovc_registration.is_active = False and ovc_registration.registration_date <= '{end_date}'
and ovc_registration.exit_date between '{start_date}' and '{end_date}' ))
and not (ovc_registration.school_level = 'SLNS'
and date_part('year', '{end_date}'::date) - date_part('year', reg_person.date_of_birth::date) > 17)
GROUP BY ovc_registration.person_id, reg_person.date_of_birth,
reg_person.sex_id, ovc_registration.child_cbo_id,
reg_org_unit.org_unit_name, reg_persons_geo.area_id,
ovc_registration.hiv_status, ward, constituency, county
;'''
# DATIM - Served
QUERIES['datim_1'] = '''
select
cast(count(distinct ovc_care_events.person_id) as integer) as OVCCount,
reg_org_unit.org_unit_name AS CBO,
list_geo.area_name as ward, scc.area_name as constituency, cc.area_name as county,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
'1. OVC_Serv' as Domain
from ovc_care_services
INNER JOIN ovc_care_events ON ovc_care_events.event=ovc_care_services.event_id
INNER JOIN reg_person ON ovc_care_events.person_id=reg_person.id
LEFT OUTER JOIN ovc_registration ON ovc_care_events.person_id=ovc_registration.person_id
LEFT OUTER JOIN reg_org_unit ON reg_org_unit.id=ovc_registration.child_cbo_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
LEFT OUTER JOIN list_geo ON list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
WHERE reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_care_services.is_void = False
and ovc_care_events.event_type_id='FSAM'
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
and ovc_registration.child_cbo_id in ({cbos})
and ((ovc_registration.is_active = True and ovc_registration.registration_date <= '{end_date}')
or (ovc_registration.is_active = False
and (ovc_registration.registration_date between '{start_date}' and '{end_date}' ))
or (ovc_registration.is_active = False and ovc_registration.registration_date <= '{end_date}'
and ovc_registration.exit_date > '{end_date}' )
or (ovc_registration.is_active = False and ovc_registration.registration_date <= '{end_date}'
and ovc_registration.exit_date between '{start_date}' and '{end_date}' ))
and not (ovc_registration.school_level = 'SLNS'
and date_part('year', '{end_date}'::date) - date_part('year', reg_person.date_of_birth::date) > 17)
GROUP BY reg_person.date_of_birth,
reg_person.sex_id, ovc_registration.child_cbo_id,
reg_org_unit.org_unit_name, reg_persons_geo.area_id,
ward, constituency, county;'''
# DATIM ART
QUERIES['datim_2'] = '''
select
cast(count(distinct ovc_registration.person_id) as integer) as OVCCount,
reg_org_unit.org_unit_name AS CBO,
list_geo.area_name as ward, scc.area_name as constituency, cc.area_name as county,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
CASE ovc_care_health.art_status
WHEN 'ARAR' THEN '2a. (ii) OVC_HIVSTAT: HIV+ on ARV Treatment'
WHEN 'ARPR' THEN '2a. (ii) OVC_HIVSTAT: HIV+ on ARV Treatment'
ELSE '2a. (iii) OVC_HIVSTAT: HIV+ NOT on ARV Treatment'
END AS Domain,
0 as Wardactive, 0 as WARDGraduated, 0 as WARDTransferred,
0 as WARDExitedWithoutGraduation
from ovc_registration
INNER JOIN reg_person ON ovc_registration.person_id=reg_person.id
LEFT OUTER JOIN reg_org_unit ON reg_org_unit.id=ovc_registration.child_cbo_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
LEFT OUTER JOIN list_geo ON list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
LEFT OUTER JOIN ovc_care_health ON ovc_care_health.person_id=ovc_registration.person_id
WHERE reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
AND ovc_registration.hiv_status = 'HSTP'
and ovc_registration.child_cbo_id in ({cbos})
and ((ovc_registration.is_active = True and ovc_registration.registration_date <= '{end_date}')
or (ovc_registration.is_active = False
and (ovc_registration.registration_date between '{start_date}' and '{end_date}' ))
or (ovc_registration.is_active = False and ovc_registration.registration_date <= '{end_date}'
and ovc_registration.exit_date > '{end_date}' )
or (ovc_registration.is_active = False and ovc_registration.registration_date <= '{end_date}'
and ovc_registration.exit_date between '{start_date}' and '{end_date}' ))
and not (ovc_registration.school_level = 'SLNS'
and date_part('year', '{end_date}'::date) - date_part('year', reg_person.date_of_birth::date) > 17)
GROUP BY ovc_registration.person_id, reg_person.date_of_birth,
reg_person.sex_id, ovc_registration.child_cbo_id,
reg_org_unit.org_unit_name, reg_persons_geo.area_id, list_geo.parent_area_id,
ovc_registration.hiv_status, ovc_care_health.art_status,
ward, constituency, county;'''
# Datim Ward summary
# (uses crosstab(), which requires the PostgreSQL tablefunc extension)
QUERIES['datim_3'] = '''
SELECT *
FROM crosstab(
'select cast(ward as text), graduation,
cast(sum(ccount) as integer) as ovcs from (
select cast(count(*) as integer) as ccount,
list_geo.area_name as ward,
case exit_reason
WHEN ''ERDE'' THEN ''WARDExitedWithoutGraduation''
WHEN ''EROE'' THEN ''WARDGraduated''
WHEN ''ERFI'' THEN ''WARDGraduated''
WHEN ''ERFR'' THEN ''WARDGraduated''
WHEN ''ERFS'' THEN ''WARDGraduated''
WHEN ''ERAD'' THEN ''WARDGraduated''
WHEN ''ERSE'' THEN ''WARDGraduated''
WHEN ''ERIN'' THEN ''WARDExitedWithoutGraduation''
WHEN ''ERRL'' THEN ''WARDTransferred''
WHEN ''ERDU'' THEN ''WARDTransferred''
WHEN ''ERTR'' THEN ''WARDGraduated''
WHEN ''ERLW'' THEN ''WARDExitedWithoutGraduation''
WHEN ''ERMA'' THEN ''WARDExitedWithoutGraduation''
WHEN ''ERTL'' THEN ''WARDTransferred''
WHEN ''ERDO'' THEN ''WARDExitedWithoutGraduation''
else ''WardActive'' END AS Graduation
from ovc_registration
INNER JOIN reg_person ON ovc_registration.person_id=reg_person.id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
LEFT OUTER JOIN list_geo ON list_geo.area_id=reg_persons_geo.area_id
WHERE reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.child_cbo_id in ({cbos})
and ((ovc_registration.is_active = True and ovc_registration.registration_date <= ''{end_date}'' )
or (ovc_registration.is_active = False
and (ovc_registration.registration_date between ''{start_date}'' and ''{end_date}'' ))
or (ovc_registration.is_active = False and ovc_registration.registration_date <= ''{end_date}''
and ovc_registration.exit_date > ''{end_date}'' )
or (ovc_registration.is_active = False and ovc_registration.registration_date <= ''{end_date}''
and ovc_registration.exit_date between ''{start_date}'' and ''{end_date}'' ))
and not (ovc_registration.school_level = ''SLNS''
and date_part(''year'', ''{end_date}''::date) - date_part(''year'', reg_person.date_of_birth::date) > 17)
group by exit_reason, list_geo.area_name order by ward) as wc
group by ward, graduation
order by 1,2')
AS ct("ward" text, "WardActive" int, "WARDGraduated" int,
"WARDTransferred" int, "WARDExitedWithoutGraduation" int);'''
# KPI
QUERIES['kpi'] = '''
select
cast(count(distinct ovc_registration.person_id) as integer) as OVCCount,
ovc_registration.child_cbo_id,
reg_org_unit.org_unit_name,
reg_persons_geo.area_id,
list_geo.area_name,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
CASE ovc_care_health.art_status
WHEN 'ARAR' THEN '2a. (ii) OVC_HIVSTAT: HIV+ on ARV Treatment'
WHEN 'ARPR' THEN '2a. (ii) OVC_HIVSTAT: HIV+ on ARV Treatment'
ELSE '2a. (iii) OVC_HIVSTAT: HIV+ NOT on ARV Treatment'
END AS Domain
from ovc_registration
INNER JOIN reg_person ON ovc_registration.person_id=reg_person.id
LEFT OUTER JOIN reg_org_unit ON reg_org_unit.id=ovc_registration.child_cbo_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
LEFT OUTER JOIN list_geo ON list_geo.area_id=reg_persons_geo.area_id
LEFT OUTER JOIN ovc_care_health ON ovc_care_health.person_id=ovc_registration.person_id
WHERE reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.is_active = True
AND ovc_registration.hiv_status = 'HSTP'
%s
GROUP BY ovc_registration.person_id, reg_person.date_of_birth,
reg_person.sex_id, ovc_registration.child_cbo_id,
reg_org_unit.org_unit_name, reg_persons_geo.area_id,
ovc_registration.hiv_status, list_geo.area_name, ovc_care_health.art_status;'''
QUERIES['served'] = '''
SELECT * FROM (%s) a
INNER JOIN (%s) b
ON a.ward = b.ward;'''
# NOT SERVED LIST
QUERIES['not_served_list'] = '''
select person_id from(
select person_id, count(person_id) as scnts
from(
select person_id, domain, count(distinct(domain)) as domaincount from (
select ovc_registration.person_id, event_type_id, domain from ovc_care_assessment
inner join ovc_care_events on ovc_care_assessment.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id = ovc_registration.person_id
where ovc_registration.child_cbo_id in ({cbos})
and domain in ('DHNU', 'DPSS')
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
union all
select ovc_registration.person_id, event_type_id,
CASE
WHEN service_provided in ('SC1S', 'SC2S', 'SC3S', 'SC4S', 'SC5S', 'SC6S', 'SC7S') THEN 'DSHC'
WHEN service_provided in ('PS1S', 'PS2S', 'PS3S', 'PS4S', 'PS5S') THEN 'DPSS'
WHEN service_provided in ('PT1S', 'PT2S', 'PT3S', 'PT4S', 'PT5S') THEN 'DPRO'
WHEN service_provided in ('HE1S', 'HE2S', 'HE3S', 'HE4S') THEN 'DHES'
WHEN service_provided in ('HC1S', 'HC2S', 'HC3S', 'HC4S', 'HC5S', 'HC6S', 'HC7S', 'HC8S', 'HC9S', 'HC10S') THEN 'DHNU'
WHEN service_provided in ('SE1S', 'SE2S', 'SE3S', 'SE4S', 'SE5S', 'SE6S', 'SE7S', 'SE8S') THEN 'DEDU'
ELSE 'NULL'
END AS domain
from ovc_care_services
inner join ovc_care_events on ovc_care_services.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id = ovc_registration.person_id
where ovc_registration.child_cbo_id in ({cbos})
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}') as dcs
group by person_id, domain) as scounts
group by person_id) as fp where scnts > 0'''
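# The service_provided -> domain CASE in the query above encodes a fixed code
# mapping that is handy to unit-test in Python. Illustrative mirror only; the
# names SERVICE_DOMAINS and service_domain are not used at runtime.
SERVICE_DOMAINS = {
    'DSHC': ['SC%dS' % i for i in range(1, 8)],   # SC1S..SC7S
    'DPSS': ['PS%dS' % i for i in range(1, 6)],   # PS1S..PS5S
    'DPRO': ['PT%dS' % i for i in range(1, 6)],   # PT1S..PT5S
    'DHES': ['HE%dS' % i for i in range(1, 5)],   # HE1S..HE4S
    'DHNU': ['HC%dS' % i for i in range(1, 11)],  # HC1S..HC10S
    'DEDU': ['SE%dS' % i for i in range(1, 9)],   # SE1S..SE8S
}

def service_domain(code):
    """Map a service_provided code to its domain, like the SQL CASE."""
    for domain, codes in SERVICE_DOMAINS.items():
        if code in codes:
            return domain
    return 'NULL'  # the SQL CASE also falls back to the string 'NULL'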
# NOT Served
QUERIES['not_served'] = '''
select reg_org_unit.org_unit_name AS CBO,
reg_person.first_name, reg_person.surname,
reg_person.other_names, reg_person.date_of_birth, registration_date,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
date_part('year', age(ovc_registration.registration_date,
reg_person.date_of_birth)) AS age_at_reg,
child_cbo_id as OVCID,
list_geo.area_name as ward, scc.area_name as constituency, cc.area_name as county,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
CASE has_bcert WHEN 'True' THEN 'HAS BIRTHCERT' ELSE 'NO BIRTHCERT' END AS BirthCert,
CASE has_bcert WHEN 'True' THEN 'BCERT' ELSE NULL END AS BCertNumber,
CASE is_disabled WHEN 'True' THEN 'HAS DISABILITY' ELSE 'NO DISABILITY' END AS OVCDisability,
CASE is_disabled WHEN 'True' THEN 'NCPWD' ELSE NULL END AS NCPWDNumber,
CASE
WHEN hiv_status = 'HSTP' THEN 'POSITIVE'
WHEN hiv_status = 'HSTN' THEN 'NEGATIVE'
ELSE 'NOT KNOWN' END AS OVCHIVstatus,
CASE hiv_status WHEN 'HSTP' THEN 'ART' ELSE NULL END AS ARTStatus,
concat(chw.first_name,' ',chw.surname,' ',chw.other_names) as CHW,
concat(cgs.first_name,' ',cgs.surname,' ',cgs.other_names) as parent_names,
CASE is_active WHEN 'True' THEN 'ACTIVE' ELSE 'EXITED' END AS Exit_status,
CASE is_active WHEN 'False' THEN exit_date ELSE NULL END AS Exit_date,
CASE
WHEN school_level = 'SLTV' THEN 'Tertiary'
WHEN school_level = 'SLUN' THEN 'University'
WHEN school_level = 'SLSE' THEN 'Secondary'
WHEN school_level = 'SLPR' THEN 'Primary'
WHEN school_level = 'SLEC' THEN 'ECDE'
ELSE 'Not in School' END AS Schoollevel,
CASE immunization_status
WHEN 'IMFI' THEN 'Fully Immunized'
WHEN 'IMNI' THEN 'Not Immunized'
WHEN 'IMNC' THEN 'Not Completed'
ELSE 'Not Known' END AS immunization
from ovc_registration
left outer join reg_person on person_id=reg_person.id
left outer join reg_person chw on child_chv_id=chw.id
left outer join reg_person cgs on caretaker_id=cgs.id
left outer join reg_org_unit on child_cbo_id=reg_org_unit.id
left outer join reg_persons_geo on ovc_registration.person_id=reg_persons_geo.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and child_cbo_id in ({cbos})
and ovc_registration.registration_date between '{start_date}' and '{end_date}'
and ovc_registration.person_id not in (%s);''' % (QUERIES['not_served_list'])
# PEPFAR DETAILED SUMMARY
QUERIES['pepfar_detailed'] = '''
select * FROM (
select cast('CBO' as text) as level,
CASE
WHEN scnts = 0 THEN 'Not Served'
WHEN scnts = 1 THEN '1 or 2 Services'
WHEN scnts = 2 THEN '1 or 2 Services'
WHEN scnts > 2 THEN '3 or More Services' END AS Services,
Gender, age, AgeRange, CBO as name, count(scnts) AS OVCCOUNT from(
select person_id, count(person_id) as scnts, Gender, age, AgeRange, CBO
from(
select person_id, domain, Gender, age, AgeRange, CBO, count(distinct(domain)) as domaincount from (
select ovc_registration.person_id, event_type_id, domain,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
reg_org_unit.org_unit_name AS CBO
from ovc_care_assessment
inner join ovc_care_events on ovc_care_assessment.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id=ovc_registration.person_id
inner join reg_person on reg_person.id = ovc_registration.person_id
left outer join reg_org_unit on ovc_registration.child_cbo_id=reg_org_unit.id
where domain in ('DHNU', 'DPSS')
and ovc_registration.child_cbo_id in ({cbos})
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
union all
select ovc_registration.person_id, event_type_id,
CASE
WHEN service_provided in ('SC1S', 'SC2S', 'SC3S', 'SC4S', 'SC5S', 'SC6S', 'SC7S') THEN 'DSHC'
WHEN service_provided in ('PS1S', 'PS2S', 'PS3S', 'PS4S', 'PS5S') THEN 'DPSS'
WHEN service_provided in ('PT1S', 'PT2S', 'PT3S', 'PT4S', 'PT5S') THEN 'DPRO'
WHEN service_provided in ('HE1S', 'HE2S', 'HE3S', 'HE4S') THEN 'DHES'
WHEN service_provided in ('HC1S', 'HC2S', 'HC3S', 'HC4S', 'HC5S', 'HC6S', 'HC7S', 'HC8S', 'HC9S', 'HC10S') THEN 'DHNU'
WHEN service_provided in ('SE1S', 'SE2S', 'SE3S', 'SE4S', 'SE5S', 'SE6S', 'SE7S', 'SE8S') THEN 'DEDU'
ELSE 'NULL'
END AS domain,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
reg_org_unit.org_unit_name AS CBO
from ovc_care_services
inner join ovc_care_events on ovc_care_services.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id=ovc_registration.person_id
inner join reg_person on reg_person.id = ovc_registration.person_id
left outer join reg_org_unit on ovc_registration.child_cbo_id=reg_org_unit.id
where ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
and ovc_registration.child_cbo_id in ({cbos})) as dcs
group by person_id, domain, Gender, age, AgeRange, CBO) as scounts
group by person_id, Gender, age, AgeRange, CBO
union all
select ovc_registration.person_id, 0 as scnts,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
reg_org_unit.org_unit_name AS CBO
from ovc_registration
inner join reg_person on reg_person.id = ovc_registration.person_id
left outer join reg_org_unit on ovc_registration.child_cbo_id=reg_org_unit.id
where ovc_registration.child_cbo_id in ({cbos})
) as fp group by Gender, age, AgeRange, scnts, cbo) a
INNER JOIN (
select count(ovc_registration.person_id) as active,
reg_org_unit.org_unit_name as name
from ovc_registration
inner join reg_org_unit on reg_org_unit.id = ovc_registration.child_cbo_id
where ovc_registration.is_active = True
and ovc_registration.child_cbo_id in ({cbos})
group by ovc_registration.child_cbo_id, name) b
ON a.name = b.name;'''
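# The service_provided -> domain CASE in the query above groups individual
# service codes into six OVC domains. A small, hypothetical Python mirror of
# that mapping (not used by the queries themselves) makes the grouping easy
# to sanity-check outside the database:

```python
# Same code lists as the SQL CASE: SC1S-SC7S, PS1S-PS5S, PT1S-PT5S,
# HE1S-HE4S, HC1S-HC10S, SE1S-SE8S.
SERVICE_DOMAINS = {
    'DSHC': ['SC%dS' % i for i in range(1, 8)],
    'DPSS': ['PS%dS' % i for i in range(1, 6)],
    'DPRO': ['PT%dS' % i for i in range(1, 6)],
    'DHES': ['HE%dS' % i for i in range(1, 5)],
    'DHNU': ['HC%dS' % i for i in range(1, 11)],
    'DEDU': ['SE%dS' % i for i in range(1, 9)],
}
CODE_TO_DOMAIN = {code: domain
                  for domain, codes in SERVICE_DOMAINS.items()
                  for code in codes}

def service_domain(code):
    # Unknown codes fall back to the literal string 'NULL', as in the SQL ELSE.
    return CODE_TO_DOMAIN.get(code, 'NULL')
```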
# Blanks to fill up all services, ages, genders
QUERIES['pepfar_detailed_blank'] = '''
select * FROM (
select cast('CBO' as text) as level,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) < 15 THEN cast('1 or 2 Services' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) BETWEEN 15 AND 28 THEN cast('3 or More Services' as text)
ELSE cast('Not Served' as text) END AS Services,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (
1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41) THEN cast('Female' as text)
ELSE cast('Male' as text) END AS Gender,
0 as age,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (1,2,15,16,29,30) THEN cast('a.[<1yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (3,4,17,18,31,32) THEN cast('b.[1-4yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (5,6,19,20,33,34) THEN cast('c.[5-9yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (7,8,21,22,35,36) THEN cast('d.[10-14yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (9,10,23,24,37,38) THEN cast('e.[15-17yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (11,12,25,26,39,40) THEN cast('f.[18-24yrs]' as text)
ELSE cast('g.[25+yrs]' as text) END AS AgeRange,
reg_org_unit.org_unit_name AS name,
0 as OVCCOUNT from ovc_registration
left outer join reg_org_unit on ovc_registration.child_cbo_id=reg_org_unit.id
where ovc_registration.child_cbo_id in ({cbos}) limit 42) a
INNER JOIN (
select count(ovc_registration.person_id) as active,
reg_org_unit.org_unit_name as name
from ovc_registration
inner join reg_org_unit on reg_org_unit.id = ovc_registration.child_cbo_id
where ovc_registration.is_active = True
and ovc_registration.child_cbo_id in ({cbos})
group by ovc_registration.child_cbo_id, name) b
ON a.name = b.name;'''
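# Illustrative sketch (not part of this module) of how the {cbos},
# {start_date} and {end_date} placeholders in these templates are presumably
# filled via str.format before execution. Plain string interpolation is only
# safe when the values are validated server-side ids and dates, never raw
# user input; the helper and template names below are hypothetical.

```python
TEMPLATE = ("select person_id from ovc_registration "
            "where child_cbo_id in ({cbos}) "
            "and registration_date between '{start_date}' and '{end_date}'")

def render(sql, cbos, start_date, end_date):
    # {cbos} is expected as a comma-separated id list, e.g. "3, 7, 11".
    return sql.format(cbos=', '.join(str(c) for c in cbos),
                      start_date=start_date, end_date=end_date)

sql = render(TEMPLATE, [3, 7, 11], '2017-10-01', '2017-12-31')
```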
# For constituency
QUERIES['pepfar_detailed_1'] = '''
select * FROM (
select cast('Constituency' as text) as level,
CASE
WHEN scnts = 0 THEN 'Not Served'
WHEN scnts = 1 THEN '1 or 2 Services'
WHEN scnts = 2 THEN '1 or 2 Services'
WHEN scnts > 2 THEN '3 or More Services' END AS Services,
Gender, age, AgeRange, scounty as name, count(scnts) AS OVCCOUNT from(
select person_id, count(person_id) as scnts, Gender, age, AgeRange, scounty
from(
select person_id, domain, Gender, age, AgeRange, scounty, count(distinct(domain)) as domaincount from (
select ovc_registration.person_id, event_type_id, domain,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
scc.area_name AS scounty
from ovc_care_assessment
inner join ovc_care_events on ovc_care_assessment.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id=ovc_registration.person_id
inner join reg_person on reg_person.id = ovc_registration.person_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and domain in ('DHNU', 'DPSS')
and ovc_registration.child_cbo_id in ({cbos})
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
union all
select ovc_registration.person_id, event_type_id,
CASE
WHEN service_provided in ('SC1S', 'SC2S', 'SC3S', 'SC4S',
'SC5S', 'SC6S', 'SC7S') THEN 'DSHC'
WHEN service_provided in ('PS1S', 'PS2S', 'PS3S', 'PS4S', 'PS5S') THEN 'DPSS'
WHEN service_provided in ('PT1S', 'PT2S', 'PT3S', 'PT4S', 'PT5S') THEN 'DPRO'
WHEN service_provided in ('HE1S', 'HE2S', 'HE3S', 'HE4S') THEN 'DHES'
WHEN service_provided in ('HC1S', 'HC2S', 'HC3S', 'HC4S', 'HC5S',
'HC6S', 'HC7S', 'HC8S', 'HC9S', 'HC10S') THEN 'DHNU'
WHEN service_provided in ('SE1S', 'SE2S', 'SE3S', 'SE4S',
'SE5S', 'SE6S', 'SE7S', 'SE8S') THEN 'DEDU'
ELSE 'NULL'
END AS domain,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
scc.area_name AS scounty
from ovc_care_services
inner join ovc_care_events on ovc_care_services.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id=ovc_registration.person_id
inner join reg_person on reg_person.id = ovc_registration.person_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
and ovc_registration.child_cbo_id in ({cbos})) as dcs
group by person_id, domain, Gender, age, AgeRange, scounty) as scounts
group by person_id, Gender, age, AgeRange, scounty
union all
select ovc_registration.person_id, 0 as scnts,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
scc.area_name AS scounty
from ovc_registration
inner join reg_person on reg_person.id = ovc_registration.person_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.child_cbo_id in ({cbos})
) as fp group by Gender, age, AgeRange, scnts, scounty) a
INNER JOIN (
select count(ovc_registration.person_id) as active,
scc.area_name as name
from ovc_registration
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.is_active = True
and ovc_registration.child_cbo_id in ({cbos})
group by name) b
ON a.name = b.name;'''
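# The AgeRange CASE repeated throughout these queries buckets ages computed
# from date_of_birth. A hypothetical Python equivalent of the same bucket
# boundaries, handy for verifying them in isolation:

```python
# Each entry is (exclusive upper bound in years, label); ages past the last
# bound fall into the 25+ bucket, matching the SQL ELSE branch.
AGE_BUCKETS = [
    (1, 'a.[<1yrs]'), (5, 'b.[1-4yrs]'), (10, 'c.[5-9yrs]'),
    (15, 'd.[10-14yrs]'), (18, 'e.[15-17yrs]'), (25, 'f.[18-24yrs]'),
]

def age_range(age):
    for upper, label in AGE_BUCKETS:
        if age < upper:
            return label
    return 'g.[25+yrs]'
```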
# For constituency blanks
QUERIES['pepfar_detailed_blank_1'] = '''
select * FROM (
select cast('Constituency' as text) as level,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) < 15 THEN cast('1 or 2 Services' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) BETWEEN 15 AND 28 THEN cast('3 or More Services' as text)
ELSE cast('Not Served' as text) END AS Services,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (
1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41) THEN cast('Female' as text)
ELSE cast('Male' as text) END AS Gender,
0 as age,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (1,2,15,16,29,30) THEN cast('a.[<1yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (3,4,17,18,31,32) THEN cast('b.[1-4yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (5,6,19,20,33,34) THEN cast('c.[5-9yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (7,8,21,22,35,36) THEN cast('d.[10-14yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (9,10,23,24,37,38) THEN cast('e.[15-17yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (11,12,25,26,39,40) THEN cast('f.[18-24yrs]' as text)
ELSE cast('g.[25+yrs]' as text) END AS AgeRange,
scc.area_name AS name,
0 as OVCCOUNT from ovc_registration
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.child_cbo_id in ({cbos}) limit 42) a
INNER JOIN (
select count(ovc_registration.person_id) as active,
scc.area_name as name
from ovc_registration
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.is_active = True
and ovc_registration.child_cbo_id in ({cbos})
group by name) b
ON a.name = b.name;'''
# For County
QUERIES['pepfar_detailed_2'] = '''
select * FROM (
select cast('County' as text) as level,
CASE
WHEN scnts = 0 THEN 'Not Served'
WHEN scnts = 1 THEN '1 or 2 Services'
WHEN scnts = 2 THEN '1 or 2 Services'
WHEN scnts > 2 THEN '3 or More Services' END AS Services,
Gender, age, AgeRange, scounty as name, count(scnts) AS OVCCOUNT from(
select person_id, count(person_id) as scnts, Gender, age, AgeRange, scounty
from(
select person_id, domain, Gender, age, AgeRange, scounty, count(distinct(domain)) as domaincount from (
select ovc_registration.person_id, event_type_id, domain,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
cc.area_name AS scounty
from ovc_care_assessment
inner join ovc_care_events on ovc_care_assessment.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id=ovc_registration.person_id
inner join reg_person on reg_person.id = ovc_registration.person_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and domain in ('DHNU', 'DPSS')
and ovc_registration.child_cbo_id in ({cbos})
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
union all
select ovc_registration.person_id, event_type_id,
CASE
WHEN service_provided in ('SC1S', 'SC2S', 'SC3S', 'SC4S',
'SC5S', 'SC6S', 'SC7S') THEN 'DSHC'
WHEN service_provided in ('PS1S', 'PS2S', 'PS3S', 'PS4S', 'PS5S') THEN 'DPSS'
WHEN service_provided in ('PT1S', 'PT2S', 'PT3S', 'PT4S', 'PT5S') THEN 'DPRO'
WHEN service_provided in ('HE1S', 'HE2S', 'HE3S', 'HE4S') THEN 'DHES'
WHEN service_provided in ('HC1S', 'HC2S', 'HC3S', 'HC4S', 'HC5S',
'HC6S', 'HC7S', 'HC8S', 'HC9S', 'HC10S') THEN 'DHNU'
WHEN service_provided in ('SE1S', 'SE2S', 'SE3S', 'SE4S',
'SE5S', 'SE6S', 'SE7S', 'SE8S') THEN 'DEDU'
ELSE 'NULL'
END AS domain,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
cc.area_name AS scounty
from ovc_care_services
inner join ovc_care_events on ovc_care_services.event_id=ovc_care_events.event
inner join ovc_registration on ovc_care_events.person_id=ovc_registration.person_id
inner join reg_person on reg_person.id = ovc_registration.person_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_care_events.date_of_event between '{start_date}' and '{end_date}'
and ovc_registration.child_cbo_id in ({cbos})) as dcs
group by person_id, domain, Gender, age, AgeRange, scounty) as scounts
group by person_id, Gender, age, AgeRange, scounty
union all
select ovc_registration.person_id, 0 as scnts,
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS Gender,
date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) AS age,
CASE
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) < 1 THEN 'a.[<1yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 1 AND 4 THEN 'b.[1-4yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'c.[5-9yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'd.[10-14yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 15 AND 17 THEN 'e.[15-17yrs]'
WHEN date_part('year', age(timestamp '{end_date}', reg_person.date_of_birth)) BETWEEN 18 AND 24 THEN 'f.[18-24yrs]'
ELSE 'g.[25+yrs]' END AS AgeRange,
cc.area_name AS scounty
from ovc_registration
inner join reg_person on reg_person.id = ovc_registration.person_id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.child_cbo_id in ({cbos})
) as fp group by Gender, age, AgeRange, scnts, scounty) a
INNER JOIN (
select count(ovc_registration.person_id) as active,
cc.area_name as name
from ovc_registration
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.is_active = True
and ovc_registration.child_cbo_id in ({cbos})
group by name) b
ON a.name = b.name;'''
# For county blank
QUERIES['pepfar_detailed_blank_2'] = '''
select * FROM (
select cast('County' as text) as level,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) < 15 THEN cast('1 or 2 Services' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) BETWEEN 15 AND 28 THEN cast('3 or More Services' as text)
ELSE cast('Not Served' as text) END AS Services,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (
1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41) THEN cast('Female' as text)
ELSE cast('Male' as text) END AS Gender,
0 as age,
CASE
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (1,2,15,16,29,30) THEN cast('a.[<1yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (3,4,17,18,31,32) THEN cast('b.[1-4yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (5,6,19,20,33,34) THEN cast('c.[5-9yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (7,8,21,22,35,36) THEN cast('d.[10-14yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (9,10,23,24,37,38) THEN cast('e.[15-17yrs]' as text)
WHEN ROW_NUMBER () OVER (ORDER BY ovc_registration ASC) IN (11,12,25,26,39,40) THEN cast('f.[18-24yrs]' as text)
ELSE cast('g.[25+yrs]' as text) END AS AgeRange,
cc.area_name AS name,
0 as OVCCOUNT from ovc_registration
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.child_cbo_id in ({cbos}) limit 42) a
INNER JOIN (
select count(ovc_registration.person_id) as active,
cc.area_name as name
from ovc_registration
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.is_active = True
and ovc_registration.child_cbo_id in ({cbos})
group by name) b
ON a.name = b.name;'''
# Constituency active counts (reference query below; not registered in QUERIES)
'''
select count(ovc_registration.person_id),
list_geo.area_name as ward,
list_geo.parent_area_id as sc, scc.area_name as sc_name,
scc.parent_area_id as cid, cc.area_name as county
from ovc_registration
left outer join reg_org_unit on child_cbo_id=reg_org_unit.id
left outer join reg_person on person_id=reg_person.id
LEFT OUTER JOIN reg_persons_geo ON reg_persons_geo.person_id=ovc_registration.person_id
left outer join list_geo on list_geo.area_id=reg_persons_geo.area_id
left outer join list_geo as scc on scc.area_id=list_geo.parent_area_id
left outer join list_geo as cc on cc.area_id=scc.parent_area_id
where reg_persons_geo.area_id > 337 and reg_persons_geo.is_void = False
and ovc_registration.is_active = True
group by ovc_registration.child_cbo_id, ward, sc, sc_name, cid, county
'''
# Form 1B summary (empty placeholder)
QUERIES['form1b_summary'] = '''
'''
QUERIES['pivot_report'] = '''
select
cou_geo.area_name as "County",
scou_geo.area_name as "Sub County",
reg_org_unit.org_unit_name as "Organization Unit",
ou_type.item_description as "Unit Type",
c_cat.item_description as "Category",
TO_CHAR(ovc_case_record.date_case_opened :: DATE, 'dd-Mon-yyyy') as "Case Date",
to_char(date_case_opened, 'YYYY')::INTEGER as "Year",
TO_CHAR(date_case_opened :: DATE, 'MM-Mon') as "Month",
case
when to_char(date_case_opened, 'MM')::INTEGER BETWEEN 1 AND 3 THEN 3
when to_char(date_case_opened, 'MM')::INTEGER BETWEEN 4 AND 6 THEN 4
when to_char(date_case_opened, 'MM')::INTEGER BETWEEN 7 AND 9 THEN 1
else 2 end as "Qtr",
CASE reg_person.sex_id WHEN 'SFEM' THEN 'Female' ELSE 'Male' END AS "Sex",
date_part('year', age(date_case_opened, reg_person.date_of_birth)) AS "Age",
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 6 THEN 'a.[0 - 5 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 6 AND 9 THEN 'b.[6 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 15 THEN 'c.[10 - 15 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 16 AND 18 THEN 'd.[16 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS "Age Set",
CASE
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) < 5 THEN 'a.[0 - 4 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 5 AND 9 THEN 'b.[5 - 9 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 10 AND 14 THEN 'c.[10 - 14 yrs]'
WHEN date_part('year', age(date_case_opened, reg_person.date_of_birth)) BETWEEN 15 AND 18 THEN 'd.[15 - 18 yrs]'
ELSE 'e.[18+ yrs]' END AS "KNBS Age Set",
CASE case_stage WHEN 2 THEN 'Closed' WHEN 1 THEN 'Active' ELSE 'Pending' END AS "Case Status",
case_status as "Case State",
TO_CHAR(ovc_case_record.timestamp_created :: DATE, 'dd-Mon-yyyy') as "System Date",
1 as "OVCCount"
from ovc_case_record
inner join ovc_case_category as ccat on case_id = ccat.case_id_id
inner join ovc_case_geo as cgeo on cgeo.case_id_id = case_id
left outer join reg_person on ovc_case_record.person_id=reg_person.id
left outer join reg_org_unit on reg_org_unit.id=cgeo.report_orgunit_id
left outer join list_geo as scou_geo on scou_geo.area_id=cgeo.report_subcounty_id and scou_geo.area_id > 47
left outer join list_geo as cou_geo on cou_geo.area_id=scou_geo.parent_area_id and cou_geo.area_id < 48
left outer join ovc_case_sub_category cscat on cscat.case_category_id=ccat.case_category_id
left outer join list_general c_cat on c_cat.item_id=ccat.case_category and c_cat.field_name = 'case_category_id'
left outer join list_general ou_type on ou_type.item_id=reg_org_unit.org_unit_type_id
where date_case_opened between '{start_date}' and '{end_date}' {extras}
'''
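# The "Qtr" CASE in pivot_report maps calendar months onto a fiscal year that
# starts in July (Jul-Sep -> 1, Oct-Dec -> 2, Jan-Mar -> 3, Apr-Jun -> 4).
# A hypothetical Python helper mirroring the same mapping:

```python
def fiscal_quarter(month):
    # Mirrors the to_char(date_case_opened, 'MM') CASE above.
    if 1 <= month <= 3:
        return 3
    if 4 <= month <= 6:
        return 4
    if 7 <= month <= 9:
        return 1
    return 2
```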
# ---------------------------------------------------------------------------
# File: pm/models/situ_mobility_standing.py
# Repo: aosojnik/pipeline-manager-features-models (MIT license)
# ---------------------------------------------------------------------------
] | null | null | null | ##################################################################################
##########--this is an autogenerated python model definition for proDEX--#########
##--original file: Mobility_general_v02.dxi --##
##################################################################################
from .lib.proDEX import *
situ_mobility_standing = Node()
feat_mobility_standing_episode_length = Atrib()
feat_mobility_standing_daily_duration = Atrib()
feat_mobility_standing_mode = Atrib()
situ_mobility_standing.setName('situ_mobility_standing')
feat_mobility_standing_episode_length.setName('feat_mobility_standing_episode_length')
feat_mobility_standing_daily_duration.setName('feat_mobility_standing_daily_duration')
feat_mobility_standing_mode.setName('feat_mobility_standing_mode')
situ_mobility_standing.setValues(['very_low', 'low', 'medium', 'high', 'very_high'])
feat_mobility_standing_episode_length.setValues(['short', 'medium', 'long'])
feat_mobility_standing_daily_duration.setValues(['low', 'medium', 'high'])
feat_mobility_standing_mode.setValues(['with_aids', 'no_aids'])
situ_mobility_standing.addChild(feat_mobility_standing_episode_length)
feat_mobility_standing_episode_length.setParent(situ_mobility_standing)
situ_mobility_standing.addChild(feat_mobility_standing_daily_duration)
feat_mobility_standing_daily_duration.setParent(situ_mobility_standing)
situ_mobility_standing.addChild(feat_mobility_standing_mode)
feat_mobility_standing_mode.setParent(situ_mobility_standing)
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'short', feat_mobility_standing_daily_duration:'low', feat_mobility_standing_mode:'with_aids'}, 'very_low'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'short', feat_mobility_standing_daily_duration:'low', feat_mobility_standing_mode:'no_aids'}, 'low'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'short', feat_mobility_standing_daily_duration:'medium', feat_mobility_standing_mode:'with_aids'}, 'very_low'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'short', feat_mobility_standing_daily_duration:'medium', feat_mobility_standing_mode:'no_aids'}, 'low'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'short', feat_mobility_standing_daily_duration:'high', feat_mobility_standing_mode:'with_aids'}, 'low'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'short', feat_mobility_standing_daily_duration:'high', feat_mobility_standing_mode:'no_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'medium', feat_mobility_standing_daily_duration:'low', feat_mobility_standing_mode:'with_aids'}, 'low'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'medium', feat_mobility_standing_daily_duration:'low', feat_mobility_standing_mode:'no_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'medium', feat_mobility_standing_daily_duration:'medium', feat_mobility_standing_mode:'with_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'medium', feat_mobility_standing_daily_duration:'medium', feat_mobility_standing_mode:'no_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'medium', feat_mobility_standing_daily_duration:'high', feat_mobility_standing_mode:'with_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'medium', feat_mobility_standing_daily_duration:'high', feat_mobility_standing_mode:'no_aids'}, 'high'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'long', feat_mobility_standing_daily_duration:'low', feat_mobility_standing_mode:'with_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'long', feat_mobility_standing_daily_duration:'low', feat_mobility_standing_mode:'no_aids'}, 'high'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'long', feat_mobility_standing_daily_duration:'medium', feat_mobility_standing_mode:'with_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'long', feat_mobility_standing_daily_duration:'medium', feat_mobility_standing_mode:'no_aids'}, 'high'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'long', feat_mobility_standing_daily_duration:'high', feat_mobility_standing_mode:'with_aids'}, 'medium'])
situ_mobility_standing.addFunctionRow([{feat_mobility_standing_episode_length:'long', feat_mobility_standing_daily_duration:'high', feat_mobility_standing_mode:'no_aids'}, 'very_high'])
# sgd/SVM.py | repo: vibhatha/PSGDSVMPY | commit: 69ed88f5db8d9a250ee944f44b88e54351f8696f | license: Apache-2.0
import sys
import os
HOME = os.environ['HOME']
sys.path.insert(1, HOME + '/github/StreamingSVM')
import numpy as np
from operations import LoadLibsvm
from operations import Print
from kernel import Kernel
import scipy as sc
from scipy.spatial.distance import pdist, squareform
import time
class SVM:
def __init__(self, trainPath=None, testPath=None, X=None, y=None, alpha=0.01, C=1, gamma=1, degree=2, n_features=22, eta=0.1, ld=0.01, epochs=100, randomize=False, bulk=False, split=False):
self.X = X
self.y = y
self.alpha = alpha
self.trainPath = trainPath
self.testPath = testPath
self.n_features = n_features
self.eta = eta
self.ld = ld
self.C = C
self.gamma = gamma
self.degree = degree
self.epochs = epochs
self.randomize = randomize
self.bulk = bulk
self.split = split
def init_weights(self):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0, 1, self.n_features)
return self.w
def init_indices(self, X):
m = len(X)
indices = np.random.choice(m, m, replace=False)
return indices
def split_data(self, X, y, percentage=60):
size = len(y)
percentage = int(size * percentage / 100)
X_train, X_test = X[:percentage,:], X[percentage:,:]
y_train, y_test = y[:percentage], y[percentage:]
return X_train, y_train, X_test, y_test
def train(self, X, y, alpha=0.01, epochs=1000, w_init=[]):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[index] * y[index]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
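For reference, the per-sample update in `train` above can be sketched in isolation. This is a minimal standalone sketch of a hinge-loss subgradient step; `hinge_sgd_step` and its parameter names are illustrative, not part of this module:

```python
import numpy as np

def hinge_sgd_step(w, x, y, alpha, reg):
    # Subgradient step for max(0, 1 - y * w.x) plus an L2 term:
    # a margin violation pulls w toward y*x; both cases shrink w.
    if y * np.dot(x, w) < 1:
        return w + alpha * (x * y - reg * w)
    return w - alpha * reg * w

# One step from w = 0 on a single violating sample.
w = hinge_sgd_step(np.zeros(2), np.array([1.0, 2.0]), y=1.0, alpha=0.1, reg=0.01)
```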
def train_sgd(self, X, y, alpha=0.01, epochs=1000, w_init=[]):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w - alpha * (-1*(X[i] * y[i]) + (self.w))
else:
self.w = self.w - alpha * (self.w)
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w - alpha * (-1 * (X[index] * y[index]) + (self.w))
else:
self.w = self.w - alpha * (self.w)
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def train_sgd_online(self, X, y, alpha=0.01, epochs=1000, w_init=[]):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
for epoch in range(1, 2):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w - alpha * (-1*(X[i] * y[i]) + (self.w))
else:
self.w = self.w - alpha * (self.w)
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w - alpha * (-1 * (X[index] * y[index]) + (self.w))
else:
self.w = self.w - alpha * (self.w)
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def train_sgd_online_init_weight(self, X, y, alpha=0.01, epochs=1000, w_init=[]):
self.w = np.zeros(self.n_features)
self.w = w_init
for epoch in range(1, 2):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w - alpha * (-1*(X[i] * y[i]) + (self.w))
else:
self.w = self.w - alpha * (self.w)
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w - alpha * (-1 * (X[index] * y[index]) + (self.w))
else:
self.w = self.w - alpha * (self.w)
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def train_with_simple_kernel(self, X, y, alpha=0.01, kernel='linear', epochs=10000):
m = len(X)
B = np.zeros((epochs,m),dtype=float)
A = np.zeros((epochs, m), dtype=float)
indices = np.random.choice(m, m, replace=False)
for epoch in range(0,epochs-1):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
A[epoch] = (1.0 / alpha * epoch) * B[epoch]
for i in indices:
for j in indices:
if (j!=i):
B[epoch+1][j] = B[epoch][j]
kernel_value = 0
if kernel=='rbf':
kernel_value = 0.5 * (X[i]- X[j]).T * (X[i]- X[j]) * (1.0/self.gamma**2)
kernel_value = np.exp(-1.0 * kernel_value)
condition = np.sum(A[epoch][j] * kernel_value, axis=0)
if (condition < 1):
B[epoch+1][i] = B[epoch][i] + y[i]
else:
B[epoch+1][i] = B[epoch][i]
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs))
A_bar = (1.0/float(epochs)) * np.sum(A, axis=0)
A_bar = np.reshape(A_bar, (1, len(A_bar)))
if kernel == 'rbf':
kv = 0.5 * X * X * (1.0 / self.gamma ** 2)
kv = np.exp(-1.0 * kv)
a = A_bar.T * kv
self.w = np.sum(a, axis=0)
def train_with_kernel(self, Xorg, X, y, alpha=0.01, epochs=1000, kernel='linear'):
m = len(X) # number of samples
a = np.zeros((1,m), dtype=float)
indices = np.random.choice(m, m, replace=False)
t = 1
for epoch in range(1, epochs-1):
if (epoch % 10 == 0):
Print.Print.overwrite("Epoch " + str(epoch) + "/" + str(epochs))
for index in indices:
yi = y[index]
Xi = X[index]
y1 = yi * a[0]
Xi = np.reshape(Xi, (1,len(Xi)))
aXi = np.dot(y1, Xi.T)
print(aXi.T)
condition = (1.0/(alpha*epoch)) * aXi
a[0] = a[0] * (1.0 - (1.0/float(t)))
if (condition < 1):
a[0][index] = a[0][index] + (float(yi) / float(alpha * t))
#else:
# a[index] = a[index]
t = t + 1
a_nonzero_indices = np.where(a[0]!=0)
aT = [a[0][index] for index in a_nonzero_indices]
SV = [Xorg[index] for index in a_nonzero_indices]
self.a = np.array(aT)[0].reshape(len(aT[0]),1)
self.SV = np.array(SV)[0]
def custom_minibatch_kernel_sgd_train(self, X, y, alpha=0.01, kernel='linear', gamma=2, degree=2, epochs=100,
batch_size=10000):
m = len(X) # number of samples
a = np.zeros((1, m), dtype=float)
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
                num_minibatches = num_range // batch_size  # np.array_split requires an integer section count under Python 3
minibatch_indices = np.array_split(indices, num_minibatches)
t = 1
for minibatch_inds in minibatch_indices:
X_sub = np.take(X, minibatch_inds, axis=0)
y_sub = np.take(y, minibatch_inds, axis=0)
X_sub_k = Kernel.Kernel.generate_kerenelized_feature_matrix(X=X_sub, kernel=kernel, degree=degree,
gamma=gamma)
a_sub = np.take(a[0], minibatch_inds, axis=0)
for i,x in enumerate(X_sub_k):
yi = y_sub[i]
Xi = X_sub_k[i]
y1 = yi * a_sub
Xi = np.reshape(Xi, (1, len(Xi)))
aXi = np.dot(y1, Xi.T)
condition = (1.0 / (alpha * epoch)) * aXi
a_sub = a_sub * (1.0 - (1.0 / float(t)))
if (condition < 1):
a_sub[i] = a_sub[i] + (float(yi) / float(alpha * t))
# else:
# a[index] = a[index]
t = t + 1
sub_itr = 0
for index in minibatch_inds:
a[0][index] = a_sub[sub_itr]
sub_itr = sub_itr + 1
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs))
a_nonzero_indices = np.where(a[0] != 0)
aT = [a[0][index] for index in a_nonzero_indices]
SV = [X[index] for index in a_nonzero_indices]
self.a = np.array(aT)[0].reshape(len(aT[0]), 1)
self.SV = np.array(SV)[0]
def auto_sgd_train(self, X, y, epochs=1000):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[index] * y[index]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def custom_minibatch_sgd_train(self,X, y, alpha=0.01, epochs=100, batch_size=10000):
self.w = np.random.uniform(0, 1, self.n_features)
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
                num_minibatches = num_range // batch_size  # np.array_split requires an integer section count under Python 3
minibatch_indices = np.array_split(indices, num_minibatches)
for minibatch_inds in minibatch_indices:
for index in minibatch_inds:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[index] * y[index]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
cost = abs(0.5 * np.dot(self.w, self.w.T) + self.C * condition)
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
    # NOTE: this variant is flawed: the gradient should include only the minibatch samples whose margin y * w.x is below 1
def minibatch_sgd_train_version1(self,X, y, alpha=0.01, epochs=100, batch_size=10000):
self.w = np.zeros((X.shape[1],1))
print(self.w.shape)
v = np.zeros(self.w.shape)
beta1 = 0.5
beta2 = 0.5
epsilon = 0.00000001
r = np.zeros(self.w.shape)
num_range = len(X)
num_minibatches = int(num_range / batch_size)
real_total_data = num_minibatches * batch_size
indices = np.random.choice(real_total_data, real_total_data, replace=False)
minibatch_indices = np.array_split(indices, num_minibatches)
print(minibatch_indices[0].shape, len(minibatch_indices))
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == True):
for minibatch_inds in minibatch_indices:
Xb = np.take(X, minibatch_inds,axis=0)
yb = np.take(y, minibatch_inds,axis=0)
Xw = Xb.dot(self.w)
#print(minibatch_inds.shape, Xb.shape, yb.shape, self.w.shape, Xw.shape, yb.shape)
condition1 = (Xw.T * yb)/float(batch_size)
condition = np.sum(condition1)
# print("<1",condition)
Xy = np.matmul(Xb.T, yb)
Xy = np.reshape(Xy, (X.shape[1], 1))
Xy_sum = np.sum(Xy,axis=1)/batch_size
print(Xy_sum)
gradient = Xy_sum
gradient = np.reshape(gradient, (gradient.shape[0],1))
v = beta1 * v + (1 - beta1) * gradient
v_hat = v / (1 - beta1 ** epoch)
r = beta2 * r + (1 - beta2) * (np.multiply(gradient, gradient))
r_hat = r / (1 - beta2 ** epoch)
self.w = self.w - alpha * np.multiply((v_hat), 1.0 / (np.sqrt(r_hat) + epsilon))
#print(Xy_sum.shape, self.w.shape)
#self.w = self.w + alpha * (Xy_sum + (-2 * (1 / epoch) * self.w))
cost = abs(0.5 * np.dot(self.w.T, self.w) + self.C * condition)/batch_size
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
def custom_minibatch_sgd_train_without_coeff(self,X, y, alpha=0.01, epochs=100, batch_size=10000):
self.w = np.random.uniform(0, 1, self.n_features)
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
                num_minibatches = num_range // batch_size  # np.array_split requires an integer section count under Python 3
minibatch_indices = np.array_split(indices, num_minibatches)
for minibatch_inds in minibatch_indices:
for index in minibatch_inds:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[index] * y[index]) + (self.w))
else:
self.w = self.w + alpha * (self.w*1)
cost = abs(0.5 * np.dot(self.w, self.w.T) + self.C * condition)
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
def minibatch_sgd_train(self, X, y, alpha=0.01, epochs=1000, batch_size=10000):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
condition = 1
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
eta = 1.0 / (alpha * epoch)
            # TODO: factor the update loop out of the two index-generation branches to avoid duplication
if (self.randomize == False):
num_range = len(X)
indices = np.arange(0,num_range,1)
                num_minibatches = num_range // batch_size  # np.array_split requires an integer section count under Python 3
minibatch_indices = np.array_split(indices, num_minibatches)
for minibatch_inds in minibatch_indices:
for index in minibatch_inds:
condition = y[index] * np.dot(X[index], self.w)
sum_Xy = []
if (condition < 1):
k1 = X[index] * y[index]
sum_Xy.append(k1)
w1 = (eta / batch_size) * np.sum(np.array(sum_Xy))
w2 = (1 - (eta * alpha)) * self.w
self.w = w1 + w2
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
                num_minibatches = num_range // batch_size  # np.array_split requires an integer section count under Python 3
minibatch_indices = np.array_split(indices, num_minibatches)
for minibatch_inds in minibatch_indices:
for index in minibatch_inds:
condition = y[index] * np.dot(X[index], self.w)
sum_Xy = []
if (condition < 1):
k1 = X[index] * y[index]
sum_Xy.append(k1)
w1 = (eta/batch_size) * np.sum(np.array(sum_Xy))
w2 = (1 - (eta * alpha)) * self.w
self.w = w1 + w2
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 1 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
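A minibatch subgradient should average only the margin-violating samples in the batch (the point raised in the comment elsewhere in this file). A standalone sketch of that vectorized form, with illustrative names not taken from this module:

```python
import numpy as np

def minibatch_hinge_step(w, Xb, yb, eta, ld):
    # Average the hinge subgradient over the margin violators only,
    # then apply the shrink-and-step update used in the loops above.
    margins = yb * (Xb @ w)
    viol = margins < 1
    grad = (Xb[viol] * yb[viol, None]).sum(axis=0) / len(yb)
    return (1 - eta * ld) * w + eta * grad

# Two-sample batch, both violating the margin at w = 0.
w = minibatch_hinge_step(np.zeros(2),
                         np.array([[1.0, 0.0], [0.0, 1.0]]),
                         np.array([1.0, -1.0]),
                         eta=1.0, ld=0.0)
```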
def manual_sgd_train(self, X, y, alpha=0.01, epochs=1000):
self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
for epoch in range(1, epochs):
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w + alpha * ((X[index] * y[index]) + (-2 * (1 / epoch) * self.w))
else:
self.w = self.w + alpha * (-2 * (1 / epoch) * self.w)
cost = 0.5 * np.dot(self.w, self.w.T) + self.C * condition
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def train_momentum(self, X, y, alpha=0.01, epochs=1000):
#self.w = np.zeros(self.n_features)
#self.w = np.random.uniform(0,1, self.n_features)
gamma = 0.9
C = self.C
# momentum portion
v = np.array(np.zeros(self.w.shape))
for epoch in range(1, epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
coefficient = 1
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
#(-2 * (1 / epoch))
if (condition < 1):
# self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
v = gamma * v + (1-gamma) * ((X[i] * y[i]) + ( -2 * (1 / epoch) * self.w))
self.w = self.w - alpha * v
else:
v = gamma * v + (1-gamma) * (coefficient * -2 * (1 / epoch) * self.w)
self.w = self.w - alpha * v
cost = 0.5 * np.dot(self.w, self.w.T) + C * condition
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
v = gamma * v - alpha * (coefficient * self.w - C * X[index] * y[index])
self.w = self.w + v
else:
v = gamma * v - alpha * (coefficient * self.w)
self.w = self.w + v
cost = 0.5 * np.dot(self.w, self.w.T) + C * condition
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def auto_sgd_train_cost(self, X, y, epochs=1000, w_init=[]):
self.w = w_init
cost = 10000000
epoch = 1
condition = 1
while(cost > 0.01 and epoch < epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w - alpha * (self.w - (X[i] * y[i]))
else:
self.w = self.w - alpha * self.w
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w - alpha * ( self.w - self.C * (X[index] * y[index]))
else:
self.w = self.w - alpha * self.w
cost = abs(0.5 * np.dot(self.w, self.w.T) + self.C * condition)
epoch = epoch + 1
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def auto_sgd_train_cost1(self, X, y, epochs=1000, w_init=[]):
#self.w = np.zeros(self.n_features)
self.w = w_init
cost = 10000000
epoch = 1
condition = 1
        coefficient = 1  # (2 * (1 / epoch))
while(cost > 0.01 and epoch < epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = self.w - alpha * (self.w - (X[i] * y[i]))
else:
self.w = self.w - alpha * self.w
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = self.w - alpha * (coefficient * self.w - self.C * (X[index] * y[index]))
else:
self.w = self.w - alpha * coefficient *self.w
cost = abs(0.5 * np.dot(self.w, self.w.T) + self.C * condition)
epoch = epoch + 1
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def auto_sgd_train_cost2(self, X, y, epochs=1000, w_init=[], indices_init=[]):
#self.w = np.zeros(self.n_features)
w = w_init
indices = indices_init
cost = 10000000
epoch = 1
condition = 1
coefficient = (2 * (1 / epoch))
print("Initial Cost : " + str(abs(0.5 * np.dot(w, w.T) + y[indices[0]] * np.dot(X[indices[0]], w))))
while(cost > 0.01 and epoch < epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
for index in indices:
condition = y[index] * np.dot(X[index], w)
if (condition < 1):
w = w - alpha * (coefficient * w - self.C * (X[index] * y[index]))
else:
w = w - alpha * coefficient * w
epoch = epoch + 1
cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
self.w = w
return self.w
def train_momentum_cost(self, X, y, alpha=0.01, epochs=1000, indices_init=[], w_init=[]):
#self.w = np.zeros(self.n_features)
self.w = np.random.uniform(0,1, self.n_features)
gamma = 0.99
C = self.C
# momentum portion
v = np.array(np.zeros(self.w.shape))
epoch = 1
cost = 100000
while (cost > 0.01 and epoch < epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
coefficient = 1
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
#(-2 * (1 / epoch))
if (condition < 1):
# self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
v = gamma * v + (1-gamma) * ((X[i] * y[i]) + ( -2 * (1 / epoch) * self.w))
self.w = self.w - alpha * v
else:
v = gamma * v + (1-gamma) * (coefficient * -2 * (1 / epoch) * self.w)
self.w = self.w - alpha * v
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
v = gamma * v + (1-gamma) * (coefficient * self.w - (self.C * X[index] * y[index]) )
self.w = self.w - alpha * v
else:
v = gamma * v + (1-gamma) * (coefficient * self.w)
self.w = self.w - alpha * v
epoch = epoch + 1
cost = abs(0.5 * np.dot(self.w, self.w.T) + C * condition)
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def train_momentum_cost1(self, X, y, alpha=0.01, epochs=1000, w_init=[]):
#self.w = np.zeros(self.n_features)
#self.w = np.random.uniform(0,1, self.n_features)
self.w = w_init
gamma = 0.99
C = self.C
# momentum portion
v = np.array(np.zeros(self.w.shape))
epoch = 1
cost = 100000
condition = 1
coefficient = (2 * (1 / epoch))
while (cost > 0.01 and epoch < epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
#(-2 * (1 / epoch))
if (condition < 1):
# self.w = self.w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * self.w))
v = gamma * v - ((coefficient * self.w) - (X[i] * y[i]))
self.w = self.w + alpha * v
else:
v = gamma * v - (coefficient * self.w)
self.w = self.w + alpha * v
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
v = gamma * v - (coefficient * self.w - (self.C * X[index] * y[index]))
self.w = self.w + alpha * v
else:
v = gamma * v - (coefficient * self.w)
self.w = self.w + alpha * v
epoch = epoch + 1
cost = abs(0.5 * np.dot(self.w, self.w.T) + C * condition)
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def train_momentum_cost2(self, X, y, alpha=0.01, epochs=1000, w_init=[], indices_init=[], gamma=0.999):
w = w_init
indices = indices_init
gamma = gamma
C = self.C
# momentum portion
v = np.array(np.zeros(self.w.shape))
print("Initial Cost : " + str(abs(0.5 * np.dot(w, w.T) + y[indices[0]] * np.dot(X[indices[0]], w))))
epoch = 1
cost = 100000
condition = 1
coefficient = (2 * (1 / epoch))
while (cost > 0.001 and epoch < epochs):
alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
cost = abs(0.5 * np.dot(w, w.T) + C * condition)
for index in indices:
condition = y[index] * np.dot(X[index], w)
if (condition < 1):
v = gamma * v - (1)*(coefficient * w - (self.C * X[index] * y[index]))
w = w + alpha * v
else:
v = gamma * v - (1)*(coefficient * w)
w = w + alpha * v
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
epoch = epoch + 1
self.w = w
return self.w
def train_non_momentum(self, X, y, alpha=0.01, epochs=1000):
#self.w = np.zeros(self.n_features)
#self.w = np.random.uniform(0,1, self.n_features)
gamma = self.gamma
C = self.C
# momentum portion
v = np.array(np.zeros(self.w.shape))
for epoch in range(1, epochs):
# alpha = 1.0 / (1.0 + float(epoch)) # alpha update rule
coefficient = 1
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
#(-2 * (1 / epoch))
if (condition <= 1):
self.w = (1-alpha) * self.w + alpha * C * y[i] * X[i]
else:
self.w = (1-alpha) * self.w
cost = 0.5 * np.dot(self.w, self.w.T) + C * condition
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
if(self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition <= 1):
self.w = (1-alpha) * self.w + alpha * C * y[index] * X[index]
else:
self.w = (1-alpha) * self.w
cost = 0.5 * np.dot(self.w, self.w.T) + C * condition
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
return self.w
def pegasus_train(self,X, y):
#self.w = np.zeros(self.n_features)
for epoch in range(1, self.epochs):
self.eta = 1.0 / (self.ld * epoch)
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = (1 - self.eta * self.ld) * self.w + self.eta * y[i] * X[i]
else:
self.w = (1 - self.eta * self.ld) * self.w
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = (1 - self.eta * self.ld) * self.w + self.eta * y[index] * X[index]
else:
self.w = (1 - self.eta * self.ld) * self.w
cost = abs(0.5 * self.ld * np.dot(self.w, self.w.T) + condition)
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(self.epochs)
+ ", Cost : " + str(cost))
def pegasus_train_cost(self,X, y):
#self.w = np.zeros(self.n_features)
epoch = 1
cost = 100000
while(cost > 0.01 and epoch < self.epochs):
self.eta = 1.0 / (self.ld * epoch)
if (self.randomize == False):
for i, x in enumerate(X):
condition = y[i] * np.dot(X[i], self.w)
if (condition < 1):
self.w = (1 - self.eta * self.ld) * self.w + self.eta * y[i] * X[i]
else:
self.w = (1 - self.eta * self.ld) * self.w
if (self.randomize == True):
num_range = len(X)
indices = np.random.choice(num_range, num_range, replace=False)
for index in indices:
condition = y[index] * np.dot(X[index], self.w)
if (condition < 1):
self.w = (1 - self.eta * self.ld) * self.w + self.eta * y[index] * X[index]
else:
self.w = (1 - self.eta * self.ld) * self.w
cost = abs(0.5 * self.ld * np.dot(self.w, self.w.T) + condition)
epoch = epoch + 1
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(self.epochs)
+ ", Cost : " + str(cost))
return self.w
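The schedule used by `pegasus_train` and `pegasus_train_cost` (eta_t = 1/(ld * t), shrink every step, add eta * y_i * x_i on margin violations) condenses to the sketch below; `pegasos_epoch` is an illustrative name, not part of this module:

```python
import numpy as np

def pegasos_epoch(w, X, y, ld, t):
    # eta_t = 1 / (ld * t); every sample shrinks w by (1 - eta*ld),
    # and margin violators additionally add eta * y_i * x_i.
    eta = 1.0 / (ld * t)
    for xi, yi in zip(X, y):
        if yi * np.dot(xi, w) < 1:
            w = (1 - eta * ld) * w + eta * yi * xi
        else:
            w = (1 - eta * ld) * w
    return w

# One pass over a two-sample toy set, starting from w = 0.
w = pegasos_epoch(np.zeros(2),
                  np.array([[1.0, 0.0], [-1.0, 0.0]]),
                  np.array([1.0, -1.0]), ld=1.0, t=2)
```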
def train1(self, X, y, alpha=0.01, epochs=1000):
self.w = np.zeros(self.n_features)
m = X.shape[0]
ld = 1.0
C = 1/(ld*m)
t = 0
alpha = 1 / (t + 1)
def kernel_predict(self, X, kernel):
print("X shape ", X.shape)
print("A shape ", self.a.shape)
print("SV shape ", self.SV.shape)
SV = self.SV
a = self.a
a = a[0]
y_pred = []
#print(a)
print("X[0].shape ", X[0].shape)
print("SV[0].shape ", SV[0].shape)
itr = 0
for sample in X:
itr = itr + 1
accumulator = 0
prediction_label = 1
for index in range(len(a)):
kernel_value = 0
if (kernel == 'linear'):
kernel_value = np.inner(SV[index], sample)
if (kernel == 'rbf'):
pairwise_dists = squareform(pdist((np.array(SV[index]) - np.array([sample])), 'euclidean'))
                    K = np.exp(-pairwise_dists ** 2 / self.gamma ** 2)  # scipy's top-level exp is removed; use numpy
kernel_value = K
if (kernel == 'poly'):
kernel_value = np.inner(SV[index], sample) ** self.degree
accumulator += a[index] * kernel_value
print(accumulator)
if accumulator < 0:
prediction_label = -1
else:
prediction_label = 1
y_pred.append(prediction_label)
# for i, x in enumerate(X):
# y_pred.append(np.sign(np.dot(X[i], self.w)))
print(y_pred)
return y_pred
def predict(self, X):
y_pred = []
for i, x in enumerate(X):
y_pred.append(np.sign(np.dot(X[i], self.w)))
return y_pred
def custom_predict(self,X, w):
y_pred = []
for i, x in enumerate(X):
y_pred.append(np.sign(np.dot(X[i], w)))
return y_pred
def get_accuracy(self, y_test, y_pred):
acc_count = 0
for i, y in enumerate(y_pred):
if (y_test[i] == (y_pred[i])):
acc_count += 1
return (float(acc_count) / float(len(y_pred)) * 100)
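Taken together, `predict` and `get_accuracy` implement sign-of-score labeling and percentage accuracy. A minimal self-contained sketch of the same logic, independent of this class (function names are illustrative):

```python
import numpy as np

def predict_signs(X, w):
    # Label each row by the sign of its score w.x, as predict() does.
    return [np.sign(np.dot(x, w)) for x in X]

def accuracy(y_true, y_pred):
    # Fraction of matching labels, as a percentage, as get_accuracy() does.
    hits = sum(1 for yt, yp in zip(y_true, y_pred) if yt == yp)
    return 100.0 * hits / len(y_pred)

w = np.array([1.0, -1.0])
X = np.array([[2.0, 1.0], [0.0, 3.0]])
y_pred = predict_signs(X, w)
acc = accuracy([1, -1], y_pred)
```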
##################### ADA SVM TRAIN #############################
def ada_sgd_train(self, X, y, alpha=0.01, epochs=1000, w_init=[]):
w = w_init
epsilon = 0.00000001
r = np.zeros(w.shape)
num_range = len(X)
epoch = 1
cost = 100000
indices = np.random.choice(num_range, num_range, replace=False)
while(cost > 0.01 and epoch < epochs):
gradient = 0
coefficient = 1
for index in indices:
condition = y[index] * np.dot(X[index], w)
if (condition < 1):
gradient = alpha * (-(X[index] * y[index]) + (coefficient * w))
else:
gradient = alpha * (coefficient * w)
                r = r + np.multiply(gradient, gradient)  # accumulate squared gradients (AdaGrad)
                d1 = np.multiply(gradient, 1.0 / (np.sqrt(r) + epsilon))
                w = w - (alpha * d1)
cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
epoch = epoch + 1
if (epoch % 10 == 0):
Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
+ ", Cost : " + str(cost))
self.w = w
return w
    ###################### RMS PROP ############################################
    def rms_prop_sgd_train(self, X, y, alpha=0.01, epochs=1000, w_init=[], beta=0.90):
        w = w_init
        epsilon = 0.00000001
        r = np.zeros(w.shape)
        num_range = len(X)
        cost = 100000000
        epoch = 1
        indices = np.random.choice(num_range, num_range, replace=False)
        while cost > 0.01 and epoch < epochs:
            gradient = 0
            alpha = 1.0 / (1.0 + float(epoch))
            coefficient = 1  # (-1 / (float(epoch) + epsilon))
            for index in indices:
                condition = y[index] * np.dot(X[index], w)
                if condition < 1:
                    gradient = alpha * (-(X[index] * y[index]) + (coefficient * w))
                else:
                    gradient = alpha * (coefficient * w)
                r = (beta ** epoch) * r + (1 - (beta ** epoch)) * np.multiply(gradient, gradient)
                # Keep r as the running second-moment accumulator; take the square
                # root only in the denominator instead of overwriting r with it.
                d1 = np.multiply(gradient, 1.0 / (np.sqrt(r) + epsilon))
                w = w - alpha * d1
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            epoch = epoch + 1
            if epoch % 10 == 0:
                Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                                    + ", Cost : " + str(cost))
        self.w = w
        return w
    ###################### ADAM ############################################
    def adam_sgd_train(self, X, y, X_testing, y_testing, alpha=0.01, epochs=1000, w_init=[], beta1=0.93, beta2=0.999):
        self.x_testing = X_testing
        self.y_testing = y_testing
        w = w_init
        v = np.zeros(w.shape)
        epsilon = 0.00000001
        r = np.zeros(w.shape)
        num_range = len(X)
        cost = 100000000
        epoch = 1
        indices = np.random.choice(num_range, num_range, replace=False)
        while cost > 0.001 and epoch < epochs:
            gradient = 0
            alpha = 1.0 / (1.0 + float(epoch))
            coefficient = 1.0 / float(epoch)
            for index in indices:
                condition = y[index] * np.dot(X[index], w)
                if condition < 1:
                    gradient = alpha * (-(X[index] * y[index]) + (coefficient * w))
                else:
                    gradient = alpha * (coefficient * w)
                v = beta1 * v + (1 - beta1) * gradient
                v_hat = v / (1 - beta1 ** epoch)
                r = beta2 * r + (1 - beta2) * np.multiply(gradient, gradient)
                r_hat = r / (1 - beta2 ** epoch)
                w = w - alpha * np.multiply(v_hat, 1.0 / (np.sqrt(r_hat) + epsilon))
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            epoch = epoch + 1
            if epoch % 10 == 0:
                online_acc = self.online_accuracy(w)
                Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                                    + ", Cost : " + str(cost) + ", Test Accuracy : " + str(online_acc) + " %")
        self.w = w
        return w
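    # The bias-corrected v / v_hat / r / r_hat bookkeeping used in adam_sgd_train
    # can be exercised in isolation. The sketch below runs the same Adam update on
    # a toy quadratic f(w) = 0.5 * ||w||^2 (so the gradient is just w); the
    # objective, step count, and variable names are illustrative only:
    #
    # ```python
    # import numpy as np
    #
    # alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
    # w = np.array([1.0, -2.0])
    # v = np.zeros_like(w)
    # r = np.zeros_like(w)
    # for t in range(1, 201):
    #     g = w                                # gradient of the toy objective
    #     v = beta1 * v + (1 - beta1) * g      # first-moment estimate
    #     r = beta2 * r + (1 - beta2) * g * g  # second-moment estimate
    #     v_hat = v / (1 - beta1 ** t)         # bias correction
    #     r_hat = r / (1 - beta2 ** t)
    #     w = w - alpha * v_hat / (np.sqrt(r_hat) + eps)
    # # ||w|| should have shrunk well below its starting value of sqrt(5)
    # print(np.linalg.norm(w))
    # ```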
    ###### Online Accuracy Testing ########
    def online_accuracy(self, wt):
        acc = 0
        # The three data sources are mutually exclusive, so chain them with elif;
        # otherwise a bulk run would fall through to the else branch and have its
        # accuracy overwritten.
        if self.bulk:
            testing_filepath = self.testing_file
            print("Loading Bulk Testing Files")
            files = os.listdir(testing_filepath)
            print("File Path : " + testing_filepath)
            print(files)
            for file in files:
                print("Loading Testing Bulk File : " + file)
                testing_loader = LoadLibsvm.LoadLibSVM(filename=testing_filepath + "/" + file,
                                                       n_features=self.n_features)
                x_testing, y_testing = testing_loader.load_all_data()
                y_pred = self.custom_predict(x_testing, w=wt)
                acc += self.get_accuracy(y_test=y_testing, y_pred=y_pred)
            acc = acc / len(files)
        elif self.split:
            # Print.Print.result2("Data Splitting ...")
            # training_loader = LoadLibsvm.LoadLibSVM(filename=self.trainPath, n_features=self.n_features)
            # x_all, y_all = training_loader.load_all_data()
            # ratio = 0.9
            # world_size = len(x_all)
            # split_index = int(world_size * ratio)
            # self.x_training = x_all[:split_index]
            # self.x_testing = x_all[split_index:]
            # self.y_training = y_all[:split_index]
            # self.y_testing = y_all[split_index:]
            y_pred = self.custom_predict(X=self.x_testing, w=wt)
            acc = self.get_accuracy(y_test=self.y_testing, y_pred=y_pred)
        else:
            testing_loader = LoadLibsvm.LoadLibSVM(filename=self.testPath, n_features=self.n_features)
            self.x_testing, self.y_testing = testing_loader.load_all_data()
            y_pred = self.custom_predict(X=self.x_testing, w=wt)
            acc = self.get_accuracy(y_test=self.y_testing, y_pred=y_pred)
        return acc
    def online_accuracy_light(self, wt, x_testing, y_testing):
        y_pred = self.custom_predict(X=x_testing, w=wt)
        acc = self.get_accuracy(y_test=y_testing, y_pred=y_pred)
        return acc

    ##########################################
    ###### OVERALL BENCHMARK SUITE ###########
    ##########################################
    ###### SGD Auto and Manual ############
    def train_sgd_light(self, X, y, X_test, y_test, alpha=0.01, epochs=1000, w_init=[], log_file='', log_frequency=1, tolerance=0.01, indices_init=[]):
        # self.w = np.zeros(self.n_features)
        # self.w = np.random.uniform(0, 1, self.n_features)
        w = w_init
        epoch = 1
        cost = 1000000
        io_time = 0
        initial_cost = 0
        while cost > tolerance and epoch < epochs:
            alpha = 1.0 / (1.0 + float(epoch))  # alpha update rule
            coefficient = 1.0 / (1.0 + float(epoch))
            if not self.randomize:
                for i, x in enumerate(X):
                    condition = y[i] * np.dot(X[i], w)
                    if condition < 1:
                        w = w - alpha * (-(X[i] * y[i]) + (coefficient * w))
                    else:
                        w = w - alpha * (coefficient * w)
            else:
                indices = indices_init  # np.random.choice(n, n, replace=False)
                for index in indices:
                    condition = y[index] * np.dot(X[index], w)
                    if condition < 1:
                        w = w - alpha * (-1 * (X[index] * y[index]) + (coefficient * w))
                    else:
                        w = w - alpha * (coefficient * w)
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                                    + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        return w, epoch, cost, io_time, initial_cost
    def train_sgd_light_no_coff(self, X, y, X_test, y_test, alpha=0.01, epochs=1000, w_init=[], log_file='', log_frequency=1, tolerance=0.01, indices_init=[]):
        # self.w = np.zeros(self.n_features)
        # self.w = np.random.uniform(0, 1, self.n_features)
        w = w_init
        epoch = 1
        cost = 1000000
        io_time = 0
        initial_cost = 0
        while cost > tolerance and epoch < epochs:
            coefficient = 1.0 / (1.0 + float(epoch))
            if not self.randomize:
                for i, x in enumerate(X):
                    condition = y[i] * np.dot(X[i], w)
                    if condition < 1:
                        w = w - alpha * (-(X[i] * y[i]) + (coefficient * w))
                    else:
                        w = w - alpha * (coefficient * w)
            else:
                indices = indices_init  # np.random.choice(n, n, replace=False)
                for index in indices:
                    condition = y[index] * np.dot(X[index], w)
                    if condition < 1:
                        w = w - alpha * (-1 * (X[index] * y[index]) + (coefficient * w))
                    else:
                        w = w - alpha * (coefficient * w)
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                # Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                #                     + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        return w, epoch, cost, io_time, initial_cost
    def train_sgd_manual_light(self, X, y, X_test, y_test, alpha=0.01, epochs=1000, w_init=[], log_file='', log_frequency=1, tolerance=0.01, indices_init=[]):
        w = w_init
        epoch = 1
        cost = 1000000
        io_time = 0
        initial_cost = 0
        while cost > tolerance and epoch < epochs:
            if not self.randomize:
                for i, x in enumerate(X):
                    condition = y[i] * np.dot(X[i], w)
                    if condition < 1:
                        w = w + alpha * ((X[i] * y[i]) + (-2 * (1 / epoch) * w))
                    else:
                        w = w + alpha * (-2 * (1 / epoch) * w)
            else:
                indices = indices_init  # np.random.choice(n, n, replace=False)
                for index in indices:
                    condition = y[index] * np.dot(X[index], w)
                    if condition < 1:
                        w = w + alpha * ((X[index] * y[index]) + (-1 * (1 / epoch) * w))
                    else:
                        w = w + alpha * (-1 * (1 / epoch) * w)
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                # Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                #                     + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        return w, epoch, cost, io_time, initial_cost
    def write_epoch_log(self, alpha=0, acc=0, cost=0, epoch=0, log_file=''):
        # fp.write("alpha : " + str(self.alpha) + ", epochs : " + str(self.epochs) + ", accuracy : " + str(self.acc) + "%" + ", time : " + str(self.training_time) + " s\n")
        with open(log_file, "a") as fp:
            fp.write(str(epoch) + "," + str(alpha) + "," + str(cost) + "," + str(acc) + "\n")

    ########## SGD Momentum #############
    def train_sgd_momentum(self, X, y, X_test, y_test, alpha=0.01, epochs=1000, w_init=[], log_file='', log_frequency=1, tolerance=0.01, indices_init=[], gamma=0.98):
        C = self.C
        w = w_init
        # momentum portion
        v = np.zeros(w.shape)
        epoch = 1
        cost = 100000
        io_time = 0
        initial_cost = 0
        indices = indices_init
        while cost > tolerance and epoch < epochs:
            coefficient = 1.0 / (1.0 + float(epoch))
            # indices = np.random.choice(n, n, replace=False)
            for index in indices:
                condition = y[index] * np.dot(X[index], w)
                if condition < 1:
                    v = gamma * v + (1 - gamma) * (coefficient * w - (C * X[index] * y[index]))
                else:
                    v = gamma * v + (1 - gamma) * (coefficient * w)
                w = w - alpha * v
            cost = abs(0.5 * np.dot(w, w.T) + C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                # Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                #                     + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        return w, epoch, cost, io_time, initial_cost
    ##################### Train SGD Ada Light #########################
    def ada_sgd_train_light(self, X, y, X_test, y_test, alpha=0.01, epochs=1000, w_init=[], log_file='', log_frequency=1, tolerance=0.01, indices_init=[]):
        w = w_init
        epsilon = 0.00000001
        r = np.zeros(w.shape)
        epoch = 1
        cost = 100000
        initial_cost = 0
        io_time = 0
        indices = indices_init
        while cost > tolerance and epoch < epochs:
            gradient = 0
            # coefficient = 1.0 / (1.0 + float(epoch))
            coefficient = 1
            for index in indices:
                condition = y[index] * np.dot(X[index], w)
                if condition < 1:
                    gradient = alpha * (-(X[index] * y[index]) + (coefficient * w))
                else:
                    gradient = alpha * (coefficient * w)
                # AdaGrad: epsilon belongs in the denominator, not in the running
                # sum of squared gradients.
                r = r + np.multiply(gradient, gradient)
                d1 = np.multiply(gradient, 1.0 / (np.sqrt(r) + epsilon))
                w = w - (alpha * d1)
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                # Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                #                     + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        self.w = w
        return w, epoch, cost, io_time, initial_cost
    ##################### Train SGD Rmsprop Light #########################
    def train_rmsprop_sgd(self, X, y, X_test, y_test, alpha=0.01, beta=0.90, epochs=1000, w_init=[], log_file='',
                          log_frequency=1, tolerance=0.01, indices_init=[]):
        w = w_init
        epsilon = 0.00000001
        r = np.zeros(w.shape)
        cost = 100000000
        epoch = 1
        initial_cost = 0
        io_time = 0
        indices = indices_init
        while cost > 0.0000001 and epoch < epochs:
            gradient = 0
            coefficient = 1.0 / (1.0 + float(epoch))
            for index in indices:
                condition = y[index] * np.dot(X[index], w)
                if condition < 1:
                    gradient = alpha * (-(X[index] * y[index]) + (coefficient * w))
                else:
                    gradient = alpha * (coefficient * w)
                r = (beta ** epoch) * r + (1 - (beta ** epoch)) * np.multiply(gradient, gradient)
                # Keep r as the running accumulator; apply the square root and
                # epsilon only when forming the update, instead of overwriting r.
                d1 = np.multiply(gradient, 1.0 / (np.sqrt(r) + epsilon))
                w = w - alpha * d1
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                # Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                #                     + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        self.w = w
        return w, epoch, cost, io_time, initial_cost
    ##################### Train SGD Adam Light #########################
    def adam_sgd_train_light(self, X, y, X_test, y_test, alpha=0.01, beta1=0.93, beta2=0.999, epochs=1000, w_init=[], log_file='', log_frequency=1, tolerance=0.01, indices_init=[]):
        w = w_init
        v = np.zeros(w.shape)
        epsilon = 0.00000001
        r = np.zeros(w.shape)
        cost = 100000000
        epoch = 1
        initial_cost = 0
        io_time = 0
        indices = indices_init
        while cost > 0.0000001 and epoch < epochs:
            gradient = 0
            coefficient = 1  # ((1 / float(epoch)))
            for index in indices:
                condition = y[index] * np.dot(X[index], w)
                if condition < 1:
                    gradient = alpha * (-(X[index] * y[index]) + (coefficient * w))
                else:
                    gradient = alpha * (coefficient * w)
                v = beta1 * v + (1 - beta1) * gradient
                v_hat = v / (1 - beta1 ** epoch)
                r = beta2 * r + (1 - beta2) * np.multiply(gradient, gradient)
                r_hat = r / (1 - beta2 ** epoch)
                w = w - alpha * np.multiply(v_hat, 1.0 / (np.sqrt(r_hat) + epsilon))
                # print(w)
            cost = abs(0.5 * np.dot(w, w.T) + self.C * condition)
            if epoch == 1:
                initial_cost = cost
            epoch = epoch + 1
            if epoch % log_frequency == 0:
                start_io_time = 0
                start_io_time -= time.time()
                acc = self.online_accuracy_light(wt=w, x_testing=X_test, y_testing=y_test)
                self.write_epoch_log(alpha=alpha, epoch=epoch, cost=cost, acc=acc, log_file=log_file)
                Print.Print.result1("Epoch " + str(epoch) + "/" + str(epochs)
                                    + ", Cost : " + str(cost) + ", Test Accuracy : " + str(acc) + " %")
                start_io_time += time.time()
                io_time += start_io_time
        self.w = w
        return w, epoch, cost, io_time, initial_cost

# ==== 1-flask-hello_world/aplicacao/visualizacoes.py (repo: miguel7penteado/python-flask, Apache-2.0) ====

from aplicacao import variavel_aplicacao

@variavel_aplicacao.route('/')
@variavel_aplicacao.route('/indice')
def indice():
    return "Ola Mundo!"

# ==== experiments/hcp/architectures.py (repo: cassianobecker/dnn, MIT) ====

import torch
import torch.nn as nn
import torch.nn.functional as func

from nn.DtiConv3d import DwiConv3dUnitKernel
from util.architecture import Dimensions


class DnnHcp(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcp, self).__init__()
        img_channels = img_dims[0]
        kernel_dims = [4, 4, 4]
        strides = [1, 1, 1]
        c_out1 = 2 * 4 * 10
        c_out2 = 3 * 4 * 10
        pool_size1 = 5
        pool_size = 2
        self.conv1 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv2 = nn.Conv3d(c_out1, c_out2, kernel_dims, strides)
        self.conv3 = nn.Conv3d(c_out2, c_out2, kernel_dims, strides)
        self.max1 = nn.MaxPool3d(pool_size1)
        self.max2 = nn.MaxPool3d(pool_size)
        self.max3 = nn.MaxPool3d(pool_size)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.25)
        self.dropout3 = nn.Dropout2d(0.25)
        linear_size1 = Dimensions().dimensions_for_linear(
            img_dims,
            [
                self.conv1, self.max1,
                self.conv2, self.max2,
                self.conv3, self.max3
            ])
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max2(x)
        x = self.dropout2(x)
        x = self.conv3(x)
        x = func.relu(x)
        x = self.max3(x)
        x = self.dropout3(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
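# The Dimensions().dimensions_for_linear helper above infers the flattened size
# entering fc1. A minimal sketch of the arithmetic such a helper has to perform
# per spatial dimension (floor((n - k) / s) + 1 for an unpadded Conv3d, floor(n / p)
# for MaxPool3d) — the helper names here are illustrative, not the actual
# util.architecture API:
#
# ```python
# def conv3d_out(n, k, s):
#     # Output length of one spatial dim for Conv3d with no padding/dilation.
#     return (n - k) // s + 1
#
# def pool3d_out(n, p):
#     # Output length for MaxPool3d with kernel size = stride = p.
#     return n // p
#
# # Example: one 32-voxel dimension through Conv3d(k=4, s=1) then MaxPool3d(2).
# n = 32
# n = conv3d_out(n, 4, 1)  # 29
# n = pool3d_out(n, 2)     # 14
# print(n)
# ```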
class CnnHcp(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(CnnHcp, self).__init__()
        k = 5
        s = 2
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 128
        c_out2 = 64
        pool_size = 2
        self.conv0 = nn.Conv3d(img_channels, c_out1, kernel_dims1, strides1)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.conv2, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
class DnnHcpUnitKernel1(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernel1, self).__init__()
        k = 5
        s = 2
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 128
        c_out2 = 64
        pool_size = 3
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.conv2, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# larger pool size, smaller stride
class DnnHcpUnitKernel2(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernel2, self).__init__()
        k = 5
        s = 2
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 128
        c_out2 = 64
        pool_size = 4
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.conv2, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# larger kernel, larger pool size
class DnnHcpUnitKernel3(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernel3, self).__init__()
        k = 7
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 128
        c_out2 = 64
        pool_size = 5
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.conv2, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# less dropout
class DnnHcpUnitKernel4(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernel4, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2 * 128
        c_out2 = 2 * 64
        pool_size = 3
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.conv2, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# less dropout
class DnnHcpUnitKernelShallow0(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow0, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 128
        c_out2 = 64
        pool_size = 4
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.2)
        self.dropout2 = nn.Dropout2d(0.2)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# less dropout
class DnnHcpUnitKernelShallow1(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow1, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2 * 128
        c_out2 = 2 * 64
        pool_size = 4
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.2)
        self.dropout2 = nn.Dropout2d(0.2)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# less dropout
class DnnHcpUnitKernelShallow2(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow2, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 128
        c_out2 = 64
        pool_size = 2
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.2)
        self.dropout2 = nn.Dropout2d(0.2)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = func.relu(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
class DnnHcpUnitKernelShallow3(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow3, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2 * 128
        c_out2 = 2 * 64
        pool_size = 2
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.2)
        self.dropout2 = nn.Dropout2d(0.2)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = func.relu(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# big dropout
class DnnHcpUnitKernelShallow4(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow4, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2 * 128
        c_out2 = 2 * 64
        pool_size = 2
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.5)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# no maxpool
class DnnHcpUnitKernelShallow5(nn.Module):

    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow5, self).__init__()
        k = 5
        s = 3
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2 * 128
        c_out2 = 2 * 64
        # pool_size = 1
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.2)
        self.dropout2 = nn.Dropout2d(0.2)
        # self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        # x = func.relu(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        # x = self.max1(x)  # max1 is disabled in this "no maxpool" variant
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
# low stride
class DnnHcpUnitKernelShallow6(nn.Module):
    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelShallow6, self).__init__()
        k = 5
        s = 1
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2*128
        c_out2 = 2*64
        pool_size = 3
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        # self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.3)
        self.dropout2 = nn.Dropout2d(0.3)
        self.max1 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1])
        # linear_size1 = 28224
        linear_size2 = 128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        # x = func.relu(x)
        x = self.conv1(x)
        x = func.relu(x)
        # x = self.conv2(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
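# The Dimensions().dimensions_for_linear helper used above is defined elsewhere
# in this project; assuming it applies the standard valid (no-padding) output-size
# formula, the arithmetic it would perform per spatial axis can be sketched in
# pure Python. The 20-voxel axis below is illustrative, not a real input size,
# and conv0 (a unit-kernel conv) is assumed to leave spatial dimensions unchanged.

```python
def conv_out(size, kernel, stride):
    # Output length along one spatial axis for a valid (no-padding) conv or pool.
    return (size - kernel) // stride + 1

# Illustrative walk for DnnHcpUnitKernelShallow6 on a hypothetical 20-voxel axis:
after_conv1 = conv_out(20, 5, 1)          # conv1: k=5, s=1 -> 16
after_max1 = conv_out(after_conv1, 3, 3)  # MaxPool3d(3), stride defaults to 3 -> 5
print(after_conv1, after_max1)            # 16 5
```

# The flattened linear size is then c_out2 times the product of the three
# per-axis outputs, which is what feeds nn.Linear's first argument.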
class DnnHcpUnitKernelRegression2(nn.Module):
    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelRegression2, self).__init__()
        k = 4
        s = 1
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 64
        c_out2 = 32
        pool_size = 2
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        self.max2 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.max1, self.conv1, self.max1, self.conv2, self.max2])
        # linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1])
        # linear_size1 = 28224
        linear_size2 = 2*128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        # x = func.relu(x)
        x = self.max1(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max2(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
class DnnHcpUnitKernelRegression3(nn.Module):
    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelRegression3, self).__init__()
        k = 4
        s = 1
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 64
        c_out2 = 32
        pool_size = 2
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        self.max2 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1, self.conv2, self.max2])
        # linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1])
        # linear_size1 = 28224
        linear_size2 = 2*128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        # x = func.relu(x)
        # x = self.max1(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max2(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output
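# For DnnHcpUnitKernelRegression3 the per-axis size arithmetic can be chained
# across conv1 (k=4, s=1), MaxPool3d(2), conv2 (k=4, s=1), MaxPool3d(2). The
# 32-voxel axis below is hypothetical, and the unit-kernel conv0 is assumed to
# leave spatial size unchanged; this is a sketch of the bookkeeping, not the
# project's actual Dimensions helper.

```python
def valid_out(size, kernel, stride):
    # Standard valid (no-padding) conv/pool output size along one axis.
    return (size - kernel) // stride + 1

size = 32                       # hypothetical input axis length
size = valid_out(size, 4, 1)    # conv1 -> 29
size = valid_out(size, 2, 2)    # max1, MaxPool3d(2) -> 14
size = valid_out(size, 4, 1)    # conv2 -> 11
size = valid_out(size, 2, 2)    # max2 -> 5
flattened = 32 * size ** 3      # c_out2 = 32 channels times 5*5*5 voxels
print(flattened)                # 4000
```

# With a cubic input this product is what int(linear_size1) would hold when
# constructing self.fc1.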
class DnnHcpUnitKernelRegression4(nn.Module):
    def __init__(self, img_dims, number_of_classes, cholesky_weights=False):
        super(DnnHcpUnitKernelRegression4, self).__init__()
        k = 4
        s = 1
        img_channels = img_dims[0]
        kernel_dims1 = [k, k, k]
        kernel_dims2 = [k, k, k]
        strides1 = [s, s, s]
        strides2 = [s, s, s]
        c_out1 = 2*64
        c_out2 = 2*32
        pool_size = 2
        self.conv0 = DwiConv3dUnitKernel(img_channels, c_out1, cholesky_weights=cholesky_weights)
        self.conv1 = nn.Conv3d(c_out1, c_out2, kernel_dims1, strides1)
        self.conv2 = nn.Conv3d(c_out2, c_out2, kernel_dims2, strides2)
        self.dropout1 = nn.Dropout3d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.max1 = nn.MaxPool3d(pool_size)
        self.max2 = nn.MaxPool3d(pool_size)
        linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1, self.max1, self.conv2, self.max2])
        # linear_size1 = Dimensions().dimensions_for_linear(img_dims, [self.conv0, self.conv1])
        # linear_size1 = 28224
        linear_size2 = 2*128
        self.fc1 = nn.Linear(int(linear_size1), linear_size2)
        self.fc2 = nn.Linear(linear_size2, number_of_classes)

    def forward(self, x):
        x = self.conv0(x)
        # x = func.relu(x)
        # x = self.max1(x)
        x = self.conv1(x)
        x = func.relu(x)
        x = self.max1(x)
        x = self.conv2(x)
        x = func.relu(x)
        x = self.max2(x)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = func.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = func.log_softmax(x, dim=1)
        return output | 27.175345 | 139 | 0.580987 | 3,523 | 25,572 | 4.02214 | 0.030656 | 0.027382 | 0.051658 | 0.03952 | 0.921454 | 0.915737 | 0.909951 | 0.904446 | 0.902329 | 0.89873 | 0 | 0.06733 | 0.30072 | 25,572 | 941 | 140 | 27.175345 | 0.725087 | 0.061708 | 0 | 0.886885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052459 | false | 0 | 0.008197 | 0 | 0.113115 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
1a313918b965ac86982ab93c6537411317e9b272 | 115 | py | Python | main.py | rccreager/Sentiment-19 | 5a167f72f13c154704e863df26e82e6fe20761f5 | [
"MIT"
] | null | null | null | main.py | rccreager/Sentiment-19 | 5a167f72f13c154704e863df26e82e6fe20761f5 | [
"MIT"
] | null | null | null | main.py | rccreager/Sentiment-19 | 5a167f72f13c154704e863df26e82e6fe20761f5 | [
"MIT"
] | 3 | 2020-04-20T23:46:34.000Z | 2020-05-20T17:13:25.000Z | # from test import test
from senti19.senti19.test import test_name
# test.test_name()
# Tests().test_print_name() | 19.166667 | 42 | 0.765217 | 18 | 115 | 4.666667 | 0.388889 | 0.238095 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039604 | 0.121739 | 115 | 6 | 43 | 19.166667 | 0.792079 | 0.556522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1a7d55a99c839f35c858e0fa1b467da256123e8a | 63 | py | Python | bl20.py | rinapyktina/bl | 38546465c8802be184fbd44ae521af54a5ec504f | [
"MIT"
] | null | null | null | bl20.py | rinapyktina/bl | 38546465c8802be184fbd44ae521af54a5ec504f | [
"MIT"
] | null | null | null | bl20.py | rinapyktina/bl | 38546465c8802be184fbd44ae521af54a5ec504f | [
"MIT"
] | null | null | null | x = 1000
y = 3
x = x//y
y = y * x
print("x= ", x, "y= ", y) | 12.6 | 25 | 0.349206 | 15 | 63 | 1.466667 | 0.333333 | 0.272727 | 0.272727 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0.365079 | 63 | 5 | 25 | 12.6 | 0.425 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1a8b114c56dd98cdb443e34f9b5abf19ad8d2c5c | 214 | py | Python | tf_supervised_inference/linear_model.py | AmiiThinks/tf-supervised_inference | 6a4d4d6da7d98f48f024a1949c265093d4f86a0d | [
"MIT"
] | null | null | null | tf_supervised_inference/linear_model.py | AmiiThinks/tf-supervised_inference | 6a4d4d6da7d98f48f024a1949c265093d4f86a0d | [
"MIT"
] | 3 | 2018-06-25T05:02:51.000Z | 2018-06-26T20:37:51.000Z | tf_supervised_inference/linear_model.py | AmiiThinks/tf-supervised_inference | 6a4d4d6da7d98f48f024a1949c265093d4f86a0d | [
"MIT"
] | null | null | null | class LinearModel(object):
    def __init__(self, weights):
        self.weights = [weights]

    def __call__(self, phi):
        return phi @ self.weights[0]

    def predict(self, phi):
        return self(phi)
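# A self-contained usage sketch of the LinearModel wrapper above. The class is
# repeated so the snippet runs on its own, and NumPy arrays are just one example
# of objects supporting the @ (matmul) operator; the feature matrix and weights
# below are illustrative values, not from this repository.

```python
import numpy as np

class LinearModel(object):
    def __init__(self, weights):
        self.weights = [weights]

    def __call__(self, phi):
        return phi @ self.weights[0]

    def predict(self, phi):
        return self(phi)

model = LinearModel(np.array([[2.0], [1.0]]))  # two features -> one output
phi = np.array([[1.0, 3.0]])                   # one example with features (1, 3)
print(model.predict(phi))                      # [[5.]] since 1*2 + 3*1 = 5
```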
| 21.4 | 36 | 0.61215 | 26 | 214 | 4.730769 | 0.461538 | 0.268293 | 0.211382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00641 | 0.271028 | 214 | 9 | 37 | 23.777778 | 0.782051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
2038f2662c533bd321549e4efb02b78a02533ee2 | 2,659 | py | Python | tests/test_states/test_horodecki.py | rurz/toqito | a051ff771678b7573dfda4d583dc60eaa8cb541f | [
"MIT"
] | 76 | 2020-01-28T17:02:01.000Z | 2022-02-14T18:02:15.000Z | tests/test_states/test_horodecki.py | rurz/toqito | a051ff771678b7573dfda4d583dc60eaa8cb541f | [
"MIT"
] | 82 | 2020-05-31T20:09:38.000Z | 2022-03-28T17:13:59.000Z | tests/test_states/test_horodecki.py | rurz/toqito | a051ff771678b7573dfda4d583dc60eaa8cb541f | [
"MIT"
] | 30 | 2020-04-02T16:07:11.000Z | 2022-02-05T13:39:22.000Z | """Test horodecki."""
import numpy as np
from toqito.states import horodecki
def test_horodecki_state_3_3_default():
    """The 3-by-3 Horodecki state (no dimensions specified on input)."""
    expected_res = np.array(
        [
            [0.1000, 0, 0, 0, 0.1000, 0, 0, 0, 0.1000],
            [0, 0.1000, 0, 0, 0, 0, 0, 0, 0],
            [0, 0, 0.1000, 0, 0, 0, 0, 0, 0],
            [0, 0, 0, 0.1000, 0, 0, 0, 0, 0],
            [0.1000, 0, 0, 0, 0.1000, 0, 0, 0, 0.1000],
            [0, 0, 0, 0, 0, 0.1000, 0, 0, 0],
            [0, 0, 0, 0, 0, 0, 0.1500, 0, 0.0866],
            [0, 0, 0, 0, 0, 0, 0, 0.1000, 0],
            [0.1000, 0, 0, 0, 0.1000, 0, 0.0866, 0, 0.1500],
        ]
    )
    res = horodecki(0.5)
    bool_mat = np.isclose(expected_res, res, atol=0.0001)
    np.testing.assert_equal(np.all(bool_mat), True)
def test_horodecki_state_3_3():
    """The 3-by-3 Horodecki state."""
    expected_res = np.array(
        [
            [0.1000, 0, 0, 0, 0.1000, 0, 0, 0, 0.1000],
            [0, 0.1000, 0, 0, 0, 0, 0, 0, 0],
            [0, 0, 0.1000, 0, 0, 0, 0, 0, 0],
            [0, 0, 0, 0.1000, 0, 0, 0, 0, 0],
            [0.1000, 0, 0, 0, 0.1000, 0, 0, 0, 0.1000],
            [0, 0, 0, 0, 0, 0.1000, 0, 0, 0],
            [0, 0, 0, 0, 0, 0, 0.1500, 0, 0.0866],
            [0, 0, 0, 0, 0, 0, 0, 0.1000, 0],
            [0.1000, 0, 0, 0, 0.1000, 0, 0.0866, 0, 0.1500],
        ]
    )
    res = horodecki(0.5, [3, 3])
    bool_mat = np.isclose(expected_res, res, atol=0.0001)
    np.testing.assert_equal(np.all(bool_mat), True)
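# As an independent cross-check on the literal matrix used in the two 3-by-3
# tests above (not part of toqito's suite), a pure-Python sketch verifying it
# has the two properties any real density matrix must have: symmetry and unit
# trace.

```python
mat = [
    [0.1000, 0, 0, 0, 0.1000, 0, 0, 0, 0.1000],
    [0, 0.1000, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0.1000, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0.1000, 0, 0, 0, 0, 0],
    [0.1000, 0, 0, 0, 0.1000, 0, 0, 0, 0.1000],
    [0, 0, 0, 0, 0, 0.1000, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0.1500, 0, 0.0866],
    [0, 0, 0, 0, 0, 0, 0, 0.1000, 0],
    [0.1000, 0, 0, 0, 0.1000, 0, 0.0866, 0, 0.1500],
]
trace = sum(mat[i][i] for i in range(9))
symmetric = all(mat[i][j] == mat[j][i] for i in range(9) for j in range(9))
print(round(trace, 4), symmetric)  # 1.0 True
```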
def test_horodecki_state_2_4():
    """The 2-by-4 Horodecki state."""
    expected_res = np.array(
        [
            [0.1111, 0, 0, 0, 0, 0.1111, 0, 0],
            [0, 0.1111, 0, 0, 0, 0, 0.1111, 0],
            [0, 0, 0.1111, 0, 0, 0, 0, 0.1111],
            [0, 0, 0, 0.1111, 0, 0, 0, 0],
            [0, 0, 0, 0, 0, 0.1667, 0, 0.0962],
            [0.1111, 0, 0, 0, 0, 0.1111, 0, 0],
            [0, 0.1111, 0, 0, 0, 0, 0.1111, 0],
            [0, 0, 0.1111, 0, 0, 0.0962, 0, 0.1667],
        ]
    )
    res = horodecki(0.5, [2, 4])
    bool_mat = np.isclose(expected_res, res, atol=0.2)
    np.testing.assert_equal(np.all(bool_mat), True)
def test_horodecki_invalid_a_param():
    """Tests for invalid a_param inputs."""
    with np.testing.assert_raises(ValueError):
        horodecki(-5)
    with np.testing.assert_raises(ValueError):
        horodecki(5)


def test_horodecki_invalid_dim():
    """Tests for invalid dimension inputs."""
    with np.testing.assert_raises(ValueError):
        horodecki(0.5, [3, 4])


if __name__ == "__main__":
    np.testing.run_module_suite()
| 31.282353 | 72 | 0.491162 | 471 | 2,659 | 2.66879 | 0.127389 | 0.280032 | 0.310263 | 0.305489 | 0.790772 | 0.790772 | 0.748608 | 0.712013 | 0.595863 | 0.568019 | 0 | 0.255763 | 0.31478 | 2,659 | 84 | 73 | 31.654762 | 0.434138 | 0.076721 | 0 | 0.532258 | 0 | 0 | 0.0033 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 1 | 0.080645 | false | 0 | 0.032258 | 0 | 0.112903 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
204ce81db404d14503b6d781da6370458f2bc11d | 33 | py | Python | function_21210111.py | hapfooo1/Study-8 | 8760116d24d58e2ebeff4b0d739227599828f672 | [
"MIT"
] | null | null | null | function_21210111.py | hapfooo1/Study-8 | 8760116d24d58e2ebeff4b0d739227599828f672 | [
"MIT"
] | null | null | null | function_21210111.py | hapfooo1/Study-8 | 8760116d24d58e2ebeff4b0d739227599828f672 | [
"MIT"
] | null | null | null | print('My student_id: 21210111')
| 16.5 | 32 | 0.757576 | 5 | 33 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 0.090909 | 33 | 1 | 33 | 33 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
64acef168f82e5bc027d8718c29b6cfb2066ada1 | 169 | py | Python | app/api/__init__.py | ryzencool/flask_arch | 432a426ea81dce1830881315d64b5f6691c94fdd | [
"Apache-2.0"
] | null | null | null | app/api/__init__.py | ryzencool/flask_arch | 432a426ea81dce1830881315d64b5f6691c94fdd | [
"Apache-2.0"
] | null | null | null | app/api/__init__.py | ryzencool/flask_arch | 432a426ea81dce1830881315d64b5f6691c94fdd | [
"Apache-2.0"
] | null | null | null | from flask import Flask, Blueprint
user = Blueprint("user", __name__)
trans = Blueprint("trans", __name__)
from app.api import user_view
from app.api import trans_view | 24.142857 | 36 | 0.781065 | 25 | 169 | 4.88 | 0.4 | 0.213115 | 0.163934 | 0.262295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130178 | 169 | 7 | 37 | 24.142857 | 0.829932 | 0 | 0 | 0 | 0 | 0 | 0.052941 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
b3e304271f8afeb6ab85c6b7a15f131b5ae32b9f | 45 | py | Python | CPAC/anat_preproc/__init__.py | danlurie/C-PAC | 5ddc2d4fa71eb13728d6156f73cb6e7621dda69d | [
"BSD-3-Clause"
] | null | null | null | CPAC/anat_preproc/__init__.py | danlurie/C-PAC | 5ddc2d4fa71eb13728d6156f73cb6e7621dda69d | [
"BSD-3-Clause"
] | null | null | null | CPAC/anat_preproc/__init__.py | danlurie/C-PAC | 5ddc2d4fa71eb13728d6156f73cb6e7621dda69d | [
"BSD-3-Clause"
] | 1 | 2017-02-21T18:16:06.000Z | 2017-02-21T18:16:06.000Z | from anat_preproc import create_anat_preproc
| 22.5 | 44 | 0.911111 | 7 | 45 | 5.428571 | 0.714286 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b3f5dde46185a97706917fb1c672450a7e91a3ca | 44 | py | Python | skypy/cluster/tests/test_import.py | ArthurTolley/skypy | 5621877ada75c667b1af7e665b02a91026f7ef0f | [
"BSD-3-Clause"
] | 1 | 2020-12-28T18:00:24.000Z | 2020-12-28T18:00:24.000Z | skypy/cluster/tests/test_import.py | ArthurTolley/skypy | 5621877ada75c667b1af7e665b02a91026f7ef0f | [
"BSD-3-Clause"
] | 2 | 2020-12-28T20:14:40.000Z | 2020-12-28T21:49:27.000Z | skypy/cluster/tests/test_import.py | ArthurTolley/skypy | 5621877ada75c667b1af7e665b02a91026f7ef0f | [
"BSD-3-Clause"
] | null | null | null | def test_import():
    import skypy.cluster
| 14.666667 | 24 | 0.727273 | 6 | 44 | 5.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 44 | 2 | 25 | 22 | 0.861111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3738b5ebe1bd1e3bc84ef69a0d7cda0ff878668d | 366 | py | Python | NodeDefender/mqtt/message/respond/icpe/sys/time.py | CTSNE/NodeDefender | 24e19f53a27d3b53e599cba8b1448f8f16c0bd5e | [
"MIT"
] | 4 | 2016-09-23T17:51:05.000Z | 2017-03-14T02:52:26.000Z | NodeDefender/mqtt/message/respond/icpe/sys/time.py | CTSNE/NodeDefender | 24e19f53a27d3b53e599cba8b1448f8f16c0bd5e | [
"MIT"
] | 1 | 2016-09-22T11:32:33.000Z | 2017-11-14T10:00:24.000Z | NodeDefender/mqtt/message/respond/icpe/sys/time.py | CTSNE/NodeDefender | 24e19f53a27d3b53e599cba8b1448f8f16c0bd5e | [
"MIT"
] | 4 | 2016-10-09T19:05:16.000Z | 2020-05-14T04:00:30.000Z | import NodeDefender
def set(topic, payload):
    # note: eval() on an incoming MQTT payload executes arbitrary code;
    # the payload should be treated as untrusted input
    enabled = bool(eval(payload.pop(0)))
    interval = payload.pop(0)
    server_1 = payload.pop(0)
    server_2 = payload.pop(0)
    return True


def qry(topic, payload):
    enabled = bool(eval(payload.pop(0)))
    interval = payload.pop(0)
    server_1 = payload.pop(0)
    server_2 = payload.pop(0)
    return True
| 22.875 | 40 | 0.655738 | 54 | 366 | 4.37037 | 0.333333 | 0.338983 | 0.372881 | 0.288136 | 0.872881 | 0.872881 | 0.872881 | 0.872881 | 0.872881 | 0.872881 | 0 | 0.041667 | 0.213115 | 366 | 15 | 41 | 24.4 | 0.777778 | 0 | 0 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
3782873d388923edc52b001e36583d7de6a6ee0a | 162 | py | Python | main_path.py | ygan/sqlova | fa335b94d4244bdd5b07c2710ab485559825a5ea | [
"Apache-2.0"
] | null | null | null | main_path.py | ygan/sqlova | fa335b94d4244bdd5b07c2710ab485559825a5ea | [
"Apache-2.0"
] | null | null | null | main_path.py | ygan/sqlova | fa335b94d4244bdd5b07c2710ab485559825a5ea | [
"Apache-2.0"
] | null | null | null | def MAIN_PATH():
    return "/home/yj/python/Github/sqlova"


def BERT_PATH():
    return "/home/yj/python/Github/pytorch-pretrained-BERT/.pytorch_pretrained_bert"
37bddabe0bb4badeec229838e1c14d096bf47ef1 | 2,470 | py | Python | motorInferencia.py | ArthurNunes10/Sistema-Especialista-Elevador | 3b02bd70bab3f0466c4ea208ed54e58318693953 | [
"MIT"
] | null | null | null | motorInferencia.py | ArthurNunes10/Sistema-Especialista-Elevador | 3b02bd70bab3f0466c4ea208ed54e58318693953 | [
"MIT"
] | null | null | null | motorInferencia.py | ArthurNunes10/Sistema-Especialista-Elevador | 3b02bd70bab3f0466c4ea208ed54e58318693953 | [
"MIT"
] | null | null | null | def inferencia(andar,botao):
    # Rule table, condensed from 36 explicit if-branches without changing
    # behavior: same floor -> open the door; otherwise go to the requested
    # floor. Valid floors/buttons are the strings '1'..'6'; anything else
    # falls through and returns None, as in the original branch list.
    andares = ('1', '2', '3', '4', '5', '6')
    if andar in andares and botao in andares:
        if andar == botao:
            return "Abrir a porta"
        if botao == '1':
            return "Ir para 1º andar"
        return "Ir para o " + botao + "º andar"
8079a0a338b16fc209aa3bda33080906870d1063 | 14,286 | py | Python | anuga/file/tests/test_read_sww.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 136 | 2015-05-07T05:47:43.000Z | 2022-02-16T03:07:40.000Z | anuga/file/tests/test_read_sww.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 184 | 2015-05-03T09:27:54.000Z | 2021-12-20T04:22:48.000Z | anuga/file/tests/test_read_sww.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 70 | 2015-03-18T07:35:22.000Z | 2021-11-01T07:07:29.000Z | #!/usr/bin/env python
#
# This file was reverted from changeset:5484 to changeset:5470 on 10th July
# by Ole.
from __future__ import division
from builtins import str
from past.utils import old_div
import unittest
import copy
import os
import numpy as num
import anuga.file.sww as sww
class Test_read_sww(unittest.TestCase):
# Class variable
verbose = False
def set_verbose(self):
Test_read_sww.verbose = True
def setUp(self):
pass
def tearDown(self):
for filename in ['read_sww_test0.sww', 'read_sww_test_c0.sww']:
try:
os.remove(filename)
except:
pass
def test_read_sww(self):
"""
Save to an sww file and then read back the info.
Here we store the info "uniquely"
"""
#---------------------------------------------------------------------
# Import necessary modules
#---------------------------------------------------------------------
from anuga.abstract_2d_finite_volumes.mesh_factory import \
rectangular_cross
from anuga.shallow_water.shallow_water_domain import Domain
from anuga import Reflective_boundary
from anuga.abstract_2d_finite_volumes.generic_boundary_conditions\
import Dirichlet_boundary, Time_boundary
#---------------------------------------------------------------------
# Setup computational domain
#---------------------------------------------------------------------
length = 8.
width = 4.
dx = dy = 2 # Resolution: Length of subdivisions on both axes
inc = 0.05 # Elevation increment
points, vertices, boundary = rectangular_cross(int(old_div(length,dx)),
int(old_div(width,dy)),
len1=length,
len2=width)
domain = Domain(points, vertices, boundary)
domain.set_name('read_sww_test'+str(domain.processor)) # Output name
domain.set_quantities_to_be_stored({'elevation': 2,
'stage': 2,
'xmomentum': 2,
'ymomentum': 2,
'friction': 1})
domain.set_store_vertices_uniquely(True)
#---------------------------------------------------------------------
# Setup initial conditions
#---------------------------------------------------------------------
domain.set_quantity('elevation', 0.0) # Flat bed initially
domain.set_quantity('friction', 0.01) # Constant friction
domain.set_quantity('stage', 0.0) # Dry initial condition
#------------------------------------------------------------------
# Setup boundary conditions
#------------------------------------------------------------------
Bi = Dirichlet_boundary([0.4, 0, 0]) # Inflow
Br = Reflective_boundary(domain) # Solid reflective wall
Bo = Dirichlet_boundary([-5, 0, 0]) # Outflow
domain.set_boundary({'left': Bi, 'right': Bo, 'top': Br, 'bottom': Br})
#-------------------------------------------------------------------
# Evolve system through time
#-------------------------------------------------------------------
for t in domain.evolve(yieldstep=1, finaltime=4.0):
pass
# Check that quantities have been stored correctly
source = domain.get_name() + '.sww'
#x = fid.variables['x'][:]
#y = fid.variables['y'][:]
#stage = fid.variables['stage'][:]
#elevation = fid.variables['elevation'][:]
#fid.close()
#assert len(stage.shape) == 2
#assert len(elevation.shape) == 2
#M, N = stage.shape
sww_file = sww.Read_sww(source)
#print 'last frame number',sww_file.get_last_frame_number()
assert num.allclose(sww_file.x, domain.get_vertex_coordinates()[:,0])
assert num.allclose(sww_file.y, domain.get_vertex_coordinates()[:,1])
assert num.allclose(sww_file.time, [0.0, 1.0, 2.0, 3.0, 4.0])
M = domain.get_number_of_triangles()
assert num.allclose(num.reshape(num.arange(3*M), (M,3)), sww_file.vertices)
last_frame_number = sww_file.get_last_frame_number()
assert last_frame_number == 4
assert num.allclose(sww_file.get_bounds(), [0.0, length, 0.0, width])
assert 'stage' in list(sww_file.quantities.keys())
assert 'friction' in list(sww_file.quantities.keys())
assert 'elevation' in list(sww_file.quantities.keys())
assert 'xmomentum' in list(sww_file.quantities.keys())
assert 'ymomentum' in list(sww_file.quantities.keys())
for qname, q in list(sww_file.read_quantities(last_frame_number).items()):
#print qname
#print num.linalg.norm(num.abs((domain.get_quantity(qname).get_values()-q).flatten()), ord=1)
assert num.allclose(domain.get_quantity(qname).get_values(), q)
#-----------------------------------------
# Start the evolution off again at frame 3
#-----------------------------------------
sww_file.read_quantities(last_frame_number-1)
points, vertices, boundary = rectangular_cross(int(old_div(length,dx)),
int(old_div(width,dy)),
len1=length,
len2=width)
new_domain = Domain(points, vertices, boundary)
new_domain.set_quantities_to_be_stored(None)
new_domain.set_store_vertices_uniquely(True)
for qname, q in list(sww_file.read_quantities(last_frame_number-1).items()):
new_domain.set_quantity(qname, q)
#------------------------------------------------------------------
# Setup boundary conditions
#------------------------------------------------------------------
Bi = Dirichlet_boundary([0.4, 0, 0]) # Inflow
Br = Reflective_boundary(new_domain) # Solid reflective wall
Bo = Dirichlet_boundary([-5, 0, 0]) # Outflow
new_domain.set_boundary({'left': Bi, 'right': Bo, 'top': Br, 'bottom': Br})
#-------------------------------------------------------------------
# Evolve system through time
#-------------------------------------------------------------------
for t in new_domain.evolve(yieldstep=1.0, finaltime=1.0):
pass
# Compare new_domain and domain quantities
for quantity in domain.get_quantity_names():
dv = domain.get_quantity(quantity).get_values()
ndv = new_domain.get_quantity(quantity).get_values()
#print dv-ndv
assert num.allclose( dv, ndv, rtol=5.e-2, atol=5.e-2)
# Clean up
#os.remove(source)
def test_read_sww_with_centroids(self):
"""
Save to an sww file and then read back the info.
Here we store the info "uniquely"
"""
#---------------------------------------------------------------------
# Import necessary modules
#---------------------------------------------------------------------
from anuga.abstract_2d_finite_volumes.mesh_factory import \
rectangular_cross
from anuga.shallow_water.shallow_water_domain import Domain
from anuga.shallow_water.boundaries import Reflective_boundary
from anuga.abstract_2d_finite_volumes.generic_boundary_conditions\
import Dirichlet_boundary, Time_boundary
#---------------------------------------------------------------------
# Setup computational domain
#---------------------------------------------------------------------
length = 8.
width = 4.
dx = dy = 2 # Resolution: Length of subdivisions on both axes
inc = 0.05 # Elevation increment
points, vertices, boundary = rectangular_cross(int(old_div(length,dx)),
int(old_div(width,dy)),
len1=length,
len2=width)
domain = Domain(points, vertices, boundary)
domain.set_name('read_sww_test_c'+str(domain.processor)) # Output name
domain.set_quantities_to_be_stored({'elevation': 2,
'stage': 2,
'xmomentum': 2,
'ymomentum': 2,
'friction': 1})
domain.set_store_vertices_uniquely(True)
domain.set_store_centroids(True)
#---------------------------------------------------------------------
# Setup initial conditions
#---------------------------------------------------------------------
domain.set_quantity('elevation', 0.0) # Flat bed initially
domain.set_quantity('friction', 0.01) # Constant friction
domain.set_quantity('stage', 0.0) # Dry initial condition
#------------------------------------------------------------------
# Setup boundary conditions
#------------------------------------------------------------------
Bi = Dirichlet_boundary([0.4, 0, 0]) # Inflow
Br = Reflective_boundary(domain) # Solid reflective wall
Bo = Dirichlet_boundary([-5, 0, 0]) # Outflow
domain.set_boundary({'left': Bi, 'right': Bo, 'top': Br, 'bottom': Br})
#-------------------------------------------------------------------
# Evolve system through time
#-------------------------------------------------------------------
for t in domain.evolve(yieldstep=1, finaltime=4.0):
pass
# Check that quantities have been stored correctly
source = domain.get_name() + '.sww'
#x = fid.variables['x'][:]
#y = fid.variables['y'][:]
#stage = fid.variables['stage'][:]
#elevation = fid.variables['elevation'][:]
#fid.close()
#assert len(stage.shape) == 2
#assert len(elevation.shape) == 2
#M, N = stage.shape
sww_file = sww.Read_sww(source)
#print 'last frame number',sww_file.get_last_frame_number()
assert num.allclose(sww_file.x, domain.get_vertex_coordinates()[:,0])
assert num.allclose(sww_file.y, domain.get_vertex_coordinates()[:,1])
assert num.allclose(sww_file.time, [0.0, 1.0, 2.0, 3.0, 4.0])
M = domain.get_number_of_triangles()
assert num.allclose(num.reshape(num.arange(3*M), (M,3)), sww_file.vertices)
last_frame_number = sww_file.get_last_frame_number()
assert last_frame_number == 4
assert num.allclose(sww_file.get_bounds(), [0.0, length, 0.0, width])
#print 50*"="
#print sww_file.quantities.keys()
assert 'stage' in list(sww_file.quantities.keys())
assert 'friction' in list(sww_file.quantities.keys())
assert 'elevation' in list(sww_file.quantities.keys())
assert 'xmomentum' in list(sww_file.quantities.keys())
assert 'ymomentum' in list(sww_file.quantities.keys())
for qname, q in list(sww_file.read_quantities(last_frame_number).items()):
assert num.allclose(domain.get_quantity(qname).get_values(), q)
#-----------------------------------------
# Start the evolution off again at frame 3
#-----------------------------------------
sww_file.read_quantities(last_frame_number-1)
points, vertices, boundary = rectangular_cross(int(old_div(length,dx)),
int(old_div(width,dy)),
len1=length,
len2=width)
new_domain = Domain(points, vertices, boundary)
new_domain.set_quantities_to_be_stored(None)
new_domain.set_store_vertices_uniquely(True)
for qname, q in list(sww_file.read_quantities(last_frame_number-1).items()):
new_domain.set_quantity(qname, q)
#------------------------------------------------------------------
# Setup boundary conditions
#------------------------------------------------------------------
Bi = Dirichlet_boundary([0.4, 0, 0]) # Inflow
Br = Reflective_boundary(new_domain) # Solid reflective wall
Bo = Dirichlet_boundary([-5, 0, 0]) # Outflow
new_domain.set_boundary({'left': Bi, 'right': Bo, 'top': Br, 'bottom': Br})
#-------------------------------------------------------------------
# Evolve system through time
#-------------------------------------------------------------------
for t in new_domain.evolve(yieldstep=1.0, finaltime=1.0):
pass
# Compare new_domain and domain quantities
for quantity in domain.get_quantity_names():
dv = domain.get_quantity(quantity).get_values()
ndv = new_domain.get_quantity(quantity).get_values()
#print dv-ndv
assert num.allclose( dv, ndv, rtol=5.e-2, atol=5.e-2)
# Clean up
#os.remove(source)
if __name__ == "__main__":
    suite = unittest.TestLoader().loadTestsFromTestCase(Test_read_sww)
    runner = unittest.TextTestRunner()  # pass verbosity=2 for per-test detail
    runner.run(suite)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
__all__ = [
'CertificateAuthorityClient',
'CertificateAuthorityClientSubject',
'CertificateAuthorityServer',
'CertificateAuthorityServerSubject',
'CertificateAuthoritySubject',
'ContainerRegistryUser',
'DNSRecord',
'DatabaseBackup',
'DatabaseNetworkInterface',
'DatabaseReadReplicaNetworkInterface',
'GSLBHealthCheck',
'GSLBServer',
'LoadBalancerNetworkInterface',
'LoadBalancerVip',
'LoadBalancerVipServer',
'LocalRouterNetworkInterface',
'LocalRouterPeer',
'LocalRouterStaticRoute',
'LocalRouterSwitch',
'MobileGatewayPrivateNetworkInterface',
'MobileGatewaySim',
'MobileGatewaySimRoute',
'MobileGatewayStaticRoute',
'MobileGatewayTrafficControl',
'NFSNetworkInterface',
'PacketFilterExpression',
'PacketFilterRuleExpression',
'ProxyLBACMECertificate',
'ProxyLBACMECertificateAdditionalCertificate',
'ProxyLBBindPort',
'ProxyLBBindPortResponseHeader',
'ProxyLBCertificate',
'ProxyLBCertificateAdditionalCertificate',
'ProxyLBHealthCheck',
'ProxyLBRule',
'ProxyLBServer',
'ProxyLBSorryServer',
'ProxyLBSyslog',
'ServerDiskEditParameter',
'ServerDiskEditParameterNote',
'ServerNetworkInterface',
'SimpleMonitorHealthCheck',
'VPCRouterDhcpServer',
'VPCRouterDhcpStaticMapping',
'VPCRouterFirewall',
'VPCRouterFirewallExpression',
'VPCRouterL2tp',
'VPCRouterPortForwarding',
'VPCRouterPptp',
'VPCRouterPrivateNetworkInterface',
'VPCRouterPublicNetworkInterface',
'VPCRouterSiteToSiteVpn',
'VPCRouterStaticNat',
'VPCRouterStaticRoute',
'VPCRouterUser',
'VPCRouterWireGuard',
'VPCRouterWireGuardPeer',
'GetArchiveFilterResult',
'GetArchiveFilterConditionResult',
'GetBridgeFilterResult',
'GetBridgeFilterConditionResult',
'GetCDROMFilterResult',
'GetCDROMFilterConditionResult',
'GetCertificateAuthorityClientResult',
'GetCertificateAuthorityFilterResult',
'GetCertificateAuthorityFilterConditionResult',
'GetCertificateAuthorityServerResult',
'GetContainerRegistryFilterResult',
'GetContainerRegistryFilterConditionResult',
'GetContainerRegistryUserResult',
'GetDNSFilterResult',
'GetDNSFilterConditionResult',
'GetDNSRecordResult',
'GetDatabaseBackupResult',
'GetDatabaseFilterResult',
'GetDatabaseFilterConditionResult',
'GetDatabaseNetworkInterfaceResult',
'GetDiskFilterResult',
'GetDiskFilterConditionResult',
'GetESMEFilterResult',
'GetESMEFilterConditionResult',
'GetEnhancedDBFilterResult',
'GetEnhancedDBFilterConditionResult',
'GetGSLBFilterResult',
'GetGSLBFilterConditionResult',
'GetGSLBHealthCheckResult',
'GetGSLBServerResult',
'GetIconFilterResult',
'GetIconFilterConditionResult',
'GetInternetFilterResult',
'GetInternetFilterConditionResult',
'GetLoadBalancerFilterResult',
'GetLoadBalancerFilterConditionResult',
'GetLoadBalancerNetworkInterfaceResult',
'GetLoadBalancerVipResult',
'GetLoadBalancerVipServerResult',
'GetLocalRouterFilterResult',
'GetLocalRouterFilterConditionResult',
'GetLocalRouterNetworkInterfaceResult',
'GetLocalRouterPeerResult',
'GetLocalRouterStaticRouteResult',
'GetLocalRouterSwitchResult',
'GetNFSFilterResult',
'GetNFSFilterConditionResult',
'GetNFSNetworkInterfaceResult',
'GetNoteFilterResult',
'GetNoteFilterConditionResult',
'GetPacketFilterExpressionResult',
'GetPacketFilterFilterResult',
'GetPacketFilterFilterConditionResult',
'GetPrivateHostFilterResult',
'GetPrivateHostFilterConditionResult',
'GetProxyLBBindPortResult',
'GetProxyLBBindPortResponseHeaderResult',
'GetProxyLBCertificateResult',
'GetProxyLBCertificateAdditionalCertificateResult',
'GetProxyLBFilterResult',
'GetProxyLBFilterConditionResult',
'GetProxyLBHealthCheckResult',
'GetProxyLBRuleResult',
'GetProxyLBServerResult',
'GetProxyLBSorryServerResult',
'GetProxyLBSyslogResult',
'GetSSHKeyFilterResult',
'GetSSHKeyFilterConditionResult',
'GetServerFilterResult',
'GetServerFilterConditionResult',
'GetServerNetworkInterfaceResult',
'GetSimpleMonitorFilterResult',
'GetSimpleMonitorFilterConditionResult',
'GetSimpleMonitorHealthCheckResult',
'GetSwitchFilterResult',
'GetSwitchFilterConditionResult',
'GetVPCRouterDhcpServerResult',
'GetVPCRouterDhcpStaticMappingResult',
'GetVPCRouterFilterResult',
'GetVPCRouterFilterConditionResult',
'GetVPCRouterFirewallResult',
'GetVPCRouterFirewallExpressionResult',
'GetVPCRouterL2tpResult',
'GetVPCRouterPortForwardingResult',
'GetVPCRouterPptpResult',
'GetVPCRouterPrivateNetworkInterfaceResult',
'GetVPCRouterPublicNetworkInterfaceResult',
'GetVPCRouterSiteToSiteVpnResult',
'GetVPCRouterStaticNatResult',
'GetVPCRouterStaticRouteResult',
'GetVPCRouterUserResult',
'GetVPCRouterWireGuardResult',
'GetVPCRouterWireGuardPeerResult',
]
@pulumi.output_type
class CertificateAuthorityClient(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "validityPeriodHours":
suggest = "validity_period_hours"
elif key == "issueState":
suggest = "issue_state"
elif key == "notAfter":
suggest = "not_after"
elif key == "notBefore":
suggest = "not_before"
elif key == "publicKey":
suggest = "public_key"
elif key == "serialNumber":
suggest = "serial_number"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in CertificateAuthorityClient. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
CertificateAuthorityClient.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
CertificateAuthorityClient.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
subject: 'outputs.CertificateAuthorityClientSubject',
validity_period_hours: int,
certificate: Optional[str] = None,
csr: Optional[str] = None,
email: Optional[str] = None,
hold: Optional[bool] = None,
id: Optional[str] = None,
issue_state: Optional[str] = None,
not_after: Optional[str] = None,
not_before: Optional[str] = None,
public_key: Optional[str] = None,
serial_number: Optional[str] = None,
url: Optional[str] = None):
"""
:param 'CertificateAuthorityClientSubjectArgs' subject: A `subject` block as defined below.
        :param int validity_period_hours: The number of hours after initial issuance after which the certificate becomes invalid.
:param str certificate: The body of the CA's certificate in PEM format.
:param str csr: Input for issuing a certificate.
:param str email: Input for issuing a certificate.
:param bool hold: Flag to suspend/hold the certificate.
:param str id: The id of the certificate.
:param str issue_state: Current state of the certificate.
:param str not_after: The date on which the certificate validity period ends, in RFC3339 format.
:param str not_before: The date on which the certificate validity period begins, in RFC3339 format.
:param str public_key: Input for issuing a certificate.
        :param str serial_number: The serial number of the certificate.
:param str url: The URL for issuing the certificate.
"""
pulumi.set(__self__, "subject", subject)
pulumi.set(__self__, "validity_period_hours", validity_period_hours)
if certificate is not None:
pulumi.set(__self__, "certificate", certificate)
if csr is not None:
pulumi.set(__self__, "csr", csr)
if email is not None:
pulumi.set(__self__, "email", email)
if hold is not None:
pulumi.set(__self__, "hold", hold)
if id is not None:
pulumi.set(__self__, "id", id)
if issue_state is not None:
pulumi.set(__self__, "issue_state", issue_state)
if not_after is not None:
pulumi.set(__self__, "not_after", not_after)
if not_before is not None:
pulumi.set(__self__, "not_before", not_before)
if public_key is not None:
pulumi.set(__self__, "public_key", public_key)
if serial_number is not None:
pulumi.set(__self__, "serial_number", serial_number)
if url is not None:
pulumi.set(__self__, "url", url)
@property
@pulumi.getter
def subject(self) -> 'outputs.CertificateAuthorityClientSubject':
"""
A `subject` block as defined below.
"""
return pulumi.get(self, "subject")
@property
@pulumi.getter(name="validityPeriodHours")
def validity_period_hours(self) -> int:
"""
        The number of hours after initial issuance after which the certificate becomes invalid.
"""
return pulumi.get(self, "validity_period_hours")
@property
@pulumi.getter
def certificate(self) -> Optional[str]:
"""
The body of the CA's certificate in PEM format.
"""
return pulumi.get(self, "certificate")
@property
@pulumi.getter
def csr(self) -> Optional[str]:
"""
Input for issuing a certificate.
"""
return pulumi.get(self, "csr")
@property
@pulumi.getter
def email(self) -> Optional[str]:
"""
Input for issuing a certificate.
"""
return pulumi.get(self, "email")
@property
@pulumi.getter
def hold(self) -> Optional[bool]:
"""
Flag to suspend/hold the certificate.
"""
return pulumi.get(self, "hold")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The id of the certificate.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="issueState")
def issue_state(self) -> Optional[str]:
"""
Current state of the certificate.
"""
return pulumi.get(self, "issue_state")
@property
@pulumi.getter(name="notAfter")
def not_after(self) -> Optional[str]:
"""
The date on which the certificate validity period ends, in RFC3339 format.
"""
return pulumi.get(self, "not_after")
@property
@pulumi.getter(name="notBefore")
def not_before(self) -> Optional[str]:
"""
The date on which the certificate validity period begins, in RFC3339 format.
"""
return pulumi.get(self, "not_before")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> Optional[str]:
"""
Input for issuing a certificate.
"""
return pulumi.get(self, "public_key")
@property
@pulumi.getter(name="serialNumber")
def serial_number(self) -> Optional[str]:
"""
        The serial number of the certificate.
"""
return pulumi.get(self, "serial_number")
@property
@pulumi.getter
def url(self) -> Optional[str]:
"""
The URL for issuing the certificate.
"""
return pulumi.get(self, "url")
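The `__key_warning`/`__getitem__` pair above lets legacy camelCase dictionary lookups fail with a helpful pointer to the snake_case property actually used for storage. Below is a minimal standalone sketch of the same pattern (the `CamelKeyDict` class is hypothetical and not part of this SDK; it uses the stdlib `warnings` module in place of `pulumi.log.warn`):

```python
import re
import warnings

class CamelKeyDict(dict):
    """Dict that warns when a camelCase key is used instead of snake_case."""

    @staticmethod
    def _key_warning(key: str):
        # "validityPeriodHours" -> "validity_period_hours"
        suggest = re.sub(r"(?<!^)(?=[A-Z])", "_", key).lower()
        if suggest != key:
            warnings.warn(
                f"Key '{key}' not found. Access the value via '{suggest}' instead."
            )

    def __getitem__(self, key):
        self._key_warning(key)
        return super().__getitem__(key)

d = CamelKeyDict(validity_period_hours=72)
try:
    d["validityPeriodHours"]  # warns, then raises KeyError (stored key is snake_case)
except KeyError:
    pass
```

The generated classes hard-code the key-to-suggestion table instead of deriving it with a regex, which keeps the warning exact for keys that do not round-trip cleanly.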
@pulumi.output_type
class CertificateAuthorityClientSubject(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "commonName":
suggest = "common_name"
elif key == "organizationUnits":
suggest = "organization_units"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in CertificateAuthorityClientSubject. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
CertificateAuthorityClientSubject.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
CertificateAuthorityClientSubject.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
common_name: str,
country: str,
organization: str,
organization_units: Optional[Sequence[str]] = None):
"""
:param str common_name: .
:param str country: .
:param str organization: .
:param Sequence[str] organization_units: .
"""
pulumi.set(__self__, "common_name", common_name)
pulumi.set(__self__, "country", country)
pulumi.set(__self__, "organization", organization)
if organization_units is not None:
pulumi.set(__self__, "organization_units", organization_units)
@property
@pulumi.getter(name="commonName")
def common_name(self) -> str:
"""
.
"""
return pulumi.get(self, "common_name")
@property
@pulumi.getter
def country(self) -> str:
"""
.
"""
return pulumi.get(self, "country")
@property
@pulumi.getter
def organization(self) -> str:
"""
.
"""
return pulumi.get(self, "organization")
@property
@pulumi.getter(name="organizationUnits")
def organization_units(self) -> Optional[Sequence[str]]:
"""
.
"""
return pulumi.get(self, "organization_units")
@pulumi.output_type
class CertificateAuthorityServer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "validityPeriodHours":
suggest = "validity_period_hours"
elif key == "issueState":
suggest = "issue_state"
elif key == "notAfter":
suggest = "not_after"
elif key == "notBefore":
suggest = "not_before"
elif key == "publicKey":
suggest = "public_key"
elif key == "serialNumber":
suggest = "serial_number"
elif key == "subjectAlternativeNames":
suggest = "subject_alternative_names"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in CertificateAuthorityServer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
CertificateAuthorityServer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
CertificateAuthorityServer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
subject: 'outputs.CertificateAuthorityServerSubject',
validity_period_hours: int,
certificate: Optional[str] = None,
csr: Optional[str] = None,
hold: Optional[bool] = None,
id: Optional[str] = None,
issue_state: Optional[str] = None,
not_after: Optional[str] = None,
not_before: Optional[str] = None,
public_key: Optional[str] = None,
serial_number: Optional[str] = None,
subject_alternative_names: Optional[Sequence[str]] = None):
"""
:param 'CertificateAuthorityServerSubjectArgs' subject: A `subject` block as defined below.
        :param int validity_period_hours: The number of hours after initial issuance after which the certificate becomes invalid.
:param str certificate: The body of the CA's certificate in PEM format.
:param str csr: Input for issuing a certificate.
:param bool hold: Flag to suspend/hold the certificate.
:param str id: The id of the certificate.
:param str issue_state: Current state of the certificate.
:param str not_after: The date on which the certificate validity period ends, in RFC3339 format.
:param str not_before: The date on which the certificate validity period begins, in RFC3339 format.
:param str public_key: Input for issuing a certificate.
        :param str serial_number: The serial number of the certificate.
:param Sequence[str] subject_alternative_names: .
"""
pulumi.set(__self__, "subject", subject)
pulumi.set(__self__, "validity_period_hours", validity_period_hours)
if certificate is not None:
pulumi.set(__self__, "certificate", certificate)
if csr is not None:
pulumi.set(__self__, "csr", csr)
if hold is not None:
pulumi.set(__self__, "hold", hold)
if id is not None:
pulumi.set(__self__, "id", id)
if issue_state is not None:
pulumi.set(__self__, "issue_state", issue_state)
if not_after is not None:
pulumi.set(__self__, "not_after", not_after)
if not_before is not None:
pulumi.set(__self__, "not_before", not_before)
if public_key is not None:
pulumi.set(__self__, "public_key", public_key)
if serial_number is not None:
pulumi.set(__self__, "serial_number", serial_number)
if subject_alternative_names is not None:
pulumi.set(__self__, "subject_alternative_names", subject_alternative_names)
@property
@pulumi.getter
def subject(self) -> 'outputs.CertificateAuthorityServerSubject':
"""
A `subject` block as defined below.
"""
return pulumi.get(self, "subject")
@property
@pulumi.getter(name="validityPeriodHours")
def validity_period_hours(self) -> int:
"""
        The number of hours after initial issuance after which the certificate becomes invalid.
"""
return pulumi.get(self, "validity_period_hours")
@property
@pulumi.getter
def certificate(self) -> Optional[str]:
"""
The body of the CA's certificate in PEM format.
"""
return pulumi.get(self, "certificate")
@property
@pulumi.getter
def csr(self) -> Optional[str]:
"""
Input for issuing a certificate.
"""
return pulumi.get(self, "csr")
@property
@pulumi.getter
def hold(self) -> Optional[bool]:
"""
Flag to suspend/hold the certificate.
"""
return pulumi.get(self, "hold")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The id of the certificate.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="issueState")
def issue_state(self) -> Optional[str]:
"""
Current state of the certificate.
"""
return pulumi.get(self, "issue_state")
@property
@pulumi.getter(name="notAfter")
def not_after(self) -> Optional[str]:
"""
The date on which the certificate validity period ends, in RFC3339 format.
"""
return pulumi.get(self, "not_after")
@property
@pulumi.getter(name="notBefore")
def not_before(self) -> Optional[str]:
"""
The date on which the certificate validity period begins, in RFC3339 format.
"""
return pulumi.get(self, "not_before")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> Optional[str]:
"""
Input for issuing a certificate.
"""
return pulumi.get(self, "public_key")
@property
@pulumi.getter(name="serialNumber")
def serial_number(self) -> Optional[str]:
"""
        The serial number of the certificate.
"""
return pulumi.get(self, "serial_number")
@property
@pulumi.getter(name="subjectAlternativeNames")
def subject_alternative_names(self) -> Optional[Sequence[str]]:
"""
.
"""
return pulumi.get(self, "subject_alternative_names")
@pulumi.output_type
class CertificateAuthorityServerSubject(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "commonName":
suggest = "common_name"
elif key == "organizationUnits":
suggest = "organization_units"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in CertificateAuthorityServerSubject. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
CertificateAuthorityServerSubject.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
CertificateAuthorityServerSubject.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
common_name: str,
country: str,
organization: str,
organization_units: Optional[Sequence[str]] = None):
"""
:param str common_name: .
:param str country: .
:param str organization: .
:param Sequence[str] organization_units: .
"""
pulumi.set(__self__, "common_name", common_name)
pulumi.set(__self__, "country", country)
pulumi.set(__self__, "organization", organization)
if organization_units is not None:
pulumi.set(__self__, "organization_units", organization_units)
@property
@pulumi.getter(name="commonName")
def common_name(self) -> str:
"""
.
"""
return pulumi.get(self, "common_name")
@property
@pulumi.getter
def country(self) -> str:
"""
.
"""
return pulumi.get(self, "country")
@property
@pulumi.getter
def organization(self) -> str:
"""
.
"""
return pulumi.get(self, "organization")
@property
@pulumi.getter(name="organizationUnits")
def organization_units(self) -> Optional[Sequence[str]]:
"""
.
"""
return pulumi.get(self, "organization_units")
@pulumi.output_type
class CertificateAuthoritySubject(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "commonName":
suggest = "common_name"
elif key == "organizationUnits":
suggest = "organization_units"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in CertificateAuthoritySubject. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
CertificateAuthoritySubject.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
CertificateAuthoritySubject.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
common_name: str,
country: str,
organization: str,
organization_units: Optional[Sequence[str]] = None):
"""
:param str common_name: .
:param str country: .
:param str organization: .
:param Sequence[str] organization_units: .
"""
pulumi.set(__self__, "common_name", common_name)
pulumi.set(__self__, "country", country)
pulumi.set(__self__, "organization", organization)
if organization_units is not None:
pulumi.set(__self__, "organization_units", organization_units)
@property
@pulumi.getter(name="commonName")
def common_name(self) -> str:
"""
.
"""
return pulumi.get(self, "common_name")
@property
@pulumi.getter
def country(self) -> str:
"""
.
"""
return pulumi.get(self, "country")
@property
@pulumi.getter
def organization(self) -> str:
"""
.
"""
return pulumi.get(self, "organization")
@property
@pulumi.getter(name="organizationUnits")
def organization_units(self) -> Optional[Sequence[str]]:
"""
.
"""
return pulumi.get(self, "organization_units")
@pulumi.output_type
class ContainerRegistryUser(dict):
def __init__(__self__, *,
name: str,
password: str,
permission: str):
"""
:param str name: The user name used to authenticate remote access.
:param str password: The password used to authenticate remote access.
        :param str permission: The level of access to grant to the user. This must be one of [`all`/`readwrite`/`readonly`].
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "password", password)
pulumi.set(__self__, "permission", permission)
@property
@pulumi.getter
def name(self) -> str:
"""
The user name used to authenticate remote access.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def password(self) -> str:
"""
The password used to authenticate remote access.
"""
return pulumi.get(self, "password")
@property
@pulumi.getter
def permission(self) -> str:
"""
        The level of access to grant to the user. This must be one of [`all`/`readwrite`/`readonly`].
"""
return pulumi.get(self, "permission")
@pulumi.output_type
class DNSRecord(dict):
def __init__(__self__, *,
name: str,
type: str,
value: str,
port: Optional[int] = None,
priority: Optional[int] = None,
ttl: Optional[int] = None,
weight: Optional[int] = None):
"""
:param str name: The name of the DNS Record. The length of this value must be in the range [`1`-`64`].
:param str type: The type of DNS Record. This must be one of [`A`/`AAAA`/`ALIAS`/`CNAME`/`NS`/`MX`/`TXT`/`SRV`/`CAA`/`PTR`].
:param str value: The value of the DNS Record.
        :param int port: The port number. This must be in the range [`1`-`65535`].
:param int priority: The priority of target DNS Record. This must be in the range [`0`-`65535`].
        :param int ttl: The TTL of the DNS Record, in seconds.
:param int weight: The weight of target DNS Record. This must be in the range [`0`-`65535`].
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "type", type)
pulumi.set(__self__, "value", value)
if port is not None:
pulumi.set(__self__, "port", port)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
if weight is not None:
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the DNS Record. The length of this value must be in the range [`1`-`64`].
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of DNS Record. This must be one of [`A`/`AAAA`/`ALIAS`/`CNAME`/`NS`/`MX`/`TXT`/`SRV`/`CAA`/`PTR`].
"""
return pulumi.get(self, "type")
@property
@pulumi.getter
def value(self) -> str:
"""
The value of the DNS Record.
"""
return pulumi.get(self, "value")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
        The port number. This must be in the range [`1`-`65535`].
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def priority(self) -> Optional[int]:
"""
The priority of target DNS Record. This must be in the range [`0`-`65535`].
"""
return pulumi.get(self, "priority")
@property
@pulumi.getter
def ttl(self) -> Optional[int]:
"""
        The TTL of the DNS Record, in seconds.
"""
return pulumi.get(self, "ttl")
@property
@pulumi.getter
def weight(self) -> Optional[int]:
"""
The weight of target DNS Record. This must be in the range [`0`-`65535`].
"""
return pulumi.get(self, "weight")
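The `DNSRecord` docstrings above spell out several value constraints (a fixed record-type vocabulary, name length 1-64, port in [`1`-`65535`], priority/weight in [`0`-`65535`]). As a rough standalone illustration, those documented ranges can be checked like so (the `validate_dns_record` helper below is hypothetical and not part of this SDK):

```python
# Hypothetical validator mirroring the constraints documented on DNSRecord:
# name length 1-64, a fixed set of record types, port in [1-65535],
# priority/weight in [0-65535].
VALID_TYPES = {"A", "AAAA", "ALIAS", "CNAME", "NS", "MX", "TXT", "SRV", "CAA", "PTR"}

def validate_dns_record(name, type, value, port=None, priority=None, weight=None):
    if not 1 <= len(name) <= 64:
        raise ValueError("name must be 1-64 characters long")
    if type not in VALID_TYPES:
        raise ValueError(f"unsupported record type: {type!r}")
    if port is not None and not 1 <= port <= 65535:
        raise ValueError("port must be in the range [1-65535]")
    for label, v in (("priority", priority), ("weight", weight)):
        if v is not None and not 0 <= v <= 65535:
            raise ValueError(f"{label} must be in the range [0-65535]")
    return True

validate_dns_record("www", "A", "192.0.2.1")
```

In practice the provider enforces these server-side as well; a client-side check like this only surfaces mistakes earlier.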
@pulumi.output_type
class DatabaseBackup(dict):
def __init__(__self__, *,
time: Optional[str] = None,
weekdays: Optional[Sequence[str]] = None):
"""
        :param str time: The time at which to take the backup. This must be formatted as `HH:mm`.
        :param Sequence[str] weekdays: A list of weekdays on which backups are taken. The values in the list must be in [`sun`/`mon`/`tue`/`wed`/`thu`/`fri`/`sat`].
"""
if time is not None:
pulumi.set(__self__, "time", time)
if weekdays is not None:
pulumi.set(__self__, "weekdays", weekdays)
@property
@pulumi.getter
def time(self) -> Optional[str]:
"""
        The time at which to take the backup. This must be formatted as `HH:mm`.
"""
return pulumi.get(self, "time")
@property
@pulumi.getter
def weekdays(self) -> Optional[Sequence[str]]:
"""
        A list of weekdays on which backups are taken. The values in the list must be in [`sun`/`mon`/`tue`/`wed`/`thu`/`fri`/`sat`].
"""
return pulumi.get(self, "weekdays")
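The backup schedule fields above carry two documented format rules: `HH:mm` for `time` and a fixed weekday vocabulary for `weekdays`. A standalone sketch of those checks (the `validate_backup` helper is hypothetical, not part of this SDK):

```python
import re

# Weekday tokens accepted by DatabaseBackup, per the docstrings above.
WEEKDAYS = {"sun", "mon", "tue", "wed", "thu", "fri", "sat"}

def validate_backup(time=None, weekdays=None):
    # `time` must look like HH:mm on a 24-hour clock, e.g. "23:30".
    if time is not None and not re.fullmatch(r"([01]\d|2[0-3]):[0-5]\d", time):
        raise ValueError("time must be formatted as HH:mm")
    if weekdays is not None:
        unknown = set(weekdays) - WEEKDAYS
        if unknown:
            raise ValueError(f"invalid weekdays: {sorted(unknown)}")
    return True

validate_backup(time="01:00", weekdays=["sun", "wed"])
```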
@pulumi.output_type
class DatabaseNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "switchId":
suggest = "switch_id"
elif key == "sourceRanges":
suggest = "source_ranges"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in DatabaseNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
DatabaseNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
DatabaseNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
gateway: str,
ip_address: str,
netmask: int,
switch_id: str,
port: Optional[int] = None,
source_ranges: Optional[Sequence[str]] = None):
"""
        :param str gateway: The IP address of the gateway used by the Database.
:param str ip_address: The IP address to assign to the Database.
:param int netmask: The bit length of the subnet to assign to the Database. This must be in the range [`8`-`29`].
:param str switch_id: The id of the switch to which the Database connects.
        :param int port: The listening port number. This must be in the range [`1024`-`65535`].
        :param Sequence[str] source_ranges: The range of source IP addresses allowed to access the Database over the network.
"""
pulumi.set(__self__, "gateway", gateway)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
if port is not None:
pulumi.set(__self__, "port", port)
if source_ranges is not None:
pulumi.set(__self__, "source_ranges", source_ranges)
@property
@pulumi.getter
def gateway(self) -> str:
"""
        The IP address of the gateway used by the Database.
"""
return pulumi.get(self, "gateway")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address to assign to the Database.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet to assign to the Database. This must be in the range [`8`-`29`].
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch to which the Database connects.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
        The listening port number. This must be in the range [`1024`-`65535`].
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="sourceRanges")
def source_ranges(self) -> Optional[Sequence[str]]:
"""
        The range of source IP addresses allowed to access the Database over the network.
"""
return pulumi.get(self, "source_ranges")
@pulumi.output_type
class DatabaseReadReplicaNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "sourceRanges":
suggest = "source_ranges"
elif key == "switchId":
suggest = "switch_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in DatabaseReadReplicaNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
DatabaseReadReplicaNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
DatabaseReadReplicaNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
gateway: Optional[str] = None,
netmask: Optional[int] = None,
source_ranges: Optional[Sequence[str]] = None,
switch_id: Optional[str] = None):
"""
:param str ip_address: The IP address to assign to the read-replica database.
        :param str gateway: The IP address of the gateway used by the read-replica database. If `gateway` isn't specified, it defaults to the value used by the master database.
        :param int netmask: The bit length of the subnet to assign to the read-replica database. This must be in the range [`8`-`29`]. If `netmask` isn't specified, it defaults to the value used by the master database.
        :param Sequence[str] source_ranges: The range of source IP addresses allowed to access the read-replica database over the network.
        :param str switch_id: The id of the switch to which the read-replica database connects. If `switch_id` isn't specified, it defaults to the value used by the master database.
"""
pulumi.set(__self__, "ip_address", ip_address)
if gateway is not None:
pulumi.set(__self__, "gateway", gateway)
if netmask is not None:
pulumi.set(__self__, "netmask", netmask)
if source_ranges is not None:
pulumi.set(__self__, "source_ranges", source_ranges)
if switch_id is not None:
pulumi.set(__self__, "switch_id", switch_id)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address to assign to the read-replica database.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def gateway(self) -> Optional[str]:
"""
        The IP address of the gateway used by the read-replica database. If `gateway` isn't specified, it defaults to the value used by the master database.
"""
return pulumi.get(self, "gateway")
@property
@pulumi.getter
def netmask(self) -> Optional[int]:
"""
        The bit length of the subnet to assign to the read-replica database. This must be in the range [`8`-`29`]. If `netmask` isn't specified, it defaults to the value used by the master database.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="sourceRanges")
def source_ranges(self) -> Optional[Sequence[str]]:
"""
        The range of source IP addresses allowed to access the read-replica database over the network.
"""
return pulumi.get(self, "source_ranges")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> Optional[str]:
"""
        The id of the switch to which the read-replica database connects. If `switch_id` isn't specified, it defaults to the value used by the master database.
"""
return pulumi.get(self, "switch_id")
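Several optional fields above (`gateway`, `netmask`, `switch_id`) default to the master database's values when omitted. A minimal standalone sketch of that fallback rule (the `resolve_replica_config` helper is hypothetical, not part of this SDK):

```python
# Hypothetical helper mirroring the documented fallback: gateway, netmask and
# switch_id inherit the master database's values when not set on the replica.
def resolve_replica_config(master: dict, replica: dict) -> dict:
    resolved = dict(replica)
    for key in ("gateway", "netmask", "switch_id"):
        if resolved.get(key) is None:
            resolved[key] = master[key]
    return resolved

master = {"gateway": "192.0.2.1", "netmask": 24, "switch_id": "sw-master"}
replica = {"ip_address": "192.0.2.20", "netmask": 28}
resolved = resolve_replica_config(master, replica)
```

Note that only missing (or `None`) keys fall back; values set explicitly on the replica, like `netmask` here, are kept.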
@pulumi.output_type
class GSLBHealthCheck(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "delayLoop":
suggest = "delay_loop"
elif key == "hostHeader":
suggest = "host_header"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in GSLBHealthCheck. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
GSLBHealthCheck.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
GSLBHealthCheck.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
protocol: str,
delay_loop: Optional[int] = None,
host_header: Optional[str] = None,
path: Optional[str] = None,
port: Optional[int] = None,
status: Optional[str] = None):
"""
:param str protocol: The protocol used for health checks. This must be one of [`http`/`https`/`tcp`/`ping`].
:param int delay_loop: The interval in seconds between checks. This must be in the range [`10`-`60`].
:param str host_header: The value of the host header sent when checking by HTTP/HTTPS.
:param str path: The path used when checking by HTTP/HTTPS.
:param int port: The port number used when checking by TCP.
:param str status: The response code to expect when checking by HTTP/HTTPS.
"""
pulumi.set(__self__, "protocol", protocol)
if delay_loop is not None:
pulumi.set(__self__, "delay_loop", delay_loop)
if host_header is not None:
pulumi.set(__self__, "host_header", host_header)
if path is not None:
pulumi.set(__self__, "path", path)
if port is not None:
pulumi.set(__self__, "port", port)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This must be one of [`http`/`https`/`tcp`/`ping`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="delayLoop")
def delay_loop(self) -> Optional[int]:
"""
The interval in seconds between checks. This must be in the range [`10`-`60`].
"""
return pulumi.get(self, "delay_loop")
@property
@pulumi.getter(name="hostHeader")
def host_header(self) -> Optional[str]:
"""
The value of the host header sent when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "host_header")
@property
@pulumi.getter
def path(self) -> Optional[str]:
"""
The path used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
The port number used when checking by TCP.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def status(self) -> Optional[str]:
"""
The response code to expect when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "status")
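# Usage note (illustrative, not part of the generated API surface): these
# output types support dict-style access for backward compatibility, but
# looking up a camelCase key logs a warning via `__key_warning` and the
# snake_case property getter is preferred. For a hypothetical
# `outputs.GSLBHealthCheck` value `check`:
#
#     check["delayLoop"]   # works, but warns and suggests 'delay_loop'
#     check.delay_loop     # preferred property access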
@pulumi.output_type
class GSLBServer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in GSLBServer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
GSLBServer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
GSLBServer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
enabled: Optional[bool] = None,
weight: Optional[int] = None):
"""
:param str ip_address: The IP address of the server.
:param bool enabled: The flag to enable this server as a destination of load balancing.
:param int weight: The weight used when weighted load balancing is enabled. This must be in the range [`1`-`10000`].
"""
pulumi.set(__self__, "ip_address", ip_address)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if weight is not None:
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def enabled(self) -> Optional[bool]:
"""
The flag to enable this server as a destination of load balancing.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def weight(self) -> Optional[int]:
"""
The weight used when weighted load balancing is enabled. This must be in the range [`1`-`10000`].
"""
return pulumi.get(self, "weight")
@pulumi.output_type
class LoadBalancerNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddresses":
suggest = "ip_addresses"
elif key == "switchId":
suggest = "switch_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LoadBalancerNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LoadBalancerNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LoadBalancerNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_addresses: Sequence[str],
netmask: int,
switch_id: str,
vrid: int,
gateway: Optional[str] = None):
"""
:param Sequence[str] ip_addresses: A list of IP addresses to assign to the LoadBalancer.
:param int netmask: The bit length of the subnet assigned to the LoadBalancer. This must be in the range [`8`-`29`].
:param str switch_id: The id of the switch to which the LoadBalancer connects.
:param int vrid: The Virtual Router Identifier.
:param str gateway: The IP address of the gateway used by the LoadBalancer.
"""
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
pulumi.set(__self__, "vrid", vrid)
if gateway is not None:
pulumi.set(__self__, "gateway", gateway)
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
A list of IP addresses to assign to the LoadBalancer.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the LoadBalancer. This must be in the range [`8`-`29`].
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch to which the LoadBalancer connects.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def vrid(self) -> int:
"""
The Virtual Router Identifier.
"""
return pulumi.get(self, "vrid")
@property
@pulumi.getter
def gateway(self) -> Optional[str]:
"""
The IP address of the gateway used by the LoadBalancer.
"""
return pulumi.get(self, "gateway")
@pulumi.output_type
class LoadBalancerVip(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "delayLoop":
suggest = "delay_loop"
elif key == "sorryServer":
suggest = "sorry_server"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LoadBalancerVip. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LoadBalancerVip.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LoadBalancerVip.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
port: int,
vip: str,
delay_loop: Optional[int] = None,
description: Optional[str] = None,
servers: Optional[Sequence['outputs.LoadBalancerVipServer']] = None,
sorry_server: Optional[str] = None):
"""
:param int port: The target port number for load-balancing. This must be in the range [`1`-`65535`].
:param str vip: The virtual IP address.
:param int delay_loop: The interval in seconds between checks. This must be in the range [`10`-`2147483647`].
:param str description: The description of the VIP. The length of this value must be in the range [`1`-`512`].
:param Sequence['LoadBalancerVipServerArgs'] servers: One or more `server` blocks as defined below.
:param str sorry_server: The IP address of the SorryServer. This will be used when all servers under this VIP are down.
"""
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "vip", vip)
if delay_loop is not None:
pulumi.set(__self__, "delay_loop", delay_loop)
if description is not None:
pulumi.set(__self__, "description", description)
if servers is not None:
pulumi.set(__self__, "servers", servers)
if sorry_server is not None:
pulumi.set(__self__, "sorry_server", sorry_server)
@property
@pulumi.getter
def port(self) -> int:
"""
The target port number for load-balancing. This must be in the range [`1`-`65535`].
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def vip(self) -> str:
"""
The virtual IP address.
"""
return pulumi.get(self, "vip")
@property
@pulumi.getter(name="delayLoop")
def delay_loop(self) -> Optional[int]:
"""
The interval in seconds between checks. This must be in the range [`10`-`2147483647`].
"""
return pulumi.get(self, "delay_loop")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the VIP. The length of this value must be in the range [`1`-`512`].
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def servers(self) -> Optional[Sequence['outputs.LoadBalancerVipServer']]:
"""
One or more `server` blocks as defined below.
"""
return pulumi.get(self, "servers")
@property
@pulumi.getter(name="sorryServer")
def sorry_server(self) -> Optional[str]:
"""
The IP address of the SorryServer. This will be used when all servers under this VIP are down.
"""
return pulumi.get(self, "sorry_server")
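# Usage note (illustrative): nested outputs such as the `servers` list above
# are themselves output types, so in a Pulumi program their values are
# typically read inside `Output.apply`. Hypothetical example, assuming a
# LoadBalancer resource `lb` with a `vips` output:
#
#     ips = lb.vips.apply(
#         lambda vips: [s.ip_address for v in vips for s in (v.servers or [])]
#     )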
@pulumi.output_type
class LoadBalancerVipServer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LoadBalancerVipServer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LoadBalancerVipServer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LoadBalancerVipServer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
protocol: str,
enabled: Optional[bool] = None,
path: Optional[str] = None,
status: Optional[str] = None):
"""
:param str ip_address: The IP address of the destination server.
:param str protocol: The protocol used for health checks. This must be one of [`http`/`https`/`tcp`/`ping`].
:param bool enabled: The flag to enable this server as a destination of load balancing.
:param str path: The path used when checking by HTTP/HTTPS.
:param str status: The response code to expect when checking by HTTP/HTTPS.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "protocol", protocol)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if path is not None:
pulumi.set(__self__, "path", path)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the destination server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This must be one of [`http`/`https`/`tcp`/`ping`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def enabled(self) -> Optional[bool]:
"""
The flag to enable this server as a destination of load balancing.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def path(self) -> Optional[str]:
"""
The path used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def status(self) -> Optional[str]:
"""
The response code to expect when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "status")
@pulumi.output_type
class LocalRouterNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddresses":
suggest = "ip_addresses"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LocalRouterNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LocalRouterNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LocalRouterNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_addresses: Sequence[str],
netmask: int,
vip: str,
vrid: int):
"""
:param Sequence[str] ip_addresses: A list of IP addresses to assign to the LocalRouter.
:param int netmask: The bit length of the subnet assigned to the LocalRouter. This must be in the range [`8`-`29`].
:param str vip: The virtual IP address.
:param int vrid: The Virtual Router Identifier.
"""
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "vip", vip)
pulumi.set(__self__, "vrid", vrid)
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
A list of IP addresses to assign to the LocalRouter.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the LocalRouter. This must be in the range [`8`-`29`].
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter
def vip(self) -> str:
"""
The virtual IP address.
"""
return pulumi.get(self, "vip")
@property
@pulumi.getter
def vrid(self) -> int:
"""
The Virtual Router Identifier.
"""
return pulumi.get(self, "vrid")
@pulumi.output_type
class LocalRouterPeer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "peerId":
suggest = "peer_id"
elif key == "secretKey":
suggest = "secret_key"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LocalRouterPeer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LocalRouterPeer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LocalRouterPeer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
peer_id: str,
secret_key: str,
description: Optional[str] = None,
enabled: Optional[bool] = None):
"""
:param str peer_id: The ID of the peer LocalRouter.
:param str secret_key: The secret key of the peer LocalRouter.
:param str description: The description of the LocalRouter. The length of this value must be in the range [`1`-`512`].
:param bool enabled: The flag to enable the LocalRouter.
"""
pulumi.set(__self__, "peer_id", peer_id)
pulumi.set(__self__, "secret_key", secret_key)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
@property
@pulumi.getter(name="peerId")
def peer_id(self) -> str:
"""
The ID of the peer LocalRouter.
"""
return pulumi.get(self, "peer_id")
@property
@pulumi.getter(name="secretKey")
def secret_key(self) -> str:
"""
The secret key of the peer LocalRouter.
"""
return pulumi.get(self, "secret_key")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the LocalRouter. The length of this value must be in the range [`1`-`512`].
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def enabled(self) -> Optional[bool]:
"""
The flag to enable the LocalRouter.
"""
return pulumi.get(self, "enabled")
@pulumi.output_type
class LocalRouterStaticRoute(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "nextHop":
suggest = "next_hop"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LocalRouterStaticRoute. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LocalRouterStaticRoute.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LocalRouterStaticRoute.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
next_hop: str,
prefix: str):
"""
:param str next_hop: The IP address of the next hop.
:param str prefix: The CIDR block of the destination.
"""
pulumi.set(__self__, "next_hop", next_hop)
pulumi.set(__self__, "prefix", prefix)
@property
@pulumi.getter(name="nextHop")
def next_hop(self) -> str:
"""
The IP address of the next hop.
"""
return pulumi.get(self, "next_hop")
@property
@pulumi.getter
def prefix(self) -> str:
"""
The CIDR block of the destination.
"""
return pulumi.get(self, "prefix")
@pulumi.output_type
class LocalRouterSwitch(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "zoneId":
suggest = "zone_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in LocalRouterSwitch. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
LocalRouterSwitch.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
LocalRouterSwitch.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
code: str,
zone_id: str,
category: Optional[str] = None):
"""
:param str code: The resource ID of the Switch.
:param str zone_id: The id of the Zone.
:param str category: The category name of connected services (e.g. `cloud`, `vps`).
"""
pulumi.set(__self__, "code", code)
pulumi.set(__self__, "zone_id", zone_id)
if category is not None:
pulumi.set(__self__, "category", category)
@property
@pulumi.getter
def code(self) -> str:
"""
The resource ID of the Switch.
"""
return pulumi.get(self, "code")
@property
@pulumi.getter(name="zoneId")
def zone_id(self) -> str:
"""
The id of the Zone.
"""
return pulumi.get(self, "zone_id")
@property
@pulumi.getter
def category(self) -> Optional[str]:
"""
The category name of connected services (e.g. `cloud`, `vps`).
"""
return pulumi.get(self, "category")
@pulumi.output_type
class MobileGatewayPrivateNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "switchId":
suggest = "switch_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MobileGatewayPrivateNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MobileGatewayPrivateNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
MobileGatewayPrivateNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
netmask: int,
switch_id: str):
"""
:param str ip_address: The IP address to assign to the MobileGateway.
:param int netmask: The bit length of the subnet to assign to the MobileGateway. This must be in the range [`8`-`29`].
:param str switch_id: The id of the switch to which the MobileGateway connects.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address to assign to the MobileGateway.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet to assign to the MobileGateway. This must be in the range [`8`-`29`].
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch to which the MobileGateway connects.
"""
return pulumi.get(self, "switch_id")
@pulumi.output_type
class MobileGatewaySim(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "simId":
suggest = "sim_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MobileGatewaySim. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MobileGatewaySim.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
MobileGatewaySim.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
sim_id: str):
"""
:param str ip_address: The IP address to assign to the SIM.
:param str sim_id: The id of the SIM to connect to the MobileGateway.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "sim_id", sim_id)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address to assign to the SIM.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="simId")
def sim_id(self) -> str:
"""
The id of the SIM to connect to the MobileGateway.
"""
return pulumi.get(self, "sim_id")
@pulumi.output_type
class MobileGatewaySimRoute(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "simId":
suggest = "sim_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MobileGatewaySimRoute. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MobileGatewaySimRoute.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
MobileGatewaySimRoute.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
prefix: str,
sim_id: str):
"""
:param str prefix: The destination network prefix used by the SIM routing. This must be specified as a CIDR block.
:param str sim_id: The id of the routing destination SIM.
"""
pulumi.set(__self__, "prefix", prefix)
pulumi.set(__self__, "sim_id", sim_id)
@property
@pulumi.getter
def prefix(self) -> str:
"""
The destination network prefix used by the SIM routing. This must be specified as a CIDR block.
"""
return pulumi.get(self, "prefix")
@property
@pulumi.getter(name="simId")
def sim_id(self) -> str:
"""
The id of the routing destination SIM.
"""
return pulumi.get(self, "sim_id")
@pulumi.output_type
class MobileGatewayStaticRoute(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "nextHop":
suggest = "next_hop"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MobileGatewayStaticRoute. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MobileGatewayStaticRoute.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
MobileGatewayStaticRoute.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
next_hop: str,
prefix: str):
"""
:param str next_hop: The IP address of the next hop.
:param str prefix: The destination network prefix used by static routing. This must be specified as a CIDR block.
"""
pulumi.set(__self__, "next_hop", next_hop)
pulumi.set(__self__, "prefix", prefix)
@property
@pulumi.getter(name="nextHop")
def next_hop(self) -> str:
"""
The IP address of the next hop.
"""
return pulumi.get(self, "next_hop")
@property
@pulumi.getter
def prefix(self) -> str:
"""
The destination network prefix used by static routing. This must be specified as a CIDR block.
"""
return pulumi.get(self, "prefix")
@pulumi.output_type
class MobileGatewayTrafficControl(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "autoTrafficShaping":
suggest = "auto_traffic_shaping"
elif key == "bandWidthLimit":
suggest = "band_width_limit"
elif key == "enableEmail":
suggest = "enable_email"
elif key == "enableSlack":
suggest = "enable_slack"
elif key == "slackWebhook":
suggest = "slack_webhook"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in MobileGatewayTrafficControl. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
MobileGatewayTrafficControl.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
MobileGatewayTrafficControl.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
quota: int,
auto_traffic_shaping: Optional[bool] = None,
band_width_limit: Optional[int] = None,
enable_email: Optional[bool] = None,
enable_slack: Optional[bool] = None,
slack_webhook: Optional[str] = None):
"""
:param int quota: The threshold of monthly traffic usage for enabling traffic shaping.
:param bool auto_traffic_shaping: The flag to enable the traffic shaping.
:param int band_width_limit: The bandwidth allowed when the traffic shaping is enabled.
:param bool enable_email: The flag to enable email notification when the traffic shaping is enabled.
:param bool enable_slack: The flag to enable Slack notification when the traffic shaping is enabled.
:param str slack_webhook: The webhook URL used when sending notifications. This is only used when `enable_slack` is set to `true`.
"""
pulumi.set(__self__, "quota", quota)
if auto_traffic_shaping is not None:
pulumi.set(__self__, "auto_traffic_shaping", auto_traffic_shaping)
if band_width_limit is not None:
pulumi.set(__self__, "band_width_limit", band_width_limit)
if enable_email is not None:
pulumi.set(__self__, "enable_email", enable_email)
if enable_slack is not None:
pulumi.set(__self__, "enable_slack", enable_slack)
if slack_webhook is not None:
pulumi.set(__self__, "slack_webhook", slack_webhook)
@property
@pulumi.getter
def quota(self) -> int:
"""
The threshold of monthly traffic usage for enabling traffic shaping.
"""
return pulumi.get(self, "quota")
@property
@pulumi.getter(name="autoTrafficShaping")
def auto_traffic_shaping(self) -> Optional[bool]:
"""
The flag to enable the traffic shaping.
"""
return pulumi.get(self, "auto_traffic_shaping")
@property
@pulumi.getter(name="bandWidthLimit")
def band_width_limit(self) -> Optional[int]:
"""
The bandwidth allowed when the traffic shaping is enabled.
"""
return pulumi.get(self, "band_width_limit")
@property
@pulumi.getter(name="enableEmail")
def enable_email(self) -> Optional[bool]:
"""
The flag to enable email notification when the traffic shaping is enabled.
"""
return pulumi.get(self, "enable_email")
@property
@pulumi.getter(name="enableSlack")
def enable_slack(self) -> Optional[bool]:
"""
The flag to enable Slack notification when the traffic shaping is enabled.
"""
return pulumi.get(self, "enable_slack")
@property
@pulumi.getter(name="slackWebhook")
def slack_webhook(self) -> Optional[str]:
"""
The webhook URL used when sending notifications. This is only used when `enable_slack` is set to `true`.
"""
return pulumi.get(self, "slack_webhook")
@pulumi.output_type
class NFSNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "switchId":
suggest = "switch_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in NFSNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
NFSNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
NFSNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
netmask: int,
switch_id: str,
gateway: Optional[str] = None):
"""
:param str ip_address: The IP address to assign to the NFS.
:param int netmask: The bit length of the subnet to assign to the NFS. This must be in the range [`8`-`29`].
:param str switch_id: The id of the switch to which the NFS connects.
:param str gateway: The IP address of the gateway used by the NFS.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
if gateway is not None:
pulumi.set(__self__, "gateway", gateway)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address to assign to the NFS.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet to assign to the NFS. This must be in the range [`8`-`29`].
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch to which the NFS connects.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def gateway(self) -> Optional[str]:
"""
The IP address of the gateway used by the NFS.
"""
return pulumi.get(self, "gateway")
@pulumi.output_type
class PacketFilterExpression(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "destinationPort":
suggest = "destination_port"
elif key == "sourceNetwork":
suggest = "source_network"
elif key == "sourcePort":
suggest = "source_port"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in PacketFilterExpression. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
PacketFilterExpression.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
PacketFilterExpression.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
protocol: str,
allow: Optional[bool] = None,
description: Optional[str] = None,
destination_port: Optional[str] = None,
source_network: Optional[str] = None,
source_port: Optional[str] = None):
"""
:param str protocol: The protocol used for filtering. This must be one of [`http`/`https`/`tcp`/`udp`/`icmp`/`fragment`/`ip`].
:param bool allow: The flag to allow the packet through the filter.
:param str description: The description of the packetFilter. The length of this value must be in the range [`1`-`512`].
:param str destination_port: A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`).
:param str source_network: A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str source_port: A source port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
pulumi.set(__self__, "protocol", protocol)
if allow is not None:
pulumi.set(__self__, "allow", allow)
if description is not None:
pulumi.set(__self__, "description", description)
if destination_port is not None:
pulumi.set(__self__, "destination_port", destination_port)
if source_network is not None:
pulumi.set(__self__, "source_network", source_network)
if source_port is not None:
pulumi.set(__self__, "source_port", source_port)
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for filtering. This must be one of [`http`/`https`/`tcp`/`udp`/`icmp`/`fragment`/`ip`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def allow(self) -> Optional[bool]:
"""
The flag to allow the packet through the filter.
"""
return pulumi.get(self, "allow")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the packetFilter. The length of this value must be in the range [`1`-`512`].
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="destinationPort")
def destination_port(self) -> Optional[str]:
"""
A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
return pulumi.get(self, "destination_port")
@property
@pulumi.getter(name="sourceNetwork")
def source_network(self) -> Optional[str]:
"""
A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "source_network")
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> Optional[str]:
"""
A source port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
return pulumi.get(self, "source_port")
@pulumi.output_type
class PacketFilterRuleExpression(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "destinationPort":
suggest = "destination_port"
elif key == "sourceNetwork":
suggest = "source_network"
elif key == "sourcePort":
suggest = "source_port"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in PacketFilterRuleExpression. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
PacketFilterRuleExpression.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
PacketFilterRuleExpression.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
protocol: str,
allow: Optional[bool] = None,
description: Optional[str] = None,
destination_port: Optional[str] = None,
source_network: Optional[str] = None,
source_port: Optional[str] = None):
"""
:param str protocol: The protocol used for filtering. This must be one of [`http`/`https`/`tcp`/`udp`/`icmp`/`fragment`/`ip`].
:param bool allow: The flag to allow the packet through the filter.
:param str description: The description of the expression.
:param str destination_port: A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`).
:param str source_network: A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str source_port: A source port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
pulumi.set(__self__, "protocol", protocol)
if allow is not None:
pulumi.set(__self__, "allow", allow)
if description is not None:
pulumi.set(__self__, "description", description)
if destination_port is not None:
pulumi.set(__self__, "destination_port", destination_port)
if source_network is not None:
pulumi.set(__self__, "source_network", source_network)
if source_port is not None:
pulumi.set(__self__, "source_port", source_port)
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for filtering. This must be one of [`http`/`https`/`tcp`/`udp`/`icmp`/`fragment`/`ip`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def allow(self) -> Optional[bool]:
"""
The flag to allow the packet through the filter.
"""
return pulumi.get(self, "allow")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the expression.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="destinationPort")
def destination_port(self) -> Optional[str]:
"""
A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
return pulumi.get(self, "destination_port")
@property
@pulumi.getter(name="sourceNetwork")
def source_network(self) -> Optional[str]:
"""
A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "source_network")
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> Optional[str]:
"""
A source port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
return pulumi.get(self, "source_port")
@pulumi.output_type
class ProxyLBACMECertificate(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "additionalCertificates":
suggest = "additional_certificates"
elif key == "commonName":
suggest = "common_name"
elif key == "intermediateCert":
suggest = "intermediate_cert"
elif key == "privateKey":
suggest = "private_key"
elif key == "serverCert":
suggest = "server_cert"
elif key == "subjectAltNames":
suggest = "subject_alt_names"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBACMECertificate. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBACMECertificate.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBACMECertificate.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
additional_certificates: Optional[Sequence['outputs.ProxyLBACMECertificateAdditionalCertificate']] = None,
common_name: Optional[str] = None,
intermediate_cert: Optional[str] = None,
private_key: Optional[str] = None,
server_cert: Optional[str] = None,
subject_alt_names: Optional[str] = None):
"""
:param Sequence['ProxyLBACMECertificateAdditionalCertificateArgs'] additional_certificates: A list of `additional_certificate` blocks as defined below.
:param str common_name: The FQDN used by ACME. This must be set to a resolvable value. Changing this forces a new resource to be created.
:param str intermediate_cert: The intermediate certificate for a server.
:param str private_key: The private key for a server.
:param str server_cert: The certificate for a server.
:param str subject_alt_names: The subject alternative names used by ACME. Changing this forces a new resource to be created.
"""
if additional_certificates is not None:
pulumi.set(__self__, "additional_certificates", additional_certificates)
if common_name is not None:
pulumi.set(__self__, "common_name", common_name)
if intermediate_cert is not None:
pulumi.set(__self__, "intermediate_cert", intermediate_cert)
if private_key is not None:
pulumi.set(__self__, "private_key", private_key)
if server_cert is not None:
pulumi.set(__self__, "server_cert", server_cert)
if subject_alt_names is not None:
pulumi.set(__self__, "subject_alt_names", subject_alt_names)
@property
@pulumi.getter(name="additionalCertificates")
def additional_certificates(self) -> Optional[Sequence['outputs.ProxyLBACMECertificateAdditionalCertificate']]:
"""
A list of `additional_certificate` blocks as defined below.
"""
return pulumi.get(self, "additional_certificates")
@property
@pulumi.getter(name="commonName")
def common_name(self) -> Optional[str]:
"""
The FQDN used by ACME. This must be set to a resolvable value. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "common_name")
@property
@pulumi.getter(name="intermediateCert")
def intermediate_cert(self) -> Optional[str]:
"""
The intermediate certificate for a server.
"""
return pulumi.get(self, "intermediate_cert")
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> Optional[str]:
"""
The private key for a server.
"""
return pulumi.get(self, "private_key")
@property
@pulumi.getter(name="serverCert")
def server_cert(self) -> Optional[str]:
"""
The certificate for a server.
"""
return pulumi.get(self, "server_cert")
@property
@pulumi.getter(name="subjectAltNames")
def subject_alt_names(self) -> Optional[str]:
"""
The subject alternative names used by ACME. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "subject_alt_names")
@pulumi.output_type
class ProxyLBACMECertificateAdditionalCertificate(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "intermediateCert":
suggest = "intermediate_cert"
elif key == "privateKey":
suggest = "private_key"
elif key == "serverCert":
suggest = "server_cert"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBACMECertificateAdditionalCertificate. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBACMECertificateAdditionalCertificate.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBACMECertificateAdditionalCertificate.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
intermediate_cert: Optional[str] = None,
private_key: Optional[str] = None,
server_cert: Optional[str] = None):
"""
:param str intermediate_cert: The intermediate certificate for a server.
:param str private_key: The private key for a server.
:param str server_cert: The certificate for a server.
"""
if intermediate_cert is not None:
pulumi.set(__self__, "intermediate_cert", intermediate_cert)
if private_key is not None:
pulumi.set(__self__, "private_key", private_key)
if server_cert is not None:
pulumi.set(__self__, "server_cert", server_cert)
@property
@pulumi.getter(name="intermediateCert")
def intermediate_cert(self) -> Optional[str]:
"""
The intermediate certificate for a server.
"""
return pulumi.get(self, "intermediate_cert")
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> Optional[str]:
"""
The private key for a server.
"""
return pulumi.get(self, "private_key")
@property
@pulumi.getter(name="serverCert")
def server_cert(self) -> Optional[str]:
"""
The certificate for a server.
"""
return pulumi.get(self, "server_cert")
@pulumi.output_type
class ProxyLBBindPort(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "proxyMode":
suggest = "proxy_mode"
elif key == "redirectToHttps":
suggest = "redirect_to_https"
elif key == "responseHeaders":
suggest = "response_headers"
elif key == "sslPolicy":
suggest = "ssl_policy"
elif key == "supportHttp2":
suggest = "support_http2"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBBindPort. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBBindPort.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBBindPort.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
proxy_mode: str,
port: Optional[int] = None,
redirect_to_https: Optional[bool] = None,
response_headers: Optional[Sequence['outputs.ProxyLBBindPortResponseHeader']] = None,
ssl_policy: Optional[str] = None,
support_http2: Optional[bool] = None):
"""
:param str proxy_mode: The proxy mode. This must be one of [`http`/`https`/`tcp`].
:param int port: The listening port number.
:param bool redirect_to_https: The flag to enable redirection from HTTP to HTTPS. This flag is used only when `proxy_mode` is `http`.
:param Sequence['ProxyLBBindPortResponseHeaderArgs'] response_headers: One or more `response_header` blocks as defined below.
:param str ssl_policy: The SSL policy. This must be one of [`TLS-1-2-2019-04`/`TLS-1-2-2021-06`/`TLS-1-3-2021-06`].
:param bool support_http2: The flag to enable HTTP/2. This flag is used only when `proxy_mode` is `https`.
"""
pulumi.set(__self__, "proxy_mode", proxy_mode)
if port is not None:
pulumi.set(__self__, "port", port)
if redirect_to_https is not None:
pulumi.set(__self__, "redirect_to_https", redirect_to_https)
if response_headers is not None:
pulumi.set(__self__, "response_headers", response_headers)
if ssl_policy is not None:
pulumi.set(__self__, "ssl_policy", ssl_policy)
if support_http2 is not None:
pulumi.set(__self__, "support_http2", support_http2)
@property
@pulumi.getter(name="proxyMode")
def proxy_mode(self) -> str:
"""
The proxy mode. This must be one of [`http`/`https`/`tcp`].
"""
return pulumi.get(self, "proxy_mode")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
The listening port number.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="redirectToHttps")
def redirect_to_https(self) -> Optional[bool]:
"""
The flag to enable redirection from HTTP to HTTPS. This flag is used only when `proxy_mode` is `http`.
"""
return pulumi.get(self, "redirect_to_https")
@property
@pulumi.getter(name="responseHeaders")
def response_headers(self) -> Optional[Sequence['outputs.ProxyLBBindPortResponseHeader']]:
"""
One or more `response_header` blocks as defined below.
"""
return pulumi.get(self, "response_headers")
@property
@pulumi.getter(name="sslPolicy")
def ssl_policy(self) -> Optional[str]:
"""
The SSL policy. This must be one of [`TLS-1-2-2019-04`/`TLS-1-2-2021-06`/`TLS-1-3-2021-06`].
"""
return pulumi.get(self, "ssl_policy")
@property
@pulumi.getter(name="supportHttp2")
def support_http2(self) -> Optional[bool]:
"""
The flag to enable HTTP/2. This flag is used only when `proxy_mode` is `https`.
"""
return pulumi.get(self, "support_http2")
@pulumi.output_type
class ProxyLBBindPortResponseHeader(dict):
def __init__(__self__, *,
header: str,
value: str):
"""
:param str header: The field name of the HTTP header added to responses by the ProxyLB.
:param str value: The field value of the HTTP header added to responses by the ProxyLB.
"""
pulumi.set(__self__, "header", header)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def header(self) -> str:
"""
The field name of the HTTP header added to responses by the ProxyLB.
"""
return pulumi.get(self, "header")
@property
@pulumi.getter
def value(self) -> str:
"""
The field value of the HTTP header added to responses by the ProxyLB.
"""
return pulumi.get(self, "value")
@pulumi.output_type
class ProxyLBCertificate(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "additionalCertificates":
suggest = "additional_certificates"
elif key == "commonName":
suggest = "common_name"
elif key == "intermediateCert":
suggest = "intermediate_cert"
elif key == "privateKey":
suggest = "private_key"
elif key == "serverCert":
suggest = "server_cert"
elif key == "subjectAltNames":
suggest = "subject_alt_names"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBCertificate. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBCertificate.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBCertificate.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
additional_certificates: Optional[Sequence['outputs.ProxyLBCertificateAdditionalCertificate']] = None,
common_name: Optional[str] = None,
intermediate_cert: Optional[str] = None,
private_key: Optional[str] = None,
server_cert: Optional[str] = None,
subject_alt_names: Optional[str] = None):
"""
:param Sequence['ProxyLBCertificateAdditionalCertificateArgs'] additional_certificates: One or more `additional_certificate` blocks as defined below.
:param str common_name: The common name of the certificate.
:param str intermediate_cert: The intermediate certificate for a server.
:param str private_key: The private key for a server.
:param str server_cert: The certificate for a server.
:param str subject_alt_names: The subject alternative names of the certificate.
"""
if additional_certificates is not None:
pulumi.set(__self__, "additional_certificates", additional_certificates)
if common_name is not None:
pulumi.set(__self__, "common_name", common_name)
if intermediate_cert is not None:
pulumi.set(__self__, "intermediate_cert", intermediate_cert)
if private_key is not None:
pulumi.set(__self__, "private_key", private_key)
if server_cert is not None:
pulumi.set(__self__, "server_cert", server_cert)
if subject_alt_names is not None:
pulumi.set(__self__, "subject_alt_names", subject_alt_names)
@property
@pulumi.getter(name="additionalCertificates")
def additional_certificates(self) -> Optional[Sequence['outputs.ProxyLBCertificateAdditionalCertificate']]:
"""
One or more `additional_certificate` blocks as defined below.
"""
return pulumi.get(self, "additional_certificates")
@property
@pulumi.getter(name="commonName")
def common_name(self) -> Optional[str]:
"""
The common name of the certificate.
"""
return pulumi.get(self, "common_name")
@property
@pulumi.getter(name="intermediateCert")
def intermediate_cert(self) -> Optional[str]:
"""
The intermediate certificate for a server.
"""
return pulumi.get(self, "intermediate_cert")
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> Optional[str]:
"""
The private key for a server.
"""
return pulumi.get(self, "private_key")
@property
@pulumi.getter(name="serverCert")
def server_cert(self) -> Optional[str]:
"""
The certificate for a server.
"""
return pulumi.get(self, "server_cert")
@property
@pulumi.getter(name="subjectAltNames")
def subject_alt_names(self) -> Optional[str]:
"""
The subject alternative names of the certificate.
"""
return pulumi.get(self, "subject_alt_names")
@pulumi.output_type
class ProxyLBCertificateAdditionalCertificate(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "privateKey":
suggest = "private_key"
elif key == "serverCert":
suggest = "server_cert"
elif key == "intermediateCert":
suggest = "intermediate_cert"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBCertificateAdditionalCertificate. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBCertificateAdditionalCertificate.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBCertificateAdditionalCertificate.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
private_key: str,
server_cert: str,
intermediate_cert: Optional[str] = None):
"""
:param str private_key: The private key for a server.
:param str server_cert: The certificate for a server.
:param str intermediate_cert: The intermediate certificate for a server.
"""
pulumi.set(__self__, "private_key", private_key)
pulumi.set(__self__, "server_cert", server_cert)
if intermediate_cert is not None:
pulumi.set(__self__, "intermediate_cert", intermediate_cert)
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> str:
"""
The private key for a server.
"""
return pulumi.get(self, "private_key")
@property
@pulumi.getter(name="serverCert")
def server_cert(self) -> str:
"""
The certificate for a server.
"""
return pulumi.get(self, "server_cert")
@property
@pulumi.getter(name="intermediateCert")
def intermediate_cert(self) -> Optional[str]:
"""
The intermediate certificate for a server.
"""
return pulumi.get(self, "intermediate_cert")
@pulumi.output_type
class ProxyLBHealthCheck(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "delayLoop":
suggest = "delay_loop"
elif key == "hostHeader":
suggest = "host_header"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBHealthCheck. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBHealthCheck.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBHealthCheck.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
protocol: str,
delay_loop: Optional[int] = None,
host_header: Optional[str] = None,
path: Optional[str] = None,
port: Optional[int] = None):
"""
:param str protocol: The protocol used for health checks. This must be one of [`http`/`tcp`].
:param int delay_loop: The interval in seconds between checks. This must be in the range [`10`-`60`].
:param str host_header: The value of the host header sent when checking via HTTP.
:param str path: The path used when checking via HTTP.
:param int port: The port number used when checking via TCP.
"""
pulumi.set(__self__, "protocol", protocol)
if delay_loop is not None:
pulumi.set(__self__, "delay_loop", delay_loop)
if host_header is not None:
pulumi.set(__self__, "host_header", host_header)
if path is not None:
pulumi.set(__self__, "path", path)
if port is not None:
pulumi.set(__self__, "port", port)
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This must be one of [`http`/`tcp`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="delayLoop")
def delay_loop(self) -> Optional[int]:
"""
The interval in seconds between checks. This must be in the range [`10`-`60`].
"""
return pulumi.get(self, "delay_loop")
@property
@pulumi.getter(name="hostHeader")
def host_header(self) -> Optional[str]:
"""
The value of the host header sent when checking via HTTP.
"""
return pulumi.get(self, "host_header")
@property
@pulumi.getter
def path(self) -> Optional[str]:
"""
The path used when checking via HTTP.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
The port number used when checking via TCP.
"""
return pulumi.get(self, "port")
@pulumi.output_type
class ProxyLBRule(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "fixedContentType":
suggest = "fixed_content_type"
elif key == "fixedMessageBody":
suggest = "fixed_message_body"
elif key == "fixedStatusCode":
suggest = "fixed_status_code"
elif key == "redirectLocation":
suggest = "redirect_location"
elif key == "redirectStatusCode":
suggest = "redirect_status_code"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBRule. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBRule.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBRule.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
action: Optional[str] = None,
fixed_content_type: Optional[str] = None,
fixed_message_body: Optional[str] = None,
fixed_status_code: Optional[str] = None,
group: Optional[str] = None,
host: Optional[str] = None,
path: Optional[str] = None,
redirect_location: Optional[str] = None,
redirect_status_code: Optional[str] = None):
"""
:param str action: The type of action to be performed when a request matches the rule. This must be one of [`forward`/`redirect`/`fixed`]. Default: `forward`.
:param str fixed_content_type: The Content-Type header value for the fixed response sent when a request matches the rule. This must be one of [`text/plain`/`text/html`/`application/javascript`/`application/json`].
:param str fixed_message_body: The content body for the fixed response sent when a request matches the rule.
:param str fixed_status_code: The HTTP status code for the fixed response sent when a request matches the rule. This must be one of [`200`/`403`/`503`].
:param str group: The name of the load balancing group. When the proxyLB receives a request matching `host` and `path`, it forwards the request to servers with the same group name. The length of this value must be in the range [`1`-`10`].
:param str host: The value of the HTTP host header used as a condition for rule-based balancing.
:param str path: The request path used as a condition for rule-based balancing.
:param str redirect_location: The URL to redirect to when a request matches the rule. See https://manual.sakura.ad.jp/cloud/appliance/enhanced-lb/#enhanced-lb-rule for details.
:param str redirect_status_code: The HTTP status code for redirects sent when a request matches the rule. This must be one of [`301`/`302`].
"""
if action is not None:
pulumi.set(__self__, "action", action)
if fixed_content_type is not None:
pulumi.set(__self__, "fixed_content_type", fixed_content_type)
if fixed_message_body is not None:
pulumi.set(__self__, "fixed_message_body", fixed_message_body)
if fixed_status_code is not None:
pulumi.set(__self__, "fixed_status_code", fixed_status_code)
if group is not None:
pulumi.set(__self__, "group", group)
if host is not None:
pulumi.set(__self__, "host", host)
if path is not None:
pulumi.set(__self__, "path", path)
if redirect_location is not None:
pulumi.set(__self__, "redirect_location", redirect_location)
if redirect_status_code is not None:
pulumi.set(__self__, "redirect_status_code", redirect_status_code)
@property
@pulumi.getter
def action(self) -> Optional[str]:
"""
The type of action to be performed when a request matches the rule. This must be one of [`forward`/`redirect`/`fixed`]. Default: `forward`.
"""
return pulumi.get(self, "action")
@property
@pulumi.getter(name="fixedContentType")
def fixed_content_type(self) -> Optional[str]:
"""
The Content-Type header value for the fixed response sent when a request matches the rule. This must be one of [`text/plain`/`text/html`/`application/javascript`/`application/json`].
"""
return pulumi.get(self, "fixed_content_type")
@property
@pulumi.getter(name="fixedMessageBody")
def fixed_message_body(self) -> Optional[str]:
"""
The content body for the fixed response sent when a request matches the rule.
"""
return pulumi.get(self, "fixed_message_body")
@property
@pulumi.getter(name="fixedStatusCode")
def fixed_status_code(self) -> Optional[str]:
"""
The HTTP status code for the fixed response sent when a request matches the rule. This must be one of [`200`/`403`/`503`].
"""
return pulumi.get(self, "fixed_status_code")
@property
@pulumi.getter
def group(self) -> Optional[str]:
"""
The name of the load balancing group. When the proxyLB receives a request matching `host` and `path`, it forwards the request to servers with the same group name. The length of this value must be in the range [`1`-`10`].
"""
return pulumi.get(self, "group")
@property
@pulumi.getter
def host(self) -> Optional[str]:
"""
The value of the HTTP host header used as a condition for rule-based balancing.
"""
return pulumi.get(self, "host")
@property
@pulumi.getter
def path(self) -> Optional[str]:
"""
The request path used as a condition for rule-based balancing.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter(name="redirectLocation")
def redirect_location(self) -> Optional[str]:
"""
The URL to redirect to when a request matches the rule. See https://manual.sakura.ad.jp/cloud/appliance/enhanced-lb/#enhanced-lb-rule for details.
"""
return pulumi.get(self, "redirect_location")
@property
@pulumi.getter(name="redirectStatusCode")
def redirect_status_code(self) -> Optional[str]:
"""
The HTTP status code for redirects sent when a request matches the rule. This must be one of [`301`/`302`].
"""
return pulumi.get(self, "redirect_status_code")
@pulumi.output_type
class ProxyLBServer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBServer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBServer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBServer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
port: int,
enabled: Optional[bool] = None,
group: Optional[str] = None):
"""
:param str ip_address: The IP address of the destination server.
:param int port: The port number of the destination server. This must be in the range [`1`-`65535`].
:param bool enabled: The flag to enable the server as a destination for load balancing.
:param str group: The name of the load balancing group used for rule-based load balancing. The length of this value must be in the range [`1`-`10`].
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "port", port)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if group is not None:
pulumi.set(__self__, "group", group)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the destination server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def port(self) -> int:
"""
The port number of the destination server. This must be in the range [`1`-`65535`].
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def enabled(self) -> Optional[bool]:
"""
The flag to enable the server as a destination for load balancing.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def group(self) -> Optional[str]:
"""
The name of the load balancing group used for rule-based load balancing. The length of this value must be in the range [`1`-`10`].
"""
return pulumi.get(self, "group")
@pulumi.output_type
class ProxyLBSorryServer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ProxyLBSorryServer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ProxyLBSorryServer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ProxyLBSorryServer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
port: Optional[int] = None):
"""
:param str ip_address: The IP address of the SorryServer. This will be used when all servers are down.
:param int port: The port number of the SorryServer. This will be used when all servers are down.
"""
pulumi.set(__self__, "ip_address", ip_address)
if port is not None:
pulumi.set(__self__, "port", port)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the SorryServer. This will be used when all servers are down.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
The port number of the SorryServer. This will be used when all servers are down.
"""
return pulumi.get(self, "port")
@pulumi.output_type
class ProxyLBSyslog(dict):
def __init__(__self__, *,
port: Optional[int] = None,
server: Optional[str] = None):
"""
:param int port: The syslog port number.
:param str server: The address of the syslog server.
"""
if port is not None:
pulumi.set(__self__, "port", port)
if server is not None:
pulumi.set(__self__, "server", server)
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
The syslog port number.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def server(self) -> Optional[str]:
"""
The address of the syslog server.
"""
return pulumi.get(self, "server")
@pulumi.output_type
class ServerDiskEditParameter(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "changePartitionUuid":
suggest = "change_partition_uuid"
elif key == "disablePwAuth":
suggest = "disable_pw_auth"
elif key == "enableDhcp":
suggest = "enable_dhcp"
elif key == "ipAddress":
suggest = "ip_address"
elif key == "noteIds":
suggest = "note_ids"
elif key == "sshKeyIds":
suggest = "ssh_key_ids"
elif key == "sshKeys":
suggest = "ssh_keys"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ServerDiskEditParameter. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ServerDiskEditParameter.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ServerDiskEditParameter.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
change_partition_uuid: Optional[bool] = None,
disable_pw_auth: Optional[bool] = None,
enable_dhcp: Optional[bool] = None,
gateway: Optional[str] = None,
hostname: Optional[str] = None,
ip_address: Optional[str] = None,
netmask: Optional[int] = None,
note_ids: Optional[Sequence[str]] = None,
notes: Optional[Sequence['outputs.ServerDiskEditParameterNote']] = None,
password: Optional[str] = None,
ssh_key_ids: Optional[Sequence[str]] = None,
ssh_keys: Optional[Sequence[str]] = None):
"""
:param bool change_partition_uuid: The flag to change the partition UUID.
:param bool disable_pw_auth: The flag to disable password authentication.
:param bool enable_dhcp: The flag to enable DHCP client.
:param str gateway: The gateway address used by the Server.
:param str hostname: The hostname of the Server. The length of this value must be in the range [`1`-`64`].
:param str ip_address: The IP address to assign to the Server.
:param int netmask: The bit length of the subnet to assign to the Server.
:param Sequence[str] note_ids: A list of Note ids.
Note: **`note_ids` will be removed in a future version. Please use `note` blocks instead.**
:param Sequence['ServerDiskEditParameterNoteArgs'] notes: A list of `note` blocks as defined below.
:param str password: The password of the default user. The length of this value must be in the range [`8`-`64`].
:param Sequence[str] ssh_key_ids: A list of SSHKey ids.
:param Sequence[str] ssh_keys: A list of SSHKey texts.
"""
if change_partition_uuid is not None:
pulumi.set(__self__, "change_partition_uuid", change_partition_uuid)
if disable_pw_auth is not None:
pulumi.set(__self__, "disable_pw_auth", disable_pw_auth)
if enable_dhcp is not None:
pulumi.set(__self__, "enable_dhcp", enable_dhcp)
if gateway is not None:
pulumi.set(__self__, "gateway", gateway)
if hostname is not None:
pulumi.set(__self__, "hostname", hostname)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if netmask is not None:
pulumi.set(__self__, "netmask", netmask)
if note_ids is not None:
pulumi.set(__self__, "note_ids", note_ids)
if notes is not None:
pulumi.set(__self__, "notes", notes)
if password is not None:
pulumi.set(__self__, "password", password)
if ssh_key_ids is not None:
pulumi.set(__self__, "ssh_key_ids", ssh_key_ids)
if ssh_keys is not None:
pulumi.set(__self__, "ssh_keys", ssh_keys)
@property
@pulumi.getter(name="changePartitionUuid")
def change_partition_uuid(self) -> Optional[bool]:
"""
The flag to change the partition UUID.
"""
return pulumi.get(self, "change_partition_uuid")
@property
@pulumi.getter(name="disablePwAuth")
def disable_pw_auth(self) -> Optional[bool]:
"""
The flag to disable password authentication.
"""
return pulumi.get(self, "disable_pw_auth")
@property
@pulumi.getter(name="enableDhcp")
def enable_dhcp(self) -> Optional[bool]:
"""
The flag to enable DHCP client.
"""
return pulumi.get(self, "enable_dhcp")
@property
@pulumi.getter
def gateway(self) -> Optional[str]:
"""
The gateway address used by the Server.
"""
return pulumi.get(self, "gateway")
@property
@pulumi.getter
def hostname(self) -> Optional[str]:
"""
The hostname of the Server. The length of this value must be in the range [`1`-`64`].
"""
return pulumi.get(self, "hostname")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The IP address to assign to the Server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def netmask(self) -> Optional[int]:
"""
The bit length of the subnet to assign to the Server.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="noteIds")
def note_ids(self) -> Optional[Sequence[str]]:
"""
A list of Note ids.
Note: **`note_ids` will be removed in a future version. Please use `note` instead.**
"""
return pulumi.get(self, "note_ids")
@property
@pulumi.getter
def notes(self) -> Optional[Sequence['outputs.ServerDiskEditParameterNote']]:
"""
A list of `note` blocks as defined below.
"""
return pulumi.get(self, "notes")
@property
@pulumi.getter
def password(self) -> Optional[str]:
"""
The password of the default user. The length of this value must be in the range [`8`-`64`].
"""
return pulumi.get(self, "password")
@property
@pulumi.getter(name="sshKeyIds")
def ssh_key_ids(self) -> Optional[Sequence[str]]:
"""
A list of SSHKey ids.
"""
return pulumi.get(self, "ssh_key_ids")
@property
@pulumi.getter(name="sshKeys")
def ssh_keys(self) -> Optional[Sequence[str]]:
"""
A list of SSHKey texts.
"""
return pulumi.get(self, "ssh_keys")
@pulumi.output_type
class ServerDiskEditParameterNote(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "apiKeyId":
suggest = "api_key_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ServerDiskEditParameterNote. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ServerDiskEditParameterNote.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ServerDiskEditParameterNote.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
id: str,
api_key_id: Optional[str] = None,
variables: Optional[Mapping[str, str]] = None):
"""
:param str id: The id of the Note/StartupScript.
:param str api_key_id: The id of the API key to be injected into the Note/StartupScript when editing the disk.
:param Mapping[str, str] variables: The values of the variables to be injected into the Note/StartupScript when editing the disk.
"""
pulumi.set(__self__, "id", id)
if api_key_id is not None:
pulumi.set(__self__, "api_key_id", api_key_id)
if variables is not None:
pulumi.set(__self__, "variables", variables)
@property
@pulumi.getter
def id(self) -> str:
"""
The id of the Note/StartupScript.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="apiKeyId")
def api_key_id(self) -> Optional[str]:
"""
The id of the API key to be injected into the Note/StartupScript when editing the disk.
"""
return pulumi.get(self, "api_key_id")
@property
@pulumi.getter
def variables(self) -> Optional[Mapping[str, str]]:
"""
The values of the variables to be injected into the Note/StartupScript when editing the disk.
"""
return pulumi.get(self, "variables")
@pulumi.output_type
class ServerNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "macAddress":
suggest = "mac_address"
elif key == "packetFilterId":
suggest = "packet_filter_id"
elif key == "userIpAddress":
suggest = "user_ip_address"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ServerNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ServerNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
ServerNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
upstream: str,
mac_address: Optional[str] = None,
packet_filter_id: Optional[str] = None,
user_ip_address: Optional[str] = None):
"""
:param str upstream: The upstream type or upstream switch id. This must be one of [`shared`/`disconnect`/`<switch id>`].
:param str mac_address: The MAC address.
:param str packet_filter_id: The id of the packet filter to attach to the network interface.
:param str user_ip_address: The IP address used only for display. This value doesn't affect the actual NIC settings.
"""
pulumi.set(__self__, "upstream", upstream)
if mac_address is not None:
pulumi.set(__self__, "mac_address", mac_address)
if packet_filter_id is not None:
pulumi.set(__self__, "packet_filter_id", packet_filter_id)
if user_ip_address is not None:
pulumi.set(__self__, "user_ip_address", user_ip_address)
@property
@pulumi.getter
def upstream(self) -> str:
"""
The upstream type or upstream switch id. This must be one of [`shared`/`disconnect`/`<switch id>`].
"""
return pulumi.get(self, "upstream")
@property
@pulumi.getter(name="macAddress")
def mac_address(self) -> Optional[str]:
"""
The MAC address.
"""
return pulumi.get(self, "mac_address")
@property
@pulumi.getter(name="packetFilterId")
def packet_filter_id(self) -> Optional[str]:
"""
The id of the packet filter to attach to the network interface.
"""
return pulumi.get(self, "packet_filter_id")
@property
@pulumi.getter(name="userIpAddress")
def user_ip_address(self) -> Optional[str]:
"""
The IP address used only for display. This value doesn't affect the actual NIC settings.
"""
return pulumi.get(self, "user_ip_address")
@pulumi.output_type
class SimpleMonitorHealthCheck(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "containsString":
suggest = "contains_string"
elif key == "excepctedData":
suggest = "excepcted_data"
elif key == "hostHeader":
suggest = "host_header"
elif key == "remainingDays":
suggest = "remaining_days"
elif key == "snmpVersion":
suggest = "snmp_version"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in SimpleMonitorHealthCheck. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
SimpleMonitorHealthCheck.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
SimpleMonitorHealthCheck.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
protocol: str,
community: Optional[str] = None,
contains_string: Optional[str] = None,
excepcted_data: Optional[str] = None,
ftps: Optional[str] = None,
host_header: Optional[str] = None,
http2: Optional[bool] = None,
oid: Optional[str] = None,
password: Optional[str] = None,
path: Optional[str] = None,
port: Optional[int] = None,
qname: Optional[str] = None,
remaining_days: Optional[int] = None,
sni: Optional[bool] = None,
snmp_version: Optional[str] = None,
status: Optional[int] = None,
username: Optional[str] = None):
"""
:param str protocol: The protocol used for health checks. This must be one of [`http`/`https`/`ping`/`tcp`/`dns`/`ssh`/`smtp`/`pop3`/`snmp`/`sslcertificate`/`ftp`].
:param str community: The SNMP community string used when checking by SNMP.
:param str contains_string: The string that should be included in the response body when checking for HTTP/HTTPS.
:param str excepcted_data: The expected value used when checking by DNS.
:param str ftps: The method of invoking security for monitoring with FTPS. This must be one of [``/`implicit`/`explicit`].
:param str host_header: The value of the host header sent when checking by HTTP/HTTPS.
:param bool http2: The flag to enable HTTP/2 when checking by HTTPS.
:param str oid: The SNMP OID used when checking by SNMP.
:param str password: The password for basic auth used when checking by HTTP/HTTPS.
:param str path: The path used when checking by HTTP/HTTPS.
:param int port: The target port number.
:param str qname: The FQDN used when checking by DNS.
:param int remaining_days: The number of remaining days until certificate expiration used when checking SSL certificates. This must be in the range [`1`-`9999`].
:param bool sni: The flag to enable SNI when checking by HTTP/HTTPS.
:param str snmp_version: The SNMP version used when checking by SNMP. This must be one of `1`/`2c`.
:param int status: The response-code to expect when checking by HTTP/HTTPS.
:param str username: The user name for basic auth used when checking by HTTP/HTTPS.
"""
pulumi.set(__self__, "protocol", protocol)
if community is not None:
pulumi.set(__self__, "community", community)
if contains_string is not None:
pulumi.set(__self__, "contains_string", contains_string)
if excepcted_data is not None:
pulumi.set(__self__, "excepcted_data", excepcted_data)
if ftps is not None:
pulumi.set(__self__, "ftps", ftps)
if host_header is not None:
pulumi.set(__self__, "host_header", host_header)
if http2 is not None:
pulumi.set(__self__, "http2", http2)
if oid is not None:
pulumi.set(__self__, "oid", oid)
if password is not None:
pulumi.set(__self__, "password", password)
if path is not None:
pulumi.set(__self__, "path", path)
if port is not None:
pulumi.set(__self__, "port", port)
if qname is not None:
pulumi.set(__self__, "qname", qname)
if remaining_days is not None:
pulumi.set(__self__, "remaining_days", remaining_days)
if sni is not None:
pulumi.set(__self__, "sni", sni)
if snmp_version is not None:
pulumi.set(__self__, "snmp_version", snmp_version)
if status is not None:
pulumi.set(__self__, "status", status)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This must be one of [`http`/`https`/`ping`/`tcp`/`dns`/`ssh`/`smtp`/`pop3`/`snmp`/`sslcertificate`/`ftp`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def community(self) -> Optional[str]:
"""
The SNMP community string used when checking by SNMP.
"""
return pulumi.get(self, "community")
@property
@pulumi.getter(name="containsString")
def contains_string(self) -> Optional[str]:
"""
The string that should be included in the response body when checking for HTTP/HTTPS.
"""
return pulumi.get(self, "contains_string")
@property
@pulumi.getter(name="excepctedData")
def excepcted_data(self) -> Optional[str]:
"""
The expected value used when checking by DNS.
"""
return pulumi.get(self, "excepcted_data")
@property
@pulumi.getter
def ftps(self) -> Optional[str]:
"""
The method of invoking security for monitoring with FTPS. This must be one of [``/`implicit`/`explicit`].
"""
return pulumi.get(self, "ftps")
@property
@pulumi.getter(name="hostHeader")
def host_header(self) -> Optional[str]:
"""
The value of the host header sent when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "host_header")
@property
@pulumi.getter
def http2(self) -> Optional[bool]:
"""
The flag to enable HTTP/2 when checking by HTTPS.
"""
return pulumi.get(self, "http2")
@property
@pulumi.getter
def oid(self) -> Optional[str]:
"""
The SNMP OID used when checking by SNMP.
"""
return pulumi.get(self, "oid")
@property
@pulumi.getter
def password(self) -> Optional[str]:
"""
The password for basic auth used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "password")
@property
@pulumi.getter
def path(self) -> Optional[str]:
"""
The path used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def port(self) -> Optional[int]:
"""
The target port number.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def qname(self) -> Optional[str]:
"""
The FQDN used when checking by DNS.
"""
return pulumi.get(self, "qname")
@property
@pulumi.getter(name="remainingDays")
def remaining_days(self) -> Optional[int]:
"""
The number of remaining days until certificate expiration used when checking SSL certificates. This must be in the range [`1`-`9999`].
"""
return pulumi.get(self, "remaining_days")
@property
@pulumi.getter
def sni(self) -> Optional[bool]:
"""
The flag to enable SNI when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "sni")
@property
@pulumi.getter(name="snmpVersion")
def snmp_version(self) -> Optional[str]:
"""
The SNMP version used when checking by SNMP. This must be one of `1`/`2c`.
"""
return pulumi.get(self, "snmp_version")
@property
@pulumi.getter
def status(self) -> Optional[int]:
"""
The response-code to expect when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def username(self) -> Optional[str]:
"""
The user name for basic auth used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "username")
@pulumi.output_type
class VPCRouterDhcpServer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "interfaceIndex":
suggest = "interface_index"
elif key == "rangeStart":
suggest = "range_start"
elif key == "rangeStop":
suggest = "range_stop"
elif key == "dnsServers":
suggest = "dns_servers"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterDhcpServer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterDhcpServer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterDhcpServer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
interface_index: int,
range_start: str,
range_stop: str,
dns_servers: Optional[Sequence[str]] = None):
"""
:param int interface_index: The index of the network interface on which to enable the DHCP service. This must be in the range [`1`-`7`].
:param str range_start: The start value of the IP address range to assign to DHCP clients.
:param str range_stop: The end value of the IP address range to assign to DHCP clients.
:param Sequence[str] dns_servers: A list of IP addresses of DNS servers to assign to DHCP clients.
"""
pulumi.set(__self__, "interface_index", interface_index)
pulumi.set(__self__, "range_start", range_start)
pulumi.set(__self__, "range_stop", range_stop)
if dns_servers is not None:
pulumi.set(__self__, "dns_servers", dns_servers)
@property
@pulumi.getter(name="interfaceIndex")
def interface_index(self) -> int:
"""
The index of the network interface on which to enable the DHCP service. This must be in the range [`1`-`7`].
"""
return pulumi.get(self, "interface_index")
@property
@pulumi.getter(name="rangeStart")
def range_start(self) -> str:
"""
The start value of the IP address range to assign to DHCP clients.
"""
return pulumi.get(self, "range_start")
@property
@pulumi.getter(name="rangeStop")
def range_stop(self) -> str:
"""
The end value of the IP address range to assign to DHCP clients.
"""
return pulumi.get(self, "range_stop")
@property
@pulumi.getter(name="dnsServers")
def dns_servers(self) -> Optional[Sequence[str]]:
"""
A list of IP addresses of DNS servers to assign to DHCP clients.
"""
return pulumi.get(self, "dns_servers")
@pulumi.output_type
class VPCRouterDhcpStaticMapping(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "macAddress":
suggest = "mac_address"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterDhcpStaticMapping. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterDhcpStaticMapping.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterDhcpStaticMapping.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
mac_address: str):
"""
:param str ip_address: The static IP address to assign to the DHCP client.
:param str mac_address: The source MAC address of the static mapping.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "mac_address", mac_address)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The static IP address to assign to the DHCP client.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="macAddress")
def mac_address(self) -> str:
"""
The source MAC address of the static mapping.
"""
return pulumi.get(self, "mac_address")
@pulumi.output_type
class VPCRouterFirewall(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "interfaceIndex":
suggest = "interface_index"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterFirewall. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterFirewall.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterFirewall.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
direction: str,
expressions: Sequence['outputs.VPCRouterFirewallExpression'],
interface_index: Optional[int] = None):
"""
:param str direction: The direction to apply the firewall. This must be one of [`send`/`receive`].
:param Sequence['VPCRouterFirewallExpressionArgs'] expressions: One or more `expression` blocks as defined below.
:param int interface_index: The index of the network interface on which to enable filtering. This must be in the range [`0`-`7`].
"""
pulumi.set(__self__, "direction", direction)
pulumi.set(__self__, "expressions", expressions)
if interface_index is not None:
pulumi.set(__self__, "interface_index", interface_index)
@property
@pulumi.getter
def direction(self) -> str:
"""
The direction to apply the firewall. This must be one of [`send`/`receive`].
"""
return pulumi.get(self, "direction")
@property
@pulumi.getter
def expressions(self) -> Sequence['outputs.VPCRouterFirewallExpression']:
"""
One or more `expression` blocks as defined below.
"""
return pulumi.get(self, "expressions")
@property
@pulumi.getter(name="interfaceIndex")
def interface_index(self) -> Optional[int]:
"""
The index of the network interface on which to enable filtering. This must be in the range [`0`-`7`].
"""
return pulumi.get(self, "interface_index")
@pulumi.output_type
class VPCRouterFirewallExpression(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "destinationNetwork":
suggest = "destination_network"
elif key == "destinationPort":
suggest = "destination_port"
elif key == "sourceNetwork":
suggest = "source_network"
elif key == "sourcePort":
suggest = "source_port"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterFirewallExpression. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterFirewallExpression.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterFirewallExpression.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
allow: bool,
protocol: str,
description: Optional[str] = None,
destination_network: Optional[str] = None,
destination_port: Optional[str] = None,
logging: Optional[bool] = None,
source_network: Optional[str] = None,
source_port: Optional[str] = None):
"""
:param bool allow: The flag to allow the packet through the filter.
:param str protocol: The protocol used for filtering. This must be one of [`tcp`/`udp`/`icmp`/`ip`].
:param str description: The description of the expression. The length of this value must be in the range [`0`-`512`].
:param str destination_network: A destination IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str destination_port: A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
:param bool logging: The flag to enable packet logging when matching the expression.
:param str source_network: A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str source_port: A source port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
"""
pulumi.set(__self__, "allow", allow)
pulumi.set(__self__, "protocol", protocol)
if description is not None:
pulumi.set(__self__, "description", description)
if destination_network is not None:
pulumi.set(__self__, "destination_network", destination_network)
if destination_port is not None:
pulumi.set(__self__, "destination_port", destination_port)
if logging is not None:
pulumi.set(__self__, "logging", logging)
if source_network is not None:
pulumi.set(__self__, "source_network", source_network)
if source_port is not None:
pulumi.set(__self__, "source_port", source_port)
@property
@pulumi.getter
def allow(self) -> bool:
"""
The flag to allow the packet through the filter.
"""
return pulumi.get(self, "allow")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for filtering. This must be one of [`tcp`/`udp`/`icmp`/`ip`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the expression. The length of this value must be in the range [`0`-`512`].
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="destinationNetwork")
def destination_network(self) -> Optional[str]:
"""
A destination IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "destination_network")
@property
@pulumi.getter(name="destinationPort")
def destination_port(self) -> Optional[str]:
"""
A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
"""
return pulumi.get(self, "destination_port")
@property
@pulumi.getter
def logging(self) -> Optional[bool]:
"""
The flag to enable packet logging when matching the expression.
"""
return pulumi.get(self, "logging")
@property
@pulumi.getter(name="sourceNetwork")
def source_network(self) -> Optional[str]:
"""
A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "source_network")
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> Optional[str]:
"""
A source port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
"""
return pulumi.get(self, "source_port")
@pulumi.output_type
class VPCRouterL2tp(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "preSharedSecret":
suggest = "pre_shared_secret"
elif key == "rangeStart":
suggest = "range_start"
elif key == "rangeStop":
suggest = "range_stop"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterL2tp. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterL2tp.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterL2tp.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
pre_shared_secret: str,
range_start: str,
range_stop: str):
"""
:param str pre_shared_secret: The pre-shared secret for the VPN. The length of this value must be in the range [`0`-`40`].
:param str range_start: The start value of the IP address range to assign to L2TP/IPsec clients.
:param str range_stop: The end value of the IP address range to assign to L2TP/IPsec clients.
"""
pulumi.set(__self__, "pre_shared_secret", pre_shared_secret)
pulumi.set(__self__, "range_start", range_start)
pulumi.set(__self__, "range_stop", range_stop)
@property
@pulumi.getter(name="preSharedSecret")
def pre_shared_secret(self) -> str:
"""
The pre-shared secret for the VPN. The length of this value must be in the range [`0`-`40`].
"""
return pulumi.get(self, "pre_shared_secret")
@property
@pulumi.getter(name="rangeStart")
def range_start(self) -> str:
"""
The start value of the IP address range to assign to L2TP/IPsec clients.
"""
return pulumi.get(self, "range_start")
@property
@pulumi.getter(name="rangeStop")
def range_stop(self) -> str:
"""
The end value of the IP address range to assign to L2TP/IPsec clients.
"""
return pulumi.get(self, "range_stop")
@pulumi.output_type
class VPCRouterPortForwarding(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "privateIp":
suggest = "private_ip"
elif key == "privatePort":
suggest = "private_port"
elif key == "publicPort":
suggest = "public_port"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterPortForwarding. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterPortForwarding.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterPortForwarding.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
private_ip: str,
private_port: int,
protocol: str,
public_port: int,
description: Optional[str] = None):
"""
:param str private_ip: The destination IP address of the port forwarding.
:param int private_port: The destination port number of the port forwarding. This will be a port number on a private network.
:param str protocol: The protocol used for port forwarding. This must be one of [`tcp`/`udp`].
:param int public_port: The source port number of the port forwarding. This must be a port number on a public network.
:param str description: The description of the port forwarding. The length of this value must be in the range [`0`-`512`].
"""
pulumi.set(__self__, "private_ip", private_ip)
pulumi.set(__self__, "private_port", private_port)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "public_port", public_port)
if description is not None:
pulumi.set(__self__, "description", description)
@property
@pulumi.getter(name="privateIp")
def private_ip(self) -> str:
"""
The destination IP address of the port forwarding.
"""
return pulumi.get(self, "private_ip")
@property
@pulumi.getter(name="privatePort")
def private_port(self) -> int:
"""
The destination port number of the port forwarding. This will be a port number on a private network.
"""
return pulumi.get(self, "private_port")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for port forwarding. This must be one of [`tcp`/`udp`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="publicPort")
def public_port(self) -> int:
"""
The source port number of the port forwarding. This must be a port number on a public network.
"""
return pulumi.get(self, "public_port")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the port forwarding. The length of this value must be in the range [`0`-`512`].
"""
return pulumi.get(self, "description")
@pulumi.output_type
class VPCRouterPptp(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "rangeStart":
suggest = "range_start"
elif key == "rangeStop":
suggest = "range_stop"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterPptp. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterPptp.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterPptp.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
range_start: str,
range_stop: str):
"""
:param str range_start: The start value of the IP address range to assign to PPTP clients.
:param str range_stop: The end value of the IP address range to assign to PPTP clients.
"""
pulumi.set(__self__, "range_start", range_start)
pulumi.set(__self__, "range_stop", range_stop)
@property
@pulumi.getter(name="rangeStart")
def range_start(self) -> str:
"""
The start value of the IP address range to assign to PPTP clients.
"""
return pulumi.get(self, "range_start")
@property
@pulumi.getter(name="rangeStop")
def range_stop(self) -> str:
"""
The end value of the IP address range to assign to PPTP clients.
"""
return pulumi.get(self, "range_stop")
@pulumi.output_type
class VPCRouterPrivateNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddresses":
suggest = "ip_addresses"
elif key == "switchId":
suggest = "switch_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterPrivateNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterPrivateNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterPrivateNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
index: int,
ip_addresses: Sequence[str],
netmask: int,
switch_id: str,
vip: Optional[str] = None):
"""
:param int index: The index of the network interface. This must be in the range [`1`-`7`].
:param Sequence[str] ip_addresses: A list of IP addresses to assign to the network interface. One value is required when `plan` is `standard`, two values otherwise.
:param int netmask: The bit length of the subnet to assign to the network interface.
:param str switch_id: The id of the connected switch.
:param str vip: The virtual IP address to assign to the network interface. This is only required when `plan` is not `standard`.
"""
pulumi.set(__self__, "index", index)
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
if vip is not None:
pulumi.set(__self__, "vip", vip)
@property
@pulumi.getter
def index(self) -> int:
"""
The index of the network interface. This must be in the range [`1`-`7`].
"""
return pulumi.get(self, "index")
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
A list of IP addresses to assign to the network interface. One value is required when `plan` is `standard`, two values otherwise.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet to assign to the network interface.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the connected switch.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def vip(self) -> Optional[str]:
"""
The virtual IP address to assign to the network interface. This is only required when `plan` is not `standard`.
"""
return pulumi.get(self, "vip")
@pulumi.output_type
class VPCRouterPublicNetworkInterface(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddresses":
suggest = "ip_addresses"
elif key == "switchId":
suggest = "switch_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterPublicNetworkInterface. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterPublicNetworkInterface.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterPublicNetworkInterface.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
aliases: Optional[Sequence[str]] = None,
ip_addresses: Optional[Sequence[str]] = None,
switch_id: Optional[str] = None,
vip: Optional[str] = None,
vrid: Optional[int] = None):
"""
:param Sequence[str] aliases: A list of IP aliases to assign to the VPC Router. This can only be specified if `plan` is not `standard`.
:param Sequence[str] ip_addresses: A list of IP addresses to assign to the VPC Router. One value is required when `plan` is `standard`, two values otherwise.
:param str switch_id: The id of the switch to connect. This is only required when `plan` is not `standard`.
:param str vip: The virtual IP address of the VPC Router. This is only required when `plan` is not `standard`.
:param int vrid: The Virtual Router Identifier. This is only required when `plan` is not `standard`.
"""
if aliases is not None:
pulumi.set(__self__, "aliases", aliases)
if ip_addresses is not None:
pulumi.set(__self__, "ip_addresses", ip_addresses)
if switch_id is not None:
pulumi.set(__self__, "switch_id", switch_id)
if vip is not None:
pulumi.set(__self__, "vip", vip)
if vrid is not None:
pulumi.set(__self__, "vrid", vrid)
@property
@pulumi.getter
def aliases(self) -> Optional[Sequence[str]]:
"""
A list of IP aliases to assign to the VPC Router. This can only be specified if `plan` is not `standard`.
"""
return pulumi.get(self, "aliases")
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Optional[Sequence[str]]:
"""
A list of IP addresses to assign to the VPC Router. One value is required when `plan` is `standard`, two values otherwise.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> Optional[str]:
"""
The id of the switch to connect. This is only required when `plan` is not `standard`.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def vip(self) -> Optional[str]:
"""
The virtual IP address of the VPC Router. This is only required when `plan` is not `standard`.
"""
return pulumi.get(self, "vip")
@property
@pulumi.getter
def vrid(self) -> Optional[int]:
"""
The Virtual Router Identifier. This is only required when `plan` is not `standard`.
"""
return pulumi.get(self, "vrid")
@pulumi.output_type
class VPCRouterSiteToSiteVpn(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "localPrefixes":
suggest = "local_prefixes"
elif key == "preSharedSecret":
suggest = "pre_shared_secret"
elif key == "remoteId":
suggest = "remote_id"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterSiteToSiteVpn. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterSiteToSiteVpn.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterSiteToSiteVpn.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
local_prefixes: Sequence[str],
peer: str,
pre_shared_secret: str,
remote_id: str,
routes: Sequence[str]):
"""
:param Sequence[str] local_prefixes: A list of CIDR blocks of the networks under the VPC Router.
:param str peer: The IP address of the opposing appliance connected to the VPC Router.
:param str pre_shared_secret: The pre-shared secret for the VPN. The length of this value must be in the range [`0`-`40`].
:param str remote_id: The id of the opposing appliance connected to the VPC Router. This is typically set to the same value as `peer`.
:param Sequence[str] routes: A list of CIDR blocks of the VPN-connected networks.
"""
pulumi.set(__self__, "local_prefixes", local_prefixes)
pulumi.set(__self__, "peer", peer)
pulumi.set(__self__, "pre_shared_secret", pre_shared_secret)
pulumi.set(__self__, "remote_id", remote_id)
pulumi.set(__self__, "routes", routes)
@property
@pulumi.getter(name="localPrefixes")
def local_prefixes(self) -> Sequence[str]:
"""
A list of CIDR blocks of the networks under the VPC Router.
"""
return pulumi.get(self, "local_prefixes")
@property
@pulumi.getter
def peer(self) -> str:
"""
The IP address of the opposing appliance connected to the VPC Router.
"""
return pulumi.get(self, "peer")
@property
@pulumi.getter(name="preSharedSecret")
def pre_shared_secret(self) -> str:
"""
The pre-shared secret for the VPN. The length of this value must be in the range [`0`-`40`].
"""
return pulumi.get(self, "pre_shared_secret")
@property
@pulumi.getter(name="remoteId")
def remote_id(self) -> str:
"""
The id of the opposing appliance connected to the VPC Router. This is typically set to the same value as `peer`.
"""
return pulumi.get(self, "remote_id")
@property
@pulumi.getter
def routes(self) -> Sequence[str]:
"""
A list of CIDR blocks of the VPN-connected networks.
"""
return pulumi.get(self, "routes")
@pulumi.output_type
class VPCRouterStaticNat(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "privateIp":
suggest = "private_ip"
elif key == "publicIp":
suggest = "public_ip"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterStaticNat. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterStaticNat.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterStaticNat.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
private_ip: str,
public_ip: str,
description: Optional[str] = None):
"""
:param str private_ip: The private IP address used for the static NAT.
:param str public_ip: The public IP address used for the static NAT.
:param str description: The description of the static NAT. The length of this value must be in the range [`0`-`512`].
"""
pulumi.set(__self__, "private_ip", private_ip)
pulumi.set(__self__, "public_ip", public_ip)
if description is not None:
pulumi.set(__self__, "description", description)
@property
@pulumi.getter(name="privateIp")
def private_ip(self) -> str:
"""
The private IP address used for the static NAT.
"""
return pulumi.get(self, "private_ip")
@property
@pulumi.getter(name="publicIp")
def public_ip(self) -> str:
"""
The public IP address used for the static NAT.
"""
return pulumi.get(self, "public_ip")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The description of the static NAT. The length of this value must be in the range [`0`-`512`].
"""
return pulumi.get(self, "description")
@pulumi.output_type
class VPCRouterStaticRoute(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "nextHop":
suggest = "next_hop"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterStaticRoute. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterStaticRoute.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterStaticRoute.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
next_hop: str,
prefix: str):
"""
:param str next_hop: The IP address of the next hop.
:param str prefix: The CIDR block of destination.
"""
pulumi.set(__self__, "next_hop", next_hop)
pulumi.set(__self__, "prefix", prefix)
@property
@pulumi.getter(name="nextHop")
def next_hop(self) -> str:
"""
The IP address of the next hop.
"""
return pulumi.get(self, "next_hop")
@property
@pulumi.getter
def prefix(self) -> str:
"""
The CIDR block of destination.
"""
return pulumi.get(self, "prefix")
@pulumi.output_type
class VPCRouterUser(dict):
def __init__(__self__, *,
name: str,
password: str):
"""
:param str name: The user name used to authenticate remote access.
:param str password: The password used to authenticate remote access.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "password", password)
@property
@pulumi.getter
def name(self) -> str:
"""
The user name used to authenticate remote access.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def password(self) -> str:
"""
The password used to authenticate remote access.
"""
return pulumi.get(self, "password")
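The output types above share a generated pattern: each class subclasses `dict`, and a private `__key_warning` helper warns when a caller looks up a value by its camelCase wire-format key instead of the snake_case property. A minimal standalone sketch of that pattern (hypothetical class `ExampleOutput`, using `warnings.warn` in place of `pulumi.log.warn`) looks like this:

```python
import warnings
from typing import Any


class ExampleOutput(dict):
    # Map of camelCase wire keys to the snake_case keys actually stored.
    _CAMEL_TO_SNAKE = {"ipAddress": "ip_address", "publicKey": "public_key"}

    @staticmethod
    def _key_warning(key: str) -> None:
        suggest = ExampleOutput._CAMEL_TO_SNAKE.get(key)
        if suggest:
            warnings.warn(
                f"Key '{key}' not found in ExampleOutput. "
                f"Access the value via the '{suggest}' property getter instead."
            )

    def __getitem__(self, key: str) -> Any:
        ExampleOutput._key_warning(key)
        return super().__getitem__(key)

    def get(self, key: str, default: Any = None) -> Any:
        ExampleOutput._key_warning(key)
        return super().get(key, default)
```

Looking up `out.get("ipAddress")` on an instance emits the warning and falls through to the plain `dict` lookup, which misses because only the snake_case key `ip_address` is stored.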
@pulumi.output_type
class VPCRouterWireGuard(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "publicKey":
suggest = "public_key"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterWireGuard. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterWireGuard.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterWireGuard.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
peers: Optional[Sequence['outputs.VPCRouterWireGuardPeer']] = None,
public_key: Optional[str] = None):
"""
:param str ip_address: The IP address of the WireGuard server. This must be formatted as xxx.xxx.xxx.xxx/nn.
:param Sequence['VPCRouterWireGuardPeerArgs'] peers: One or more `peer` blocks as defined below.
:param str public_key: The public key of the WireGuard client.
"""
pulumi.set(__self__, "ip_address", ip_address)
if peers is not None:
pulumi.set(__self__, "peers", peers)
if public_key is not None:
pulumi.set(__self__, "public_key", public_key)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the WireGuard server. This must be formatted as xxx.xxx.xxx.xxx/nn.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def peers(self) -> Optional[Sequence['outputs.VPCRouterWireGuardPeer']]:
"""
One or more `peer` blocks as defined below.
"""
return pulumi.get(self, "peers")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> Optional[str]:
"""
The public key of the WireGuard client.
"""
return pulumi.get(self, "public_key")
@pulumi.output_type
class VPCRouterWireGuardPeer(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipAddress":
suggest = "ip_address"
elif key == "publicKey":
suggest = "public_key"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in VPCRouterWireGuardPeer. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
VPCRouterWireGuardPeer.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default = None) -> Any:
VPCRouterWireGuardPeer.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ip_address: str,
name: str,
public_key: str):
"""
:param str ip_address: The IP address of the peer.
:param str name: The name of the peer.
:param str public_key: The public key of the WireGuard client.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "public_key", public_key)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the peer.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the peer.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> str:
"""
The public key of the WireGuard client.
"""
return pulumi.get(self, "public_key")
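The generated `__init__` methods follow a second convention worth noting: required fields are always stored, while optional fields are stored only when a non-`None` value was supplied, so an unset optional key is simply absent rather than present with a `None` value. A minimal sketch of that behavior (hypothetical class `StaticNatSketch`, plain `dict` storage standing in for `pulumi.set`):

```python
from typing import Optional


class StaticNatSketch(dict):
    def __init__(self, *, private_ip: str, public_ip: str,
                 description: Optional[str] = None):
        super().__init__()
        # Required fields are stored unconditionally.
        self["private_ip"] = private_ip
        self["public_ip"] = public_ip
        # Optional fields are skipped entirely when left as None,
        # so `"description" in obj` distinguishes unset from set.
        if description is not None:
            self["description"] = description
```

This keeps serialized output free of spurious `null` entries and lets consumers distinguish "not configured" from "configured as empty".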
@pulumi.output_type
class GetArchiveFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetArchiveFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetArchiveFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetArchiveFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetArchiveFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
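The filter docstrings above all state that multiple `values` in a condition are combined as an AND condition. As a hedged illustration of those documented semantics (this is not provider code; `matches_condition` and its field layout are hypothetical), a resource matches only when every listed value is present on the named field:

```python
from typing import Mapping, Sequence


def matches_condition(resource: Mapping[str, Sequence[str]],
                      name: str, values: Sequence[str]) -> bool:
    """Return True only if *every* value matches the named field (AND)."""
    field = resource.get(name, [])
    return all(v in field for v in values)
```

For example, a resource tagged `["web", "prod"]` matches the condition `("Tags", ["web", "prod"])` but not `("Tags", ["web", "dev"])`, since the latter requires both tags to be present.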
@pulumi.output_type
class GetBridgeFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetBridgeFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None):
"""
:param Sequence['GetBridgeFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetBridgeFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@pulumi.output_type
class GetBridgeFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetCDROMFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetCDROMFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetCDROMFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetCDROMFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetCDROMFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetCertificateAuthorityClientResult(dict):
def __init__(__self__, *,
certificate: str,
hold: bool,
id: str,
issue_state: str,
not_after: str,
not_before: str,
serial_number: str,
subject_string: str,
url: str):
"""
:param str certificate: The body of the CA's certificate in PEM format.
:param bool hold: Flag to suspend/hold the certificate.
:param str id: The resource id on SakuraCloud used for filtering.
:param str issue_state: Current state of the certificate.
:param str not_after: The date on which the certificate validity period ends, in RFC3339 format.
:param str not_before: The date on which the certificate validity period begins, in RFC3339 format.
:param str serial_number: The serial number of the certificate.
:param str subject_string: The subject string of the certificate.
:param str url: The URL for issuing the certificate.
"""
pulumi.set(__self__, "certificate", certificate)
pulumi.set(__self__, "hold", hold)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "issue_state", issue_state)
pulumi.set(__self__, "not_after", not_after)
pulumi.set(__self__, "not_before", not_before)
pulumi.set(__self__, "serial_number", serial_number)
pulumi.set(__self__, "subject_string", subject_string)
pulumi.set(__self__, "url", url)
@property
@pulumi.getter
def certificate(self) -> str:
"""
The body of the CA's certificate in PEM format.
"""
return pulumi.get(self, "certificate")
@property
@pulumi.getter
def hold(self) -> bool:
"""
Flag to suspend/hold the certificate.
"""
return pulumi.get(self, "hold")
@property
@pulumi.getter
def id(self) -> str:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="issueState")
def issue_state(self) -> str:
"""
Current state of the certificate.
"""
return pulumi.get(self, "issue_state")
@property
@pulumi.getter(name="notAfter")
def not_after(self) -> str:
"""
The date on which the certificate validity period ends, in RFC3339 format.
"""
return pulumi.get(self, "not_after")
@property
@pulumi.getter(name="notBefore")
def not_before(self) -> str:
"""
The date on which the certificate validity period begins, in RFC3339 format.
"""
return pulumi.get(self, "not_before")
@property
@pulumi.getter(name="serialNumber")
def serial_number(self) -> str:
"""
The serial number of the certificate.
"""
return pulumi.get(self, "serial_number")
@property
@pulumi.getter(name="subjectString")
def subject_string(self) -> str:
"""
The subject string of the certificate.
"""
return pulumi.get(self, "subject_string")
@property
@pulumi.getter
def url(self) -> str:
"""
The URL for issuing the certificate.
"""
return pulumi.get(self, "url")
@pulumi.output_type
class GetCertificateAuthorityFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetCertificateAuthorityFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetCertificateAuthorityFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetCertificateAuthorityFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetCertificateAuthorityFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetCertificateAuthorityServerResult(dict):
def __init__(__self__, *,
certificate: str,
id: str,
issue_state: str,
not_after: str,
not_before: str,
serial_number: str,
subject_alternative_names: Sequence[str],
subject_string: str,
hold: Optional[bool] = None):
"""
:param str certificate: The body of the CA's certificate in PEM format.
:param str id: The resource id on SakuraCloud used for filtering.
:param str issue_state: Current state of the certificate.
:param str not_after: The date on which the certificate validity period ends, in RFC3339 format.
:param str not_before: The date on which the certificate validity period begins, in RFC3339 format.
:param str serial_number: The serial number of the certificate.
:param Sequence[str] subject_alternative_names: A list of subject alternative names of the certificate.
:param str subject_string: The subject string of the certificate.
:param bool hold: Flag to suspend/hold the certificate.
"""
pulumi.set(__self__, "certificate", certificate)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "issue_state", issue_state)
pulumi.set(__self__, "not_after", not_after)
pulumi.set(__self__, "not_before", not_before)
pulumi.set(__self__, "serial_number", serial_number)
pulumi.set(__self__, "subject_alternative_names", subject_alternative_names)
pulumi.set(__self__, "subject_string", subject_string)
if hold is not None:
pulumi.set(__self__, "hold", hold)
@property
@pulumi.getter
def certificate(self) -> str:
"""
The body of the CA's certificate in PEM format.
"""
return pulumi.get(self, "certificate")
@property
@pulumi.getter
def id(self) -> str:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="issueState")
def issue_state(self) -> str:
"""
Current state of the certificate.
"""
return pulumi.get(self, "issue_state")
@property
@pulumi.getter(name="notAfter")
def not_after(self) -> str:
"""
The date on which the certificate validity period ends, in RFC3339 format.
"""
return pulumi.get(self, "not_after")
@property
@pulumi.getter(name="notBefore")
def not_before(self) -> str:
"""
The date on which the certificate validity period begins, in RFC3339 format.
"""
return pulumi.get(self, "not_before")
@property
@pulumi.getter(name="serialNumber")
def serial_number(self) -> str:
"""
The serial number of the certificate.
"""
return pulumi.get(self, "serial_number")
@property
@pulumi.getter(name="subjectAlternativeNames")
def subject_alternative_names(self) -> Sequence[str]:
"""
A list of subject alternative names of the certificate.
"""
return pulumi.get(self, "subject_alternative_names")
@property
@pulumi.getter(name="subjectString")
def subject_string(self) -> str:
"""
The subject string of the certificate.
"""
return pulumi.get(self, "subject_string")
@property
@pulumi.getter
def hold(self) -> Optional[bool]:
"""
Flag to suspend/hold the certificate.
"""
return pulumi.get(self, "hold")
@pulumi.output_type
class GetContainerRegistryFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetContainerRegistryFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetContainerRegistryFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetContainerRegistryFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetContainerRegistryFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetContainerRegistryUserResult(dict):
def __init__(__self__, *,
name: str,
permission: str):
"""
:param str name: The name of the user.
:param str permission: The level of access allowed to the user. This will be one of [`all`/`readwrite`/`readonly`].
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "permission", permission)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the user.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def permission(self) -> str:
"""
The level of access allowed to the user. This will be one of [`all`/`readwrite`/`readonly`].
"""
return pulumi.get(self, "permission")
@pulumi.output_type
class GetDNSFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetDNSFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetDNSFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetDNSFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetDNSFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetDNSRecordResult(dict):
def __init__(__self__, *,
name: str,
port: int,
priority: int,
ttl: int,
type: str,
value: str,
weight: int):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param int port: The port number.
:param int priority: The priority of the target DNS Record.
:param int ttl: The TTL value.
:param str type: The type of DNS Record. This will be one of [`A`/`AAAA`/`ALIAS`/`CNAME`/`NS`/`MX`/`TXT`/`SRV`/`CAA`/`PTR`].
:param str value: The value of the DNS Record.
:param int weight: The weight of the target DNS Record.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "priority", priority)
pulumi.set(__self__, "ttl", ttl)
pulumi.set(__self__, "type", type)
pulumi.set(__self__, "value", value)
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def port(self) -> int:
"""
The port number.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def priority(self) -> int:
"""
The priority of the target DNS Record.
"""
return pulumi.get(self, "priority")
@property
@pulumi.getter
def ttl(self) -> int:
"""
The TTL value.
"""
return pulumi.get(self, "ttl")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of DNS Record. This will be one of [`A`/`AAAA`/`ALIAS`/`CNAME`/`NS`/`MX`/`TXT`/`SRV`/`CAA`/`PTR`].
"""
return pulumi.get(self, "type")
@property
@pulumi.getter
def value(self) -> str:
"""
The value of the DNS Record.
"""
return pulumi.get(self, "value")
@property
@pulumi.getter
def weight(self) -> int:
"""
The weight of the target DNS Record.
"""
return pulumi.get(self, "weight")
@pulumi.output_type
class GetDatabaseBackupResult(dict):
def __init__(__self__, *,
time: str,
weekdays: Sequence[str]):
"""
:param str time: The time at which the backup is taken. This will be formatted with `HH:mm`.
:param Sequence[str] weekdays: The list of weekday names on which backups are taken. This will be in [`sun`/`mon`/`tue`/`wed`/`thu`/`fri`/`sat`].
"""
pulumi.set(__self__, "time", time)
pulumi.set(__self__, "weekdays", weekdays)
@property
@pulumi.getter
def time(self) -> str:
"""
The time at which the backup is taken. This will be formatted with `HH:mm`.
"""
return pulumi.get(self, "time")
@property
@pulumi.getter
def weekdays(self) -> Sequence[str]:
"""
The list of weekday names on which backups are taken. This will be in [`sun`/`mon`/`tue`/`wed`/`thu`/`fri`/`sat`].
"""
return pulumi.get(self, "weekdays")
@pulumi.output_type
class GetDatabaseFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetDatabaseFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetDatabaseFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetDatabaseFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetDatabaseFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetDatabaseNetworkInterfaceResult(dict):
def __init__(__self__, *,
gateway: str,
ip_address: str,
netmask: int,
port: int,
source_ranges: Sequence[str],
switch_id: str):
"""
:param str gateway: The IP address of the gateway used by the Database.
:param str ip_address: The IP address assigned to the Database.
:param int netmask: The bit length of the subnet assigned to the Database.
:param int port: The listening port number.
:param Sequence[str] source_ranges: The range of source IP addresses that are allowed to access the Database over the network.
:param str switch_id: The id of the switch connected to the Database.
"""
pulumi.set(__self__, "gateway", gateway)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "source_ranges", source_ranges)
pulumi.set(__self__, "switch_id", switch_id)
@property
@pulumi.getter
def gateway(self) -> str:
"""
The IP address of the gateway used by the Database.
"""
return pulumi.get(self, "gateway")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address assigned to the Database.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the Database.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter
def port(self) -> int:
"""
The listening port number.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="sourceRanges")
def source_ranges(self) -> Sequence[str]:
"""
The range of source IP addresses that are allowed to access the Database over the network.
"""
return pulumi.get(self, "source_ranges")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch connected to the Database.
"""
return pulumi.get(self, "switch_id")
@pulumi.output_type
class GetDiskFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetDiskFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetDiskFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetDiskFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetDiskFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetESMEFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetESMEFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetESMEFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetESMEFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetESMEFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetEnhancedDBFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetEnhancedDBFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetEnhancedDBFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetEnhancedDBFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetEnhancedDBFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetGSLBFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetGSLBFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetGSLBFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetGSLBFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetGSLBFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetGSLBHealthCheckResult(dict):
def __init__(__self__, *,
delay_loop: int,
host_header: str,
path: str,
port: int,
protocol: str,
status: str):
"""
:param int delay_loop: The interval in seconds between checks.
:param str host_header: The value of the Host header sent when checking via HTTP/HTTPS.
:param str path: The path used when checking via HTTP/HTTPS.
:param int port: The port number used when checking via TCP.
:param str protocol: The protocol used for health checks. This will be one of [`http`/`https`/`tcp`/`ping`].
:param str status: The response code to expect when checking via HTTP/HTTPS.
"""
pulumi.set(__self__, "delay_loop", delay_loop)
pulumi.set(__self__, "host_header", host_header)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "status", status)
@property
@pulumi.getter(name="delayLoop")
def delay_loop(self) -> int:
"""
The interval in seconds between checks.
"""
return pulumi.get(self, "delay_loop")
@property
@pulumi.getter(name="hostHeader")
def host_header(self) -> str:
"""
The value of the Host header sent when checking via HTTP/HTTPS.
"""
return pulumi.get(self, "host_header")
@property
@pulumi.getter
def path(self) -> str:
"""
The path used when checking via HTTP/HTTPS.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def port(self) -> int:
"""
The port number used when checking via TCP.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This will be one of [`http`/`https`/`tcp`/`ping`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def status(self) -> str:
"""
The response code to expect when checking via HTTP/HTTPS.
"""
return pulumi.get(self, "status")
@pulumi.output_type
class GetGSLBServerResult(dict):
def __init__(__self__, *,
enabled: bool,
ip_address: str,
weight: int):
"""
:param bool enabled: The flag to enable the server as a destination of load balancing.
:param str ip_address: The IP address of the server.
:param int weight: The weight used when weighted load balancing is enabled.
"""
pulumi.set(__self__, "enabled", enabled)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter
def enabled(self) -> bool:
"""
The flag to enable the server as a destination of load balancing.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def weight(self) -> int:
"""
The weight used when weighted load balancing is enabled.
"""
return pulumi.get(self, "weight")
@pulumi.output_type
class GetIconFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetIconFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetIconFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetIconFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetIconFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetInternetFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetInternetFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetInternetFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetInternetFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetInternetFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetLoadBalancerFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetLoadBalancerFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetLoadBalancerFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetLoadBalancerFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetLoadBalancerFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetLoadBalancerNetworkInterfaceResult(dict):
def __init__(__self__, *,
gateway: str,
ip_addresses: Sequence[str],
netmask: int,
switch_id: str,
vrid: int):
"""
:param str gateway: The IP address of the gateway used by the LoadBalancer.
:param Sequence[str] ip_addresses: The list of IP addresses assigned to the LoadBalancer.
:param int netmask: The bit length of the subnet assigned to the LoadBalancer.
:param str switch_id: The id of the switch connected to the LoadBalancer.
:param int vrid: The Virtual Router Identifier.
"""
pulumi.set(__self__, "gateway", gateway)
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
pulumi.set(__self__, "vrid", vrid)
@property
@pulumi.getter
def gateway(self) -> str:
"""
The IP address of the gateway used by the LoadBalancer.
"""
return pulumi.get(self, "gateway")
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
The list of IP addresses assigned to the LoadBalancer.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the LoadBalancer.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch connected to the LoadBalancer.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def vrid(self) -> int:
"""
The Virtual Router Identifier.
"""
return pulumi.get(self, "vrid")
@pulumi.output_type
class GetLoadBalancerVipResult(dict):
def __init__(__self__, *,
delay_loop: int,
description: str,
port: int,
servers: Sequence['outputs.GetLoadBalancerVipServerResult'],
sorry_server: str,
vip: str):
"""
:param int delay_loop: The interval in seconds between checks.
:param str description: The description of the VIP.
:param int port: The target port number for load-balancing.
:param Sequence['GetLoadBalancerVipServerArgs'] servers: A list of `server` blocks as defined below.
:param str sorry_server: The IP address of the SorryServer. This will be used when all servers under this VIP are down.
:param str vip: The virtual IP address.
"""
pulumi.set(__self__, "delay_loop", delay_loop)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "servers", servers)
pulumi.set(__self__, "sorry_server", sorry_server)
pulumi.set(__self__, "vip", vip)
@property
@pulumi.getter(name="delayLoop")
def delay_loop(self) -> int:
"""
The interval in seconds between checks.
"""
return pulumi.get(self, "delay_loop")
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the VIP.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def port(self) -> int:
"""
The target port number for load-balancing.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def servers(self) -> Sequence['outputs.GetLoadBalancerVipServerResult']:
"""
A list of `server` blocks as defined below.
"""
return pulumi.get(self, "servers")
@property
@pulumi.getter(name="sorryServer")
def sorry_server(self) -> str:
"""
The IP address of the SorryServer. This will be used when all servers under this VIP are down.
"""
return pulumi.get(self, "sorry_server")
@property
@pulumi.getter
def vip(self) -> str:
"""
The virtual IP address.
"""
return pulumi.get(self, "vip")
@pulumi.output_type
class GetLoadBalancerVipServerResult(dict):
def __init__(__self__, *,
enabled: bool,
ip_address: str,
path: str,
protocol: str,
status: str):
"""
:param bool enabled: The flag to enable the server as a destination of load balancing.
:param str ip_address: The IP address of the destination server.
:param str path: The path used when checking by HTTP/HTTPS.
:param str protocol: The protocol used for health checks. This will be one of [`http`/`https`/`tcp`/`ping`].
:param str status: The response code to expect when checking by HTTP/HTTPS.
"""
pulumi.set(__self__, "enabled", enabled)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def enabled(self) -> bool:
"""
The flag to enable the server as a destination of load balancing.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the destination server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def path(self) -> str:
"""
The path used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This will be one of [`http`/`https`/`tcp`/`ping`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def status(self) -> str:
"""
The response code to expect when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "status")
@pulumi.output_type
class GetLocalRouterFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetLocalRouterFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetLocalRouterFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetLocalRouterFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetLocalRouterFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetLocalRouterNetworkInterfaceResult(dict):
def __init__(__self__, *,
ip_addresses: Sequence[str],
netmask: int,
vip: str,
vrid: int):
"""
:param Sequence[str] ip_addresses: The list of IP addresses assigned to the LocalRouter.
:param int netmask: The bit length of the subnet assigned to the LocalRouter.
:param str vip: The virtual IP address.
:param int vrid: The Virtual Router Identifier.
"""
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "vip", vip)
pulumi.set(__self__, "vrid", vrid)
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
The list of IP addresses assigned to the LocalRouter.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the LocalRouter.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter
def vip(self) -> str:
"""
The virtual IP address.
"""
return pulumi.get(self, "vip")
@property
@pulumi.getter
def vrid(self) -> int:
"""
The Virtual Router Identifier.
"""
return pulumi.get(self, "vrid")
@pulumi.output_type
class GetLocalRouterPeerResult(dict):
def __init__(__self__, *,
description: str,
enabled: bool,
peer_id: str,
secret_key: str):
"""
:param str description: The description of the LocalRouter.
:param bool enabled: The flag to enable the LocalRouter.
:param str peer_id: The ID of the peer LocalRouter.
:param str secret_key: The secret key of the peer LocalRouter.
"""
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "enabled", enabled)
pulumi.set(__self__, "peer_id", peer_id)
pulumi.set(__self__, "secret_key", secret_key)
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the LocalRouter.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def enabled(self) -> bool:
"""
The flag to enable the LocalRouter.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter(name="peerId")
def peer_id(self) -> str:
"""
The ID of the peer LocalRouter.
"""
return pulumi.get(self, "peer_id")
@property
@pulumi.getter(name="secretKey")
def secret_key(self) -> str:
"""
The secret key of the peer LocalRouter.
"""
return pulumi.get(self, "secret_key")
@pulumi.output_type
class GetLocalRouterStaticRouteResult(dict):
def __init__(__self__, *,
next_hop: str,
prefix: str):
"""
:param str next_hop: The IP address of the next hop.
:param str prefix: The CIDR block of the destination.
"""
pulumi.set(__self__, "next_hop", next_hop)
pulumi.set(__self__, "prefix", prefix)
@property
@pulumi.getter(name="nextHop")
def next_hop(self) -> str:
"""
The IP address of the next hop.
"""
return pulumi.get(self, "next_hop")
@property
@pulumi.getter
def prefix(self) -> str:
"""
The CIDR block of the destination.
"""
return pulumi.get(self, "prefix")
@pulumi.output_type
class GetLocalRouterSwitchResult(dict):
def __init__(__self__, *,
category: str,
code: str,
zone_id: str):
"""
:param str category: The category name of connected services (e.g. `cloud`, `vps`).
:param str code: The resource ID of the Switch.
:param str zone_id: The id of the Zone.
"""
pulumi.set(__self__, "category", category)
pulumi.set(__self__, "code", code)
pulumi.set(__self__, "zone_id", zone_id)
@property
@pulumi.getter
def category(self) -> str:
"""
The category name of connected services (e.g. `cloud`, `vps`).
"""
return pulumi.get(self, "category")
@property
@pulumi.getter
def code(self) -> str:
"""
The resource ID of the Switch.
"""
return pulumi.get(self, "code")
@property
@pulumi.getter(name="zoneId")
def zone_id(self) -> str:
"""
The id of the Zone.
"""
return pulumi.get(self, "zone_id")
@pulumi.output_type
class GetNFSFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetNFSFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetNFSFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetNFSFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetNFSFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetNFSNetworkInterfaceResult(dict):
def __init__(__self__, *,
gateway: str,
ip_address: str,
netmask: int,
switch_id: str):
"""
:param str gateway: The IP address of the gateway used by NFS.
:param str ip_address: The IP address assigned to the NFS.
:param int netmask: The bit length of the subnet assigned to the NFS.
:param str switch_id: The id of the switch connected to the NFS.
"""
pulumi.set(__self__, "gateway", gateway)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
@property
@pulumi.getter
def gateway(self) -> str:
"""
The IP address of the gateway used by NFS.
"""
return pulumi.get(self, "gateway")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address assigned to the NFS.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the NFS.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch connected to the NFS.
"""
return pulumi.get(self, "switch_id")
@pulumi.output_type
class GetNoteFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetNoteFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetNoteFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetNoteFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetNoteFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetPacketFilterExpressionResult(dict):
def __init__(__self__, *,
allow: bool,
description: str,
destination_port: str,
protocol: str,
source_network: str,
source_port: str):
"""
:param bool allow: The flag to allow the packet through the filter.
:param str description: The description of the expression.
:param str destination_port: A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`).
:param str protocol: The protocol used for filtering. This will be one of [`http`/`https`/`tcp`/`udp`/`icmp`/`fragment`/`ip`].
:param str source_network: A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str source_port: A source port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
pulumi.set(__self__, "allow", allow)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "destination_port", destination_port)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "source_network", source_network)
pulumi.set(__self__, "source_port", source_port)
@property
@pulumi.getter
def allow(self) -> bool:
"""
The flag to allow the packet through the filter.
"""
return pulumi.get(self, "allow")
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the expression.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="destinationPort")
def destination_port(self) -> str:
"""
A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
return pulumi.get(self, "destination_port")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for filtering. This will be one of [`http`/`https`/`tcp`/`udp`/`icmp`/`fragment`/`ip`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="sourceNetwork")
def source_network(self) -> str:
"""
A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "source_network")
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> str:
"""
A source port number or port range used for filtering (e.g. `1024`, `1024-2048`).
"""
return pulumi.get(self, "source_port")
@pulumi.output_type
class GetPacketFilterFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetPacketFilterFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None):
"""
:param Sequence['GetPacketFilterFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetPacketFilterFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@pulumi.output_type
class GetPacketFilterFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetPrivateHostFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetPrivateHostFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetPrivateHostFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetPrivateHostFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetPrivateHostFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined as an AND condition.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetProxyLBBindPortResult(dict):
def __init__(__self__, *,
port: int,
proxy_mode: str,
redirect_to_https: bool,
response_headers: Sequence['outputs.GetProxyLBBindPortResponseHeaderResult'],
ssl_policy: str,
support_http2: bool):
"""
:param int port: The number of the listening port.
:param str proxy_mode: The proxy mode. This will be one of [`http`/`https`/`tcp`].
:param bool redirect_to_https: The flag to enable redirection from http to https. This flag is used only when `proxy_mode` is `http`.
:param Sequence['GetProxyLBBindPortResponseHeaderArgs'] response_headers: A list of `response_header` blocks as defined below.
:param str ssl_policy: The ssl policy.
:param bool support_http2: The flag to enable HTTP/2. This flag is used only when `proxy_mode` is `https`.
"""
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "proxy_mode", proxy_mode)
pulumi.set(__self__, "redirect_to_https", redirect_to_https)
pulumi.set(__self__, "response_headers", response_headers)
pulumi.set(__self__, "ssl_policy", ssl_policy)
pulumi.set(__self__, "support_http2", support_http2)
@property
@pulumi.getter
def port(self) -> int:
"""
The number of the listening port.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="proxyMode")
def proxy_mode(self) -> str:
"""
The proxy mode. This will be one of [`http`/`https`/`tcp`].
"""
return pulumi.get(self, "proxy_mode")
@property
@pulumi.getter(name="redirectToHttps")
def redirect_to_https(self) -> bool:
"""
The flag to enable redirection from http to https. This flag is used only when `proxy_mode` is `http`.
"""
return pulumi.get(self, "redirect_to_https")
@property
@pulumi.getter(name="responseHeaders")
def response_headers(self) -> Sequence['outputs.GetProxyLBBindPortResponseHeaderResult']:
"""
A list of `response_header` blocks as defined below.
"""
return pulumi.get(self, "response_headers")
@property
@pulumi.getter(name="sslPolicy")
def ssl_policy(self) -> str:
"""
The ssl policy.
"""
return pulumi.get(self, "ssl_policy")
@property
@pulumi.getter(name="supportHttp2")
def support_http2(self) -> bool:
"""
The flag to enable HTTP/2. This flag is used only when `proxy_mode` is `https`.
"""
return pulumi.get(self, "support_http2")
@pulumi.output_type
class GetProxyLBBindPortResponseHeaderResult(dict):
def __init__(__self__, *,
header: str,
value: str):
"""
:param str header: The field name of the HTTP header added to responses by the ProxyLB.
:param str value: The field value of the HTTP header added to responses by the ProxyLB.
"""
pulumi.set(__self__, "header", header)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def header(self) -> str:
"""
The field name of the HTTP header added to responses by the ProxyLB.
"""
return pulumi.get(self, "header")
@property
@pulumi.getter
def value(self) -> str:
"""
The field value of the HTTP header added to responses by the ProxyLB.
"""
return pulumi.get(self, "value")
@pulumi.output_type
class GetProxyLBCertificateResult(dict):
def __init__(__self__, *,
additional_certificates: Sequence['outputs.GetProxyLBCertificateAdditionalCertificateResult'],
common_name: str,
intermediate_cert: str,
private_key: str,
server_cert: str,
subject_alt_names: str):
"""
:param Sequence['GetProxyLBCertificateAdditionalCertificateArgs'] additional_certificates: A list of `additional_certificate` blocks as defined below.
:param str common_name: The common name of the certificate.
:param str intermediate_cert: The intermediate certificate for a server.
:param str private_key: The private key for a server.
:param str server_cert: The certificate for a server.
:param str subject_alt_names: The subject alternative names of the certificate.
"""
pulumi.set(__self__, "additional_certificates", additional_certificates)
pulumi.set(__self__, "common_name", common_name)
pulumi.set(__self__, "intermediate_cert", intermediate_cert)
pulumi.set(__self__, "private_key", private_key)
pulumi.set(__self__, "server_cert", server_cert)
pulumi.set(__self__, "subject_alt_names", subject_alt_names)
@property
@pulumi.getter(name="additionalCertificates")
def additional_certificates(self) -> Sequence['outputs.GetProxyLBCertificateAdditionalCertificateResult']:
"""
A list of `additional_certificate` blocks as defined below.
"""
return pulumi.get(self, "additional_certificates")
@property
@pulumi.getter(name="commonName")
def common_name(self) -> str:
"""
The common name of the certificate.
"""
return pulumi.get(self, "common_name")
@property
@pulumi.getter(name="intermediateCert")
def intermediate_cert(self) -> str:
"""
The intermediate certificate for a server.
"""
return pulumi.get(self, "intermediate_cert")
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> str:
"""
The private key for a server.
"""
return pulumi.get(self, "private_key")
@property
@pulumi.getter(name="serverCert")
def server_cert(self) -> str:
"""
The certificate for a server.
"""
return pulumi.get(self, "server_cert")
@property
@pulumi.getter(name="subjectAltNames")
def subject_alt_names(self) -> str:
"""
The subject alternative names of the certificate.
"""
return pulumi.get(self, "subject_alt_names")
@pulumi.output_type
class GetProxyLBCertificateAdditionalCertificateResult(dict):
def __init__(__self__, *,
intermediate_cert: str,
private_key: str,
server_cert: str):
"""
:param str intermediate_cert: The intermediate certificate for a server.
:param str private_key: The private key for a server.
:param str server_cert: The certificate for a server.
"""
pulumi.set(__self__, "intermediate_cert", intermediate_cert)
pulumi.set(__self__, "private_key", private_key)
pulumi.set(__self__, "server_cert", server_cert)
@property
@pulumi.getter(name="intermediateCert")
def intermediate_cert(self) -> str:
"""
The intermediate certificate for a server.
"""
return pulumi.get(self, "intermediate_cert")
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> str:
"""
The private key for a server.
"""
return pulumi.get(self, "private_key")
@property
@pulumi.getter(name="serverCert")
def server_cert(self) -> str:
"""
The certificate for a server.
"""
return pulumi.get(self, "server_cert")
@pulumi.output_type
class GetProxyLBFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetProxyLBFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetProxyLBFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource id on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined as an AND condition.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetProxyLBFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, check out the finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource id on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetProxyLBFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetProxyLBHealthCheckResult(dict):
def __init__(__self__, *,
delay_loop: int,
host_header: str,
path: str,
port: int,
protocol: str):
"""
:param int delay_loop: The interval in seconds between checks.
:param str host_header: The value of the Host header sent when checking by HTTP.
:param str path: The request path used when checking by HTTP.
:param int port: The port number used for health checks.
:param str protocol: The protocol used for health checks. This will be one of [`http`/`tcp`].
"""
pulumi.set(__self__, "delay_loop", delay_loop)
pulumi.set(__self__, "host_header", host_header)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "protocol", protocol)
@property
@pulumi.getter(name="delayLoop")
def delay_loop(self) -> int:
"""
The interval in seconds between checks.
"""
return pulumi.get(self, "delay_loop")
@property
@pulumi.getter(name="hostHeader")
def host_header(self) -> str:
"""
The value of the Host header sent when checking by HTTP.
"""
return pulumi.get(self, "host_header")
@property
@pulumi.getter
def path(self) -> str:
"""
The request path used when checking by HTTP.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def port(self) -> int:
"""
The port number used for health checks.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This will be one of [`http`/`tcp`].
"""
return pulumi.get(self, "protocol")
@pulumi.output_type
class GetProxyLBRuleResult(dict):
def __init__(__self__, *,
action: str,
fixed_content_type: str,
fixed_message_body: str,
fixed_status_code: str,
group: str,
host: str,
path: str,
redirect_location: str,
redirect_status_code: str):
"""
:param str action: The type of action to be performed when a request matches the rule. This will be one of [`forward`/`redirect`/`fixed`].
:param str fixed_content_type: The Content-Type header value for the fixed response sent when a request matches the rule. This will be one of [`text/plain`/`text/html`/`application/javascript`/`application/json`].
:param str fixed_message_body: The content body for the fixed response sent when a request matches the rule.
:param str fixed_status_code: The HTTP status code for the fixed response sent when a request matches the rule. This will be one of [`200`/`403`/`503`].
:param str group: The name of the load balancing group. This is used with rule-based load balancing.
:param str host: The value of the HTTP Host header used as a condition for rule-based balancing.
:param str path: The request path used as a condition for rule-based balancing.
:param str redirect_location: The URL to redirect to when a request matches the rule. See https://manual.sakura.ad.jp/cloud/appliance/enhanced-lb/#enhanced-lb-rule for details.
:param str redirect_status_code: The HTTP status code for redirects sent when a request matches the rule. This will be one of [`301`/`302`].
"""
pulumi.set(__self__, "action", action)
pulumi.set(__self__, "fixed_content_type", fixed_content_type)
pulumi.set(__self__, "fixed_message_body", fixed_message_body)
pulumi.set(__self__, "fixed_status_code", fixed_status_code)
pulumi.set(__self__, "group", group)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "redirect_location", redirect_location)
pulumi.set(__self__, "redirect_status_code", redirect_status_code)
@property
@pulumi.getter
def action(self) -> str:
"""
The type of action to be performed when a request matches the rule. This will be one of [`forward`/`redirect`/`fixed`].
"""
return pulumi.get(self, "action")
@property
@pulumi.getter(name="fixedContentType")
def fixed_content_type(self) -> str:
"""
The Content-Type header value for the fixed response sent when a request matches the rule. This will be one of [`text/plain`/`text/html`/`application/javascript`/`application/json`].
"""
return pulumi.get(self, "fixed_content_type")
@property
@pulumi.getter(name="fixedMessageBody")
def fixed_message_body(self) -> str:
"""
The content body for the fixed response sent when a request matches the rule.
"""
return pulumi.get(self, "fixed_message_body")
@property
@pulumi.getter(name="fixedStatusCode")
def fixed_status_code(self) -> str:
"""
The HTTP status code for the fixed response sent when a request matches the rule. This will be one of [`200`/`403`/`503`].
"""
return pulumi.get(self, "fixed_status_code")
@property
@pulumi.getter
def group(self) -> str:
"""
The name of the load balancing group. This is used with rule-based load balancing.
"""
return pulumi.get(self, "group")
@property
@pulumi.getter
def host(self) -> str:
"""
The value of the HTTP Host header used as a condition for rule-based balancing.
"""
return pulumi.get(self, "host")
@property
@pulumi.getter
def path(self) -> str:
"""
The request path used as a condition for rule-based balancing.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter(name="redirectLocation")
def redirect_location(self) -> str:
"""
The URL to redirect to when a request matches the rule. See https://manual.sakura.ad.jp/cloud/appliance/enhanced-lb/#enhanced-lb-rule for details.
"""
return pulumi.get(self, "redirect_location")
@property
@pulumi.getter(name="redirectStatusCode")
def redirect_status_code(self) -> str:
"""
The HTTP status code for redirects sent when a request matches the rule. This will be one of [`301`/`302`].
"""
return pulumi.get(self, "redirect_status_code")
@pulumi.output_type
class GetProxyLBServerResult(dict):
def __init__(__self__, *,
enabled: bool,
group: str,
ip_address: str,
port: int):
"""
:param bool enabled: The flag to enable the server as a destination of load balancing.
:param str group: The name of the load balancing group. This is used with rule-based load balancing.
:param str ip_address: The IP address of the destination server.
:param int port: The port number of the destination server.
"""
pulumi.set(__self__, "enabled", enabled)
pulumi.set(__self__, "group", group)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "port", port)
@property
@pulumi.getter
def enabled(self) -> bool:
"""
The flag to enable the server as a destination of load balancing.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def group(self) -> str:
"""
The name of the load balancing group. This is used with rule-based load balancing.
"""
return pulumi.get(self, "group")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the destination server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def port(self) -> int:
"""
The port number of the destination server.
"""
return pulumi.get(self, "port")
@pulumi.output_type
class GetProxyLBSorryServerResult(dict):
def __init__(__self__, *,
ip_address: str,
port: int):
"""
:param str ip_address: The IP address of the SorryServer. This will be used when all servers are down.
:param int port: The port number of the SorryServer.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "port", port)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address of the SorryServer. This will be used when all servers are down.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def port(self) -> int:
"""
The port number of the SorryServer.
"""
return pulumi.get(self, "port")
@pulumi.output_type
class GetProxyLBSyslogResult(dict):
def __init__(__self__, *,
port: int,
server: str):
"""
:param int port: The port number of the syslog server.
:param str server: The address of the syslog server.
"""
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "server", server)
@property
@pulumi.getter
def port(self) -> int:
"""
The port number of the syslog server.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def server(self) -> str:
"""
The address of the syslog server.
"""
return pulumi.get(self, "server")
@pulumi.output_type
class GetSSHKeyFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetSSHKeyFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None):
"""
:param Sequence['GetSSHKeyFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource ID on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetSSHKeyFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource ID on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "names")
@pulumi.output_type
class GetSSHKeyFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetServerFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetServerFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetServerFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource ID on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetServerFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource ID on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetServerFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetServerNetworkInterfaceResult(dict):
def __init__(__self__, *,
mac_address: str,
packet_filter_id: str,
upstream: str,
user_ip_address: str):
"""
:param str mac_address: The MAC address.
:param str packet_filter_id: The ID of the packet filter attached to the network interface.
:param str upstream: The upstream type or upstream switch ID. This will be one of [`shared`/`disconnect`/`<switch id>`].
:param str user_ip_address: The IP address for display purposes only. This value doesn't affect the actual NIC settings.
"""
pulumi.set(__self__, "mac_address", mac_address)
pulumi.set(__self__, "packet_filter_id", packet_filter_id)
pulumi.set(__self__, "upstream", upstream)
pulumi.set(__self__, "user_ip_address", user_ip_address)
@property
@pulumi.getter(name="macAddress")
def mac_address(self) -> str:
"""
The MAC address.
"""
return pulumi.get(self, "mac_address")
@property
@pulumi.getter(name="packetFilterId")
def packet_filter_id(self) -> str:
"""
The ID of the packet filter attached to the network interface.
"""
return pulumi.get(self, "packet_filter_id")
@property
@pulumi.getter
def upstream(self) -> str:
"""
The upstream type or upstream switch ID. This will be one of [`shared`/`disconnect`/`<switch id>`].
"""
return pulumi.get(self, "upstream")
@property
@pulumi.getter(name="userIpAddress")
def user_ip_address(self) -> str:
"""
The IP address for display purposes only. This value doesn't affect the actual NIC settings.
"""
return pulumi.get(self, "user_ip_address")
@pulumi.output_type
class GetSimpleMonitorFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetSimpleMonitorFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetSimpleMonitorFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource ID on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetSimpleMonitorFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource ID on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetSimpleMonitorFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetSimpleMonitorHealthCheckResult(dict):
def __init__(__self__, *,
community: str,
contains_string: str,
excepcted_data: str,
ftps: str,
host_header: str,
http2: bool,
oid: str,
password: str,
path: str,
port: int,
protocol: str,
qname: str,
remaining_days: int,
sni: bool,
snmp_version: str,
status: int,
username: str):
"""
:param str community: The SNMP community string used when checking by SNMP.
:param str contains_string: The string that should be included in the response body when checking by HTTP/HTTPS.
:param str excepcted_data: The expected value used when checking by DNS. (The parameter name preserves the upstream API's spelling.)
:param str ftps: The method of invoking security for monitoring with FTPS. This will be one of [``/`implicit`/`explicit`].
:param str host_header: The value of the Host header sent when checking by HTTP/HTTPS.
:param bool http2: The flag to enable HTTP/2 when checking by HTTPS.
:param str oid: The SNMP OID used when checking by SNMP.
:param str password: The password for basic auth used when checking by HTTP/HTTPS.
:param str path: The path used when checking by HTTP/HTTPS.
:param int port: The target port number.
:param str protocol: The protocol used for health checks. This will be one of [`http`/`https`/`ping`/`tcp`/`dns`/`ssh`/`smtp`/`pop3`/`snmp`/`sslcertificate`/`ftp`].
:param str qname: The FQDN used when checking by DNS.
:param int remaining_days: The number of days remaining until certificate expiration, used when checking SSL certificates.
:param bool sni: The flag to enable SNI when checking by HTTP/HTTPS.
:param str snmp_version: The SNMP version used when checking by SNMP.
:param int status: The response code expected when checking by HTTP/HTTPS.
:param str username: The user name for basic auth used when checking by HTTP/HTTPS.
"""
pulumi.set(__self__, "community", community)
pulumi.set(__self__, "contains_string", contains_string)
pulumi.set(__self__, "excepcted_data", excepcted_data)
pulumi.set(__self__, "ftps", ftps)
pulumi.set(__self__, "host_header", host_header)
pulumi.set(__self__, "http2", http2)
pulumi.set(__self__, "oid", oid)
pulumi.set(__self__, "password", password)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "qname", qname)
pulumi.set(__self__, "remaining_days", remaining_days)
pulumi.set(__self__, "sni", sni)
pulumi.set(__self__, "snmp_version", snmp_version)
pulumi.set(__self__, "status", status)
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def community(self) -> str:
"""
The SNMP community string used when checking by SNMP.
"""
return pulumi.get(self, "community")
@property
@pulumi.getter(name="containsString")
def contains_string(self) -> str:
"""
The string that should be included in the response body when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "contains_string")
@property
@pulumi.getter(name="excepctedData")
def excepcted_data(self) -> str:
"""
The expected value used when checking by DNS.
"""
return pulumi.get(self, "excepcted_data")
@property
@pulumi.getter
def ftps(self) -> str:
"""
The method of invoking security for monitoring with FTPS. This will be one of [``/`implicit`/`explicit`].
"""
return pulumi.get(self, "ftps")
@property
@pulumi.getter(name="hostHeader")
def host_header(self) -> str:
"""
The value of the Host header sent when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "host_header")
@property
@pulumi.getter
def http2(self) -> bool:
"""
The flag to enable HTTP/2 when checking by HTTPS.
"""
return pulumi.get(self, "http2")
@property
@pulumi.getter
def oid(self) -> str:
"""
The SNMP OID used when checking by SNMP.
"""
return pulumi.get(self, "oid")
@property
@pulumi.getter
def password(self) -> str:
"""
The password for basic auth used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "password")
@property
@pulumi.getter
def path(self) -> str:
"""
The path used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def port(self) -> int:
"""
The target port number.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for health checks. This will be one of [`http`/`https`/`ping`/`tcp`/`dns`/`ssh`/`smtp`/`pop3`/`snmp`/`sslcertificate`/`ftp`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter
def qname(self) -> str:
"""
The FQDN used when checking by DNS.
"""
return pulumi.get(self, "qname")
@property
@pulumi.getter(name="remainingDays")
def remaining_days(self) -> int:
"""
The number of days remaining until certificate expiration, used when checking SSL certificates.
"""
return pulumi.get(self, "remaining_days")
@property
@pulumi.getter
def sni(self) -> bool:
"""
The flag to enable SNI when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "sni")
@property
@pulumi.getter(name="snmpVersion")
def snmp_version(self) -> str:
"""
The SNMP version used when checking by SNMP.
"""
return pulumi.get(self, "snmp_version")
@property
@pulumi.getter
def status(self) -> int:
"""
The response code expected when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def username(self) -> str:
"""
The user name for basic auth used when checking by HTTP/HTTPS.
"""
return pulumi.get(self, "username")
@pulumi.output_type
class GetSwitchFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetSwitchFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetSwitchFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource ID on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetSwitchFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource ID on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetSwitchFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetVPCRouterDhcpServerResult(dict):
def __init__(__self__, *,
dns_servers: Sequence[str],
interface_index: int,
range_start: str,
range_stop: str):
"""
:param Sequence[str] dns_servers: A list of IP addresses of DNS servers to assign to DHCP clients.
:param int interface_index: The index of the network interface on which to enable the DHCP server. This will be between `0`-`7`.
:param str range_start: The start of the IP address range to assign to DHCP clients.
:param str range_stop: The end of the IP address range to assign to DHCP clients.
"""
pulumi.set(__self__, "dns_servers", dns_servers)
pulumi.set(__self__, "interface_index", interface_index)
pulumi.set(__self__, "range_start", range_start)
pulumi.set(__self__, "range_stop", range_stop)
@property
@pulumi.getter(name="dnsServers")
def dns_servers(self) -> Sequence[str]:
"""
A list of IP addresses of DNS servers to assign to DHCP clients.
"""
return pulumi.get(self, "dns_servers")
@property
@pulumi.getter(name="interfaceIndex")
def interface_index(self) -> int:
"""
The index of the network interface on which to enable the DHCP server. This will be between `0`-`7`.
"""
return pulumi.get(self, "interface_index")
@property
@pulumi.getter(name="rangeStart")
def range_start(self) -> str:
"""
The start of the IP address range to assign to DHCP clients.
"""
return pulumi.get(self, "range_start")
@property
@pulumi.getter(name="rangeStop")
def range_stop(self) -> str:
"""
The end of the IP address range to assign to DHCP clients.
"""
return pulumi.get(self, "range_stop")
@pulumi.output_type
class GetVPCRouterDhcpStaticMappingResult(dict):
def __init__(__self__, *,
ip_address: str,
mac_address: str):
"""
:param str ip_address: The IP address to assign to the DHCP client.
:param str mac_address: The source MAC address of the static mapping.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "mac_address", mac_address)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address to assign to the DHCP client.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="macAddress")
def mac_address(self) -> str:
"""
The source MAC address of the static mapping.
"""
return pulumi.get(self, "mac_address")
@pulumi.output_type
class GetVPCRouterFilterResult(dict):
def __init__(__self__, *,
conditions: Optional[Sequence['outputs.GetVPCRouterFilterConditionResult']] = None,
id: Optional[str] = None,
names: Optional[Sequence[str]] = None,
tags: Optional[Sequence[str]] = None):
"""
:param Sequence['GetVPCRouterFilterConditionArgs'] conditions: One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
:param str id: The resource ID on SakuraCloud used for filtering.
:param Sequence[str] names: The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
:param Sequence[str] tags: The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if id is not None:
pulumi.set(__self__, "id", id)
if names is not None:
pulumi.set(__self__, "names", names)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def conditions(self) -> Optional[Sequence['outputs.GetVPCRouterFilterConditionResult']]:
"""
One or more name/value pairs used for filtering. There are several valid keys; for a full reference, see the Finding section in the [SakuraCloud API reference](https://developer.sakura.ad.jp/cloud/api/1.1/).
"""
return pulumi.get(self, "conditions")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The resource ID on SakuraCloud used for filtering.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def names(self) -> Optional[Sequence[str]]:
"""
The resource names on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence[str]]:
"""
The resource tags on SakuraCloud used for filtering. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "tags")
@pulumi.output_type
class GetVPCRouterFilterConditionResult(dict):
def __init__(__self__, *,
name: str,
values: Sequence[str]):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param Sequence[str] values: The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
The values of the condition. If multiple values are specified, they are combined with AND logic.
"""
return pulumi.get(self, "values")
@pulumi.output_type
class GetVPCRouterFirewallResult(dict):
def __init__(__self__, *,
direction: str,
expressions: Sequence['outputs.GetVPCRouterFirewallExpressionResult'],
interface_index: int):
"""
:param str direction: The direction to apply the firewall. This will be one of [`send`/`receive`].
:param Sequence['GetVPCRouterFirewallExpressionArgs'] expressions: A list of `expression` blocks as defined below.
:param int interface_index: The index of the network interface on which to enable filtering. This will be between `0`-`7`.
"""
pulumi.set(__self__, "direction", direction)
pulumi.set(__self__, "expressions", expressions)
pulumi.set(__self__, "interface_index", interface_index)
@property
@pulumi.getter
def direction(self) -> str:
"""
The direction to apply the firewall. This will be one of [`send`/`receive`].
"""
return pulumi.get(self, "direction")
@property
@pulumi.getter
def expressions(self) -> Sequence['outputs.GetVPCRouterFirewallExpressionResult']:
"""
A list of `expression` blocks as defined below.
"""
return pulumi.get(self, "expressions")
@property
@pulumi.getter(name="interfaceIndex")
def interface_index(self) -> int:
"""
The index of the network interface on which to enable filtering. This will be between `0`-`7`.
"""
return pulumi.get(self, "interface_index")
@pulumi.output_type
class GetVPCRouterFirewallExpressionResult(dict):
def __init__(__self__, *,
allow: bool,
description: str,
destination_network: str,
destination_port: str,
logging: bool,
protocol: str,
source_network: str,
source_port: str):
"""
:param bool allow: The flag to allow the packet through the filter.
:param str description: The description of the static NAT.
:param str destination_network: A destination IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str destination_port: A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
:param bool logging: The flag to enable packet logging when matching the expression.
:param str protocol: The protocol used for port forwarding. This will be one of [`tcp`/`udp`].
:param str source_network: A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
:param str source_port: A source port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
"""
pulumi.set(__self__, "allow", allow)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "destination_network", destination_network)
pulumi.set(__self__, "destination_port", destination_port)
pulumi.set(__self__, "logging", logging)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "source_network", source_network)
pulumi.set(__self__, "source_port", source_port)
@property
@pulumi.getter
def allow(self) -> bool:
"""
The flag to allow the packet through the filter.
"""
return pulumi.get(self, "allow")
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the static NAT.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="destinationNetwork")
def destination_network(self) -> str:
"""
A destination IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "destination_network")
@property
@pulumi.getter(name="destinationPort")
def destination_port(self) -> str:
"""
A destination port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
"""
return pulumi.get(self, "destination_port")
@property
@pulumi.getter
def logging(self) -> bool:
"""
The flag to enable packet logging when matching the expression.
"""
return pulumi.get(self, "logging")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for port forwarding. This will be one of [`tcp`/`udp`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="sourceNetwork")
def source_network(self) -> str:
"""
A source IP address or CIDR block used for filtering (e.g. `192.0.2.1`, `192.0.2.0/24`).
"""
return pulumi.get(self, "source_network")
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> str:
"""
A source port number or port range used for filtering (e.g. `1024`, `1024-2048`). This is only used when `protocol` is `tcp` or `udp`.
"""
return pulumi.get(self, "source_port")
@pulumi.output_type
class GetVPCRouterL2tpResult(dict):
def __init__(__self__, *,
pre_shared_secret: str,
range_start: str,
range_stop: str):
"""
:param str pre_shared_secret: The pre shared secret for the VPN.
:param str range_start: The start value of IP address range to assign to PPTP client.
:param str range_stop: The end value of IP address range to assign to PPTP client.
"""
pulumi.set(__self__, "pre_shared_secret", pre_shared_secret)
pulumi.set(__self__, "range_start", range_start)
pulumi.set(__self__, "range_stop", range_stop)
@property
@pulumi.getter(name="preSharedSecret")
def pre_shared_secret(self) -> str:
"""
The pre shared secret for the VPN.
"""
return pulumi.get(self, "pre_shared_secret")
@property
@pulumi.getter(name="rangeStart")
def range_start(self) -> str:
"""
The start value of IP address range to assign to PPTP client.
"""
return pulumi.get(self, "range_start")
@property
@pulumi.getter(name="rangeStop")
def range_stop(self) -> str:
"""
The end value of IP address range to assign to PPTP client.
"""
return pulumi.get(self, "range_stop")
@pulumi.output_type
class GetVPCRouterPortForwardingResult(dict):
def __init__(__self__, *,
description: str,
private_ip: str,
private_port: int,
protocol: str,
public_port: int):
"""
:param str description: The description of the static NAT.
:param str private_ip: The private IP address used for the static NAT.
:param int private_port: The destination port number of the port forwarding. This will be a port number on a private network.
:param str protocol: The protocol used for port forwarding. This will be one of [`tcp`/`udp`].
:param int public_port: The source port number of the port forwarding. This will be a port number on a public network.
"""
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "private_ip", private_ip)
pulumi.set(__self__, "private_port", private_port)
pulumi.set(__self__, "protocol", protocol)
pulumi.set(__self__, "public_port", public_port)
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the static NAT.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="privateIp")
def private_ip(self) -> str:
"""
The private IP address used for the static NAT.
"""
return pulumi.get(self, "private_ip")
@property
@pulumi.getter(name="privatePort")
def private_port(self) -> int:
"""
The destination port number of the port forwarding. This will be a port number on a private network.
"""
return pulumi.get(self, "private_port")
@property
@pulumi.getter
def protocol(self) -> str:
"""
The protocol used for port forwarding. This will be one of [`tcp`/`udp`].
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="publicPort")
def public_port(self) -> int:
"""
The source port number of the port forwarding. This will be a port number on a public network.
"""
return pulumi.get(self, "public_port")
@pulumi.output_type
class GetVPCRouterPptpResult(dict):
def __init__(__self__, *,
range_start: str,
range_stop: str):
"""
:param str range_start: The start value of IP address range to assign to PPTP client.
:param str range_stop: The end value of IP address range to assign to PPTP client.
"""
pulumi.set(__self__, "range_start", range_start)
pulumi.set(__self__, "range_stop", range_stop)
@property
@pulumi.getter(name="rangeStart")
def range_start(self) -> str:
"""
The start value of IP address range to assign to PPTP client.
"""
return pulumi.get(self, "range_start")
@property
@pulumi.getter(name="rangeStop")
def range_stop(self) -> str:
"""
The end value of IP address range to assign to PPTP client.
"""
return pulumi.get(self, "range_stop")
@pulumi.output_type
class GetVPCRouterPrivateNetworkInterfaceResult(dict):
def __init__(__self__, *,
index: int,
ip_addresses: Sequence[str],
netmask: int,
switch_id: str,
vip: str):
"""
:param int index: The index of the network interface. This will be between `1`-`7`.
:param Sequence[str] ip_addresses: The list of the IP address assigned to the VPC Router. This will be only one value when `plan` is `standard`, two values otherwise.
:param int netmask: The bit length of the subnet assigned to the network interface.
:param str switch_id: The id of the switch connected from the VPCRouter.
:param str vip: The virtual IP address of the VPC Router. This is only used when `plan` is not `standard`.
"""
pulumi.set(__self__, "index", index)
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "netmask", netmask)
pulumi.set(__self__, "switch_id", switch_id)
pulumi.set(__self__, "vip", vip)
@property
@pulumi.getter
def index(self) -> int:
"""
The index of the network interface. This will be between `1`-`7`.
"""
return pulumi.get(self, "index")
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
The list of the IP address assigned to the VPC Router. This will be only one value when `plan` is `standard`, two values otherwise.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter
def netmask(self) -> int:
"""
The bit length of the subnet assigned to the network interface.
"""
return pulumi.get(self, "netmask")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch connected from the VPCRouter.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def vip(self) -> str:
"""
The virtual IP address of the VPC Router. This is only used when `plan` is not `standard`.
"""
return pulumi.get(self, "vip")
@pulumi.output_type
class GetVPCRouterPublicNetworkInterfaceResult(dict):
def __init__(__self__, *,
aliases: Sequence[str],
ip_addresses: Sequence[str],
switch_id: str,
vip: str,
vrid: int):
"""
:param Sequence[str] aliases: A list of ip alias assigned to the VPC Router. This is only used when `plan` is not `standard`.
:param Sequence[str] ip_addresses: The list of the IP address assigned to the VPC Router. This will be only one value when `plan` is `standard`, two values otherwise.
:param str switch_id: The id of the switch connected from the VPCRouter.
:param str vip: The virtual IP address of the VPC Router. This is only used when `plan` is not `standard`.
:param int vrid: The Virtual Router Identifier. This is only used when `plan` is not `standard`.
"""
pulumi.set(__self__, "aliases", aliases)
pulumi.set(__self__, "ip_addresses", ip_addresses)
pulumi.set(__self__, "switch_id", switch_id)
pulumi.set(__self__, "vip", vip)
pulumi.set(__self__, "vrid", vrid)
@property
@pulumi.getter
def aliases(self) -> Sequence[str]:
"""
A list of ip alias assigned to the VPC Router. This is only used when `plan` is not `standard`.
"""
return pulumi.get(self, "aliases")
@property
@pulumi.getter(name="ipAddresses")
def ip_addresses(self) -> Sequence[str]:
"""
The list of the IP address assigned to the VPC Router. This will be only one value when `plan` is `standard`, two values otherwise.
"""
return pulumi.get(self, "ip_addresses")
@property
@pulumi.getter(name="switchId")
def switch_id(self) -> str:
"""
The id of the switch connected from the VPCRouter.
"""
return pulumi.get(self, "switch_id")
@property
@pulumi.getter
def vip(self) -> str:
"""
The virtual IP address of the VPC Router. This is only used when `plan` is not `standard`.
"""
return pulumi.get(self, "vip")
@property
@pulumi.getter
def vrid(self) -> int:
"""
The Virtual Router Identifier. This is only used when `plan` is not `standard`.
"""
return pulumi.get(self, "vrid")
@pulumi.output_type
class GetVPCRouterSiteToSiteVpnResult(dict):
def __init__(__self__, *,
local_prefixes: Sequence[str],
peer: str,
pre_shared_secret: str,
remote_id: str,
routes: Sequence[str]):
"""
:param Sequence[str] local_prefixes: A list of CIDR block of the network under the VPC Router.
:param str peer: A list of `peer` blocks as defined below.
:param str pre_shared_secret: The pre shared secret for the VPN.
:param str remote_id: The id of the opposing appliance connected to the VPC Router. This is typically set same as value of `peer`.
:param Sequence[str] routes: A list of CIDR block of VPN connected networks.
"""
pulumi.set(__self__, "local_prefixes", local_prefixes)
pulumi.set(__self__, "peer", peer)
pulumi.set(__self__, "pre_shared_secret", pre_shared_secret)
pulumi.set(__self__, "remote_id", remote_id)
pulumi.set(__self__, "routes", routes)
@property
@pulumi.getter(name="localPrefixes")
def local_prefixes(self) -> Sequence[str]:
"""
A list of CIDR block of the network under the VPC Router.
"""
return pulumi.get(self, "local_prefixes")
@property
@pulumi.getter
def peer(self) -> str:
"""
A list of `peer` blocks as defined below.
"""
return pulumi.get(self, "peer")
@property
@pulumi.getter(name="preSharedSecret")
def pre_shared_secret(self) -> str:
"""
The pre shared secret for the VPN.
"""
return pulumi.get(self, "pre_shared_secret")
@property
@pulumi.getter(name="remoteId")
def remote_id(self) -> str:
"""
The id of the opposing appliance connected to the VPC Router. This is typically set same as value of `peer`.
"""
return pulumi.get(self, "remote_id")
@property
@pulumi.getter
def routes(self) -> Sequence[str]:
"""
A list of CIDR block of VPN connected networks.
"""
return pulumi.get(self, "routes")
@pulumi.output_type
class GetVPCRouterStaticNatResult(dict):
def __init__(__self__, *,
description: str,
private_ip: str,
public_ip: str):
"""
:param str description: The description of the static NAT.
:param str private_ip: The private IP address used for the static NAT.
:param str public_ip: The public IP address used for the static NAT.
"""
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "private_ip", private_ip)
pulumi.set(__self__, "public_ip", public_ip)
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the static NAT.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="privateIp")
def private_ip(self) -> str:
"""
The private IP address used for the static NAT.
"""
return pulumi.get(self, "private_ip")
@property
@pulumi.getter(name="publicIp")
def public_ip(self) -> str:
"""
The public IP address used for the static NAT.
"""
return pulumi.get(self, "public_ip")
@pulumi.output_type
class GetVPCRouterStaticRouteResult(dict):
def __init__(__self__, *,
next_hop: str,
prefix: str):
"""
:param str next_hop: The IP address of the next hop.
:param str prefix: The CIDR block of destination.
"""
pulumi.set(__self__, "next_hop", next_hop)
pulumi.set(__self__, "prefix", prefix)
@property
@pulumi.getter(name="nextHop")
def next_hop(self) -> str:
"""
The IP address of the next hop.
"""
return pulumi.get(self, "next_hop")
@property
@pulumi.getter
def prefix(self) -> str:
"""
The CIDR block of destination.
"""
return pulumi.get(self, "prefix")
@pulumi.output_type
class GetVPCRouterUserResult(dict):
def __init__(__self__, *,
name: str,
password: str):
"""
:param str name: The name of the target field. This value is case-sensitive.
:param str password: The password used to authenticate remote access.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "password", password)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def password(self) -> str:
"""
The password used to authenticate remote access.
"""
return pulumi.get(self, "password")
@pulumi.output_type
class GetVPCRouterWireGuardResult(dict):
def __init__(__self__, *,
ip_address: str,
peers: Sequence['outputs.GetVPCRouterWireGuardPeerResult'],
public_key: str):
"""
:param str ip_address: The IP address for peer.
:param Sequence['GetVPCRouterWireGuardPeerArgs'] peers: A list of `peer` blocks as defined below.
:param str public_key: the public key of the WireGuard client.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "peers", peers)
pulumi.set(__self__, "public_key", public_key)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address for peer.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def peers(self) -> Sequence['outputs.GetVPCRouterWireGuardPeerResult']:
"""
A list of `peer` blocks as defined below.
"""
return pulumi.get(self, "peers")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> str:
"""
the public key of the WireGuard client.
"""
return pulumi.get(self, "public_key")
@pulumi.output_type
class GetVPCRouterWireGuardPeerResult(dict):
def __init__(__self__, *,
ip_address: str,
name: str,
public_key: str):
"""
:param str ip_address: The IP address for peer.
:param str name: The name of the target field. This value is case-sensitive.
:param str public_key: the public key of the WireGuard client.
"""
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "public_key", public_key)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
The IP address for peer.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the target field. This value is case-sensitive.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> str:
"""
the public key of the WireGuard client.
"""
return pulumi.get(self, "public_key")
| 36.131172 | 292 | 0.613223 | 39,017 | 334,394 | 5.087859 | 0.019863 | 0.023414 | 0.040143 | 0.058671 | 0.902213 | 0.893075 | 0.884355 | 0.860694 | 0.845778 | 0.828817 | 0 | 0.004375 | 0.280866 | 334,394 | 9,254 | 293 | 36.135077 | 0.821132 | 0.313672 | 0 | 0.84055 | 1 | 0.009478 | 0.137865 | 0.0385 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170229 | false | 0.004832 | 0.001115 | 0.000186 | 0.332094 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
80bf882215f8015d39584e9e734b84e5e76f178f | 712 | py | Python | tests/test_provider_phzietsman_awsx.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | tests/test_provider_phzietsman_awsx.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | tests/test_provider_phzietsman_awsx.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # tests/test_provider_phzietsman_awsx.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:12:40 UTC)
def test_provider_import():
import terrascript.provider.phzietsman.awsx
def test_resource_import():
from terrascript.resource.phzietsman.awsx import controltower_account_vending
# TODO: Shortcut imports without namespace for official and supported providers.
# TODO: This has to be moved into a required_providers block.
# def test_version_source():
#
# import terrascript.provider.phzietsman.awsx
#
# t = terrascript.provider.phzietsman.awsx.awsx()
# s = str(t)
#
# assert 'https://github.com/phzietsman/terraform-provider-awsx' in s
# assert '1.0.0' in s
| 28.48 | 81 | 0.75 | 96 | 712 | 5.4375 | 0.604167 | 0.1341 | 0.168582 | 0.189655 | 0.149425 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024876 | 0.15309 | 712 | 24 | 82 | 29.666667 | 0.840796 | 0.691011 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
039151117c3d0b8f079f617592ed3a1090d52454 | 491 | py | Python | python/base_agent/dialogue_objects/reference_object_helpers.py | boldsort/craftassist | 8058d115a250e30deb60d969b7b1a5fefd6e974c | [
"MIT"
] | 626 | 2019-07-18T18:40:44.000Z | 2022-03-29T17:34:43.000Z | python/base_agent/dialogue_objects/reference_object_helpers.py | boldsort/craftassist | 8058d115a250e30deb60d969b7b1a5fefd6e974c | [
"MIT"
] | 42 | 2019-07-27T11:04:15.000Z | 2021-02-23T03:15:14.000Z | python/base_agent/dialogue_objects/reference_object_helpers.py | boldsort/craftassist | 8058d115a250e30deb60d969b7b1a5fefd6e974c | [
"MIT"
] | 89 | 2019-07-19T15:07:39.000Z | 2022-02-15T18:44:24.000Z | """
Copyright (c) Facebook, Inc. and its affiliates.
"""
# TODO rewrite functions in intepreter and helpers as classes
# finer granularity of (code) objects
# interpreter is an input to interpret ref object, maybe clean that up?
class ReferenceObjectInterpreter:
def __init__(self, interpret_reference_object):
self.interpret_reference_object = interpret_reference_object
def __call__(self, *args, **kwargs):
return self.interpret_reference_object(*args, **kwargs)
| 35.071429 | 71 | 0.757637 | 60 | 491 | 5.933333 | 0.7 | 0.202247 | 0.269663 | 0.235955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160896 | 491 | 13 | 72 | 37.769231 | 0.864078 | 0.437882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
039c5a2cdeaaee07af502ee87ba166ba9a1eb2a4 | 98 | py | Python | diffpy.utils/run_test.py | st3107/conda-recipes | 61a8fbefa807f43f1023397fd00310551da200a9 | [
"BSD-3-Clause"
] | null | null | null | diffpy.utils/run_test.py | st3107/conda-recipes | 61a8fbefa807f43f1023397fd00310551da200a9 | [
"BSD-3-Clause"
] | null | null | null | diffpy.utils/run_test.py | st3107/conda-recipes | 61a8fbefa807f43f1023397fd00310551da200a9 | [
"BSD-3-Clause"
] | 1 | 2020-12-01T18:11:29.000Z | 2020-12-01T18:11:29.000Z | #!/usr/bin/env python
import diffpy.utils.tests
assert diffpy.utils.tests.test().wasSuccessful()
| 19.6 | 48 | 0.77551 | 14 | 98 | 5.428571 | 0.785714 | 0.289474 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 98 | 4 | 49 | 24.5 | 0.835165 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
20d0ac9e7fd436d1bdb98c8ce1527d26f7a5c75c | 209 | py | Python | quark_core_api/common/__init__.py | arcticle/Quark | 17aa5b5869a9e9c7a04c1a371fef5998f33dc319 | [
"MIT"
] | null | null | null | quark_core_api/common/__init__.py | arcticle/Quark | 17aa5b5869a9e9c7a04c1a371fef5998f33dc319 | [
"MIT"
] | null | null | null | quark_core_api/common/__init__.py | arcticle/Quark | 17aa5b5869a9e9c7a04c1a371fef5998f33dc319 | [
"MIT"
] | null | null | null | from quark_core_api.common.events import EventHandler, DelayedEventHandler
from quark_core_api.common.utils import Cache, ReadOnlyCache
from quark_core_api.common.context_initializers import ContextInitializer | 69.666667 | 74 | 0.899522 | 27 | 209 | 6.703704 | 0.555556 | 0.149171 | 0.21547 | 0.265193 | 0.364641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062201 | 209 | 3 | 75 | 69.666667 | 0.923469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
459fcbbcd472224b870158a9acde32b1a910fc41 | 1,863 | py | Python | api/app/users/mixins.py | NikolaSiplakova/Baobab | 180cd3cb492ed47d38ca0b473572fad0ac6f604b | [
"Apache-2.0"
] | 52 | 2019-01-10T16:04:26.000Z | 2022-02-10T00:55:59.000Z | api/app/users/mixins.py | NikolaSiplakova/Baobab | 180cd3cb492ed47d38ca0b473572fad0ac6f604b | [
"Apache-2.0"
] | 535 | 2019-01-08T21:24:01.000Z | 2022-02-27T15:24:06.000Z | api/app/users/mixins.py | NikolaSiplakova/Baobab | 180cd3cb492ed47d38ca0b473572fad0ac6f604b | [
"Apache-2.0"
] | 36 | 2019-01-10T16:09:15.000Z | 2021-06-28T21:02:47.000Z | from flask_restful import reqparse
from flask_restful.inputs import boolean
class SignupMixin(object):
req_parser = reqparse.RequestParser()
req_parser.add_argument('email', type=str, required=True)
req_parser.add_argument('firstname', type=str, required=True)
req_parser.add_argument('lastname', type=str, required=True)
req_parser.add_argument('user_title', type=str, required=True)
req_parser.add_argument('password', type=str, required=False)
req_parser.add_argument('policy_agreed', type=boolean, required=True)
req_parser.add_argument('language', type=str, required=True)
put_req_parser = reqparse.RequestParser()
put_req_parser.add_argument('email', type=str, required=True)
put_req_parser.add_argument('firstname', type=str, required=True)
put_req_parser.add_argument('lastname', type=str, required=True)
put_req_parser.add_argument('user_title', type=str, required=True)
put_req_parser.add_argument('language', type=str, required=True)
put_req_parser.add_argument('password', type=str, required=False)
class AuthenticateMixin(object):
req_parser = reqparse.RequestParser()
req_parser.add_argument('email', type=str, required=True)
req_parser.add_argument('password', type=str, required=True)
class UserProfileListMixin(object):
req_parser = reqparse.RequestParser()
req_parser.add_argument('event_id', type=int, required=True)
class UserProfileMixin(object):
req_parser = reqparse.RequestParser()
req_parser.add_argument('user_id', type=int, required=True)
class PrivacyPolicyMixin(object):
req_parser = reqparse.RequestParser()
req_parser.add_argument('policy_agreed', type=boolean, required=True)
class EventAttendeeMixin(object):
req_parser = reqparse.RequestParser()
req_parser.add_argument('event_id', type=int, required=True)
| 35.150943 | 73 | 0.764895 | 242 | 1,863 | 5.636364 | 0.157025 | 0.171554 | 0.167155 | 0.278592 | 0.846041 | 0.840909 | 0.818182 | 0.818182 | 0.818182 | 0.552786 | 0 | 0 | 0.118089 | 1,863 | 52 | 74 | 35.826923 | 0.830189 | 0 | 0 | 0.352941 | 0 | 0 | 0.084809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.088235 | 0.058824 | 0 | 0.441176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
45bb5471deea1a4469e9152fae2840365615173c | 106 | py | Python | pytsp/cli/__init__.py | billsioros/pytsp | 7f3a8172bcb3bb9bec8655dcb490099b60a4c962 | [
"MIT"
] | 7 | 2020-07-09T10:26:28.000Z | 2021-06-13T06:40:30.000Z | pytsp/cli/__init__.py | billsioros/pytsp | 7f3a8172bcb3bb9bec8655dcb490099b60a4c962 | [
"MIT"
] | null | null | null | pytsp/cli/__init__.py | billsioros/pytsp | 7f3a8172bcb3bb9bec8655dcb490099b60a4c962 | [
"MIT"
] | null | null | null |
from pytsp.cli.decorators import plot, safe
from pytsp.cli.options import Dictionary, Timewindow, Method
| 26.5 | 60 | 0.820755 | 15 | 106 | 5.8 | 0.733333 | 0.206897 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113208 | 106 | 3 | 61 | 35.333333 | 0.925532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b313bd69ed840f174fc7eadceb29934f09e59474 | 35,994 | py | Python | sdk/python/pulumi_azure/domainservices/replica_set.py | ScriptBox99/pulumi-azure | 1b8c6d5479ccabc39094741eac25a8ca44c8833a | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/domainservices/replica_set.py | ScriptBox99/pulumi-azure | 1b8c6d5479ccabc39094741eac25a8ca44c8833a | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/domainservices/replica_set.py | ScriptBox99/pulumi-azure | 1b8c6d5479ccabc39094741eac25a8ca44c8833a | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ReplicaSetArgs', 'ReplicaSet']
@pulumi.input_type
class ReplicaSetArgs:
def __init__(__self__, *,
domain_service_id: pulumi.Input[str],
subnet_id: pulumi.Input[str],
location: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a ReplicaSet resource.
:param pulumi.Input[str] domain_service_id: The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] subnet_id: The ID of the subnet in which to place this Replica Set.
:param pulumi.Input[str] location: The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
"""
pulumi.set(__self__, "domain_service_id", domain_service_id)
pulumi.set(__self__, "subnet_id", subnet_id)
if location is not None:
pulumi.set(__self__, "location", location)
@property
@pulumi.getter(name="domainServiceId")
def domain_service_id(self) -> pulumi.Input[str]:
"""
The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "domain_service_id")
@domain_service_id.setter
def domain_service_id(self, value: pulumi.Input[str]):
pulumi.set(self, "domain_service_id", value)
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> pulumi.Input[str]:
"""
The ID of the subnet in which to place this Replica Set.
"""
return pulumi.get(self, "subnet_id")
@subnet_id.setter
def subnet_id(self, value: pulumi.Input[str]):
pulumi.set(self, "subnet_id", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@pulumi.input_type
class _ReplicaSetState:
def __init__(__self__, *,
domain_controller_ip_addresses: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
domain_service_id: Optional[pulumi.Input[str]] = None,
external_access_ip_address: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
service_status: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ReplicaSet resources.
:param pulumi.Input[Sequence[pulumi.Input[str]]] domain_controller_ip_addresses: A list of subnet IP addresses for the domain controllers in this Replica Set, typically two.
:param pulumi.Input[str] domain_service_id: The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] external_access_ip_address: The publicly routable IP address for the domain controllers in this Replica Set.
:param pulumi.Input[str] location: The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] service_status: The current service status for the replica set.
:param pulumi.Input[str] subnet_id: The ID of the subnet in which to place this Replica Set.
"""
if domain_controller_ip_addresses is not None:
pulumi.set(__self__, "domain_controller_ip_addresses", domain_controller_ip_addresses)
if domain_service_id is not None:
pulumi.set(__self__, "domain_service_id", domain_service_id)
if external_access_ip_address is not None:
pulumi.set(__self__, "external_access_ip_address", external_access_ip_address)
if location is not None:
pulumi.set(__self__, "location", location)
if service_status is not None:
pulumi.set(__self__, "service_status", service_status)
if subnet_id is not None:
pulumi.set(__self__, "subnet_id", subnet_id)
@property
@pulumi.getter(name="domainControllerIpAddresses")
def domain_controller_ip_addresses(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of subnet IP addresses for the domain controllers in this Replica Set, typically two.
"""
return pulumi.get(self, "domain_controller_ip_addresses")
@domain_controller_ip_addresses.setter
def domain_controller_ip_addresses(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "domain_controller_ip_addresses", value)
@property
@pulumi.getter(name="domainServiceId")
def domain_service_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "domain_service_id")
@domain_service_id.setter
def domain_service_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "domain_service_id", value)
@property
@pulumi.getter(name="externalAccessIpAddress")
def external_access_ip_address(self) -> Optional[pulumi.Input[str]]:
"""
The publicly routable IP address for the domain controllers in this Replica Set.
"""
return pulumi.get(self, "external_access_ip_address")
@external_access_ip_address.setter
def external_access_ip_address(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "external_access_ip_address", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter(name="serviceStatus")
def service_status(self) -> Optional[pulumi.Input[str]]:
"""
The current service status for the replica set.
"""
return pulumi.get(self, "service_status")
@service_status.setter
def service_status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "service_status", value)
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the subnet in which to place this Replica Set.
"""
return pulumi.get(self, "subnet_id")
@subnet_id.setter
def subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "subnet_id", value)
class ReplicaSet(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
domain_service_id: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Manages a Replica Set for an Active Directory Domain Service.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
import pulumi_azuread as azuread
primary_resource_group = azure.core.ResourceGroup("primaryResourceGroup", location="West Europe")
primary_virtual_network = azure.network.VirtualNetwork("primaryVirtualNetwork",
location=primary_resource_group.location,
resource_group_name=primary_resource_group.name,
address_spaces=["10.0.0.0/16"])
primary_subnet = azure.network.Subnet("primarySubnet",
resource_group_name=primary_resource_group.name,
virtual_network_name=primary_virtual_network.name,
address_prefixes=["10.0.1.0/24"])
primary_network_security_group = azure.network.NetworkSecurityGroup("primaryNetworkSecurityGroup",
location=primary_resource_group.location,
resource_group_name=primary_resource_group.name,
security_rules=[
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowSyncWithAzureAD",
priority=101,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="443",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowRD",
priority=201,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="3389",
source_address_prefix="CorpNetSaw",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowPSRemoting",
priority=301,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="5986",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowLDAPS",
priority=401,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="636",
source_address_prefix="*",
destination_address_prefix="*",
),
])
primary_subnet_network_security_group_association = azure.network.SubnetNetworkSecurityGroupAssociation("primarySubnetNetworkSecurityGroupAssociation",
subnet_id=primary_subnet.id,
network_security_group_id=primary_network_security_group.id)
dc_admins = azuread.Group("dcAdmins")
admin_user = azuread.User("adminUser",
user_principal_name="dc-admin@hashicorp-example.net",
display_name="DC Administrator",
password="Pa55w0Rd!!1")
admin_group_member = azuread.GroupMember("adminGroupMember",
group_object_id=dc_admins.object_id,
member_object_id=admin_user.object_id)
example_service_principal = azuread.ServicePrincipal("exampleServicePrincipal", application_id="2565bd9d-da50-47d4-8b85-4c97f669dc36")  # published app for domain services
aadds = azure.core.ResourceGroup("aadds", location="westeurope")
example_service = azure.domainservices.Service("exampleService",
location=aadds.location,
resource_group_name=aadds.name,
domain_name="widgetslogin.net",
sku="Enterprise",
filtered_sync_enabled=False,
initial_replica_set=azure.domainservices.ServiceInitialReplicaSetArgs(
location=primary_virtual_network.location,
subnet_id=primary_subnet.id,
),
notifications=azure.domainservices.ServiceNotificationsArgs(
additional_recipients=[
"notifyA@example.net",
"notifyB@example.org",
],
notify_dc_admins=True,
notify_global_admins=True,
),
security=azure.domainservices.ServiceSecurityArgs(
sync_kerberos_passwords=True,
sync_ntlm_passwords=True,
sync_on_prem_passwords=True,
),
tags={
"Environment": "prod",
},
opts=pulumi.ResourceOptions(depends_on=[
example_service_principal,
primary_subnet_network_security_group_association,
]))
replica_resource_group = azure.core.ResourceGroup("replicaResourceGroup", location="North Europe")
replica_virtual_network = azure.network.VirtualNetwork("replicaVirtualNetwork",
location=replica_resource_group.location,
resource_group_name=replica_resource_group.name,
address_spaces=["10.20.0.0/16"])
aadds_replica_subnet = azure.network.Subnet("aaddsReplicaSubnet",
resource_group_name=replica_resource_group.name,
virtual_network_name=replica_virtual_network.name,
address_prefixes=["10.20.0.0/24"])
aadds_replica_network_security_group = azure.network.NetworkSecurityGroup("aaddsReplicaNetworkSecurityGroup",
location=replica_resource_group.location,
resource_group_name=replica_resource_group.name,
security_rules=[
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowSyncWithAzureAD",
priority=101,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="443",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowRD",
priority=201,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="3389",
source_address_prefix="CorpNetSaw",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowPSRemoting",
priority=301,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="5986",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowLDAPS",
priority=401,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="636",
source_address_prefix="*",
destination_address_prefix="*",
),
])
replica_subnet_network_security_group_association = azure.network.SubnetNetworkSecurityGroupAssociation("replicaSubnetNetworkSecurityGroupAssociation",
subnet_id=aadds_replica_subnet.id,
network_security_group_id=aadds_replica_network_security_group.id)
primary_replica = azure.network.VirtualNetworkPeering("primaryReplica",
resource_group_name=primary_virtual_network.resource_group_name,
virtual_network_name=primary_virtual_network.name,
remote_virtual_network_id=replica_virtual_network.id,
allow_forwarded_traffic=True,
allow_gateway_transit=False,
allow_virtual_network_access=True,
use_remote_gateways=False)
replica_primary = azure.network.VirtualNetworkPeering("replicaPrimary",
resource_group_name=replica_virtual_network.resource_group_name,
virtual_network_name=replica_virtual_network.name,
remote_virtual_network_id=primary_virtual_network.id,
allow_forwarded_traffic=True,
allow_gateway_transit=False,
allow_virtual_network_access=True,
use_remote_gateways=False)
replica_virtual_network_dns_servers = azure.network.VirtualNetworkDnsServers("replicaVirtualNetworkDnsServers",
virtual_network_id=replica_virtual_network.id,
dns_servers=example_service.initial_replica_set.domain_controller_ip_addresses)
replica_replica_set = azure.domainservices.ReplicaSet("replicaReplicaSet",
domain_service_id=example_service.id,
location=replica_resource_group.location,
subnet_id=aadds_replica_subnet.id,
opts=pulumi.ResourceOptions(depends_on=[
replica_subnet_network_security_group_association,
primary_replica,
replica_primary,
]))
```
## Import
Domain Service Replica Sets can be imported using the resource ID of the parent Domain Service and the Replica Set ID, e.g.
```sh
$ pulumi import azure:domainservices/replicaSet:ReplicaSet example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.AAD/domainServices/instance1/replicaSets/00000000-0000-0000-0000-000000000000
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] domain_service_id: The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] location: The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] subnet_id: The ID of the subnet in which to place this Replica Set.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ReplicaSetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Replica Set for an Active Directory Domain Service.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
import pulumi_azuread as azuread
primary_resource_group = azure.core.ResourceGroup("primaryResourceGroup", location="West Europe")
primary_virtual_network = azure.network.VirtualNetwork("primaryVirtualNetwork",
location=primary_resource_group.location,
resource_group_name=primary_resource_group.name,
address_spaces=["10.0.0.0/16"])
primary_subnet = azure.network.Subnet("primarySubnet",
resource_group_name=primary_resource_group.name,
virtual_network_name=primary_virtual_network.name,
address_prefixes=["10.0.1.0/24"])
primary_network_security_group = azure.network.NetworkSecurityGroup("primaryNetworkSecurityGroup",
location=primary_resource_group.location,
resource_group_name=primary_resource_group.name,
security_rules=[
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowSyncWithAzureAD",
priority=101,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="443",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowRD",
priority=201,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="3389",
source_address_prefix="CorpNetSaw",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowPSRemoting",
priority=301,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="5986",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowLDAPS",
priority=401,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="636",
source_address_prefix="*",
destination_address_prefix="*",
),
])
primary_subnet_network_security_group_association = azure.network.SubnetNetworkSecurityGroupAssociation("primarySubnetNetworkSecurityGroupAssociation",
subnet_id=primary_subnet.id,
network_security_group_id=primary_network_security_group.id)
dc_admins = azuread.Group("dcAdmins")
admin_user = azuread.User("adminUser",
user_principal_name="dc-admin@hashicorp-example.net",
display_name="DC Administrator",
password="Pa55w0Rd!!1")
admin_group_member = azuread.GroupMember("adminGroupMember",
group_object_id=dc_admins.object_id,
member_object_id=admin_user.object_id)
example_service_principal = azuread.ServicePrincipal("exampleServicePrincipal", application_id="2565bd9d-da50-47d4-8b85-4c97f669dc36")  # published app for domain services
aadds = azure.core.ResourceGroup("aadds", location="westeurope")
example_service = azure.domainservices.Service("exampleService",
location=aadds.location,
resource_group_name=aadds.name,
domain_name="widgetslogin.net",
sku="Enterprise",
filtered_sync_enabled=False,
initial_replica_set=azure.domainservices.ServiceInitialReplicaSetArgs(
location=primary_virtual_network.location,
subnet_id=primary_subnet.id,
),
notifications=azure.domainservices.ServiceNotificationsArgs(
additional_recipients=[
"notifyA@example.net",
"notifyB@example.org",
],
notify_dc_admins=True,
notify_global_admins=True,
),
security=azure.domainservices.ServiceSecurityArgs(
sync_kerberos_passwords=True,
sync_ntlm_passwords=True,
sync_on_prem_passwords=True,
),
tags={
"Environment": "prod",
},
opts=pulumi.ResourceOptions(depends_on=[
example_service_principal,
primary_subnet_network_security_group_association,
]))
replica_resource_group = azure.core.ResourceGroup("replicaResourceGroup", location="North Europe")
replica_virtual_network = azure.network.VirtualNetwork("replicaVirtualNetwork",
location=replica_resource_group.location,
resource_group_name=replica_resource_group.name,
address_spaces=["10.20.0.0/16"])
aadds_replica_subnet = azure.network.Subnet("aaddsReplicaSubnet",
resource_group_name=replica_resource_group.name,
virtual_network_name=replica_virtual_network.name,
address_prefixes=["10.20.0.0/24"])
aadds_replica_network_security_group = azure.network.NetworkSecurityGroup("aaddsReplicaNetworkSecurityGroup",
location=replica_resource_group.location,
resource_group_name=replica_resource_group.name,
security_rules=[
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowSyncWithAzureAD",
priority=101,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="443",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowRD",
priority=201,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="3389",
source_address_prefix="CorpNetSaw",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowPSRemoting",
priority=301,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="5986",
source_address_prefix="AzureActiveDirectoryDomainServices",
destination_address_prefix="*",
),
azure.network.NetworkSecurityGroupSecurityRuleArgs(
name="AllowLDAPS",
priority=401,
direction="Inbound",
access="Allow",
protocol="Tcp",
source_port_range="*",
destination_port_range="636",
source_address_prefix="*",
destination_address_prefix="*",
),
])
replica_subnet_network_security_group_association = azure.network.SubnetNetworkSecurityGroupAssociation("replicaSubnetNetworkSecurityGroupAssociation",
subnet_id=aadds_replica_subnet.id,
network_security_group_id=aadds_replica_network_security_group.id)
primary_replica = azure.network.VirtualNetworkPeering("primaryReplica",
resource_group_name=primary_virtual_network.resource_group_name,
virtual_network_name=primary_virtual_network.name,
remote_virtual_network_id=replica_virtual_network.id,
allow_forwarded_traffic=True,
allow_gateway_transit=False,
allow_virtual_network_access=True,
use_remote_gateways=False)
replica_primary = azure.network.VirtualNetworkPeering("replicaPrimary",
resource_group_name=replica_virtual_network.resource_group_name,
virtual_network_name=replica_virtual_network.name,
remote_virtual_network_id=primary_virtual_network.id,
allow_forwarded_traffic=True,
allow_gateway_transit=False,
allow_virtual_network_access=True,
use_remote_gateways=False)
replica_virtual_network_dns_servers = azure.network.VirtualNetworkDnsServers("replicaVirtualNetworkDnsServers",
virtual_network_id=replica_virtual_network.id,
dns_servers=example_service.initial_replica_set.domain_controller_ip_addresses)
replica_replica_set = azure.domainservices.ReplicaSet("replicaReplicaSet",
domain_service_id=example_service.id,
location=replica_resource_group.location,
subnet_id=aadds_replica_subnet.id,
opts=pulumi.ResourceOptions(depends_on=[
replica_subnet_network_security_group_association,
primary_replica,
replica_primary,
]))
```
## Import
Domain Service Replica Sets can be imported using the resource ID of the parent Domain Service and the Replica Set ID, e.g.
```sh
$ pulumi import azure:domainservices/replicaSet:ReplicaSet example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.AAD/domainServices/instance1/replicaSets/00000000-0000-0000-0000-000000000000
```
:param str resource_name: The name of the resource.
:param ReplicaSetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ReplicaSetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
domain_service_id: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ReplicaSetArgs.__new__(ReplicaSetArgs)
if domain_service_id is None and not opts.urn:
raise TypeError("Missing required property 'domain_service_id'")
__props__.__dict__["domain_service_id"] = domain_service_id
__props__.__dict__["location"] = location
if subnet_id is None and not opts.urn:
raise TypeError("Missing required property 'subnet_id'")
__props__.__dict__["subnet_id"] = subnet_id
__props__.__dict__["domain_controller_ip_addresses"] = None
__props__.__dict__["external_access_ip_address"] = None
__props__.__dict__["service_status"] = None
super(ReplicaSet, __self__).__init__(
'azure:domainservices/replicaSet:ReplicaSet',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
domain_controller_ip_addresses: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
domain_service_id: Optional[pulumi.Input[str]] = None,
external_access_ip_address: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
service_status: Optional[pulumi.Input[str]] = None,
subnet_id: Optional[pulumi.Input[str]] = None) -> 'ReplicaSet':
"""
Get an existing ReplicaSet resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] domain_controller_ip_addresses: A list of subnet IP addresses for the domain controllers in this Replica Set, typically two.
:param pulumi.Input[str] domain_service_id: The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
:param pulumi.Input[str] external_access_ip_address: The publicly routable IP address for the domain controllers in this Replica Set.
:param pulumi.Input[str] location: The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] service_status: The current service status for the replica set.
:param pulumi.Input[str] subnet_id: The ID of the subnet in which to place this Replica Set.
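A minimal lookup sketch (the resource ID below is a placeholder in the same shape as the import example; it must be replaced with a real Replica Set ID):

```python
import pulumi_azure as azure

existing = azure.domainservices.ReplicaSet.get(
    "existingReplicaSet",
    id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.AAD/domainServices/instance1/replicaSets/00000000-0000-0000-0000-000000000000",
)
```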
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ReplicaSetState.__new__(_ReplicaSetState)
__props__.__dict__["domain_controller_ip_addresses"] = domain_controller_ip_addresses
__props__.__dict__["domain_service_id"] = domain_service_id
__props__.__dict__["external_access_ip_address"] = external_access_ip_address
__props__.__dict__["location"] = location
__props__.__dict__["service_status"] = service_status
__props__.__dict__["subnet_id"] = subnet_id
return ReplicaSet(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="domainControllerIpAddresses")
def domain_controller_ip_addresses(self) -> pulumi.Output[Sequence[str]]:
"""
A list of subnet IP addresses for the domain controllers in this Replica Set, typically two.
"""
return pulumi.get(self, "domain_controller_ip_addresses")
@property
@pulumi.getter(name="domainServiceId")
def domain_service_id(self) -> pulumi.Output[str]:
"""
The ID of the Domain Service for which to create this Replica Set. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "domain_service_id")
@property
@pulumi.getter(name="externalAccessIpAddress")
def external_access_ip_address(self) -> pulumi.Output[str]:
"""
The publicly routable IP address for the domain controllers in this Replica Set.
"""
return pulumi.get(self, "external_access_ip_address")
@property
@pulumi.getter
def location(self) -> pulumi.Output[str]:
"""
The Azure location where this Replica Set should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@property
@pulumi.getter(name="serviceStatus")
def service_status(self) -> pulumi.Output[str]:
"""
The current service status for the replica set.
"""
return pulumi.get(self, "service_status")
@property
@pulumi.getter(name="subnetId")
def subnet_id(self) -> pulumi.Output[str]:
"""
The ID of the subnet in which to place this Replica Set.
"""
return pulumi.get(self, "subnet_id")
# nyc/__init__.py (lopez86/NYCDataTools, MIT)
from . import mapping
from . import plotting
| 15 | 22 | 0.777778 | 6 | 45 | 5.833333 | 0.666667 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 23 | 22.5 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
# codes/course1/demo4.py (BigShuang/big-shuang-python-introductory-course, MIT)
x = 1
y = 2
z = x + y
print(str(x) + " + " + str(y) + " = " + str(z))
x = 2
y = 2
z = x + y
print(str(x) + " + " + str(y) + " = " + str(z))
x = 3
y = 4
z = x + y
print(str(x) + " + " + str(y) + " = " + str(z))
x = 3
y = 4
z = x + y
print("%s")
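The repeated add-and-print blocks above can be collapsed into a function; a sketch (the helper name `show_sum` is my own, not part of the course file):

```python
def show_sum(x, y):
    # Builds the same "x + y = z" line as the repeated blocks above.
    print(str(x) + " + " + str(y) + " = " + str(x + y))

show_sum(1, 2)  # prints: 1 + 2 = 3
show_sum(3, 4)  # prints: 3 + 4 = 7
```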
| 12.35 | 47 | 0.34413 | 51 | 247 | 1.666667 | 0.196078 | 0.164706 | 0.141176 | 0.376471 | 0.952941 | 0.952941 | 0.952941 | 0.952941 | 0.952941 | 0.952941 | 0 | 0.05 | 0.352227 | 247 | 19 | 48 | 13 | 0.48125 | 0 | 0 | 0.8125 | 0 | 0 | 0.080972 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 1 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
# tests/base/test_cmd_line.py (gbmarc1/spock, Apache-2.0)
# -*- coding: utf-8 -*-
import sys
import pytest
from spock.builder import ConfigArgBuilder
from tests.base.attr_configs_test import *
class TestClassCmdLineOverride:
"""Testing command line overrides"""
@staticmethod
@pytest.fixture
def arg_builder(monkeypatch):
with monkeypatch.context() as m:
m.setattr(
sys,
"argv",
[
"",
"--config",
"./tests/conf/yaml/test_class.yaml",
"--TypeConfig.bool_p",
"--TypeConfig.int_p",
"11",
"--TypeConfig.float_p",
"11.0",
"--TypeConfig.string_p",
"Hooray",
"--TypeConfig.list_p_float",
"[11.0, 21.0]",
"--TypeConfig.list_p_int",
"[11, 21]",
"--TypeConfig.list_p_str",
"['Hooray', 'Working']",
"--TypeConfig.list_p_bool",
"[False, True]",
"--TypeConfig.tuple_p_float",
"(11.0, 21.0)",
"--TypeConfig.tuple_p_int",
"(11, 21)",
"--TypeConfig.tuple_p_str",
"('Hooray', 'Working')",
"--TypeConfig.tuple_p_bool",
"(False, True)",
"--TypeConfig.tuple_p_mixed",
"(5, 11.5)",
"--TypeConfig.list_list_p_int",
"[[11, 21], [11, 21]]",
"--TypeConfig.choice_p_str",
"option_2",
"--TypeConfig.choice_p_int",
"20",
"--TypeConfig.choice_p_float",
"20.0",
"--TypeConfig.list_choice_p_str",
"['option_2']",
"--TypeConfig.list_list_choice_p_str",
"[['option_2'], ['option_2']]",
"--TypeConfig.list_choice_p_int",
"[20]",
"--TypeConfig.list_choice_p_float",
"[20.0]",
"--TypeConfig.class_enum",
"NestedStuff",
"--NestedStuff.one",
"12",
"--NestedStuff.two",
"ancora",
"--TypeConfig.nested_list.NestedListStuff.one",
"[11, 21]",
"--TypeConfig.nested_list.NestedListStuff.two",
"['Hooray', 'Working']",
"--TypeConfig.high_config",
"SingleNestedConfig",
"--SingleNestedConfig.double_nested_config",
"SecondDoubleNestedConfig",
"--SecondDoubleNestedConfig.morph_tolerance",
"0.2"
],
)
config = ConfigArgBuilder(
TypeConfig, NestedStuff, NestedListStuff, SingleNestedConfig,
FirstDoubleNestedConfig, SecondDoubleNestedConfig, desc="Test Builder"
)
return config.generate()
def test_class_overrides(self, arg_builder):
assert arg_builder.TypeConfig.bool_p is True
assert arg_builder.TypeConfig.int_p == 11
assert arg_builder.TypeConfig.float_p == 11.0
assert arg_builder.TypeConfig.string_p == "Hooray"
assert arg_builder.TypeConfig.list_p_float == [11.0, 21.0]
assert arg_builder.TypeConfig.list_p_int == [11, 21]
assert arg_builder.TypeConfig.list_p_str == ["Hooray", "Working"]
assert arg_builder.TypeConfig.list_p_bool == [False, True]
assert arg_builder.TypeConfig.tuple_p_float == (11.0, 21.0)
assert arg_builder.TypeConfig.tuple_p_int == (11, 21)
assert arg_builder.TypeConfig.tuple_p_str == ("Hooray", "Working")
assert arg_builder.TypeConfig.tuple_p_bool == (False, True)
assert arg_builder.TypeConfig.tuple_p_mixed == (5, 11.5)
assert arg_builder.TypeConfig.choice_p_str == "option_2"
assert arg_builder.TypeConfig.choice_p_int == 20
assert arg_builder.TypeConfig.choice_p_float == 20.0
assert arg_builder.TypeConfig.list_list_p_int == [[11, 21], [11, 21]]
assert arg_builder.TypeConfig.list_choice_p_str == ["option_2"]
assert arg_builder.TypeConfig.list_list_choice_p_str == [
["option_2"],
["option_2"],
]
assert arg_builder.TypeConfig.list_choice_p_int == [20]
assert arg_builder.TypeConfig.list_choice_p_float == [20.0]
assert arg_builder.TypeConfig.class_enum.one == 12
assert arg_builder.TypeConfig.class_enum.two == "ancora"
assert arg_builder.NestedListStuff[0].one == 11
assert arg_builder.NestedListStuff[0].two == "Hooray"
assert arg_builder.NestedListStuff[1].one == 21
assert arg_builder.NestedListStuff[1].two == "Working"
assert isinstance(arg_builder.SingleNestedConfig.double_nested_config, SecondDoubleNestedConfig) is True
assert arg_builder.SecondDoubleNestedConfig.morph_tolerance == 0.2
class TestClassOnlyCmdLine:
"""Testing command line overrides"""
@staticmethod
@pytest.fixture
def arg_builder(monkeypatch):
with monkeypatch.context() as m:
m.setattr(
sys,
"argv",
[
"",
"--TypeConfig.bool_p",
"--TypeConfig.int_p",
"11",
"--TypeConfig.float_p",
"11.0",
"--TypeConfig.string_p",
"Hooray",
"--TypeConfig.list_p_float",
"[11.0, 21.0]",
"--TypeConfig.list_p_int",
"[11, 21]",
"--TypeConfig.list_p_str",
"['Hooray', 'Working']",
"--TypeConfig.list_p_bool",
"[False, True]",
"--TypeConfig.tuple_p_float",
"(11.0, 21.0)",
"--TypeConfig.tuple_p_int",
"(11, 21)",
"--TypeConfig.tuple_p_str",
"('Hooray', 'Working')",
"--TypeConfig.tuple_p_bool",
"(False, True)",
"--TypeConfig.tuple_p_mixed",
"(5, 11.5)",
"--TypeConfig.list_list_p_int",
"[[11, 21], [11, 21]]",
"--TypeConfig.choice_p_str",
"option_2",
"--TypeConfig.choice_p_int",
"20",
"--TypeConfig.choice_p_float",
"20.0",
"--TypeConfig.list_choice_p_str",
"['option_2']",
"--TypeConfig.list_list_choice_p_str",
"[['option_2'], ['option_2']]",
"--TypeConfig.list_choice_p_int",
"[20]",
"--TypeConfig.list_choice_p_float",
"[20.0]",
"--TypeConfig.class_enum",
"NestedStuff",
"--TypeConfig.nested",
"NestedStuff",
"--NestedStuff.one",
"12",
"--NestedStuff.two",
"ancora",
"--TypeConfig.nested_list.NestedListStuff.one",
"[11, 21]",
"--TypeConfig.nested_list.NestedListStuff.two",
"['Hooray', 'Working']",
"--TypeConfig.high_config",
"SingleNestedConfig"
],
)
config = ConfigArgBuilder(
TypeConfig, NestedStuff, NestedListStuff, SingleNestedConfig,
FirstDoubleNestedConfig, SecondDoubleNestedConfig, desc="Test Builder"
)
return config.generate()
def test_class_overrides(self, arg_builder):
assert arg_builder.TypeConfig.bool_p is True
assert arg_builder.TypeConfig.int_p == 11
assert arg_builder.TypeConfig.float_p == 11.0
assert arg_builder.TypeConfig.string_p == "Hooray"
assert arg_builder.TypeConfig.list_p_float == [11.0, 21.0]
assert arg_builder.TypeConfig.list_p_int == [11, 21]
assert arg_builder.TypeConfig.list_p_str == ["Hooray", "Working"]
assert arg_builder.TypeConfig.list_p_bool == [False, True]
assert arg_builder.TypeConfig.tuple_p_float == (11.0, 21.0)
assert arg_builder.TypeConfig.tuple_p_int == (11, 21)
assert arg_builder.TypeConfig.tuple_p_str == ("Hooray", "Working")
assert arg_builder.TypeConfig.tuple_p_bool == (False, True)
assert arg_builder.TypeConfig.tuple_p_mixed == (5, 11.5)
assert arg_builder.TypeConfig.choice_p_str == "option_2"
assert arg_builder.TypeConfig.choice_p_int == 20
assert arg_builder.TypeConfig.choice_p_float == 20.0
assert arg_builder.TypeConfig.list_list_p_int == [[11, 21], [11, 21]]
assert arg_builder.TypeConfig.list_choice_p_str == ["option_2"]
assert arg_builder.TypeConfig.list_list_choice_p_str == [
["option_2"],
["option_2"],
]
assert arg_builder.TypeConfig.list_choice_p_int == [20]
assert arg_builder.TypeConfig.list_choice_p_float == [20.0]
assert arg_builder.TypeConfig.class_enum.one == 12
assert arg_builder.TypeConfig.class_enum.two == "ancora"
assert isinstance(arg_builder.TypeConfig.high_config.double_nested_config,
SecondDoubleNestedConfig) is True
assert arg_builder.TypeConfig.high_config.double_nested_config.morph_kernels_thickness == 1
assert arg_builder.TypeConfig.high_config.double_nested_config.morph_tolerance == 0.1
assert arg_builder.NestedListStuff[0].one == 11
assert arg_builder.NestedListStuff[0].two == "Hooray"
assert arg_builder.NestedListStuff[1].one == 21
assert arg_builder.NestedListStuff[1].two == "Working"
class TestRaiseCmdLineNoKey:
"""Testing command line overrides"""
def test_cmd_line_no_key(self, monkeypatch):
with monkeypatch.context() as m:
m.setattr(
sys,
"argv",
[
"",
"--config",
"./tests/conf/yaml/test.yaml",
"--TypeConfig.foo_bar_stuff",
"11",
],
)
with pytest.raises(SystemExit):
config = ConfigArgBuilder(
TypeConfig, NestedStuff, NestedListStuff, SingleNestedConfig,
FirstDoubleNestedConfig, SecondDoubleNestedConfig, desc="Test Builder"
)
return config.generate()
class TestRaiseCmdLineListLen:
"""Testing command line overrides"""
def test_cmd_line_list_len(self, monkeypatch):
with monkeypatch.context() as m:
m.setattr(
sys,
"argv",
[
"",
"--config",
"./tests/conf/yaml/test.yaml",
"--TypeConfig.nested_list.NestedListStuff.one",
"[11]",
],
)
with pytest.raises(ValueError):
config = ConfigArgBuilder(
TypeConfig, NestedStuff, NestedListStuff, SingleNestedConfig,
FirstDoubleNestedConfig, SecondDoubleNestedConfig, desc="Test Builder"
)
return config.generate()
| 42.365248 | 112 | 0.512262 | 1,086 | 11,947 | 5.356354 | 0.087477 | 0.108303 | 0.156782 | 0.214544 | 0.93519 | 0.908028 | 0.902871 | 0.902871 | 0.883617 | 0.864363 | 0 | 0.033111 | 0.373064 | 11,947 | 281 | 113 | 42.516014 | 0.743525 | 0.012221 | 0 | 0.838462 | 0 | 0 | 0.213243 | 0.125637 | 0 | 0 | 0 | 0 | 0.226923 | 1 | 0.023077 | false | 0 | 0.015385 | 0 | 0.069231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2b80d9e1b419f80ff46311e072543bc34e8aa2a5 | 14,354 | py | Python | DeepAlignmentNetwork/menpofit/math/correlationfilter.py | chiawei-liu/DeepAlignmentNetwork | 52621cd2f697abe372b88c9ea0ee08f0d93b43d8 | [
"MIT"
] | 220 | 2019-09-01T01:52:04.000Z | 2022-03-28T12:52:07.000Z | DeepAlignmentNetwork/menpofit/math/correlationfilter.py | chiawei-liu/DeepAlignmentNetwork | 52621cd2f697abe372b88c9ea0ee08f0d93b43d8 | [
"MIT"
] | 80 | 2015-01-05T16:17:39.000Z | 2020-11-22T13:42:00.000Z | DeepAlignmentNetwork/menpofit/math/correlationfilter.py | chiawei-liu/DeepAlignmentNetwork | 52621cd2f697abe372b88c9ea0ee08f0d93b43d8 | [
"MIT"
] | 64 | 2015-02-02T15:11:38.000Z | 2022-02-28T06:19:31.000Z | import numpy as np
from numpy.fft import fft2, ifft2, ifftshift
from scipy.sparse import spdiags, eye as speye
from scipy.sparse.linalg import spsolve
from menpofit.math.fft_utils import pad, crop
def mosse(X, y, l=0.01, boundary='constant', crop_filter=True):
r"""
Minimum Output Sum of Squared Errors (MOSSE) filter.
Parameters
----------
X : ``(n_images, n_channels, image_h, image_w)`` `ndarray`
The training images.
y : ``(1, response_h, response_w)`` `ndarray`
The desired response.
l : `float`, optional
Regularization parameter.
boundary : ``{'constant', 'symmetric'}``, optional
Determines how the image is padded.
crop_filter : `bool`, optional
If ``True``, the shape of the MOSSE filter is the same as the shape
of the desired response. If ``False``, the filter's shape is equal to:
``X[0].shape + y.shape - 1``
Returns
-------
f : ``(1, response_h, response_w)`` `ndarray`
Minimum Output Sum of Squared Errors (MOSSE) filter associated with
the training images.
sXY : ``(N,)`` `ndarray`
The auto-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
sXX : ``(N, N)`` `ndarray`
The cross-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
References
----------
.. [1] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui. "Visual
Object Tracking using Adaptive Correlation Filters", IEEE Proceedings
of International Conference on Computer Vision and Pattern Recognition
(CVPR), 2010.
"""
# number of images, number of channels, height and width
n, k, hx, wx = X.shape
# height and width of desired responses
_, hy, wy = y.shape
y_shape = (hy, wy)
# extended shape
ext_h = hx + hy - 1
ext_w = wx + wy - 1
ext_shape = (ext_h, ext_w)
# extend desired response
ext_y = pad(y, ext_shape)
# fft of extended desired response
fft_ext_y = fft2(ext_y)
# auto and cross spectral energy matrices
sXX = 0
sXY = 0
# for each training image and desired response
for x in X:
# extend image
ext_x = pad(x, ext_shape, boundary=boundary)
# fft of extended image
fft_ext_x = fft2(ext_x)
# update auto and cross spectral energy matrices
sXX += fft_ext_x.conj() * fft_ext_x
sXY += fft_ext_x.conj() * fft_ext_y
# compute desired correlation filter
fft_ext_f = sXY / (sXX + l)
# reshape extended filter to extended image shape
fft_ext_f = fft_ext_f.reshape((k, ext_h, ext_w))
# compute extended filter inverse fft
f = np.real(ifftshift(ifft2(fft_ext_f), axes=(-2, -1)))
if crop_filter:
# crop extended filter to match desired response shape
f = crop(f, y_shape)
return f, sXY, sXX
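The accumulate-and-divide structure of the update above can be exercised without the menpofit `pad`/`crop` helpers when the images and the desired response share a single channel and the same shape. A minimal NumPy sketch under that assumption (the function and variable names here are illustrative, not part of this module):

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def mosse_single_channel(X, y, l=0.01):
    # X: (n_images, h, w) training images, y: (h, w) desired response
    fft_y = fft2(y)
    sXX = 0.0
    sXY = 0.0
    for x in X:
        fft_x = fft2(x)
        sXX = sXX + fft_x.conj() * fft_x   # auto spectral energy
        sXY = sXY + fft_x.conj() * fft_y   # cross spectral energy
    fft_f = sXY / (sXX + l)                # regularized element-wise division
    return np.real(ifftshift(ifft2(fft_f)))

# toy data: four random 8x8 images and a centered Gaussian response
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8, 8))
yy, xx = np.mgrid[-4:4, -4:4]
y = np.exp(-(xx ** 2 + yy ** 2) / 4.0)
f = mosse_single_channel(X, y)
```

Correlating a training image with `f` in the frequency domain approximately reproduces `y`, which is the MOSSE objective.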
def imosse(A, B, n_ab, X, y, l=0.01, boundary='constant',
crop_filter=True, f=1.0):
r"""
Incremental Minimum Output Sum of Squared Errors (iMOSSE) filter.
Parameters
----------
A : ``(N,)`` `ndarray`
The current auto-correlation array, where
``N = (patch_h+response_h-1) * (patch_w+response_w-1) * n_channels``.
B : ``(N, N)`` `ndarray`
The current cross-correlation array, where
``N = (patch_h+response_h-1) * (patch_w+response_w-1) * n_channels``.
n_ab : `int`
The current number of images.
X : ``(n_images, n_channels, image_h, image_w)`` `ndarray`
The training images (patches).
y : ``(1, response_h, response_w)`` `ndarray`
The desired response.
l : `float`, optional
Regularization parameter.
boundary : ``{'constant', 'symmetric'}``, optional
Determines how the image is padded.
crop_filter : `bool`, optional
If ``True``, the shape of the MOSSE filter is the same as the shape
of the desired response. If ``False``, the filter's shape is equal to:
``X[0].shape + y.shape - 1``
f : ``[0, 1]`` `float`, optional
Forgetting factor that weights the relative contribution of new
samples vs old samples. If ``1.0``, all samples are weighted equally.
If ``<1.0``, more emphasis is put on the new samples.
Returns
-------
f : ``(1, response_h, response_w)`` `ndarray`
Minimum Output Sum of Squared Errors (MOSSE) filter associated with
the training images.
sXY : ``(N,)`` `ndarray`
The auto-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
sXX : ``(N, N)`` `ndarray`
The cross-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
References
----------
.. [1] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui. "Visual
Object Tracking using Adaptive Correlation Filters", IEEE Proceedings
of International Conference on Computer Vision and Pattern Recognition
(CVPR), 2010.
"""
# number of images; number of channels, height and width
n_x, k, hz, wz = X.shape
# height and width of desired responses
_, hy, wy = y.shape
y_shape = (hy, wy)
# multiply the number of samples used to produce the auto and cross
# spectral energy matrices A and B by forgetting factor
n_ab *= f
# total number of samples
n = n_ab + n_x
# compute weighting factors
nu_ab = n_ab / n
nu_x = n_x / n
# extended shape
ext_h = hz + hy - 1
ext_w = wz + wy - 1
ext_shape = (ext_h, ext_w)
# extend desired response
ext_y = pad(y, ext_shape)
# fft of extended desired response
fft_ext_y = fft2(ext_y)
# extend images
ext_X = pad(X, ext_shape, boundary=boundary)
# auto and cross spectral energy matrices
sXX = 0
sXY = 0
# for each training image and desired response
for ext_x in ext_X:
# fft of extended image
fft_ext_x = fft2(ext_x)
# update auto and cross spectral energy matrices
sXX += fft_ext_x.conj() * fft_ext_x
sXY += fft_ext_x.conj() * fft_ext_y
# combine old and new auto and cross spectral energy matrices
sXY = nu_ab * A + nu_x * sXY
sXX = nu_ab * B + nu_x * sXX
# compute desired correlation filter
fft_ext_f = sXY / (sXX + l)
# reshape extended filter to extended image shape
fft_ext_f = fft_ext_f.reshape((k, ext_h, ext_w))
# compute filter inverse fft
f = np.real(ifftshift(ifft2(fft_ext_f), axes=(-2, -1)))
if crop_filter:
# crop extended filter to match desired response shape
f = crop(f, y_shape)
return f, sXY, sXX
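Both `imosse` above and `imccf` below weight old statistics against new ones through the same forgetting-factor arithmetic; it can be isolated as a small illustrative helper (not part of the module):

```python
def combine(old, new, n_old, n_new, forget=1.0):
    # discount the old sample count, then mix by relative sample counts
    n_old = n_old * forget
    n = n_old + n_new
    return (n_old / n) * old + (n_new / n) * new

w = combine(2.0, 4.0, 10, 10)        # equal batches, forget=1.0 -> plain mean, 3.0
w0 = combine(2.0, 4.0, 10, 10, 0.0)  # forget=0.0 discards the old statistic, 4.0
```

With `forget < 1.0` the old auto/cross spectral matrices decay geometrically as new batches arrive, which is exactly how `f` acts on `n_ab` in the functions here.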
def mccf(X, y, l=0.01, boundary='constant', crop_filter=True):
r"""
Multi-Channel Correlation Filter (MCCF).
Parameters
----------
X : ``(n_images, n_channels, image_h, image_w)`` `ndarray`
The training images.
y : ``(1, response_h, response_w)`` `ndarray`
The desired response.
l : `float`, optional
Regularization parameter.
boundary : ``{'constant', 'symmetric'}``, optional
Determines how the image is padded.
crop_filter : `bool`, optional
If ``True``, the shape of the MOSSE filter is the same as the shape
of the desired response. If ``False``, the filter's shape is equal to:
``X[0].shape + y.shape - 1``
Returns
-------
f : ``(1, response_h, response_w)`` `ndarray`
Multi-Channel Correlation Filter (MCCF) filter associated to the
training images.
sXY : ``(N,)`` `ndarray`
The auto-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
sXX : ``(N, N)`` `ndarray`
The cross-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
References
----------
.. [1] H. K. Galoogahi, T. Sim, and Simon Lucey. "Multi-Channel
Correlation Filters". IEEE Proceedings of International Conference on
Computer Vision (ICCV), 2013.
"""
# number of images; number of channels, height and width
n, k, hx, wx = X.shape
# height and width of desired responses
_, hy, wy = y.shape
y_shape = (hy, wy)
# extended shape
ext_h = hx + hy - 1
ext_w = wx + wy - 1
ext_shape = (ext_h, ext_w)
# extended dimensionality
ext_d = ext_h * ext_w
# extend desired response
ext_y = pad(y, ext_shape)
# fft of extended desired response
fft_ext_y = fft2(ext_y)
# extend images
ext_X = pad(X, ext_shape, boundary=boundary)
# auto and cross spectral energy matrices
sXX = 0
sXY = 0
# for each training image and desired response
for ext_x in ext_X:
# fft of extended image
fft_ext_x = fft2(ext_x)
# store extended image fft as sparse diagonal matrix
diag_fft_x = spdiags(fft_ext_x.reshape((k, -1)),
-np.arange(0, k) * ext_d, ext_d * k, ext_d).T
# vectorize extended desired response fft
diag_fft_y = fft_ext_y.ravel()
# update auto and cross spectral energy matrices
sXX += diag_fft_x.conj().T.dot(diag_fft_x)
sXY += diag_fft_x.conj().T.dot(diag_fft_y)
# solve ext_d independent k x k linear systems (with regularization)
# to obtain desired extended multi-channel correlation filter
fft_ext_f = spsolve(sXX + l * speye(sXX.shape[-1]), sXY)
# reshape extended filter to extended image shape
fft_ext_f = fft_ext_f.reshape((k, ext_h, ext_w))
# compute filter inverse fft
f = np.real(ifftshift(ifft2(fft_ext_f), axes=(-2, -1)))
if crop_filter:
# crop extended filter to match desired response shape
f = crop(f, y_shape)
return f, sXY, sXX
def imccf(A, B, n_ab, X, y, l=0.01, boundary='constant', crop_filter=True,
f=1.0):
r"""
Incremental Multi-Channel Correlation Filter (MCCF)
Parameters
----------
A : ``(N,)`` `ndarray`
The current auto-correlation array, where
``N = (patch_h+response_h-1) * (patch_w+response_w-1) * n_channels``.
B : ``(N, N)`` `ndarray`
The current cross-correlation array, where
``N = (patch_h+response_h-1) * (patch_w+response_w-1) * n_channels``.
n_ab : `int`
The current number of images.
X : ``(n_images, n_channels, image_h, image_w)`` `ndarray`
The training images (patches).
y : ``(1, response_h, response_w)`` `ndarray`
The desired response.
l : `float`, optional
Regularization parameter.
boundary : ``{'constant', 'symmetric'}``, optional
Determines how the image is padded.
crop_filter : `bool`, optional
If ``True``, the shape of the MOSSE filter is the same as the shape
of the desired response. If ``False``, the filter's shape is equal to:
``X[0].shape + y.shape - 1``
f : ``[0, 1]`` `float`, optional
Forgetting factor that weights the relative contribution of new
samples vs old samples. If ``1.0``, all samples are weighted equally.
If ``<1.0``, more emphasis is put on the new samples.
Returns
-------
f : ``(1, response_h, response_w)`` `ndarray`
Multi-Channel Correlation Filter (MCCF) filter associated to the
training images.
sXY : ``(N,)`` `ndarray`
The auto-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
sXX : ``(N, N)`` `ndarray`
The cross-correlation array, where
``N = (image_h+response_h-1) * (image_w+response_w-1) * n_channels``.
References
----------
.. [1] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui. "Visual
Object Tracking using Adaptive Correlation Filters", IEEE Proceedings
of International Conference on Computer Vision and Pattern Recognition
(CVPR), 2010.
.. [2] H. K. Galoogahi, T. Sim, and Simon Lucey. "Multi-Channel
Correlation Filters". IEEE Proceedings of International Conference on
Computer Vision (ICCV), 2013.
"""
# number of images; number of channels, height and width
n_x, k, hz, wz = X.shape
# height and width of desired responses
_, hy, wy = y.shape
y_shape = (hy, wy)
# multiply the number of samples used to produce the auto and cross
# spectral energy matrices A and B by forgetting factor
n_ab *= f
# total number of samples
n = n_ab + n_x
# compute weighting factors
nu_ab = n_ab / n
nu_x = n_x / n
# extended shape
ext_h = hz + hy - 1
ext_w = wz + wy - 1
ext_shape = (ext_h, ext_w)
# extended dimensionality
ext_d = ext_h * ext_w
# extend desired response
ext_y = pad(y, ext_shape)
# fft of extended desired response
fft_ext_y = fft2(ext_y)
# extend images
ext_X = pad(X, ext_shape, boundary=boundary)
# auto and cross spectral energy matrices
sXX = 0
sXY = 0
# for each training image and desired response
for ext_x in ext_X:
# fft of extended image
fft_ext_x = fft2(ext_x)
# store extended image fft as sparse diagonal matrix
diag_fft_x = spdiags(fft_ext_x.reshape((k, -1)),
-np.arange(0, k) * ext_d, ext_d * k, ext_d).T
# vectorize extended desired response fft
diag_fft_y = fft_ext_y.ravel()
# update auto and cross spectral energy matrices
sXX += diag_fft_x.conj().T.dot(diag_fft_x)
sXY += diag_fft_x.conj().T.dot(diag_fft_y)
# combine old and new auto and cross spectral energy matrices
sXY = nu_ab * A + nu_x * sXY
sXX = nu_ab * B + nu_x * sXX
# solve ext_d independent k x k linear systems (with regularization)
# to obtain desired extended multi-channel correlation filter
fft_ext_f = spsolve(sXX + l * speye(sXX.shape[-1]), sXY)
# reshape extended filter to extended image shape
fft_ext_f = fft_ext_f.reshape((k, ext_h, ext_w))
# compute filter inverse fft
f = np.real(ifftshift(ifft2(fft_ext_f), axes=(-2, -1)))
if crop_filter:
# crop extended filter to match desired response shape
f = crop(f, y_shape)
return f, sXY, sXX
| 34.587952 | 78 | 0.617946 | 2,094 | 14,354 | 4.086437 | 0.094078 | 0.025242 | 0.013089 | 0.030852 | 0.973121 | 0.973121 | 0.958163 | 0.958163 | 0.954423 | 0.954423 | 0 | 0.013055 | 0.268915 | 14,354 | 414 | 79 | 34.671498 | 0.802363 | 0.644907 | 0 | 0.857143 | 0 | 0 | 0.007334 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033613 | false | 0 | 0.042017 | 0 | 0.109244 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2bd0cfd76a886260ca4c7a5954ebf0a47a2f5ab7 | 309 | py | Python | legacy/field/settings/__init__.py | kingsdigitallab/field-django | 6ceba79866d6971a6891f0b81ca9ed2a2d5a32db | [
"MIT"
] | null | null | null | legacy/field/settings/__init__.py | kingsdigitallab/field-django | 6ceba79866d6971a6891f0b81ca9ed2a2d5a32db | [
"MIT"
] | 2 | 2020-08-12T23:53:01.000Z | 2022-02-10T09:41:09.000Z | legacy/field/settings/__init__.py | kingsdigitallab/field-django | 6ceba79866d6971a6891f0b81ca9ed2a2d5a32db | [
"MIT"
] | null | null | null | # -----------------------------------------------------------------------------
# Imports local settings
# Use it for settings specific to the installation and do not
# commit to version control.
# -----------------------------------------------------------------------------
from .local_dev import * # noqa
| 44.142857 | 79 | 0.368932 | 23 | 309 | 4.913043 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10356 | 309 | 6 | 80 | 51.5 | 0.407942 | 0.873786 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9205592b1a6d985738cfd6a7776e89fe5f98cfbc | 12,538 | py | Python | samplers.py | atimashov/project_cs236 | 574f7ca4dd749a2b811b4054003fb0fb4defb727 | [
"Apache-2.0"
] | 1 | 2022-02-28T01:18:05.000Z | 2022-02-28T01:18:05.000Z | samplers.py | atimashov/project_cs236 | 574f7ca4dd749a2b811b4054003fb0fb4defb727 | [
"Apache-2.0"
] | null | null | null | samplers.py | atimashov/project_cs236 | 574f7ca4dd749a2b811b4054003fb0fb4defb727 | [
"Apache-2.0"
] | 1 | 2022-02-28T01:18:16.000Z | 2022-02-28T01:18:16.000Z | import torch
import tqdm
import tqdm.notebook  # tqdm.notebook.tqdm is used below and needs an explicit submodule import
import numpy as np
from scipy import integrate
## The number of sampling steps.
num_steps = 500
def Euler_Maruyama_sampler(score_model,
marginal_prob_std,
diffusion_coeff,
batch_size=64,
num_steps=num_steps,
device='cuda',
eps=1e-3):
"""Generate samples from score-based models with the Euler-Maruyama solver.
Args:
score_model: A PyTorch model that represents the time-dependent score-based model.
marginal_prob_std: A function that gives the standard deviation of
the perturbation kernel.
diffusion_coeff: A function that gives the diffusion coefficient of the SDE.
batch_size: The number of samplers to generate by calling this function once.
num_steps: The number of sampling steps.
Equivalent to the number of discretized time steps.
device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
eps: The smallest time step for numerical stability.
Returns:
Samples.
"""
t = torch.ones(batch_size, device=device)
init_x = torch.randn(batch_size, 1, 28, 28, device=device) \
* marginal_prob_std(t)[:, None, None, None]
time_steps = torch.linspace(1., eps, num_steps, device=device)
step_size = time_steps[0] - time_steps[1]
x = init_x
with torch.no_grad():
for time_step in tqdm.notebook.tqdm(time_steps):
batch_time_step = torch.ones(batch_size, device=device) * time_step
g = diffusion_coeff(batch_time_step)
mean_x = x + (g**2)[:, None, None, None] * score_model(x, batch_time_step) * step_size
x = mean_x + torch.sqrt(step_size) * g[:, None, None, None] * torch.randn_like(x)
# Do not include any noise in the last sampling step.
return mean_x
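The `marginal_prob_std` and `diffusion_coeff` callables are supplied by the caller and are not defined in this file. For the variance-exploding SDE dx = sigma^t dw they typically look like the following NumPy sketch (sigma = 25.0 is an illustrative choice, not a value fixed by this module):

```python
import numpy as np

SIGMA = 25.0  # illustrative noise scale; in practice a model hyperparameter

def marginal_prob_std(t):
    # std of the perturbation kernel p_{0t}(x(t) | x(0)) for dx = SIGMA**t dw
    return np.sqrt((SIGMA ** (2 * t) - 1.0) / (2.0 * np.log(SIGMA)))

def diffusion_coeff(t):
    # diffusion coefficient g(t) = SIGMA**t
    return SIGMA ** t
```

The std grows from 0 at t=0 to roughly SIGMA / sqrt(2 log SIGMA) at t=1, which is why the samplers initialize `init_x` as standard noise scaled by `marginal_prob_std(1)`.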
def Euler_Maruyama_sampler_MY(score_model,
marginal_prob_std,
diffusion_coeff,
batch_size=64,
num_steps=num_steps,
device='cuda',
eps=1e-3):
"""Generate samples from score-based models with the Euler-Maruyama solver.
Args:
score_model: A PyTorch model that represents the time-dependent score-based model.
marginal_prob_std: A function that gives the standard deviation of
the perturbation kernel.
diffusion_coeff: A function that gives the diffusion coefficient of the SDE.
batch_size: The number of samplers to generate by calling this function once.
num_steps: The number of sampling steps.
Equivalent to the number of discretized time steps.
device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
eps: The smallest time step for numerical stability.
Returns:
Samples.
"""
t = torch.ones(batch_size, device=device)
init_x = torch.randn(batch_size, 2 * 68, device=device) \
* marginal_prob_std(t)[:, None]
time_steps = torch.linspace(1., eps, num_steps, device=device)
step_size = time_steps[0] - time_steps[1]
x = init_x
with torch.no_grad():
for time_step in tqdm.tqdm(time_steps):
batch_time_step = torch.ones(batch_size, device=device) * time_step
g = diffusion_coeff(batch_time_step)
mean_x = x + (g**2)[:, None] * score_model(x, batch_time_step) * step_size
x = mean_x + torch.sqrt(step_size) * g[:, None] * torch.randn_like(x)
# Do not include any noise in the last sampling step.
return mean_x
## The signal-to-noise ratio for the Langevin corrector.
signal_to_noise_ratio = 0.16
def pc_sampler(score_model,
marginal_prob_std,
diffusion_coeff,
batch_size=64,
num_steps=num_steps,
snr=signal_to_noise_ratio,
device='cuda',
eps=1e-3):
"""Generate samples from score-based models with Predictor-Corrector method.
Args:
score_model: A PyTorch model that represents the time-dependent score-based model.
marginal_prob_std: A function that gives the standard deviation
of the perturbation kernel.
diffusion_coeff: A function that gives the diffusion coefficient
of the SDE.
batch_size: The number of samplers to generate by calling this function once.
num_steps: The number of sampling steps.
Equivalent to the number of discretized time steps.
snr: The signal-to-noise ratio that sets the Langevin corrector step size.
device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
eps: The smallest time step for numerical stability.
Returns:
Samples.
"""
t = torch.ones(batch_size, device=device)
init_x = torch.randn(batch_size, 1, 28, 28, device=device) * marginal_prob_std(t)[:, None, None, None]
time_steps = np.linspace(1., eps, num_steps)
step_size = time_steps[0] - time_steps[1]
x = init_x
with torch.no_grad():
for time_step in tqdm.notebook.tqdm(time_steps):
batch_time_step = torch.ones(batch_size, device=device) * time_step
# Corrector step (Langevin MCMC)
grad = score_model(x, batch_time_step)
grad_norm = torch.norm(grad.reshape(grad.shape[0], -1), dim=-1).mean()
noise_norm = np.sqrt(np.prod(x.shape[1:]))
langevin_step_size = 2 * (snr * noise_norm / grad_norm)**2
x = x + langevin_step_size * grad + torch.sqrt(2 * langevin_step_size) * torch.randn_like(x)
# Predictor step (Euler-Maruyama)
g = diffusion_coeff(batch_time_step)
x_mean = x + (g**2)[:, None, None, None] * score_model(x, batch_time_step) * step_size
x = x_mean + torch.sqrt(g**2 * step_size)[:, None, None, None] * torch.randn_like(x)
# The last step does not include any noise
return x_mean
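The corrector step size inside the loop above follows the snr heuristic epsilon = 2 * (snr * ||z|| / ||grad||)^2, where z is standard normal noise. Isolated in NumPy for clarity (an illustrative helper, not part of the module):

```python
import numpy as np

def langevin_step_size(grad, snr=0.16):
    # grad: (batch, ...) score estimates; E||z|| for standard normal noise is
    # approximated by the square root of the per-sample dimensionality
    grad_norm = np.linalg.norm(grad.reshape(grad.shape[0], -1), axis=-1).mean()
    noise_norm = np.sqrt(np.prod(grad.shape[1:]))
    return 2.0 * (snr * noise_norm / grad_norm) ** 2
```

A larger score norm shrinks the step, keeping the injected noise at a fixed ratio to the drift.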
def pc_sampler_MY(score_model,
marginal_prob_std,
diffusion_coeff,
batch_size=64,
num_steps=num_steps,
snr=signal_to_noise_ratio,
device='cuda',
eps=1e-3):
"""Generate samples from score-based models with Predictor-Corrector method.
Args:
score_model: A PyTorch model that represents the time-dependent score-based model.
marginal_prob_std: A function that gives the standard deviation
of the perturbation kernel.
diffusion_coeff: A function that gives the diffusion coefficient
of the SDE.
batch_size: The number of samplers to generate by calling this function once.
num_steps: The number of sampling steps.
Equivalent to the number of discretized time steps.
snr: The signal-to-noise ratio that sets the Langevin corrector step size.
device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
eps: The smallest time step for numerical stability.
Returns:
Samples.
"""
num_steps = 1500  # note: overrides the num_steps argument passed in
t = torch.ones(batch_size, device=device)
init_x = torch.randn(batch_size, 140 - 4, device=device) * marginal_prob_std(t)[:, None]
time_steps = np.linspace(1., eps, num_steps)
step_size = time_steps[0] - time_steps[1]
x = init_x
with torch.no_grad():
for time_step in time_steps:
batch_time_step = torch.ones(batch_size, device=device) * time_step
# Corrector step (Langevin MCMC)
grad = score_model(x, batch_time_step)
grad_norm = torch.norm(grad.reshape(grad.shape[0], -1), dim=-1).mean()
noise_norm = np.sqrt(np.prod(x.shape[1:]))
langevin_step_size = 2 * (snr * noise_norm / grad_norm)**2
x = x + langevin_step_size * grad + torch.sqrt(2 * langevin_step_size) * torch.randn_like(x)
# Predictor step (Euler-Maruyama)
g = diffusion_coeff(batch_time_step)
x_mean = x + (g**2)[:, None] * score_model(x, batch_time_step) * step_size
x = x_mean + torch.sqrt(g**2 * step_size)[:, None] * torch.randn_like(x)
# The last step does not include any noise
return x_mean
## The error tolerance for the black-box ODE solver
error_tolerance = 1e-5
def ode_sampler(score_model,
marginal_prob_std,
diffusion_coeff,
batch_size=64,
atol=error_tolerance,
rtol=error_tolerance,
device='cuda',
z=None,
eps=1e-3):
"""Generate samples from score-based models with black-box ODE solvers.
Args:
score_model: A PyTorch model that represents the time-dependent score-based model.
marginal_prob_std: A function that returns the standard deviation
of the perturbation kernel.
diffusion_coeff: A function that returns the diffusion coefficient of the SDE.
batch_size: The number of samplers to generate by calling this function once.
atol: Tolerance of absolute errors.
rtol: Tolerance of relative errors.
device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
z: The latent code that governs the final sample. If None, we start from p_1;
otherwise, we start from the given z.
eps: The smallest time step for numerical stability.
"""
t = torch.ones(batch_size, device=device)
# Create the latent code
if z is None:
init_x = torch.randn(batch_size, 1, 28, 28, device=device) \
* marginal_prob_std(t)[:, None, None, None]
else:
init_x = z
shape = init_x.shape
def score_eval_wrapper(sample, time_steps):
"""A wrapper of the score-based model for use by the ODE solver."""
sample = torch.tensor(sample, device=device, dtype=torch.float32).reshape(shape)
time_steps = torch.tensor(time_steps, device=device, dtype=torch.float32).reshape((sample.shape[0], ))
with torch.no_grad():
score = score_model(sample, time_steps)
return score.cpu().numpy().reshape((-1,)).astype(np.float64)
def ode_func(t, x):
"""The ODE function for use by the ODE solver."""
time_steps = np.ones((shape[0],)) * t
g = diffusion_coeff(torch.tensor(t)).cpu().numpy()
return -0.5 * (g**2) * score_eval_wrapper(x, time_steps)
# Run the black-box ODE solver.
res = integrate.solve_ivp(ode_func, (1., eps), init_x.reshape(-1).cpu().numpy(), rtol=rtol, atol=atol, method='RK45')
print(f"Number of function evaluations: {res.nfev}")
x = torch.tensor(res.y[:, -1], device=device).reshape(shape)
return x
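The same `solve_ivp` call can be checked on a one-dimensional toy problem where the score is known in closed form: if the data distribution is a point mass at 0, then x(t) ~ N(0, std(t)^2) under the VE SDE and the score is -x / std(t)^2, so integrating the probability flow ODE backward should shrink samples toward 0 (all names and the sigma value here are illustrative):

```python
import numpy as np
from scipy import integrate

SIGMA = 25.0

def std(t):
    # marginal std of the VE SDE when the data is concentrated at 0
    return np.sqrt((SIGMA ** (2 * t) - 1.0) / (2.0 * np.log(SIGMA)))

def score(x, t):
    # analytic score of N(0, std(t)^2)
    return -x / std(t) ** 2

def ode_func(t, x):
    # probability flow ODE: dx/dt = -0.5 * g(t)^2 * score(x, t)
    g = SIGMA ** t
    return -0.5 * g ** 2 * score(x, t)

x0 = np.array([5.0, -3.0])
res = integrate.solve_ivp(ode_func, (1.0, 1e-2), x0,
                          rtol=1e-5, atol=1e-5, method='RK45')
x_final = res.y[:, -1]  # samples contract toward the data point at 0
```

Note that `solve_ivp` accepts a decreasing time span, which is what both ODE samplers in this file rely on when integrating from t=1 down to eps.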
def ode_sampler_MY(score_model,
marginal_prob_std,
diffusion_coeff,
batch_size=64,
atol=error_tolerance,
rtol=error_tolerance,
device='cuda',
z=None,
eps=1e-3):
"""Generate samples from score-based models with black-box ODE solvers.
Args:
score_model: A PyTorch model that represents the time-dependent score-based model.
marginal_prob_std: A function that returns the standard deviation
of the perturbation kernel.
diffusion_coeff: A function that returns the diffusion coefficient of the SDE.
batch_size: The number of samplers to generate by calling this function once.
atol: Tolerance of absolute errors.
rtol: Tolerance of relative errors.
device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
z: The latent code that governs the final sample. If None, we start from p_1;
otherwise, we start from the given z.
eps: The smallest time step for numerical stability.
"""
t = torch.ones(batch_size, device=device)
# Create the latent code
if z is None:
init_x = torch.randn(batch_size, 140, device=device) \
* marginal_prob_std(t)[:, None]
else:
init_x = z
shape = init_x.shape
def score_eval_wrapper(sample, time_steps):
"""A wrapper of the score-based model for use by the ODE solver."""
sample = torch.tensor(sample, device=device, dtype=torch.float32).reshape(shape)
time_steps = torch.tensor(time_steps, device=device, dtype=torch.float32).reshape((sample.shape[0], ))
with torch.no_grad():
score = score_model(sample, time_steps)
return score.cpu().numpy().reshape((-1,)).astype(np.float64)
def ode_func(t, x):
"""The ODE function for use by the ODE solver."""
time_steps = np.ones((shape[0],)) * t
g = diffusion_coeff(torch.tensor(t)).cpu().numpy()
return -0.5 * (g**2) * score_eval_wrapper(x, time_steps)
# Run the black-box ODE solver.
res = integrate.solve_ivp(ode_func, (1., eps), init_x.reshape(-1).cpu().numpy(), rtol=rtol, atol=atol, method='RK45')
print(f"Number of function evaluations: {res.nfev}")
x = torch.tensor(res.y[:, -1], device=device).reshape(shape)
return x
| 41.654485 | 121 | 0.654411 | 1,795 | 12,538 | 4.405571 | 0.09415 | 0.036419 | 0.034143 | 0.030349 | 0.974836 | 0.969272 | 0.969272 | 0.963961 | 0.963961 | 0.956879 | 0 | 0.013718 | 0.24996 | 12,538 | 301 | 122 | 41.654485 | 0.827201 | 0.390573 | 0 | 0.822785 | 0 | 0 | 0.015671 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063291 | false | 0 | 0.025316 | 0 | 0.151899 | 0.012658 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a6571b6ea7141fe02e20f3871d59249f7086803c | 43 | py | Python | examples/string/example.py | interkosmos/f03python | 05fd5c41e6cce84e890903ad29d843a5e651105e | [
"0BSD"
] | 2 | 2018-03-09T22:36:11.000Z | 2020-01-03T09:15:49.000Z | examples/string/example.py | interkosmos/f03python | 05fd5c41e6cce84e890903ad29d843a5e651105e | [
"0BSD"
] | null | null | null | examples/string/example.py | interkosmos/f03python | 05fd5c41e6cce84e890903ad29d843a5e651105e | [
"0BSD"
] | null | null | null | def konnichiwa():
    return 'Konnichiwa!'
| 14.333333 | 24 | 0.674419 | 4 | 43 | 7.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 43 | 2 | 25 | 21.5 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0.255814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |