hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9c81e936930a3693e7ef21b5bc1d1f5421bf520d | 38 | py | Python | src/ahsnap/service/__init__.py | cackharot/ahsnap | 1b09fd779a833086fb9035cdf30a54e49d6ac178 | [
"MIT"
] | null | null | null | src/ahsnap/service/__init__.py | cackharot/ahsnap | 1b09fd779a833086fb9035cdf30a54e49d6ac178 | [
"MIT"
] | 2 | 2021-03-31T18:31:04.000Z | 2021-12-13T19:43:55.000Z | src/ahsnap/service/__init__.py | cackharot/ahsnap | 1b09fd779a833086fb9035cdf30a54e49d6ac178 | [
"MIT"
] | null | null | null | from .snap_service import SnapService
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
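Row values like `size`, `avg_line_length`, `max_line_length`, and `alphanum_fraction` can be recomputed from the `content` cell. The dump does not state the formulas; the sketch below is an inference that reproduces the stored values for the row above (counting `\w` characters for `alphanum_fraction`, and splitting on `"\n"` so a trailing newline contributes an empty final line), not the dataset's published definitions.

```python
import re

def quality_signals(content: str) -> dict:
    """Recompute a few per-file signals from a row's `content` cell.

    Assumed formulas: `alphanum_fraction` counts \\w characters (letters,
    digits, underscore) over all characters; line stats come from splitting
    on "\\n", so a trailing newline adds an empty final line.
    """
    lines = content.split("\n")
    return {
        "size": len(content),
        "max_line_length": max(len(line) for line in lines),
        "avg_line_length": len(content) / len(lines),
        "alphanum_fraction": len(re.findall(r"\w", content)) / len(content),
    }

# The single-line file from the row above
sig = quality_signals("from .snap_service import SnapService\n")
```

Under these assumptions the result matches the stored row: size 38, max line length 37, average line length 19, alphanum fraction ~0.868421.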
92c55256126eba22544b27fe2a5bff168b926442 | 6,035 | py | Python | tests/commands/run/test_call.py | pybee/briefcase | d7e9aa7bf15aa2abbc71e97aef9bea287129fdaa | [
"BSD-3-Clause"
] | 522 | 2015-07-28T16:06:18.000Z | 2019-03-25T17:16:55.000Z | tests/commands/run/test_call.py | pybee/briefcase | d7e9aa7bf15aa2abbc71e97aef9bea287129fdaa | [
"BSD-3-Clause"
] | 154 | 2015-09-17T02:50:55.000Z | 2019-03-22T07:10:34.000Z | tests/commands/run/test_call.py | pybee/briefcase | d7e9aa7bf15aa2abbc71e97aef9bea287129fdaa | [
"BSD-3-Clause"
] | 105 | 2015-09-25T08:43:26.000Z | 2019-03-25T15:59:27.000Z | import pytest
from briefcase.exceptions import BriefcaseCommandError
def test_no_args_one_app(run_command, first_app):
"""If there is one app, run starts that app by default."""
# Add a single app
run_command.apps = {
"first": first_app,
}
# Configure no command line options
options = run_command.parse_options([])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# Run the first app
("run", "first", {}),
]
def test_no_args_two_apps(run_command, first_app, second_app):
"""If there are one app, run starts that app by default."""
# Add two apps
run_command.apps = {
"first": first_app,
"second": second_app,
}
# Configure no command line options
options = run_command.parse_options([])
# Invoking the run command raises an error
with pytest.raises(BriefcaseCommandError):
run_command(**options)
# Only verification actions will be performed
assert run_command.actions == [
("verify",),
]
def test_with_arg_one_app(run_command, first_app):
"""If there is one app, and a -a argument, run starts that app."""
# Add a single app
run_command.apps = {
"first": first_app,
}
# Configure a -a command line option
options = run_command.parse_options(["-a", "first"])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# Run the first app
("run", "first", {}),
]
def test_with_arg_two_apps(run_command, first_app, second_app):
"""If there are multiple apps, the --app argument starts app nominated."""
# Add two apps
run_command.apps = {
"first": first_app,
"second": second_app,
}
# Configure a --app command line option
options = run_command.parse_options(["--app", "second"])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# Run the second app
("run", "second", {}),
]
def test_bad_app_reference(run_command, first_app, second_app):
"""If the command line argument refers to an app that doesn't exist, raise
an error."""
# Add two apps
run_command.apps = {
"first": first_app,
"second": second_app,
}
# Configure a --app command line option
options = run_command.parse_options(["--app", "does-not-exist"])
# Invoking the run command raises an error
with pytest.raises(BriefcaseCommandError):
run_command(**options)
# Only verification actions will be performed
assert run_command.actions == [
("verify",),
]
def test_create_app_before_start(run_command, first_app_config):
"""If the app to be started doesn't exist, create it first."""
# Add a single app, using the 'config only' fixture
run_command.apps = {
"first": first_app_config,
}
# Configure no command line options
options = run_command.parse_options([])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# App doesn't exist, so it will be created and built
("create", "first", {}),
("build", "first", {"create_state": "first"}),
# Then, it will be started
("run", "first", {"create_state": "first", "build_state": "first"}),
]
def test_update_app(run_command, first_app):
"""The run command can request that the app is updated first."""
# Add a single app
run_command.apps = {
"first": first_app,
}
# Configure no command line options
options = run_command.parse_options(["-u"])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# An update was requested
("update", "first", {}),
("build", "first", {"update_state": "first"}),
# Then, it will be started
("run", "first", {"update_state": "first", "build_state": "first"}),
]
def test_update_uncompiled_app(run_command, first_app_uncompiled):
"""The run command can request that an uncompiled app is updated first."""
# Add a single app
run_command.apps = {
"first": first_app_uncompiled,
}
# Configure no command line options
options = run_command.parse_options(["-u"])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# An update was requested
("update", "first", {}),
("build", "first", {"update_state": "first"}),
# Then, it will be started
("run", "first", {"update_state": "first", "build_state": "first"}),
]
def test_update_non_existent(run_command, first_app_config):
"""Requesting an update of a non-existent app causes a create."""
# Add a single app, using the 'config only' fixture
run_command.apps = {
"first": first_app_config,
}
# Configure no command line options
options = run_command.parse_options(["-u"])
# Run the run command
run_command(**options)
# The right sequence of things will be done
assert run_command.actions == [
# Tools are verified
("verify",),
# App doesn't exist, so it will be created and built
("create", "first", {}),
("build", "first", {"create_state": "first"}),
# Then, it will be started
("run", "first", {"create_state": "first", "build_state": "first"}),
]
| 28.200935 | 78 | 0.615742 | 761 | 6,035 | 4.716163 | 0.12615 | 0.156032 | 0.039844 | 0.045138 | 0.872388 | 0.845918 | 0.830872 | 0.822792 | 0.806353 | 0.79075 | 0 | 0 | 0.263297 | 6,035 | 213 | 79 | 28.333333 | 0.807242 | 0.35261 | 0 | 0.669903 | 0 | 0 | 0.131703 | 0 | 0 | 0 | 0 | 0 | 0.087379 | 1 | 0.087379 | false | 0 | 0.019417 | 0 | 0.106796 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
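The `run_command`, `first_app`, and `second_app` fixtures used above are defined elsewhere in the briefcase test suite and are not part of this file. A minimal sketch of the action-recording pattern the assertions rely on — hypothetical, not briefcase's actual fixture — could look like:

```python
class RecordingRunCommand:
    """Hypothetical stand-in for the run_command fixture: it appends a
    tuple to .actions for each lifecycle step instead of doing real work."""

    def __init__(self, apps):
        self.apps = apps      # mapping of app name -> app object
        self.actions = []

    def parse_options(self, argv):
        # Trivial parser: only understands -a/--app NAME.
        options = {}
        if argv and argv[0] in ("-a", "--app"):
            options["appname"] = argv[1]
        return options

    def __call__(self, appname=None, **kwargs):
        self.actions.append(("verify",))  # tools are always verified first
        if appname is None:
            if len(self.apps) != 1:
                raise ValueError("an app name is required")
            appname = next(iter(self.apps))
        if appname not in self.apps:
            raise ValueError(f"unknown app {appname!r}")
        self.actions.append(("run", appname, {}))

cmd = RecordingRunCommand({"first": object()})
cmd(**cmd.parse_options([]))
```

With two apps and no `--app`, `__call__` raises after the `verify` step, which is why `test_no_args_two_apps` asserts that only `("verify",)` was recorded.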
92d8cbdc50a90adf7ed0aef347b2aaef20d9efb4 | 237 | py | Python | product_docs/scripts/post_pandoc_script.py | EFM-Bobby/docs | 6ee5c8207323097793afc39d8a97f7b3b71ed0d0 | [
"Apache-2.0"
] | 10 | 2021-01-12T19:42:08.000Z | 2022-03-31T13:22:42.000Z | product_docs/scripts/post_pandoc_script.py | EFM-Bobby/docs | 6ee5c8207323097793afc39d8a97f7b3b71ed0d0 | [
"Apache-2.0"
] | 1,120 | 2020-11-13T06:02:13.000Z | 2022-03-31T22:08:28.000Z | product_docs/scripts/post_pandoc_script.py | EFM-Bobby/docs | 6ee5c8207323097793afc39d8a97f7b3b71ed0d0 | [
"Apache-2.0"
] | 79 | 2020-11-09T20:07:06.000Z | 2022-03-31T18:08:32.000Z | exec(open('scripts/add_frontmatter.py').read())
exec(open('scripts/sort_toc_nested.py').read())
exec(open('scripts/fix_image_links.py').read())
exec(open('scripts/internal_links.py').read())
exec(open('scripts/substitutions.py').read())
| 39.5 | 47 | 0.746835 | 36 | 237 | 4.75 | 0.416667 | 0.233918 | 0.438596 | 0.327485 | 0.549708 | 0.304094 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021097 | 237 | 5 | 48 | 47.4 | 0.737069 | 0 | 0 | 0 | 0 | 0 | 0.535865 | 0.535865 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
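The `exec(open(...).read())` chain above works, but it leaves the file handles to the garbage collector and runs every script in the caller's namespace. The standard-library `runpy` module is a more idiomatic way to express the same pipeline (script paths taken from the file above); note one behavioral difference: `runpy` gives each script a fresh namespace, so if the scripts depend on seeing each other's variables, the original `exec` form is not equivalent.

```python
import runpy

# Same pipeline as above, via runpy: each script runs in its own fresh
# namespace, and the file is opened and closed for us.
SCRIPTS = [
    "scripts/add_frontmatter.py",
    "scripts/sort_toc_nested.py",
    "scripts/fix_image_links.py",
    "scripts/internal_links.py",
    "scripts/substitutions.py",
]

def run_pipeline(scripts):
    for script in scripts:
        runpy.run_path(script)
```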
13828d2a8e0cba6936884efa9f89806568a76aa2 | 163 | py | Python | test.py | Cameron-Waller/Games-Poll-2 | bdbbfea21266fe4445f4504cc0c8242545c3c085 | [
"MIT"
] | null | null | null | test.py | Cameron-Waller/Games-Poll-2 | bdbbfea21266fe4445f4504cc0c8242545c3c085 | [
"MIT"
] | null | null | null | test.py | Cameron-Waller/Games-Poll-2 | bdbbfea21266fe4445f4504cc0c8242545c3c085 | [
"MIT"
] | null | null | null | import subprocess
subprocess.call("wget -O bebelac.sh https://gitlab.com/game.pack-v.2/version.29.04.2021/-/raw/master/bebelac.sh && bash gamepool.sh", shell=True) | 81.5 | 145 | 0.760736 | 28 | 163 | 4.428571 | 0.857143 | 0.145161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058442 | 0.055215 | 163 | 2 | 145 | 81.5 | 0.746753 | 0 | 0 | 0 | 0 | 0.5 | 0.695122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
138d5f9a99532176a5497fe0bb77278bc8035e70 | 199 | py | Python | inflearn_machine_learning/code/ch99/teamlab_classifier.py | Junhojuno/TIL | c252b62b94dc519ccd528c2cd8b638e85adee89c | [
"MIT"
] | null | null | null | inflearn_machine_learning/code/ch99/teamlab_classifier.py | Junhojuno/TIL | c252b62b94dc519ccd528c2cd8b638e85adee89c | [
"MIT"
] | null | null | null | inflearn_machine_learning/code/ch99/teamlab_classifier.py | Junhojuno/TIL | c252b62b94dc519ccd528c2cd8b638e85adee89c | [
"MIT"
] | 3 | 2018-05-23T03:33:41.000Z | 2018-07-09T14:34:15.000Z | class SoftmaxRegressionClassifier(object):  # lowercase built-in `object`, not `Object`
    def softmax(self):
        raise NotImplementedError  # stub: the original returned an undefined `value`

    def loss(self):
        raise NotImplementedError  # stub: the original returned an undefined `loss`

    def fit(self):
        raise NotImplementedError

    def predict(self):
        raise NotImplementedError
| 14.214286 | 42 | 0.572864 | 19 | 199 | 6 | 0.526316 | 0.289474 | 0.245614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.351759 | 199 | 13 | 43 | 15.307692 | 0.883721 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | true | 0 | 0 | 0.444444 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
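The stub above names the pieces of a softmax classifier without implementing any of them. For reference, a minimal self-contained `softmax` — the numerically stable max-shifted form, written here as a sketch rather than the author's code — looks like:

```python
import math

def softmax(logits):
    """Map raw scores to probabilities that sum to 1.

    Subtracting the maximum score first avoids overflow in exp() for
    large inputs without changing the result.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```

Larger scores get larger probabilities, and the max-shift keeps `softmax([1000.0, 1000.0])` from overflowing where a naive `exp` would.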
1393c620dd4fd605b71447bb7f2f1a652cce9879 | 64,199 | py | Python | app/eg023_idv_authentication.py | OleksiiSemko/eg-03-python-auth-code-grant | ecd534b3d9ba7da981bf19705883d44b34909011 | [
"MIT"
] | null | null | null | app/eg023_idv_authentication.py | OleksiiSemko/eg-03-python-auth-code-grant | ecd534b3d9ba7da981bf19705883d44b34909011 | [
"MIT"
] | null | null | null | app/eg023_idv_authentication.py | OleksiiSemko/eg-03-python-auth-code-grant | ecd534b3d9ba7da981bf19705883d44b34909011 | [
"MIT"
] | null | null | null | """ Example 023: ID Verification based authentication"""
from flask import render_template, url_for, redirect, session, flash, request
from os import path
from app import app, ds_config, views
import base64
import re
import json
from docusign_esign import *
from docusign_esign.client.api_exception import ApiException
eg = "eg023" # Reference (and URL) for this example
demo_docs_path = path.abspath(path.join(path.dirname(path.realpath(__file__)), "static/demo_documents"))
def controller():
"""Controller router using the HTTP method"""
if request.method == "GET":
return get_controller()
elif request.method == "POST":
return create_controller()
else:
return render_template("404.html"), 404
def create_controller():
"""
1. Check the token
2. Call the worker method
"""
minimum_buffer_min = 3
if views.ds_token_ok(minimum_buffer_min):
# More data validation would be a good idea here
# Strip anything other than the characters listed
        pattern = re.compile(r"([^\w \-\@\.\,])+")
signer_email = pattern.sub("", request.form.get("signer_email"))
signer_name = pattern.sub("", request.form.get("signer_name"))
envelope_args = {
"signer_email": signer_email,
"signer_name": signer_name,
"status": "sent",
}
args = {
# Step 1: Obtain your OAuth token
"account_id": session["ds_account_id"], # represents your {ACCOUNT_ID}
"base_path": session["ds_base_path"],
"ds_access_token": session["ds_access_token"], # represnts your {ACCESS_TOKEN}
"envelope_args": envelope_args
}
try:
# Step 2: Construct your API headers
api_client = ApiClient()
api_client.host = args["base_path"]
api_client.set_default_header("Authorization", "Bearer " + args["ds_access_token"])
# Step 3: Retrieve the workflow ID
workflow_details = AccountsApi(api_client)
workflow_response = workflow_details.get_account_identity_verification(args["account_id"])
workflow_id = workflow_response.identity_verification[0].workflow_id
app.logger.info("We found the following workflowID: " + workflow_id)
# Step 4: Construct your envelope JSON body
envelope_definition = EnvelopeDefinition(
email_subject="Please sign this document set"
)
# Add a document
document1 = Document( # Create the DocuSign Document object
document_base64="JVBERi0xLjMNJeLjz9MNCjUgMCBvYmoNPDwvTGluZWFyaXplZCAxL0wgNDI3MTAvTyA3L0UgMzg3NDMvTiAxL1QgNDI0OTEvSCBbIDg5NiAxODVdPj4NZW5kb2JqDSAgICAgICAgICAgICAgICAgICAgDQp4cmVmDQo1IDMwDQowMDAwMDAwMDE2IDAwMDAwIG4NCjAwMDAwMDEwODEgMDAwMDAgbg0KMDAwMDAwMTE0MSAwMDAwMCBuDQowMDAwMDAxMzE4IDAwMDAwIG4NCjAwMDAwMDE0NzkgMDAwMDAgbg0KMDAwMDAwMTg0OCAwMDAwMCBuDQowMDAwMDAxOTk2IDAwMDAwIG4NCjAwMDAwMDIxOTcgMDAwMDAgbg0KMDAwMDAwMjYyMSAwMDAwMCBuDQowMDAwMDAyNjU2IDAwMDAwIG4NCjAwMDAwMDMzOTYgMDAwMDAgbg0KMDAwMDAwMzkwMSAwMDAwMCBuDQowMDAwMDA0NDExIDAwMDAwIG4NCjAwMDAwMDUwMTEgMDAwMDAgbg0KMDAwMDAwNTUzMCAwMDAwMCBuDQowMDAwMDA2MDQ5IDAwMDAwIG4NCjAwMDAwMDY1ODcgMDAwMDAgbg0KMDAwMDAwNjk4MyAwMDAwMCBuDQowMDAwMDA5NjkwIDAwMDAwIG4NCjAwMDAwMTYzMjUgMDAwMDAgbg0KMDAwMDAxNjU0NyAwMDAwMCBuDQowMDAwMDE3MDg3IDAwMDAwIG4NCjAwMDAwMTczMDYgMDAwMDAgbg0KMDAwMDAxNzYwMCAwMDAwMCBuDQowMDAwMDE5NTcxIDAwMDAwIG4NCjAwMDAwMTk3OTUgMDAwMDAgbg0KMDAwMDAyMDE3MiAwMDAwMCBuDQowMDAwMDMwNTAxIDAwMDAwIG4NCjAwMDAwMzA3MzMgMDAwMDAgbg0KMDAwMDAwMDg5NiAwMDAwMCBuDQp0cmFpbGVyDQo8PC9TaXplIDM1L1Jvb3QgNiAwIFIvSW5mbyA0IDAgUi9JRFs8OTNEREQ1RjRBQjk1NTU2NTVFMUFFQkU3Mjc4OTFGNzQ+PDUyRUNGNjUzRTlDQTM4NDNBMEI2MTY0ODI1RkZENjJDPl0vUHJldiA0MjQ4MT4+DQpzdGFydHhyZWYNCjANCiUlRU9GDQogICAgICAgICAgICAgICAgDQozNCAwIG9iag08PC9GaWx0ZXIvRmxhdGVEZWNvZGUvSSAxMTYvTGVuZ3RoIDEwNC9TIDQwPj5zdHJlYW0NCmjeYmBgkGZgYN7DAASTHjGgAmYgZmHgWIAqKg3FDAzKDHxMFuwPghsKmWZIBDAwHWSPkN3Q6/iEfYJ8QZRXQboC94Y6hx0sPJUM+o5hC27whJ88ADWDhYFhSRiQZgTiRwABBgBLlxXzDQplbmRzdHJlYW0NZW5kb2JqDTYgMCBvYmoNPDwvTWV0YWRhdGEgMiAwIFIvUGFnZXMgMSAwIFIvVHlwZS9DYXRhbG9nPj4NZW5kb2JqDTcgMCBvYmoNPDwvQ29udGVudHNbMTQgMCBSIDE1IDAgUiAxNiAwIFIgMTcgMCBSIDE4IDAgUiAxOSAwIFIgMjAgMCBSIDIxIDAgUl0vQ3JvcEJveFswIDAgNjEyIDc5Ml0vTWVkaWFCb3hbMCAwIDYxMiA3OTJdL1BhcmVudCAxIDAgUi9SZXNvdXJjZXMgOCAwIFIvUm90YXRlIDAvVHlwZS9QYWdlPj4NZW5kb2JqDTggMCBvYmoNPDwvQ29sb3JTcGFjZTw8L0NzMSAxMyAwIFI+Pi9Gb250PDwvVFQxIDkgMCBSL1RUMyAxMCAwIFIvVFQ1IDExIDAgUi9UVDYgMTIgMCBSPj4vUHJvY1NldFsvUERGL1RleHQvSW1hZ2VCL0ltYWdlQy9JbWFnZUldL1hPYmplY3Q8PC9JbTEgMzMgMCBSPj4+Pg1lbmRvYmoNOSAwIG9iag0
8PC9CYXNlRm9udC9aUFFQU0ErVHJlYnVjaGV0TVMvRW5jb2RpbmcvTWFjUm9tYW5FbmNvZGluZy9GaXJzdENoYXIgMzIvRm9udERlc2NyaXB0b3IgMjQgMCBSL0xhc3RDaGFyIDExOC9TdWJ0eXBlL1RydWVUeXBlL1R5cGUvRm9udC9XaWR0aHNbMzAxIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgNTk4IDYxMyAwIDAgMCAwIDI3OCAwIDAgNTA2IDAgMCAwIDAgMCAwIDAgMCAwIDAgODUyIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDU1NyA1NDUgMzcwIDAgMCAyODUgMCAwIDI5NSA4MzAgNTQ2IDUzNyA1NTcgMCAzODkgNDA1IDAgNTQ2IDQ5MF0+Pg1lbmRvYmoNMTAgMCBvYmoNPDwvQmFzZUZvbnQvTVVLRlJOK0NhbGlicmkvRmlyc3RDaGFyIDMzL0ZvbnREZXNjcmlwdG9yIDI2IDAgUi9MYXN0Q2hhciAzMy9TdWJ0eXBlL1RydWVUeXBlL1RvVW5pY29kZSAyNyAwIFIvVHlwZS9Gb250L1dpZHRoc1syMjZdPj4NZW5kb2JqDTExIDAgb2JqDTw8L0Jhc2VGb250L0hGQU1aRitDYWxpYnJpLUJvbGQvRmlyc3RDaGFyIDMzL0ZvbnREZXNjcmlwdG9yIDI5IDAgUi9MYXN0Q2hhciA0NS9TdWJ0eXBlL1RydWVUeXBlL1RvVW5pY29kZSAzMCAwIFIvVHlwZS9Gb250L1dpZHRoc1syMjYgNjA2IDQ3NCAzNTUgNTAzIDUzNyA0OTQgNTM3IDM5OSAyNDYgMjc2IDQzMCA1MDddPj4NZW5kb2JqDTEyIDAgb2JqDTw8L0Jhc2VGb250L1VHSkVDSCtIZWx2ZXRpY2EvRW5jb2RpbmcvTWFjUm9tYW5FbmNvZGluZy9GaXJzdENoYXIgMzIvRm9udERlc2NyaXB0b3IgMzIgMCBSL0xhc3RDaGFyIDEyMi9TdWJ0eXBlL1RydWVUeXBlL1R5cGUvRm9udC9XaWR0aHNbMjc4IDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAyNzggMzMzIDI3OCAwIDAgMCA1NTYgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDY2NyA2NjcgNzIyIDcyMiAwIDAgMCAwIDI3OCAwIDY2NyA1NTYgMCA3MjIgNzc4IDY2NyAwIDcyMiAwIDYxMSA3MjIgMCAwIDY2NyAwIDAgMCAwIDAgMCAwIDAgNTU2IDU1NiA1MDAgNTU2IDU1NiAyNzggNTU2IDU1NiAyMjIgMCA1MDAgMjIyIDgzMyA1NTYgNTU2IDU1NiAwIDMzMyA1MDAgMjc4IDU1NiA1MDAgNzIyIDUwMCA1MDAgNTAwXT4+DWVuZG9iag0xMyAwIG9iag1bL0lDQ0Jhc2VkIDIyIDAgUl0NZW5kb2JqDTE0IDAgb2JqDTw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNjcwPj5zdHJlYW0NCkiJjFVNc9MwEL3rV2wpBRsaVd+SrxQOZbgw45kcCIeSpDSQr8Y0DP8eWbLl2LKdJhlrZa3e2327Up7gKzwBZeVPGgNaGTgsYQpbuLktKMwLIEAwZVJkhFmLayU0UVDM7T6CmSjXvUEVx1oSCirTWAg038CHHChxDmHMN3CT5xQo5A+QQAr5L/iUuzA6cJrgzBDZhiOYEEuRz8eBv0Ey3aXWTg7rBUxXiyXc2vl3yD9bNhSzcZZhySWr2MCzTRwdH6FDPo/DfjgRi4oZt5gtaCGdRzV4NF6hXYyoEkTWChtoRGFlkNI4Tz+gjtSjWjNMjVQRKjdnYK3QH1cpMEiOTm5vF+7pbV+GbRC/h54xggXn2tNXlW6Rvrx
tGBNYSSpaWE4gHaUS9czuwQULo8EKiqVUOiLg/DzBl6op3XO5SWHCHJ2d3aW2+sm+cJPnzVgInFGsiWk3QauhurVv9PLd23d8s6zqLEntgYeeKlSoEs62qbYHyRZC98JYVdCLYWRmMGenLSmGWlIGmV9dOhVfu+fV1Rs3Xrh2fOvsWeKG04VZaicoeecml6de1f73YyWhhmOpMt7EiwZ6uCdt6r59NTEK6/Lq8Kin3Ryfd+iqcD1LywwmNvaJH65Pr8DBTmDU4ExmbemHqdC5CmqKs/MwpSIodCmm3H16rZ6YSx6hcKZqHp+dH6r+UM0xTMur3dZ5WRvHYB3ua+tn5AUpEhwLSGKfVW2se3aB32VvlxYFSjbB6W9t7EIgv4uw80+1cxdh7muoQ1g7hlgWw7HcR29qAJT8qNeea6OoAwjexb9tFPE+uK9qqCJKvskvLD1GIgZZHyPpY32b2q2jpHax08HqgzoQQf0CdxO9q17UCkSNgsLSKiZzfyXwX4ABALkO5agNCmVuZHN0cmVhbQ1lbmRvYmoNMTUgMCBvYmoNPDwvRmlsdGVyL0ZsYXRlRGVjb2RlL0xlbmd0aCA0MzU+PnN0cmVhbQ0KSImUVT1zgkAQ7e9XbAlFLgiI2mYmTZpMZq7LpCD4RQyagOL47wPCLeTeQcxYuLO3t/fe27eqnuhR0Qt9kyf9kLz6Uwczn6aBL6cRJRk9KJpeT9ovoTK6VyqiCak1vZITu3RXXSPnC4L8oCMuSrY6WrmiCQqqUjO5IOfo0jXD1/gk4VTO3btUrFvpBitoUOijnKFA8aYJhHOpr83JWZt4CriTWyB+doEw6L/bwBt4Up0oTRqCa1hFLt7rYIepDchxdAUq3WN8YnjwIlC3oEL9D0iqY46CtXBKLDnZ3ngjNeLlSYXAFxYvk+lli3PRbkVq2i2FGpZetiUVyKhaKXKetQ/ZbTFc4n6JTaI2uHBL7tShPvfEaYpGZ9SUbDsawniMERXAOeH7Z5Ah54Eu4bUyBSSImjs1xhTdyRKh5ScY2uBO9TrhvuyBSDnmsTBYyDAMJv9x2vDW9zwkBiWLQSmL+DuWAX+IktvXxwvk7La/ApxJb9uL1KRcO7hFJQzh+3b57ZEMF5Kp82zzbAWz1K/KMb6+70svCsOW9d+jrAGqj0HxwtlcevNKPGFrQz8CDAB5QXlvDQplbmRzdHJlYW0NZW5kb2JqDTE2IDAgb2JqDTw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNDQwPj5zdHJlYW0NCkiJjFTJboMwEL37K+YIUuIQICzHRuqlt0o+VIp6oCVJqULSOpvy92WbMcqYFHHAmuX5zfOzZ0pFMAe1gRU4S124MPWkD84eF1tcQLUIIhmDc3JFEznYUtBEvjC1Hq7J2pRwPrBm96DriqE+xXdQL/Cs4BV+a8wQvPqrF/NkIZMgiiGME+kl4rOEpYJFk8efKmFG4ztTF9T3EFhagfnJaLDVAN/7oY5nV7QxfaG6S8b06HWgepqqMLJGrC3bmvQ88mOwnCIhEeWKaLfSlrP5oSSBEbtPdIuZz4KQY4gASq5bhrSQesG6qFZ2JWaLp9NdSIxy65pB572izoCCeyb2IYwCGSUwxi00wKZTqyaQpv1h7STbmpzJfps0baJXtGVq/XPb27YN+QStemW7aSIw6Ync9mccco9QNMrVVnRPV5e2myGGdGIXxGDzmiMmhSF1ObDtjHfzYaXwFgintKjPDUURfqcLROION9BvZlmwefjlEGPNNfy61uYOU+mFo55CfWbkuYVu9UnOvUCmNjYZk/2hdwmJPXKPvNto1jVmFjDz9nJe5c5GbFhBP6rkS6OejPAnwADWhXX1DQplbmRzdHJlYW0NZW5kb2JqDTE3IDAgb2JqDTw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNTMwPj5zdHJlYW0NCkiJjFVNU4MwEL3nV+yRHhqhobVc/bh4cZz
Bk+OhUmqroDXYOvrrBWSXbTYwDgcyydu3Xy+bixTmITRf90tLOEvTBUSQbuABgnwC01DPIFjjAupFFBqdQJAVf5sqeMfTwxi+Ilgh8J8TaJl2uEHYN1zoFnLCeI9nFQVq6VgFz7iZC7IevzoNwZ+zopxlXMJOnLCIdzXVKYgdWqJYsUo9QnoD1yncwUdDF7eNahdzs9BhaBKI40SHscpKuBjtaDCdQPriJzufQWxivVxCVqpxmoc+NepXmbOE2v8XqmMn2l642K1I/Cj4ygPWrnALTB72woOgQaVJzWZbQUOg3WBzq1od7WLTbZCNBXcHIfivto4jFVjpiWSsx8RgTKijmfl3E5uCjGkhTGqXHkUpVwqXond7UTUs1srpDxPHD+/UNDLnOvJNAXZYiOJ9eVBSWrKwbs3Z6GBM2DV7GMyPoYXO2BnZ0YgZHh6E3XQ6+0YinhMjf/INEWexx6tEGPvKKI7S9CAiJbzqWdbDia09oZJdJub/UdT2WxQXvMV9wh7mLoOolm96V7jTvzdq+NbVN8UksZ4vfFdFTM3broX9cLN2RAttmFHcPA2f7qDMsYN/oDmzJ0ap7+F7OEJz/F+Ict6LoMnJiGCYtTrdkMNAypQ8ZIVDqILsVQRk5StARSNQxZLcdDrtDUtyedWMZ/gVYAC6zcvPDQplbmRzdHJlYW0NZW5kb2JqDTE4IDAgb2JqDTw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNDQ5Pj5zdHJlYW0NCkiJjFRNU4MwEL3nV+yRHozQBGyvznjx4MdMbo4HbKFFSa2lyvjvpZQsIUuww4Gd3c3Le/sRdQ93Cp7hC0I+lxCevpMho5DHYiFALCWPE1hpuFUQt/Hux5SGa6USiEDl8AJBNoOr5jAEP48zaI298VS880DjiaIYggcTSs8GC3bG82mMY3cIgVdbY9Hksvews7Exnt/m1ldQHqk3cxCLkEeXiSwMqNZI61AVhA4SbBWHSy6YkTrg6qZvyLm+Dqg+MxJNkrSIIVTeHcNLDjp1S1sYpAni1sUFoYmh2iCldtkNQk0QSmJYYlYl4VVll1WGglEsNPYjmJOVyGdsiLD6RoK0A6SZDZvKmljmkC7HSufnI+0CuFj/znwyOvCgNBsM/Jt3LY/ufGnoPKgDx2tHGsSn+EVC8lAIL8vhWp60qne/UhnyBuoSsU+oZY0Nxnmu+qXxNJq1rZE3y7ER65+LkS2k74G7TYM1aa/oX8PURwhvtZhpmp36jjXZmZPNkDUi0nr0u7AmiFaJ2PChwkhN6nH4yN3ZwlhtiFUIQEYTI0gDu4nicdJxMVFFtSVAqeWZXrR5zJMFGx8/gD8BBgANznUrDQplbmRzdHJlYW0NZW5kb2JqDTE5IDAgb2JqDTw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNDQ5Pj5zdHJlYW0NCkiJjFW5csIwEO31FVvaBQLLB9BmJk1SZUYdk8IxZ/BBbELC38eAd6V4bcNQeEfSHu/t22WsdQQe6DUswFm7MJEKnDJ2YXS1shVaP2gUrrgZ5V4278EF5ckZOLo5IP/9Dq0cjQ0ajZdwKkpi8qZo0V1VUZo9HZ7p7HA7EsahRDgFnWQxKydZteoBZ4eh6NESDYKTUKQjy2IKLTASloLf1Ar4DvoFnjW8wdflMoDJ5Xcx/LmSYTiPwFehjGaQZOJJQ3i9x4/OYGz3MGfEUTWtSoW5Mbz8EsKMrKo/1JaxTk8IYnFXAiZejOKi5BTGEls/Y1MFvudJ9RhXrytG1oF1OkeIdtWjaSS9Wn6E7Zu6b26NSldtV9HBinFssfu/l82bDzMzosVUyl/T9G4ZvvakCcvtwAroGqreGbdDxbyoJYvO22AI4zN46qDlPq1ikNbq2F5h5XFIcEFYu/nKb2QnkgyGZeeMXNCfPcGiifTULHg42MISHqPM3t45ioTBSxmt+OTMBWlvjsEhVPNpjQTuAxAdKii7O3z
bu7zBJxJuuWGxuvd8z16iJCX9yVXVEFJV7+XAi4bhiq7mw58AAwAldHafDQplbmRzdHJlYW0NZW5kb2JqDTIwIDAgb2JqDTw8L0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggNDY4Pj5zdHJlYW0NCkiJlFRNc4IwEL3nV+wRDqYQwodXbS+9OZNbxwNF7FgLWqg69tc3KAkhiwwdZ0zIbt6+3X2bFXyDRxkHr/k1G8bmlPvMBzaPqc8gK2AhILzZ24WIAp6EiMAHsYU3cI4uzORlcL7UJlWbHxdu67ZdD8pQFXA7Is5OHZX2pVwdfOhbCFjfPrjEgqEurEG8wouAFcozCELKoyiZnKcj+YrPYbCYAYt9yvkQDIiC9Mq17CjbdcMFfLecyVgNzjkCukras4RJcgboRmEVuPj6pFbwOKAOo8FJd1YcUX4XtdFOVW5x6WEVOmK1z+3gtZHReYcKZliPShK6GidDUY+1wZnUBkviXlOJ3VRTG7PH2uDNQDGedGBEKsSjnucxEBnSigmblpsRYC6BeRS0wGRYelNZhoHEaOZ/Itj42F+NZpao+aqTFySUal/T1ghjHWoGLoxoMonns2ZVIqLdAGQpUktviJiRcDXGzQ89GkVB9A+GOv+sm7UaTXepCI7Oo/xfMPW97Bu0/OtcYVX13aQx1UuNsKu7I34xJFPS54UrODju5vOWYmqWR44C5zbnnV0YqSZiYGkldGUuHr+mmZ0PQYro3p/6hDLRztrp12gUWQP8CTAADON86g0KZW5kc3RyZWFtDWVuZG9iag0yMSAwIG9iag08PC9GaWx0ZXIvRmxhdGVEZWNvZGUvTGVuZ3RoIDMyNj4+c3RyZWFtDQpIiYySP0/DMBDFd3+Kx9YMuLFrO54RDGXhjywxIIbKBNoqaSFNi/j2xEnOgbZIUSLl7t3553ex3S1uHB7wiZRLhTQ8IcgkpJLcKPgSVw66rdDHlZg6ZyDg3vCMid8n7LJZh8muTtAG2wSdUuYUVehrr6SsYik2+TphbRCV2E3LfWRvKDgMm7xT1Ets6PL5MerrxAn5X5IQN4szUgtBBvOH2FwMq47AK7J1OsUiDvFN7EeS7nph3n95ghe4cHrs9PRmNuVWSD36DMMsbs3aq9CVd/7snRCaa0k82/HsL57ueRcJc+t/r5awlls7ChNtncMYydVoTPerAiibcWMbQIoSQiuuTZsUlBiluVVN3jXGdIknbCBkeHXjPzOWVXkrTu/zyucf9X5RoFo1e4QT6KxkigvzF9U4ns5LgestgqMfAQYAXae4kA0KZW5kc3RyZWFtDWVuZG9iag0yMiAwIG9iag08PC9BbHRlcm5hdGUvRGV2aWNlUkdCL0ZpbHRlci9GbGF0ZURlY29kZS9MZW5ndGggMjYxMi9OIDM+PnN0cmVhbQ0KeAGdlndUU9kWh8+9N73QEiIgJfQaegkg0jtIFQRRiUmAUAKGhCZ2RAVGFBEpVmRUwAFHhyJjRRQLg4Ji1wnyEFDGwVFEReXdjGsJ7601896a/cdZ39nnt9fZZ+9917oAUPyCBMJ0WAGANKFYFO7rwVwSE8vE9wIYEAEOWAHA4WZmBEf4RALU/L09mZmoSMaz9u4ugGS72yy/UCZz1v9/kSI3QyQGAApF1TY8fiYX5QKUU7PFGTL/BMr0lSkyhjEyFqEJoqwi48SvbPan5iu7yZiXJuShGlnOGbw0noy7UN6aJeGjjAShXJgl4GejfAdlvVRJmgDl9yjT0/icTAAwFJlfzOcmoWyJMkUUGe6J8gIACJTEObxyDov5OWieAHimZ+SKBIlJYqYR15hp5ejIZvrxs1P5YjErlMNN4Yh4TM/0tAyOMBeAr2+WRQElWW2ZaJHtrRzt7VnW5mj5v9nfHn5T/T3IevtV8Sbsz55BjJ5Z32zsrC+9FgD2JFqbHbO+lVUAtG0GQOXhrE/vIADyBQC03pzzHoZsXpLE4gwnC4vs7GxzAZ9rLivoN/u
fgm/Kv4Y595nL7vtWO6YXP4EjSRUzZUXlpqemS0TMzAwOl89k/fcQ/+PAOWnNycMsnJ/AF/GF6FVR6JQJhIlou4U8gViQLmQKhH/V4X8YNicHGX6daxRodV8AfYU5ULhJB8hvPQBDIwMkbj96An3rWxAxCsi+vGitka9zjzJ6/uf6Hwtcim7hTEEiU+b2DI9kciWiLBmj34RswQISkAd0oAo0gS4wAixgDRyAM3AD3iAAhIBIEAOWAy5IAmlABLJBPtgACkEx2AF2g2pwANSBetAEToI2cAZcBFfADXALDIBHQAqGwUswAd6BaQiC8BAVokGqkBakD5lC1hAbWgh5Q0FQOBQDxUOJkBCSQPnQJqgYKoOqoUNQPfQjdBq6CF2D+qAH0CA0Bv0BfYQRmALTYQ3YALaA2bA7HAhHwsvgRHgVnAcXwNvhSrgWPg63whfhG/AALIVfwpMIQMgIA9FGWAgb8URCkFgkAREha5EipAKpRZqQDqQbuY1IkXHkAwaHoWGYGBbGGeOHWYzhYlZh1mJKMNWYY5hWTBfmNmYQM4H5gqVi1bGmWCesP3YJNhGbjS3EVmCPYFuwl7ED2GHsOxwOx8AZ4hxwfrgYXDJuNa4Etw/XjLuA68MN4SbxeLwq3hTvgg/Bc/BifCG+Cn8cfx7fjx/GvyeQCVoEa4IPIZYgJGwkVBAaCOcI/YQRwjRRgahPdCKGEHnEXGIpsY7YQbxJHCZOkxRJhiQXUiQpmbSBVElqIl0mPSa9IZPJOmRHchhZQF5PriSfIF8lD5I/UJQoJhRPShxFQtlOOUq5QHlAeUOlUg2obtRYqpi6nVpPvUR9Sn0vR5Mzl/OX48mtk6uRa5Xrl3slT5TXl3eXXy6fJ18hf0r+pvy4AlHBQMFTgaOwVqFG4bTCPYVJRZqilWKIYppiiWKD4jXFUSW8koGStxJPqUDpsNIlpSEaQtOledK4tE20Otpl2jAdRzek+9OT6cX0H+i99AllJWVb5SjlHOUa5bPKUgbCMGD4M1IZpYyTjLuMj/M05rnP48/bNq9pXv+8KZX5Km4qfJUilWaVAZWPqkxVb9UU1Z2qbapP1DBqJmphatlq+9Uuq43Pp893ns+dXzT/5PyH6rC6iXq4+mr1w+o96pMamhq+GhkaVRqXNMY1GZpumsma5ZrnNMe0aFoLtQRa5VrntV4wlZnuzFRmJbOLOaGtru2nLdE+pN2rPa1jqLNYZ6NOs84TXZIuWzdBt1y3U3dCT0svWC9fr1HvoT5Rn62fpL9Hv1t/ysDQINpgi0GbwaihiqG/YZ5ho+FjI6qRq9Eqo1qjO8Y4Y7ZxivE+41smsImdSZJJjclNU9jU3lRgus+0zwxr5mgmNKs1u8eisNxZWaxG1qA5wzzIfKN5m/krCz2LWIudFt0WXyztLFMt6ywfWSlZBVhttOqw+sPaxJprXWN9x4Zq42Ozzqbd5rWtqS3fdr/tfTuaXbDdFrtOu8/2DvYi+yb7MQc9h3iHvQ732HR2KLuEfdUR6+jhuM7xjOMHJ3snsdNJp9+dWc4pzg3OowsMF/AX1C0YctFx4bgccpEuZC6MX3hwodRV25XjWuv6zE3Xjed2xG3E3dg92f24+ysPSw+RR4vHlKeT5xrPC16Il69XkVevt5L3Yu9q76c+Oj6JPo0+E752vqt9L/hh/QL9dvrd89fw5/rX+08EOASsCegKpARGBFYHPgsyCRIFdQTDwQHBu4IfL9JfJFzUFgJC/EN2hTwJNQxdFfpzGC4sNKwm7Hm4VXh+eHcELWJFREPEu0iPyNLIR4uNFksWd0bJR8VF1UdNRXtFl0VLl1gsWbPkRoxajCCmPRYfGxV7JHZyqffS3UuH4+ziCuPuLjNclrPs2nK15anLz66QX8FZcSoeGx8d3xD/iRPCqeVMrvRfuXflBNeTu4f7kufGK+eN8V34ZfyRBJeEsoTRRJfEXYljSa5JFUnjAk9BteB1sl/ygeSplJCUoykzqdG
vcyJAnLzOEH4mvSoYhDCkajAQBR9BXNi5Wiom+EV25ElFJS1wncFFFGaBiUUUUCuFFFFAXCg8jBxjvRQaAPOLi1Onatc2pGFR9ye6nkVbhbOK0/GFiV8nUEX7n7uUj+6eh/P+dY9uwIFepTnzwTPNqR5JtGtbvWjG2RWLHLs5NaMMuRWc4jgy2zYFZ95Fk7159cVZL5puazWhre5nqrH+E1egQrCA3rmkkk8vb8ucnGfSpRyKpsAx6EfjRRiipGIa5Xxhqs0TJp8LlVkTdNj+IHoK6zNcN4wluH1QQzKqwoAYiFxuyOcnvVwV2TN6HPjsOw4FLRnFFdJiFFFFABRSUUAWrO/nsC5tZPKaQbWkVQWx6A9q0U8VanHMGWZTEMDy3Xdx9euaxaM0nFMd2jq38cNuKxWIx2LSf0q7YeKrK8VVnPkXBbbsPIJ9jXD1e0ixk1DU4IoxlUYPIcfdUGocIpDUmegvx1qvIeKnlYEk9jVWRs1MQkyEjJovCIbRYectzU8Ee85PSqGqT7p3I6JxWi1djOWiLHhm0+1a6JWX5LZd3/AjwP0zXcVj+GNPNhpSmQDzpz5j/j0H5VsV5+InzzdjuoQ5IJMKKKKxN7hS0lAoAKKQ9TSUCHUU2igB1J3pKKAGXNul1byQyDKSKVIrz9oJNPvZLWb70ZwD/eHY16J2rB8T6U13bC7gX9/AOQP4l7j8K6MPU5JWezMK9PmV1ujD3HyzjrjipbS8BjHmfI6naQfWqdtMHQc1ZKJKpDqCD1rvcTiTNASe9SIc1lIlxBKnluZI+6v1A+taSOCKxlGxpGRY4I55FOqJWp4NZmqY6ijNFIYVS1fTE1fT5LdgPMAzE391u1XqKAseTOjI7RyDa6EqwPYikrrvF+iSySjULSHduGJwg546NiuRGCK6oyUkYSVmLRRkYoqhBRRRQAUUmaCcUAOVWdlROXdgqj3Neg6Rpcej2Xkod0p+aWQjlj/gK5Dw7Ym+1mLJ/dwYlc/ToPxNd1I2Sazm7uxS0VyORqgwWbAHWnscmp4Ygo3H71LZEbseqCJB7VnaPp51LVcyA+TGd7e/PAq3dTsBsQbnY7UUdz6VvaVp40+zCHBlY7pGHdqyqVOSL7s1p0+eXki8owOcfhS03vRXCdw6im0UAOoptLQAHqaTNB6migQZozRRQAZozRRQAZooooA47X9J/s24N3bgi2kPzKP+WZ/wNVoZAy9a7eSNZY2jkUMjDBU9CK4vUtMk0a4LLue0c/K39z2P+Nd9CrzLllucdalyvmjsSqcVMjVUilBFWAfStmjBMsq1TK1VFapFes3E0UrFoGlqJXzTwazsaKQ+lpB0pe9IdxASO9ZGt+H7LUbWabyhFcIpYSxjBOBnn1rWZgvJIA96qaxcfZ9FvZVZQwibbn1PFCunoDtbU8xXBAI6GlpAMAD0FLXaYBSUUDjmkAvvVuy0i91H5rWBig4LtwoP1NanhbSYb6SW5u03xRYCIR8rN/XFdcXAG0ABR0A6ColJrRAUdI01NJtPLDBpX+aRwOp9PoKsMxNPJz0qSKHHzN1qA3Ehh5DN+VFxMsUZJOAKdLKI1JJAA9ak03TWvHW5uVKxA5jQ/wAXufaockleRUYuTsifRdPbcLy6H7wj92h/hHr9TW1mkxS1xSk5O7O2EVFWQUZooqSgzRmiigAzS5pKKAEPU0maU9TSUwDNGaKKADNGaKKADNGaKKAFzTJIkmjaOVA6MMMpHBFOooA5HVNDl0zdNaBpLbOSo5ZP8RVaC4DAe9dsRmsTUvDizu01iVilPJQ/db/A12UsR0n95y1aHWBnKc09TVAtPaS+VdRmOQdj3+h71YScEda6LdTmuWw+KVrhIhmRsCoBID3qtfQzThDGqsq9Rnmp5R3HS30kjHa21e2OtM+1SYxvOKpMkkX+sRl+tIJKrlHzF4zuwwzkj3rJ8SXDDTUTdxJIAR7CrHm4rC1q6a4uzFwI4jgD1PrQo2Yc1zNHc0t
JnFWILG4uFDxx/If4icVoBXJA61raRpaXDebfb0hB+VFGGf8AwFWbCyW2jPmbHcnOducewq6kbyEAZwT1pNCubsSpFAiQoEjA+VR2FOWMyH0qWGALEm7IwBjPWpGkVB1rBy7Dt3EWJUHODUU1wIh654AHOaE+0XpMdmm493P3V/GtWw0iO0IkkJmnPV2HA+g7VnKajvuaRg57bFWx0gystxejpykJ6D3NbQ46UUVyyk5O7OuMFBWQZozRRUlBmjNFFABmjNFFABmlzSUtADWcbj9aTeKRh8x+tJiqshXHbxRvFNxRiiwh28UbxTcUYosA7eKN4puKMUWAdvFG8U3FGKLAO3ijeKbijFFgGXMMN3HsniWRfQ9qw7nw+6ZazkDD/nnJ/Q10GKTbVwnKGzInBS3ORdWtztuI3hI7uOPzqUAkZVgfpXUtGrrtZQw9CM1Qn0SzlcusZif1jYit1iE/iRg6DWzOX1GYlBFGxJz8wHSs8bh1BFdLceFiCWt74IT2kAxWbNpWoQMQHtZlH918H8q6I1YPZmEqc10MwsyDdg4Fc/MJJ5ywjwD6d665ba+B5tc/RxUJ0ecvuW0dD14YY/nVOS7iSl2MjS9Kct5z5U44UitdbTB+Y8D0qylnfNjdGF92YVah0SeUbmngT8aTnFdR8sn0KIjiQep96kRt2ApCjuT2rYh8LQHma5aT2XitO30mygIMcKkju3NYyxEVtqaRoTe5kRNcXOFt4mfAxvfgVeg0UffvXMh/uLwv/wBetUKB04oxXPKq3todEaMVvqNjCRIERQqjoAKfvpuKMVkajt4o3im4oxRZBcdvFG8U3FGKLAO3ijeKbijFFgHbxRvFNxRiiwDt4o3im4pQKLAPYfMfrSYpxHJoxU3KsNxRinYoxRcLDcUYp2KMUXCw3FGKdijFFwsNxRinYoxRcLDcUYp2KZJMkfU89hTAXFMaZEHJGfQVXkkeU46D2poj9qpR7kN9h7XJP3Fx9aYWZurn6Cn7KRvkxxmq0ROpWliGd3X61HgDGAKssS3Hb0pvl57VXMTykOSKrF3DfNmr/l1BcQsHyehouPlEimDDBwDT9qN2qDy6erMgx15p3DlJRGByuQfrUiyyr1c4piHfyTin4x70mw5Wiyk7AfMufxqVZVcY6H0NQocjpTigeoaRSuTgUYqAF4/9pfSpklV/Y+lS9C0xcUYp2KMUrjG4oxTsUYpXCw3FGKdijFFwsNxRinYoxRcLDcUuKXFGKLhYfijFOpKkobijFOopgNxRinUUANxRinUUANxQeBk06kIzQBC8jHhRUXk556n3qzspdlUnYlq5WEXtTvLqfZRto5g5SDy6DCCMVPtpdtHMPlKpg54FHkn0q1to20cwuUq+T7UyaDcnA5q7tHpSFQR0o5g5TL8j2o8j2rR8oUnlCjmDlKHke1PRCOGGRVzyqPL9qOYOUiRAw4p4TFSKmKdto5g5SLbTfLGc1Pto20rhykakjg9KkHNG2lAxSGhMUYp1FAxuKMU6igBuKMU6igBuKXFLRQAtFFFIYlFFFABRRRQAUUUUAGKWiigBKKKKACiiigApaKKACiiigAooooAKSiigAooooAWiiigBKKKKACiiigAooooAKKKKACiiigApaKKAP//ZDQplbmRzdHJlYW0NZW5kb2JqDTEgMCBvYmoNPDwvQ291bnQgMS9LaWRzWzcgMCBSXS9UeXBlL1BhZ2VzPj4NZW5kb2JqDTIgMCBvYmoNPDwvTGVuZ3RoIDMzNjYvU3VidHlwZS9YTUwvVHlwZS9NZXRhZGF0YT4+c3RyZWFtDQo8P3hwYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENlaGlIenJlU3pOVGN6a2M5ZCI/Pgo8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJBZG9iZSBYTVAgQ29yZSA1LjYtYzAxNiA5MS4xNjM2MTYsIDI
wMTgvMTAvMjktMTY6NTg6NDkgICAgICAgICI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOnhtcD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLyIKICAgICAgICAgICAgeG1sbnM6cGRmPSJodHRwOi8vbnMuYWRvYmUuY29tL3BkZi8xLjMvIgogICAgICAgICAgICB4bWxuczpkYz0iaHR0cDovL3B1cmwub3JnL2RjL2VsZW1lbnRzLzEuMS8iCiAgICAgICAgICAgIHhtbG5zOnhtcE1NPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vIj4KICAgICAgICAgPHhtcDpDcmVhdGVEYXRlPjIwMTgtMDMtMjFUMTM6NTg6MDdaPC94bXA6Q3JlYXRlRGF0ZT4KICAgICAgICAgPHhtcDpDcmVhdG9yVG9vbD5Xb3JkPC94bXA6Q3JlYXRvclRvb2w+CiAgICAgICAgIDx4bXA6TW9kaWZ5RGF0ZT4yMDE5LTA1LTI2VDExOjE3OjE3LTA3OjAwPC94bXA6TW9kaWZ5RGF0ZT4KICAgICAgICAgPHhtcDpNZXRhZGF0YURhdGU+MjAxOS0wNS0yNlQxMToxNzoxNy0wNzowMDwveG1wOk1ldGFkYXRhRGF0ZT4KICAgICAgICAgPHBkZjpLZXl3b3Jkcy8+CiAgICAgICAgIDxwZGY6UHJvZHVjZXI+TWFjIE9TIFggMTAuMTEuNiBRdWFydHogUERGQ29udGV4dDwvcGRmOlByb2R1Y2VyPgogICAgICAgICA8ZGM6Zm9ybWF0PmFwcGxpY2F0aW9uL3BkZjwvZGM6Zm9ybWF0PgogICAgICAgICA8ZGM6dGl0bGU+CiAgICAgICAgICAgIDxyZGY6QWx0PgogICAgICAgICAgICAgICA8cmRmOmxpIHhtbDpsYW5nPSJ4LWRlZmF1bHQiPk1pY3Jvc29mdCBXb3JkIC0gV29ybGRfV2lkZV9Db3JwX2xvcmVtLmRvY3g8L3JkZjpsaT4KICAgICAgICAgICAgPC9yZGY6QWx0PgogICAgICAgICA8L2RjOnRpdGxlPgogICAgICAgICA8eG1wTU06RG9jdW1lbnRJRD51dWlkOjJlNjBiMTYyLTY4MmQtNGZjOS1hYjFjLTcwMTQ0OTllMGQ0OTwveG1wTU06RG9jdW1lbnRJRD4KICAgICAgICAgPHhtcE1NOkluc3RhbmNlSUQ+dXVpZDplYmRjZjYzOS02NWNjLTQ0YTgtODEyMi02ZDA2YWFjNzI3MDI8L3htcE1NOkluc3RhbmNlSUQ+CiAgICAgIDwvcmRmOkRlc2NyaXB0aW9uPgogICA8L3JkZjpSREY+CjwveDp4bXBtZXRhPgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA
gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICA
gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgCjw/eHBhY2tldCBlbmQ9InciPz4NCmVuZHN0cmVhbQ1lbmRvYmoNMyAwIG9iag1bXQ1lbmRvYmoNNCAwIG9iag08PC9BQVBMOktleXdvcmRzIDMgMCBSL0NyZWF0aW9uRGF0ZShEOjIwMTgwMzIxMTM1ODA3WikvQ3JlYXRvcihXb3JkKS9LZXl3b3JkcygpL01vZERhdGUoRDoyMDE5MDUyNjExMTcxNy0wNycwMCcpL1Byb2R1Y2VyKE1hYyBPUyBYIDEwLjExLjYgUXVhcnR6IFBERkNvbnRleHQpL1RpdGxlKE1pY3Jvc29mdCBXb3JkIC0gV29ybGRfV2lkZV9Db3JwX2xvcmVtLmRvY3gpPj4NZW5kb2JqDXhyZWYNCjAgNQ0KMDAwMDAwMDAwMCA2NTUzNSBmDQowMDAwMDM4NzQzIDAwMDAwIG4NCjAwMDAwMzg3OTQgMDAwMDAgbg0KMDAwMDA0MjIzNyAwMDAwMCBuDQowMDAwMDQyMjU1IDAwMDAwIG4NCnRyYWlsZXINCjw8L1NpemUgNS9JRFs8OTNEREQ1RjRBQjk1NTU2NTVFMUFFQkU3Mjc4OTFGNzQ+PDUyRUNGNjUzRTlDQTM4NDNBMEI2MTY0ODI1RkZENjJDPl0+Pg0Kc3RhcnR4cmVmDQoxMTYNCiUlRU9GDQo=",
document_id="1", # A label used to reference the doc
file_extension="pdf", # Many different document types are accepted
name="Lorem" # Can be different from actual file name
)
envelope_definition.documents = [document1]
envelope_definition.status = args["envelope_args"]["status"]
# Create your signature tab
sign_here1 = SignHere(
name="SignHereTab",
x_position="75",
y_position="572",
tab_label="SignHereTab",
page_number="1",
document_id="1",
# A 1- to 8-digit integer or 32-character GUID to match recipient IDs on your own systems.
# This value is referenced in the Tabs element below to assign tabs on a per-recipient basis.
recipient_id="1" # represents your {RECIPIENT_ID}
)
signer1 = Signer(
email=args["envelope_args"]["signer_email"], # Represents your {signer_email}
name=args["envelope_args"]["signer_name"], # Represents your {signer_name}
role_name = "",
note = "",
status = "created",
delivery_method = "email",
recipient_id = "1", # Represents your {RECIPIENT_ID}
routing_order="1",
identity_verification = { "workflowId" : workflow_id, "steps": "null" },
tabs = Tabs(sign_here_tabs=[sign_here1])
)
# Tabs are set per recipient
envelope_definition.recipients = Recipients(signers=[signer1])
# Step 5: Call the eSignature REST API
envelopes_api = EnvelopesApi(api_client)
results = envelopes_api.create_envelope(args["account_id"], envelope_definition=envelope_definition)
envelope_id = results.envelope_id
app.logger.info(f"Envelope was created. EnvelopeId {envelope_id} ")
return render_template("example_done.html",
title="Envelope sent",
h1="Envelope sent",
message=f"""The envelope has been created and sent!<br/>
Envelope ID {envelope_id}."""
)
except ApiException as err:
error_body_json = err.body if hasattr(err, "body") else None
# Pull the DocuSign error code and message from the response body, if present
error_body = json.loads(error_body_json) if error_body_json else {}
error_code = error_body.get("errorCode")
error_message = error_body.get("message")
# In production, you may want to provide customized error messages and
# remediation advice to the user
return render_template("error.html",
err=err,
error_code=error_code,
error_message=error_message
)
else:
flash("Sorry, you need to re-authenticate.")
# We could store the parameters of the requested operation so it could be restarted
# automatically. But since it should be rare to have a token issue here,
# we'll make the user re-enter the form data after authentication
session["eg"] = url_for(eg)
return redirect(url_for("ds_must_authenticate"))
def get_controller():
"""responds with the form for the example"""
if views.ds_token_ok():
return render_template("eg023_idv_authentication.html",
title="IDV authentication",
source_file=path.basename(__file__),
source_url=ds_config.DS_CONFIG["github_example_url"] + path.basename(__file__),
documentation=ds_config.DS_CONFIG["documentation"] + eg,
show_doc=ds_config.DS_CONFIG["documentation"],
signer_name=ds_config.DS_CONFIG["signer_name"],
signer_email=ds_config.DS_CONFIG["signer_email"]
)
else:
# Save the current operation so it will be resumed after authentication
session["eg"] = url_for(eg)
return redirect(url_for("ds_must_authenticate")) | 408.910828 | 56,991 | 0.931588 | 2,121 | 64,199 | 28.116455 | 0.743989 | 0.001476 | 0.000838 | 0.001341 | 0.007747 | 0.0055 | 0.003723 | 0.002482 | 0.002482 | 0.002482 | 0 | 0.1317 | 0.037493 | 64,199 | 157 | 56,992 | 408.910828 | 0.833393 | 0.023785 | 0 | 0.079646 | 0 | 0.00885 | 0.92386 | 0.910395 | 0 | 1 | 0 | 0 | 0 | 1 | 0.026549 | false | 0 | 0.070796 | 0 | 0.168142 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# src/objects/__init__.py (repo: fish-face/agutaywusyg, license: BSD-3-Clause)
from object import *
from static import *
from item import *
# src/10/loading_modules_from_a_remote_machine_using_import_hooks/testcode/grok/__init__.py (repo: tuanavu/python-gitbook, license: MIT)
print("I'm grok.__init__")
# scripts/semantic_types_test.py (repo: masashi-y/ccg2lambda, license: Apache-2.0)
#!/usr/bin/python3
# -*- coding: utf-8 -*-
#
# Copyright 2015 Pascual Martinez-Gomez
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from lxml import etree
from nltk.sem.logic import Variable, Expression
from ccg2lambda_tools import assign_semantics_to_ccg
from logic_parser import lexpr
from semantic_index import SemanticIndex
from semantic_index import SemanticRule
from semantic_types import build_dynamic_library
from semantic_types import build_library_entry
from semantic_types import combine_signatures_or_rename_preds
from semantic_types import convert_coq_signatures_to_nltk
from semantic_types import convert_coq_to_nltk_type
from semantic_types import get_coq_types
from semantic_types import get_dynamic_library_from_doc
from semantic_types import merge_dynamic_libraries
from semantic_types import read_type
from semparse import filter_attributes
from theorem import get_formulas_from_doc
class combine_signatures_or_rename_predsTestCase(unittest.TestCase):
    def test_different_onepred(self):
        exprs = [lexpr(r'pred1(x)'), lexpr(r'pred2(x)')]
        sig, exprs_new = combine_signatures_or_rename_preds(exprs)
        self.assertEqual(exprs, exprs_new)

    def test_equal_onepred(self):
        exprs = [lexpr(r'pred1(x)'), lexpr(r'pred1(x)')]
        sig, exprs_new = combine_signatures_or_rename_preds(exprs)
        self.assertEqual(2, len(sig), msg='Unexpected signature: {0}'.format(sig))
        self.assertEqual(exprs, exprs_new)

    def test_equalvar_onepred(self):
        exprs = [lexpr(r'pred1(x)'), lexpr(r'pred1(y)')]
        sig, exprs_new = combine_signatures_or_rename_preds(exprs)
        self.assertEqual(3, len(sig), msg='Unexpected signature: {0}'.format(sig))
        self.assertEqual(exprs, exprs_new)

    def test_different_one_two_pred(self):
        exprs = [lexpr(r'pred1(x)'), lexpr(r'pred1(x,y)')]
        expected_exprs = [lexpr(r'pred1_e2(x)'), lexpr(r'pred1_e3(x,y)')]
        sig, new_exprs = combine_signatures_or_rename_preds(exprs)
        self.assertEqual(expected_exprs, new_exprs)

    def test_different_one_pred_vartype(self):
        exprs = [lexpr(r'pred1(x)'), lexpr(r'pred1(e)')]
        expected_exprs = [lexpr(r'pred1_e2(x)'), lexpr(r'pred1_v2(e)')]
        sig, exprs_new = combine_signatures_or_rename_preds(exprs)
        self.assertEqual(expected_exprs, exprs_new)

    def test_different_in_same_expression(self):
        exprs = [lexpr(r'pred1(x) & pred1(e)'), lexpr(r'pred1(e)')]
        sigs, new_exprs = combine_signatures_or_rename_preds(exprs)
        expected_exprs = [
            lexpr(r'pred1_e2(x) & pred1_v2(e)'), lexpr(r'pred1_v2(e)')]
        self.assertEqual(expected_exprs, new_exprs)

    def test_different_in_same_expression_embed(self):
        exprs = [lexpr(r'exists x. (pred1(x) & exists e. pred1(e))')]
        sigs, new_exprs = combine_signatures_or_rename_preds(exprs)
        expected_exprs = [
            lexpr(r'exists x. (pred1_e2(x) & exists e. pred1_v2(e))')]
        self.assertEqual(expected_exprs, new_exprs)

    def test_arbitrary_different_same_pred(self):
        doc_str = r"""
        <document>
          <sentences>
            <sentence id="s1">
              <tokens>
                <token base="pred_same" pos="pos1" surf="surf1" id="t1_1"/>
                <token base="pred_same" pos="pos2" surf="surf2" id="t1_2"/>
              </tokens>
              <ccg root="sp1-3">
                <span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
                <span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
                <span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
              </ccg>
              <semantics root="sp1-3">
                <span sem="exists x e. _pred_same(x) -> _pred_same(e)" child="sp1-1 sp1-2"/>
                <span sem="_pred_same" type="pred_same : Entity -> Prop" id="sp1-1"/>
                <span sem="_pred_same" type="pred_same : Event -> Prop" id="sp1-2"/>
              </semantics>
            </sentence>
          </sentences>
        </document>
        """
        doc = etree.fromstring(doc_str)
        sem_nodes = doc.xpath('//semantics')
        dynamic_library_str, formulas = get_dynamic_library_from_doc(doc, sem_nodes)
        coq_types = dynamic_library_str.split('\n')
        expected_coq_types = ["Parameter _pred_same_e2 : Entity -> Prop.",
                              "Parameter _pred_same_v2 : Event -> Prop."]
        self.assertEqual(expected_coq_types, coq_types,
                         msg="\n{0}\nvs\n{1}".format(expected_coq_types, coq_types))
# TODO: also test for types that are Propositions 't'.
def nltk_sig_to_coq_lib(nltk_sig):
    # Convert into coq style library entries.
    coq_lib = []
    for predicate, pred_type in nltk_sig.items():
        library_entry = build_library_entry(predicate, pred_type)
        coq_lib.append(library_entry)
    return sorted(set(coq_lib))
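# The helper above delegates to build_library_entry. As a rough illustration of
# the entry format asserted throughout these tests, here is a simplified
# stand-in formatter (an assumption for illustration, not the real function):

```python
def sketch_library_entry(predicate, coq_type):
    # Render one Coq "Parameter" declaration line, e.g.
    # Parameter _base1 : Entity -> Prop.
    return 'Parameter {0} : {1}.'.format(predicate, coq_type)

entry = sketch_library_entry('_base1', 'Entity -> Prop')
print(entry)  # Parameter _base1 : Entity -> Prop.
```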
def semparse_sentence(sentence, semantic_index):
    sem_node = etree.Element('semantics')
    sem_tree = assign_semantics_to_ccg(sentence, semantic_index)
    filter_attributes(sem_tree)
    sem_node.extend(sem_tree.xpath('.//descendant-or-self::span'))
    sem_node.set('status', 'success')
    sem_node.set('root', sentence.xpath('./ccg[1]/@root')[0])
    sentence.append(sem_node)
    return sentence
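# semparse_sentence grafts a new <semantics> node onto the sentence element.
# The same element/attribute manipulation can be exercised in isolation; the
# sketch below uses the stdlib ElementTree (whose API lxml.etree supersets)
# so it runs without third-party dependencies:

```python
import xml.etree.ElementTree as ET

sentence = ET.fromstring('<sentence><ccg root="sp1-1"/></sentence>')
sem_node = ET.Element('semantics')
sem_node.set('status', 'success')
# Copy the root span id from the first <ccg> child, as semparse_sentence does.
sem_node.set('root', sentence.find('./ccg').get('root'))
sentence.append(sem_node)
```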
class build_arbitrary_dynamic_libraryTestCase(unittest.TestCase):
    def test_type_arbitrary_raised(self):
        semantic_index = SemanticIndex(None)
        semantic_rules = [SemanticRule(r'N1', r'\P.P', {'coq_type': 'Entity -> Prop'}),
                          SemanticRule(r'N2', r'\P.P', {'coq_type': 'Entity'}),
                          SemanticRule(r'NP', r'\P Q.(_new(P, Q))', {'rule': 'lex'})]
        semantic_index.rules = semantic_rules
        sentence_str = r"""
        <sentence id="s1">
          <tokens>
            <token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
            <token base="base2" pos="pos1" surf="surf2" id="t1_2"/>
          </tokens>
          <ccg root="sp1-3" id="test1">
            <span terminal="t1_1" category="N1" end="2" begin="1" id="sp1-1"/>
            <span terminal="t1_2" category="N2" end="3" begin="2" id="sp1-2"/>
            <span child="sp1-1 sp1-2" rule="lex" category="NP" end="2" begin="1" id="sp1-3"/>
          </ccg>
        </sentence>
        """
        sentence = etree.fromstring(sentence_str)
        sentence_sem = semparse_sentence(sentence, semantic_index)
        lib, formulas = get_dynamic_library_from_doc(
            sentence_sem, sentence_sem.xpath('./semantics'))
        coq_types = get_coq_types(sentence_sem)
        expected_coq_types = [
            "Parameter _base1 : Entity -> Prop.",
            "Parameter _base2 : Entity."]
        self.assertEqual(expected_coq_types, lib.split('\n'))

    def test_lexical_unary_one_type(self):
        semantic_index = SemanticIndex(None)
        semantic_rules = [SemanticRule(r'N', r'\P.P', {'coq_type': 'Entity -> Prop'}),
                          SemanticRule(r'NP', r'\P.(P -> P)', {'rule': 'lex'})]
        semantic_index.rules = semantic_rules
        sentence_str = r"""
        <sentence id="s1">
          <tokens>
            <token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
          </tokens>
          <ccg root="sp1-2">
            <span terminal="t1_1" category="N" end="2" begin="1" id="sp1-1"/>
            <span child="sp1-1" rule="lex" category="NP" end="2" begin="1" id="sp1-2"/>
          </ccg>
        </sentence>
        """
        sentence = etree.fromstring(sentence_str)
        ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
        coq_types = get_coq_types(ccg_tree)
        expected_coq_types = ["Parameter _base1 : Entity -> Prop."]
        self.assertEqual(expected_coq_types, coq_types)

    def test_lexical_binary_two_types(self):
        semantic_index = SemanticIndex(None)
        semantic_rules = [SemanticRule(r'cat1', r'\P.P', {'coq_type': 'Entity -> Prop'}),
                          SemanticRule(r'cat2', r'\P.P', {'coq_type': 'Entity -> Prop -> Prop'}),
                          SemanticRule(r'NP', r'\P Q.(Q -> P)', {'rule': 'lex'})]
        semantic_index.rules = semantic_rules
        sentence_str = r"""
        <sentence id="s1">
          <tokens>
            <token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
            <token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
          </tokens>
          <ccg root="sp1-3">
            <span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
            <span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
            <span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
          </ccg>
        </sentence>
        """
        sentence = etree.fromstring(sentence_str)
        ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
        coq_types = get_coq_types(ccg_tree)
        expected_coq_types = ["Parameter _base1 : Entity -> Prop.",
                              "Parameter _base2 : Entity -> Prop -> Prop."]
        self.assertEqual(expected_coq_types, coq_types)

    def test_lexical_binary_one_type(self):
        semantic_index = SemanticIndex(None)
        semantic_rules = [SemanticRule(r'cat1', r'\P.P'),
                          SemanticRule(r'cat2', r'\Q x.Q(x)', {'coq_type': 'Entity -> Prop'}),
                          SemanticRule(r'NP', r'\P Q x.(P -> Q(x))', {'rule': 'lex'})]
        semantic_index.rules = semantic_rules
        sentence_str = r"""
        <sentence id="s1">
          <tokens>
            <token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
            <token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
          </tokens>
          <ccg root="sp1-3">
            <span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
            <span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
            <span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
          </ccg>
        </sentence>
        """
        sentence = etree.fromstring(sentence_str)
        ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
        coq_types = get_coq_types(ccg_tree)
        expected_coq_types = ["Parameter _base2 : Entity -> Prop."]
        self.assertEqual(expected_coq_types, coq_types)
class ArbiAutoTypesTestCase(unittest.TestCase):
def test_lexical_binary_one_type(self):
semantic_index = SemanticIndex(None)
semantic_rules = [SemanticRule(r'cat1', r'\P x.P(x)'),
SemanticRule(r'cat2', r'\P x.P(x)', {'coq_type' : 'Entity -> Prop'}),
SemanticRule(r'NP', r'\P Q x.(Q(x) -> P(x))', {'rule' : 'lex'})]
semantic_index.rules = semantic_rules
sentence_str = r"""
<sentence id="s1">
<tokens>
<token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
<token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
</tokens>
<ccg root="sp1-3">
<span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
<span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
<span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
</ccg>
</sentence>
"""
sentence = etree.fromstring(sentence_str)
ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
coq_lib = get_coq_types(ccg_tree)
expected_coq_lib = ["Parameter _base2 : Entity -> Prop."]
self.assertEqual(expected_coq_lib, coq_lib)
expression = [ccg_tree.get('sem')]
coq_sig = convert_coq_signatures_to_nltk(coq_lib)
nltk_lib, _ = build_dynamic_library(expression, coq_sig)
lib = merge_dynamic_libraries(coq_sig, nltk_lib, sentence)
expected_lib = ["Parameter _base2 : Entity -> Prop.",
"Parameter _base1 : Entity -> Prop."]
self.assertCountEqual(expected_lib, lib)
def test_lexical_binary_no_type(self):
semantic_index = SemanticIndex(None)
semantic_rules = [SemanticRule(r'cat1', r'\P x.P(x)'),
SemanticRule(r'cat2', r'\P x.P(x)'),
SemanticRule(r'NP', r'\P Q x.(Q(x) -> P(x))', {'rule' : 'lex'})]
semantic_index.rules = semantic_rules
sentence_str = r"""
<sentence id="s1">
<tokens>
<token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
<token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
</tokens>
<ccg root="sp1-3">
<span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
<span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
<span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
</ccg>
</sentence>
"""
sentence = etree.fromstring(sentence_str)
ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
coq_lib = get_coq_types(ccg_tree)
expected_coq_lib = []
self.assertEqual(expected_coq_lib, coq_lib)
expression = [ccg_tree.get('sem')]
coq_sig = convert_coq_signatures_to_nltk(coq_lib)
nltk_lib, _ = build_dynamic_library(expression, coq_sig)
lib = merge_dynamic_libraries(coq_lib, nltk_lib, sentence)
expected_lib = ["Parameter _base2 : Entity -> Prop.",
"Parameter _base1 : Entity -> Prop."]
self.assertCountEqual(expected_lib, lib)
def test_lexical_binary_one_nltk_complex_type(self):
semantic_index = SemanticIndex(None)
semantic_rules = [SemanticRule(r'cat1', r'\P x.P(x)'),
SemanticRule(r'cat2', r'\Q x y.Q(x, y)'),
SemanticRule(r'NP', r'\P Q x y.(P(x) -> Q(x, y))', {'rule' : 'lex'})]
semantic_index.rules = semantic_rules
sentence_str = r"""
<sentence id="s1">
<tokens>
<token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
<token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
</tokens>
<ccg root="sp1-3">
<span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
<span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
<span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
</ccg>
</sentence>
"""
sentence = etree.fromstring(sentence_str)
ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
coq_lib = get_coq_types(ccg_tree)
expected_coq_lib = []
self.assertEqual(expected_coq_lib, coq_lib)
expression = [ccg_tree.get('sem')]
coq_sig = convert_coq_signatures_to_nltk(coq_lib)
nltk_lib, _ = build_dynamic_library(expression, coq_sig)
lib = merge_dynamic_libraries(coq_lib, nltk_lib, sentence)
expected_lib = ["Parameter _base2 : Entity -> (Entity -> Prop).",
"Parameter _base1 : Entity -> Prop."]
self.assertCountEqual(expected_lib, lib)
def test_lexical_binary_one_coq_complex_type(self):
semantic_index = SemanticIndex(None)
semantic_rules = [SemanticRule(r'cat1', r'\P x.P(x)'),
SemanticRule(r'cat2', r'\Q R S.Q(R, S)', {'coq_type' : 'Prop -> Entity -> Prop'}),
SemanticRule(r'NP', r'\P Q x R S.(P(x) -> Q(R, S))', {'rule' : 'lex'})]
semantic_index.rules = semantic_rules
sentence_str = r"""
<sentence id="s1">
<tokens>
<token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
<token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
</tokens>
<ccg root="sp1-3">
<span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
<span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
<span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
</ccg>
</sentence>
"""
sentence = etree.fromstring(sentence_str)
ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
coq_lib = get_coq_types(ccg_tree)
expected_coq_lib = ['Parameter _base2 : Prop -> Entity -> Prop.']
self.assertEqual(expected_coq_lib, coq_lib)
expression = [ccg_tree.get('sem')]
coq_sig = convert_coq_signatures_to_nltk(coq_lib)
nltk_lib, _ = build_dynamic_library(expression, coq_sig)
lib = merge_dynamic_libraries(coq_sig, nltk_lib, sentence)
expected_lib = ["Parameter _base2 : Prop -> (Entity -> Prop).",
"Parameter _base1 : Entity -> Prop."]
self.assertCountEqual(expected_lib, lib)
def test_lexical_binary_two_coq_complex_type(self):
semantic_index = SemanticIndex(None)
semantic_rules = [SemanticRule(r'cat1', r'\P x R.P(x, R)', {'coq_type' : 'Entity -> Prop -> Prop'}),
SemanticRule(r'cat2', r'\Q S T.Q(S, T)', {'coq_type' : 'Prop -> Entity -> Prop'}),
SemanticRule(r'NP', r'\P Q x R S T.(Q(x, R) -> P(S, T))', {'rule' : 'lex'})]
semantic_index.rules = semantic_rules
sentence_str = r"""
<sentence id="s1">
<tokens>
<token base="base1" pos="pos1" surf="surf1" id="t1_1"/>
<token base="base2" pos="pos2" surf="surf2" id="t1_2"/>
</tokens>
<ccg root="sp1-3">
<span terminal="t1_1" category="cat1" end="2" begin="1" id="sp1-1"/>
<span terminal="t1_2" category="cat2" end="3" begin="2" id="sp1-2"/>
<span child="sp1-1 sp1-2" rule="lex" category="NP" end="3" begin="1" id="sp1-3"/>
</ccg>
</sentence>
"""
sentence = etree.fromstring(sentence_str)
ccg_tree = assign_semantics_to_ccg(sentence, semantic_index)
coq_lib = get_coq_types(ccg_tree)
expected_coq_lib = ['Parameter _base1 : Entity -> Prop -> Prop.',
'Parameter _base2 : Prop -> Entity -> Prop.']
self.assertEqual(expected_coq_lib, coq_lib)
expression = [ccg_tree.get('sem')]
coq_sig = convert_coq_signatures_to_nltk(coq_lib)
nltk_lib, _ = build_dynamic_library(expression, coq_sig)
lib = merge_dynamic_libraries(coq_lib, nltk_lib, sentence)
expected_lib = ["Parameter _base2 : Prop -> (Entity -> Prop).",
"Parameter _base1 : Entity -> (Prop -> Prop)."]
self.assertCountEqual(expected_lib, lib)
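# The complex-type tests above exercise one convention for get_coq_types: every
# terminal whose semantic rule carries a 'coq_type' attribute contributes a
# declaration of the form 'Parameter _<token base> : <coq_type>.'. A minimal
# standalone sketch of that convention (helper name and inputs are hypothetical,
# not the library implementation):

```python
def sketch_coq_declarations(tokens, rule_coq_types):
    """Illustrative sketch only: mirrors the declaration format asserted above.

    tokens: list of (token base, CCG category) pairs.
    rule_coq_types: mapping from CCG category to a Coq type string.
    """
    declarations = []
    for base, category in tokens:
        coq_type = rule_coq_types.get(category)
        if coq_type is not None:
            declarations.append('Parameter _{} : {}.'.format(base, coq_type))
    return declarations
```

# For the one-complex-type fixture above,
# sketch_coq_declarations([('base1', 'cat1'), ('base2', 'cat2')],
#                         {'cat2': 'Prop -> Entity -> Prop'})
# yields ['Parameter _base2 : Prop -> Entity -> Prop.'], the same list that
# test_lexical_binary_one_coq_complex_type asserts against get_coq_types.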
class Coq2NLTKSignaturesTestCase(unittest.TestCase):
def test_entity(self):
coq_sig = ['Parameter base1 : Entity.',
'Parameter base2 : Prop.']
nltk_sig = convert_coq_signatures_to_nltk(coq_sig)
expected_nltk_sig = {'base1' : read_type('e'),
'base2' : read_type('t')}
self.assertEqual(expected_nltk_sig, nltk_sig)
class Coq2NLTKTypesTestCase(unittest.TestCase):
def test_entity(self):
coq_type = 'Parameter base : Entity.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('e')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_property(self):
coq_type = 'Parameter base : Prop.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('t')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_event(self):
coq_type = 'Parameter base : Event.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('v')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_wrong_type(self):
coq_type = 'Parameter base : YYY.'
self.assertRaises(ValueError, convert_coq_to_nltk_type, coq_type)
def test_entity_property(self):
coq_type = 'Parameter base : Entity -> Prop.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('<e,t>')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_entity_entity_property(self):
coq_type = 'Parameter base : Entity -> Entity -> Prop.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('<e,<e,t>>')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_entity_property_property(self):
coq_type = 'Parameter base : Entity -> Prop -> Prop.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('<e,<t,t>>')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_entity_property_and_property(self):
coq_type = 'Parameter base : (Entity -> Prop) -> Prop.'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('<<e,t>,t>')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_entity_property_and_property_entity(self):
coq_type = 'Parameter base : (Entity -> Prop) -> (Prop -> Entity).'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('<<e,t>,<t,e>>')}
self.assertEqual(expected_nltk_type, nltk_type)
def test_event_and_entity_property(self):
coq_type = 'Parameter base : Event -> (Entity -> Prop).'
nltk_type = convert_coq_to_nltk_type(coq_type)
expected_nltk_type = {'base' : read_type('<v,<e,t>>')}
self.assertEqual(expected_nltk_type, nltk_type)
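# Taken together, the assertions in this test case pin down the Coq -> NLTK
# type translation: the atomic types Entity, Prop, and Event map to 'e', 't',
# and 'v', and each arrow becomes a binary <argument,result> pair, with '->'
# right-associative. A minimal standalone parser sketch of that mapping
# (hypothetical helper, not the convert_coq_to_nltk_type implementation):

```python
import re

_ATOMS = {'Entity': 'e', 'Prop': 't', 'Event': 'v'}


def coq_type_to_nltk_str(coq_type):
    # 'Parameter base : Event -> (Entity -> Prop).' -> '<v,<e,t>>'
    type_part = coq_type.split(':', 1)[1].strip().rstrip('.')
    tokens = re.findall(r'->|[()]|\w+', type_part)

    def parse(toks):
        left = parse_atom(toks)
        if toks and toks[0] == '->':      # arrows are right-associative
            toks.pop(0)
            return '<{},{}>'.format(left, parse(toks))
        return left

    def parse_atom(toks):
        tok = toks.pop(0)
        if tok == '(':
            inner = parse(toks)
            toks.pop(0)                   # consume ')'
            return inner
        if tok not in _ATOMS:
            raise ValueError('unknown Coq type: {}'.format(tok))
        return _ATOMS[tok]

    return parse(tokens)
```

# This reproduces the expectations above, e.g. '<v,<e,t>>' for the
# 'Event -> (Entity -> Prop)' case, and raises ValueError for the unknown
# 'YYY' type, as in test_wrong_type.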
class build_dynamic_libraryTestCase(unittest.TestCase):
def test_entity(self):
exprs = [lexpr('Python')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter Python : Entity.']
self.assertEqual(expected_dynamic_library, dynamic_library)
def test_predicate1_argument1(self):
exprs = [lexpr('language(Python)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter Python : Entity.',
'Parameter language : Entity -> Prop.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_predicate1_argument2(self):
exprs = [lexpr('language(Python, Scala)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter Python : Entity.',
'Parameter Scala : Entity.',
'Parameter language : Entity -> (Entity -> Prop).']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_predicate2_argument1_and_2(self):
exprs = [lexpr('AND(language(Python, Scala), nice(Python))')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter nice : Entity -> Prop.',
'Parameter Python : Entity.',
'Parameter Scala : Entity.',
'Parameter language : Entity -> (Entity -> Prop).',
'Parameter AND : Prop -> (Prop -> Prop).']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_predicate2_argument1_and_2Exprs2(self):
exprs = [lexpr('language(Python, Scala)'), lexpr('nice(Python)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter nice : Entity -> Prop.',
'Parameter Python : Entity.',
'Parameter Scala : Entity.',
'Parameter language : Entity -> (Entity -> Prop).']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_pred1_prop_prop(self):
exprs = [lexpr('nice(language(Python, Scala))')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter nice : Prop -> Prop.',
'Parameter Python : Entity.',
'Parameter Scala : Entity.',
'Parameter language : Entity -> (Entity -> Prop).']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_pred2_prop_prop(self):
exprs = [lexpr('nice(language(Python, Scala))'),
lexpr('fun(language(Python, Scala))')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter nice : Prop -> Prop.',
'Parameter fun : Prop -> Prop.',
'Parameter Python : Entity.',
'Parameter Scala : Entity.',
'Parameter language : Entity -> (Entity -> Prop).']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_exists(self):
exprs = [lexpr('exists x.P(x)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> Prop.',
'Parameter x : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_exist(self):
exprs = [lexpr('exist x.P(x)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> Prop.',
'Parameter x : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_Lambda1exists1(self):
exprs = [lexpr(r'\P.exist x.P(x)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> Prop.',
'Parameter x : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_Lambda2exists1(self):
exprs = [lexpr(r'\P y.exist x.P(x, y)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> (Entity -> Prop).',
'Parameter x : Entity.',
'Parameter y : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_Lambda3exists1(self):
exprs = [lexpr(r'\P y.\T.exist x.T(P(x, y))')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> (Entity -> Prop).',
'Parameter T : Prop -> Prop.',
'Parameter x : Entity.',
'Parameter y : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_Lambda3exists2(self):
exprs = [lexpr(r'\P y.\T.exist x.exists z.T(P(x, y), z)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> (Entity -> Prop).',
'Parameter T : Prop -> (Entity -> Prop).',
'Parameter x : Entity.',
'Parameter y : Entity.',
'Parameter z : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_Lambda3exists2All1(self):
exprs = [lexpr(r'\P y.\T.all w.exist x.exists z.T(P(x, y), z, w)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> (Entity -> Prop).',
'Parameter T : Prop -> (Entity -> (Entity -> Prop)).',
'Parameter w : Entity.',
'Parameter x : Entity.',
'Parameter y : Entity.',
'Parameter z : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
def test_Lambda3exists2All1Mixed(self):
exprs = [lexpr(r'\P y.\T.all w.exists z.T(exist x.P(x, y), z, w)')]
dynamic_library, _ = combine_signatures_or_rename_preds(exprs)
dynamic_library = nltk_sig_to_coq_lib(dynamic_library)
expected_dynamic_library = \
['Parameter P : Entity -> (Entity -> Prop).',
'Parameter T : Prop -> (Entity -> (Entity -> Prop)).',
'Parameter w : Entity.',
'Parameter x : Entity.',
'Parameter y : Entity.',
'Parameter z : Entity.']
for item in dynamic_library:
self.assertIn(item, expected_dynamic_library)
self.assertEqual(len(expected_dynamic_library), len(dynamic_library))
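# The multi-expression tests above show that signatures inferred from separate
# expressions are folded into one signature, with predicates appearing in
# several expressions (such as 'language' and 'Python') declared only once.
# A simplified standalone sketch of that folding (hypothetical helper; judging
# by its name, the real combine_signatures_or_rename_preds renames predicates
# on a type conflict instead of raising):

```python
def combine_signatures(signatures):
    combined = {}
    for signature in signatures:
        for name, type_str in signature.items():
            if name in combined and combined[name] != type_str:
                raise ValueError('conflicting types for {}'.format(name))
            combined[name] = type_str
    return combined
```

# For example, the two expressions of test_predicate2_argument1_and_2Exprs2
# contribute overlapping signatures that merge into four declarations, not six.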
if __name__ == '__main__':
suite1 = unittest.TestLoader().loadTestsFromTestCase(combine_signatures_or_rename_predsTestCase)
suite2 = unittest.TestLoader().loadTestsFromTestCase(build_arbitrary_dynamic_libraryTestCase)
suite3 = unittest.TestLoader().loadTestsFromTestCase(build_dynamic_libraryTestCase)
suite4 = unittest.TestLoader().loadTestsFromTestCase(Coq2NLTKTypesTestCase)
suite5 = unittest.TestLoader().loadTestsFromTestCase(Coq2NLTKSignaturesTestCase)
suite6 = unittest.TestLoader().loadTestsFromTestCase(ArbiAutoTypesTestCase)
suites = unittest.TestSuite([suite1, suite2, suite3, suite4, suite5, suite6])
unittest.TextTestRunner(verbosity=2).run(suites)
# --- puppet/__init__.py (from yutiansut/puppet, MIT license) ---
from .client import __version__
from .client import Account, login
from .runner import run
from .puppet_util import check_config
# --- lib_cidr_trie/__init__.py (from jfuruness/lib_cidr_trie, BSD-3-Clause license) ---
from .cidr_trie import CIDRTrie
from .cidr_tries import IPv4CIDRTrie, IPv6CIDRTrie
from .cidr_node import CIDRNode
# --- dispensaries/views.py (from cyobero/ganjapp, MIT license) ---
from django.shortcuts import render
# Create your views here.
def dispensaries(request):
return render(request, 'dispensaries.html')
# --- tests/unit/run/run_test.py (from abeja-inc/abeja-platform-cli, Apache-2.0 license) ---
import json
import os
import tempfile
from unittest import TestCase
import pytest
import requests_mock
from click.testing import CliRunner
from mock import patch
from ruamel.yaml import YAML
import abejacli.training
from abejacli.config import ORGANIZATION_ENDPOINT
from abejacli.configuration.config import Config
from abejacli.registry.commands import (
create_repository,
delete_repository,
describe_repositories,
describe_repository,
describe_repository_tags
)
from abejacli.run import (
delete_configuration,
describe_datalake_buckets,
describe_datalake_channels,
initialize_configuragtion,
list_configurations,
model,
show_configuration,
switch_configuration
)
from abejacli.training import training_default_configuration
from abejacli.training.commands import (
archive_job,
archive_training_model,
archive_version,
create_job_definition,
create_training_job,
create_training_model,
create_training_version,
describe_jobs,
describe_training_models,
describe_training_versions,
download_training_model,
initialize_training,
unarchive_job,
unarchive_training_model,
unarchive_version,
update_training_model
)
from tests import get_tmp_training_file_name
from tests.unit import ConfigPatcher
TEST_CONFIG_FILE_ROOT = '/tmp/.abeja'
TEST_CONFIG_FILE_PATH = os.path.join(TEST_CONFIG_FILE_ROOT, 'config')
TEST_CONFIG_USER_ID = '12345'
TEST_CONFIG_TOKEN = 'ntoken12345'
TEST_CONFIG_ORG_NAME = 'test-inc'
TEST_CONFIG = {
'abeja-platform-user': Config.prefixed_user(TEST_CONFIG_USER_ID),
'personal-access-token': TEST_CONFIG_TOKEN,
'organization-name': TEST_CONFIG_ORG_NAME
}
TEST_CONFIG_USER_ID_2 = '2039587479106'
TEST_CONFIG_TOKEN_2 = '34676abf4875998fbe7fd4637'
TEST_CONFIG_ORG_NAME_2 = 'banana-fish'
TEST_CONFIG_2 = {
'abeja-platform-user': Config.prefixed_user(TEST_CONFIG_USER_ID_2),
'personal-access-token': TEST_CONFIG_TOKEN_2,
'organization-name': TEST_CONFIG_ORG_NAME_2
}
TEST_ORGANIZATION_DOMAIN = 'http://apidomain/organizations/test'
yaml = YAML()
CHANNEL_ID = '1111111111111'
CHANNEL_RESPONSE = {
"channels": [
{
"channel_id": "1335597259091",
"created_at": "2018-01-15T07:39:29Z",
"storage_type": "datalake",
"updated_at": "2018-01-15T07:39:29Z",
"description": "Movie Spliter Output"
},
{
"channel_id": "1325662709070",
"created_at": "2018-01-04T02:09:58Z",
"description": "image-detection-result-channel",
"updated_at": "2018-01-04T02:09:59Z",
"storage_type": "rdb"
},
{
"channel_id": "1325660026189",
"created_at": "2018-01-04T02:05:36Z",
"description": "test-image-in",
"updated_at": "2018-01-04T02:05:36Z",
"storage_type": "datalake"
},
{
"created_at": "2017-12-21T16:00:24Z",
"channel_id": "1313786614091",
"description": "kawasaki-test",
"storage_type": "file",
"updated_at": "2017-12-21T16:00:24Z"
},
{
"storage_type": "rdb",
"updated_at": "2017-12-21T05:36:17Z",
"description": "test",
"created_at": "2017-12-21T05:36:17Z",
"channel_id": "1313403161930"
}
],
"updated_at": "2017-05-10T02:36:00Z",
"created_at": "2017-04-27T08:26:11Z",
"offset": 0,
"limit": 300,
"organization_id": "1122334455667",
"has_next": False,
"organization_name": "abeja-inc"
}
CHANNEL_FILES_RESPONSE = {
'files': [
{
"url_expires_on": "2017-11-21T02:18:16+00:00",
"uploaded_at": "2017-11-16T07:10:56+00:00",
"metadata": {
"x-abeja-meta-filename": "file1.txt",
"x-abeja-meta-label": "1"
},
"file_id": "20171116T071056-b2168632-7aae-47ad-8339-9e6463607e6e",
"download_uri": "https://abeja-datalake-dev.s3.amazonaws.com/320e-1282495447337/20171116/071056-b2168632-7aae-47ad-8339-9e6463607e6e?AWSAccessKeyId=ASIAIS6VOBREHPTWAQDA&Signature=Riaqm%2B4sJz9fc2J0GIsvIIAROG8%3D&x-amz-security-token=FQoDYXdzEN7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDGO9U9SxYJDxdzmbMyKCAnNauIDasGDp9mNIHaSbhG8PIXZWA193DiNcFqRvd4BlfA9VB2ZjohVJNnMLssOQBLkrK5Tgc7ixxgTuon2pkeew9IEiyxHjDm8T3jjLbUCUWqUDuy0JKdYjTqYGQ4SJBUSEGsFOfyUIDW1VqXPAdmgHC3p%2BMOOBI07uW6%2BThG50EjCttzrCYX9ka73R3Tj6Iqe4bnj3ogl909o9%2Fen1yRJ6uEGGkbfXCMJsAGrDrRY5bJxcjS4uCQLidqxQM1nbumNc%2F2WipjF7AK1wQQl50eEO%2FG9%2F%2Fc81Bjv767GazeCraSnukGggMTcqEOeUQEAlxgTo7lh6ykbl0JU%2BMs0Hks08DiiHhc3QBQ%3D%3D&Expires=1511230696", # noqa
"content_type": "text/plain"
},
{
"url_expires_on": "2017-11-21T02:18:16+00:00",
"uploaded_at": "2017-11-16T07:11:00+00:00",
"metadata": {
"x-abeja-meta-filename": "file2.txt",
"x-abeja-meta-label": "2"
},
"file_id": "20171116T071100-6e82c7ef-ad2a-40ab-888b-3c2c5567de0f",
"download_uri": "https://abeja-datalake-dev.s3.amazonaws.com/320e-1282495447337/20171116/071100-6e82c7ef-ad2a-40ab-888b-3c2c5567de0f?AWSAccessKeyId=ASIAIS6VOBREHPTWAQDA&Signature=T1%2BcHb3A0D892Dfw5HYmY%2BIhROA%3D&x-amz-security-token=FQoDYXdzEN7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDGO9U9SxYJDxdzmbMyKCAnNauIDasGDp9mNIHaSbhG8PIXZWA193DiNcFqRvd4BlfA9VB2ZjohVJNnMLssOQBLkrK5Tgc7ixxgTuon2pkeew9IEiyxHjDm8T3jjLbUCUWqUDuy0JKdYjTqYGQ4SJBUSEGsFOfyUIDW1VqXPAdmgHC3p%2BMOOBI07uW6%2BThG50EjCttzrCYX9ka73R3Tj6Iqe4bnj3ogl909o9%2Fen1yRJ6uEGGkbfXCMJsAGrDrRY5bJxcjS4uCQLidqxQM1nbumNc%2F2WipjF7AK1wQQl50eEO%2FG9%2F%2Fc81Bjv767GazeCraSnukGggMTcqEOeUQEAlxgTo7lh6ykbl0JU%2BMs0Hks08DiiHhc3QBQ%3D%3D&Expires=1511230696", # noqa
"content_type": "text/plain"
},
{
"url_expires_on": "2017-11-21T02:18:16+00:00",
"uploaded_at": "2017-11-16T07:11:01+00:00",
"metadata": {
"x-abeja-meta-filename": "file3.txt",
"x-abeja-meta-label": "3"
},
"file_id": "20171116T071101-959db0d1-e853-4dd0-9aa0-d81692d2d88b",
"download_uri": "https://abeja-datalake-dev.s3.amazonaws.com/320e-1282495447337/20171116/071101-959db0d1-e853-4dd0-9aa0-d81692d2d88b?AWSAccessKeyId=ASIAIS6VOBREHPTWAQDA&Signature=dE07nbdjtR08B0CzVLiwgap%2BK1E%3D&x-amz-security-token=FQoDYXdzEN7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDGO9U9SxYJDxdzmbMyKCAnNauIDasGDp9mNIHaSbhG8PIXZWA193DiNcFqRvd4BlfA9VB2ZjohVJNnMLssOQBLkrK5Tgc7ixxgTuon2pkeew9IEiyxHjDm8T3jjLbUCUWqUDuy0JKdYjTqYGQ4SJBUSEGsFOfyUIDW1VqXPAdmgHC3p%2BMOOBI07uW6%2BThG50EjCttzrCYX9ka73R3Tj6Iqe4bnj3ogl909o9%2Fen1yRJ6uEGGkbfXCMJsAGrDrRY5bJxcjS4uCQLidqxQM1nbumNc%2F2WipjF7AK1wQQl50eEO%2FG9%2F%2Fc81Bjv767GazeCraSnukGggMTcqEOeUQEAlxgTo7lh6ykbl0JU%2BMs0Hks08DiiHhc3QBQ%3D%3D&Expires=1511230696", # noqa
"content_type": "text/plain"
}
]
}
BUCKET_RESPONSE = {
"buckets": [
{
"bucket_id": "1335597259091",
"created_at": "2018-01-15T07:39:29Z",
"updated_at": "2018-01-15T07:39:29Z",
"description": "Movie Spliter Output"
},
{
"bucket_id": "1325660026189",
"created_at": "2018-01-04T02:05:36Z",
"updated_at": "2018-01-04T02:05:36Z",
"description": "test-image-in"
},
],
"updated_at": "2017-05-10T02:36:00Z",
"created_at": "2017-04-27T08:26:11Z",
"offset": 0,
"limit": 300,
"organization_id": "1122334455667",
"has_next": False,
"organization_name": "abeja-inc"
}
DATASET_ID = '9999999999999'
DATASET_ITEM_RESPONSE = {
'dataset_id': DATASET_ID,
'source_data': [
{
'data_type': 'image/jpeg',
'data_uri': 'datalake://1234567890123/20170815T044617-f20dde80-1e3b-4496-bc06-1b63b026b872'
}
],
'attributes': {
'classification': {
'label': 'inu'
}
}
}
DEFAULT_TRAINING_JOB_DEFINITION_NAME = 'training-1'
DEFAULT_TRAINING_CONFIG = {
'name': DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'handler': 'train:handler',
'image': 'abeja-inc/minimal:0.1.0',
'params': {
'param1': 'value1',
'param2': 'value2',
},
'datasets': {
'dataset_name1': 'value1',
'dataset_name2': 'value2',
}
}
TRAINING_MODEL_ID = '4444444444444'
BASE_ENVIRON = {'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}
@pytest.fixture
def runner():
return CliRunner()
@pytest.fixture
def config_file_path():
if not os.path.exists(TEST_CONFIG_FILE_ROOT):
os.makedirs(TEST_CONFIG_FILE_ROOT, mode=0o711)
elif os.path.exists(TEST_CONFIG_FILE_PATH):
os.unlink(TEST_CONFIG_FILE_PATH)
return TEST_CONFIG_FILE_PATH
@pytest.fixture
def config_file(config_file_path):
with open(config_file_path, "w") as f:
json.dump(TEST_CONFIG, f)
return TEST_CONFIG
class TestNoConfig(object):
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_init_configure(self, runner, config_file_path):
input_config = '{}\n{}\n{}\n'.format(
TEST_CONFIG_USER_ID, TEST_CONFIG_TOKEN, TEST_CONFIG_ORG_NAME)
result = runner.invoke(
initialize_configuragtion, input=input_config)
assert not result.exception
with open(config_file_path, 'r') as f:
json_data = json.load(f)
assert TEST_CONFIG == json_data
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_show_configure(self, runner, config_file):
result = runner.invoke(show_configuration, ['--format=json'])
assert not result.exception
assert json.loads(result.output) == TEST_CONFIG
result = runner.invoke(show_configuration, ['-u'])
assert not result.exception
assert result.output == 'abeja-platform-user:user-{}\n'.format(
TEST_CONFIG_USER_ID)
result = runner.invoke(show_configuration, ['-t'])
assert not result.exception
assert result.output == 'personal-access-token:{}\n'.format(
TEST_CONFIG_TOKEN)
result = runner.invoke(show_configuration, ['-o'])
assert not result.exception
assert result.output == 'organization-name:{}\n'.format(
TEST_CONFIG_ORG_NAME)
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_show_named_configuration(self, runner, config_file):
# Add another configuration
input_config = '{}\n{}\n{}\n'.format(
TEST_CONFIG_USER_ID_2, TEST_CONFIG_TOKEN_2, TEST_CONFIG_ORG_NAME_2)
result = runner.invoke(
initialize_configuragtion, ['test'], input=input_config)
assert not result.exception
result = runner.invoke(show_configuration, ['--format=json', 'test'])
assert json.loads(result.output) == TEST_CONFIG_2
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_show_default_configuration(self, runner, config_file):
# Add another configuration
input_config = '{}\n{}\n{}\n'.format(
TEST_CONFIG_USER_ID_2, TEST_CONFIG_TOKEN_2, TEST_CONFIG_ORG_NAME_2)
result = runner.invoke(
initialize_configuragtion, ['test'], input=input_config)
assert not result.exception
# Activate
result = runner.invoke(switch_configuration, ['test'])
assert not result.exception
result = runner.invoke(show_configuration, ['--format=json'])
assert json.loads(result.output) == TEST_CONFIG_2
# Show default
result = runner.invoke(show_configuration, [
'--format=json', '--default'])
assert json.loads(result.output) == TEST_CONFIG
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_list_configurations(self, runner, config_file):
result = runner.invoke(list_configurations)
assert not result.exception
assert result.output == " NAME ORGANIZATION USER TOKEN \n" \
"* (default) test-inc 12345 *******2345\n"
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_delete_default_configuration(self, runner, config_file_path, config_file):
# Add another configuration
input_config = '{}\n{}\n{}\n'.format(
TEST_CONFIG_USER_ID_2, TEST_CONFIG_TOKEN_2, TEST_CONFIG_ORG_NAME_2)
result = runner.invoke(
initialize_configuragtion, ['test'], input=input_config)
assert not result.exception
# Delete default
result = runner.invoke(delete_configuration, ['--assume-yes'])
assert not result.exception
# Active configuration changed
result = runner.invoke(show_configuration, ['--format=json'])
assert json.loads(result.output) == TEST_CONFIG_2
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_delete_default_configuration_not_activated(self, runner, config_file_path, config_file):
# Add another configuration
input_config = '{}\n{}\n{}\n'.format(
TEST_CONFIG_USER_ID_2, TEST_CONFIG_TOKEN_2, TEST_CONFIG_ORG_NAME_2)
result = runner.invoke(
initialize_configuragtion, ['test'], input=input_config)
assert not result.exception
# Switch
result = runner.invoke(switch_configuration, ['test'])
assert not result.exception
# Delete default
result = runner.invoke(delete_configuration, ['--assume-yes'])
assert not result.exception
# Active configuration NOT changed
result = runner.invoke(show_configuration, ['--format=json'])
assert json.loads(result.output) == TEST_CONFIG_2
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_delete_default_configuration_then_empty(self, runner, config_file_path, config_file):
result = runner.invoke(delete_configuration, ['--assume-yes'])
assert not result.exception
# Configuration file should be deleted
assert not os.path.exists(config_file_path)
@patch.dict(os.environ, BASE_ENVIRON, clear=True)
@patch('abejacli.configuration.CONFIG_FILE_PATH', TEST_CONFIG_FILE_PATH)
def test_delete_named_configuration(self, runner, config_file_path, config_file):
# Add another configuration
input_config = '{}\n{}\n{}\n'.format(
TEST_CONFIG_USER_ID_2, TEST_CONFIG_TOKEN_2, TEST_CONFIG_ORG_NAME_2)
result = runner.invoke(
initialize_configuragtion, ['test'], input=input_config)
assert not result.exception
# Delete default
result = runner.invoke(delete_configuration, ['--assume-yes'])
assert not result.exception
# Active configuration changed
result = runner.invoke(show_configuration, ['--format=json'])
assert json.loads(result.output) == TEST_CONFIG_2
class RunTest(TestCase):
def setUp(self):
self.runner = CliRunner()
self.config_patcher = ConfigPatcher() \
.add(user=TEST_CONFIG_USER_ID, token=TEST_CONFIG_TOKEN, organization=TEST_CONFIG_ORG_NAME) \
.start()
def tearDown(self):
self.config_patcher.stop()
@patch('abejacli.run.ORGANIZATION_ENDPOINT', TEST_ORGANIZATION_DOMAIN)
@patch('abejacli.run.api_post')
def test_create_trigger(self, mock_api_post):
deployment_id = '1111111111111'
version_id = 'ver-1111111111111'
model_id = '4444444444444'
input_service_name = 'datalake'
input_service_id = '2222222222222'
output_service_name = 'datamart'
output_service_id = '3333333333333'
environment = 'DEBUG:x'
url = "{}/deployments/{}/triggers".format(
TEST_ORGANIZATION_DOMAIN, deployment_id)
data = {
'version_id': version_id,
'input_service_name': input_service_name,
'input_service_id': input_service_id,
'retry_count': 5,
'environment': {
'DEBUG': 'x'
},
'models': {
'alias': model_id
},
'output_service_name': output_service_name,
'output_service_id': output_service_id,
}
mock_api_post.return_value = data
options = [
'create-trigger',
'--deployment_id={}'.format(deployment_id),
'--version_id={}'.format(version_id),
'--model_id={}'.format(model_id),
'--input_service_name={}'.format(input_service_name),
'--input_service_id={}'.format(input_service_id),
'--output_service_name={}'.format(output_service_name),
'--output_service_id={}'.format(output_service_id),
'--environment={}'.format(environment)
]
result = self.runner.invoke(model, options)
assert not result.exception
call_args, call_kwargs = mock_api_post.call_args
self.assertEqual(call_args[0], url)
self.assertDictEqual(json.loads(call_args[1]), data)
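# Both trigger tests pass '--environment=DEBUG:x' and expect {'DEBUG': 'x'} in
# the request payload, i.e. the option value is split on the first colon. A
# hedged sketch of that parsing (hypothetical helper, not abejacli's actual
# implementation):

```python
def parse_environment(option_value):
    # 'DEBUG:x' -> {'DEBUG': 'x'}; split only on the first colon so the
    # value itself may contain colons.
    key, _, value = option_value.partition(':')
    return {key: value}
```

# Example: parse_environment('DEBUG:x') returns {'DEBUG': 'x'}, the
# 'environment' value asserted in the payloads above.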
@patch('abejacli.run.ORGANIZATION_ENDPOINT', TEST_ORGANIZATION_DOMAIN)
@patch('abejacli.run.api_post')
def test_create_trigger_without_output(self, mock_api_post):
deployment_id = '1111111111111'
version_id = 'ver-1111111111111'
input_service_name = 'datalake'
input_service_id = '2222222222222'
retry_count = 5
environment = 'DEBUG:x'
url = "{}/deployments/{}/triggers".format(
TEST_ORGANIZATION_DOMAIN, deployment_id)
data = {
'version_id': version_id,
'input_service_name': input_service_name,
'input_service_id': input_service_id,
'retry_count': retry_count,
'environment': {
'DEBUG': 'x'
}
}
mock_api_post.return_value = data
options = [
'create-trigger',
'--deployment_id={}'.format(deployment_id),
'--version_id={}'.format(version_id),
'--input_service_name={}'.format(input_service_name),
'--input_service_id={}'.format(input_service_id),
'--retry_count={}'.format(retry_count),
'--environment={}'.format(environment)
]
result = self.runner.invoke(model, options)
assert not result.exception
mock_api_post.assert_called_once_with(url, json.dumps(data))
def test_create_trigger_with_invalid_output_option(self):
deployment_id = '1111111111111'
version_id = 'ver-1111111111111'
input_service_name = 'datalake'
input_service_id = '2222222222222'
output_service_name = 'datalake'
retry_count = 5
environment = 'DEBUG:x'
options = [
'create-trigger',
'--deployment_id={}'.format(deployment_id),
'--version_id={}'.format(version_id),
'--input_service_name={}'.format(input_service_name),
'--input_service_id={}'.format(input_service_id),
'--output_service_name={}'.format(output_service_name),
'--retry_count={}'.format(retry_count),
'--environment={}'.format(environment)
]
result = self.runner.invoke(model, options)
assert result.exception
@patch('abejacli.run.ORGANIZATION_ENDPOINT', TEST_ORGANIZATION_DOMAIN)
@patch('abejacli.run.api_post')
def test_submit_run(self, mock_api_post):
deployment_id = '1111111111111'
version_id = 'ver-1111111111111'
model_id = '3333333333333'
input_operator = '$datalake:1'
input_target = '2222222222222/20180101T112233-22222222-4444-6666-8888-000000000000'
output_operator = '$datamart-rdb:1'
output_target = '2222222222222'
environment = 'DEBUG:x'
url = "{}/deployments/{}/runs".format(
TEST_ORGANIZATION_DOMAIN, deployment_id)
data = {
'version_id': version_id,
'input_data': {input_operator: input_target},
'retry_count': 5,
'environment': {
'DEBUG': 'x'
},
'models': {
'alias': model_id
},
'output_template': {output_operator: output_target},
}
mock_api_post.return_value = data
options = [
'submit-run',
'--deployment_id={}'.format(deployment_id),
'--version_id={}'.format(version_id),
'--model_id={}'.format(model_id),
'--input_operator={}'.format(input_operator),
'--input_target={}'.format(input_target),
'--output_operator={}'.format(output_operator),
'--output_target={}'.format(output_target),
'--environment={}'.format(environment)
]
result = self.runner.invoke(model, options)
assert not result.exception
call_args, call_kwargs = mock_api_post.call_args
self.assertEqual(call_args[0], url)
self.assertDictEqual(json.loads(call_args[1]), data)
@patch('abejacli.run.ORGANIZATION_ENDPOINT', TEST_ORGANIZATION_DOMAIN)
@patch('abejacli.run.api_post')
def test_submit_run_without_output(self, mock_api_post):
deployment_id = '1111111111111'
version_id = 'ver-1111111111111'
input_operator = '$datalake:1'
input_target = '2222222222222/20180101T112233-22222222-4444-6666-8888-000000000000'
environment = 'DEBUG:x'
url = "{}/deployments/{}/runs".format(
TEST_ORGANIZATION_DOMAIN, deployment_id)
data = {
'version_id': version_id,
'input_data': {input_operator: input_target},
'retry_count': 5,
'environment': {
'DEBUG': 'x'
}
}
mock_api_post.return_value = data
options = [
'submit-run',
'--deployment_id={}'.format(deployment_id),
'--version_id={}'.format(version_id),
'--input_operator={}'.format(input_operator),
'--input_target={}'.format(input_target),
'--environment={}'.format(environment)
]
result = self.runner.invoke(model, options)
assert not result.exception
mock_api_post.assert_called_once_with(url, json.dumps(data))
def test_submit_run_with_invalid_output_option(self):
deployment_id = '1111111111111'
version_id = 'ver-1111111111111'
input_operator = '$datalake:1'
input_target = '2222222222222/20180101T112233-22222222-4444-6666-8888-000000000000'
output_target = '2222222222222'
environment = 'DEBUG:x'
options = [
'submit-run',
'--deployment_id={}'.format(deployment_id),
'--version_id={}'.format(version_id),
'--input_operator={}'.format(input_operator),
'--input_target={}'.format(input_target),
'--output_target={}'.format(output_target),
'--environment={}'.format(environment)
]
result = self.runner.invoke(model, options)
assert result.exception
@requests_mock.Mocker()
def test_describe_datalake_channels_all(self, mock):
cmd = [
]
expected_response = {
"channels": [
{
"channel_id": "1335597259091",
"created_at": "2018-01-15T07:39:29Z",
"storage_type": "datalake",
"updated_at": "2018-01-15T07:39:29Z",
"description": "Movie Spliter Output"
},
{
"channel_id": "1325660026189",
"created_at": "2018-01-04T02:05:36Z",
"description": "test-image-in",
"updated_at": "2018-01-04T02:05:36Z",
"storage_type": "datalake"
},
],
"created_at": "2017-04-27T08:26:11Z",
"organization_id": "1122334455667",
"organization_name": "2017-04-27T08:26:11Z",
"updated_at": "2017-05-10T02:36:00Z"
}
url = "{}/channels?limit={}".format(ORGANIZATION_ENDPOINT, 1000)
mock.register_uri('GET', url, json=CHANNEL_RESPONSE)
r = self.runner.invoke(describe_datalake_channels, cmd)
self.assertDictEqual(json.loads(r.output), expected_response)
@requests_mock.Mocker()
def test_describe_datalake_buckets_all(self, mock):
cmd = [
]
expected_response = {
"buckets": [
{
"bucket_id": "1335597259091",
"created_at": "2018-01-15T07:39:29Z",
"updated_at": "2018-01-15T07:39:29Z",
"description": "Movie Spliter Output"
},
{
"bucket_id": "1325660026189",
"created_at": "2018-01-04T02:05:36Z",
"updated_at": "2018-01-04T02:05:36Z",
"description": "test-image-in"
},
],
"created_at": "2017-04-27T08:26:11Z",
"organization_id": "1122334455667",
"organization_name": "2017-04-27T08:26:11Z",
"updated_at": "2017-05-10T02:36:00Z"
}
url = "{}/buckets".format(ORGANIZATION_ENDPOINT)
mock.register_uri('GET', url, json=BUCKET_RESPONSE)
r = self.runner.invoke(describe_datalake_buckets, cmd)
actual_response = json.loads(r.output[r.output.index('{'):]) # FIXME: Use `r.output` after GA release.
self.assertDictEqual(actual_response, expected_response)
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_initialize_training(self):
cmd = [
'training-1'
]
r = self.runner.invoke(initialize_training, cmd)
with open(abejacli.training.CONFIGFILE_NAME, 'r') as actual:
    actual_file = actual.read()
self.assertEqual(actual_file, training_default_configuration)
self.assertEqual(r.output, 'training initialized\n')
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_job_definition(self, mock):
cmd = []
data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(data, configfile)
url = "{}/training/definitions".format(ORGANIZATION_ENDPOINT)
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_job_definition, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
def test_create_training_version_without_config_file(self, req_mock):
with self.runner.isolated_filesystem():
url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
req_mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_version, [])
self.assertEqual(
r.output, 'Please specify job-definition-name or set config file.\n'
'training configuration file does not exists.\n')
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_training_version_with_invalid_configuration(self, req_mock):
config = DEFAULT_TRAINING_CONFIG.copy()
del config['handler']
with self.runner.isolated_filesystem():
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(config, configfile)
with open('train.py', 'w') as f:
f.write('dummy')
url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
req_mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_version, [])
self.assertEqual(
r.output, 'invalid training configuration file.\n')
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_training_version(self, req_mock):
with self.runner.isolated_filesystem():
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(DEFAULT_TRAINING_CONFIG, configfile)
with open('train.py', 'w') as f:
f.write('dummy')
url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
req_mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_version, [])
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
self.assertTrue(req_mock.called)
req = req_mock.request_history[0]
self.assertEqual(req.method, 'POST')
self.assertRegex(
req.headers['Content-Type'], r'^multipart/form-data; boundary=')
self.assertRegex(req.body, b'Content-Disposition:')
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_describe_training_versions(self, req_mock):
with self.runner.isolated_filesystem():
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(DEFAULT_TRAINING_CONFIG, configfile)
with open('train.py', 'w') as f:
f.write('dummy')
url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
req_mock.register_uri('GET', url, text=json.dumps({
"entries": [
{
"created_at": "2019-04-03T05:23:04.844581Z",
"datasets": {},
"handler": "train:handler",
"image": "abeja-inc/all-gpu:18.10",
"job_definition_id": "1727436091178",
"job_definition_version": 3,
"modified_at": "2019-04-03T05:23:04.938196Z",
"user_parameters": {}
}
]
}))
self.runner.invoke(describe_training_versions)
self.assertTrue(req_mock.called)
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_training_version_without_datasets(self, req_mock):
config = DEFAULT_TRAINING_CONFIG.copy()
del config['datasets']
with self.runner.isolated_filesystem():
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(config, configfile)
with open('train.py', 'w') as f:
f.write('dummy')
url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
req_mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_version, [])
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_training_job(self, mock):
cmd = [
'--version', '1',
'--params', 'USER_ID:1234567890123',
'--params', 'ACCESS_KEY:373be7309f0146c0d283440e500843d8',
'--description', 'Initial job',
'--instance-type', 'gpu:b-4'
]
config_data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(config_data, configfile)
url = "{}/training/definitions/{}/versions/{}/jobs".format(
ORGANIZATION_ENDPOINT, config_data['name'], '1')
matcher = mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_job, cmd)
request_body = matcher.last_request.json()
self.assertEqual(request_body['description'], 'Initial job')
self.assertEqual(request_body['instance_type'], 'gpu:b-4')
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_training_job_without_version(self, mock):
cmd = [
'--params', 'USER_ID:1234567890123',
'--params', 'ACCESS_KEY:373be7309f0146c0d283440e500843d8',
]
config_data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(config_data, configfile)
list_versions_url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, config_data['name'])
mock.register_uri('GET', list_versions_url, text=json.dumps({
'entries': [
{'job_definition_version': 1}
]
}))
create_job_url = "{}/training/definitions/{}/versions/{}/jobs".format(
ORGANIZATION_ENDPOINT, config_data['name'], '1')
mock.register_uri('POST', create_job_url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_job, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_create_training_job_without_version_not_exist(self, mock):
cmd = [
'--params', 'USER_ID:1234567890123',
'--params', 'ACCESS_KEY:373be7309f0146c0d283440e500843d8',
]
config_data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(config_data, configfile)
list_versions_url = "{}/training/definitions/{}/versions".format(
ORGANIZATION_ENDPOINT, config_data['name'])
mock.register_uri('GET', list_versions_url,
text=json.dumps({'entries': []}))
create_job_url = "{}/training/definitions/{}/versions/{}/jobs".format(
ORGANIZATION_ENDPOINT, config_data['name'], '1')
mock.register_uri('POST', create_job_url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_job, cmd)
self.assertEqual(r.output, 'there is no available training versions. '
'please create training version first.\n')
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_archive_version(self, mock):
cmd = [
'--version-id', '1234567890123'
]
data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(data, configfile)
url = "{}/training/definitions/{}/versions/{}/archive".format(
ORGANIZATION_ENDPOINT, data['name'], '1234567890123')
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(archive_version, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_unarchive_version(self, mock):
cmd = [
'--version-id', '1234567890123'
]
data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(data, configfile)
url = "{}/training/definitions/{}/versions/{}/unarchive".format(
ORGANIZATION_ENDPOINT, data['name'], '1234567890123')
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(unarchive_version, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_describe_job(self, mock):
cmd = []
data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(data, configfile)
url = "{}/training/definitions/{}/jobs".format(
ORGANIZATION_ENDPOINT, data['name'])
mock.register_uri('GET', url, json={"dummy": "dummy"})
r = self.runner.invoke(describe_jobs, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_archive_job(self, mock):
cmd = [
'--job-id', '1234567890123'
]
data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(data, configfile)
url = "{}/training/definitions/{}/jobs/{}/archive".format(
ORGANIZATION_ENDPOINT, data['name'], '1234567890123')
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(archive_job, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
@patch('abejacli.training.CONFIGFILE_NAME', get_tmp_training_file_name())
def test_unarchive_job(self, mock):
cmd = [
'--job-id', '1234567890123'
]
data = yaml.safe_load(training_default_configuration)
with open(abejacli.training.CONFIGFILE_NAME, 'w') as configfile:
yaml.dump(data, configfile)
url = "{}/training/definitions/{}/jobs/{}/unarchive".format(
ORGANIZATION_ENDPOINT, data['name'], '1234567890123')
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(unarchive_job, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
def test_create_repository(self, mock):
cmd = [
'--name', 'test_repository',
'--description', 'test_description',
]
url = "{}/registry/repositories".format(ORGANIZATION_ENDPOINT)
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_repository, cmd)
self.assertEqual(r.exit_code, 0)
@requests_mock.Mocker()
def test_delete_repository(self, mock):
repository_id = '1234567890123'
cmd = [
'--repository_id', repository_id
]
url = "{}/registry/repositories/{}".format(
ORGANIZATION_ENDPOINT, repository_id)
mock.register_uri('DELETE', url, json={})
r = self.runner.invoke(delete_repository, cmd)
self.assertEqual(r.exit_code, 0)
@requests_mock.Mocker()
def test_describe_repository(self, mock):
repository_id = '1234567890123'
mock_res = {
"id": "1234567890123",
"organization_id": "1410000000000",
"name": "registry-repository-3",
"description": 'null',
"creator": {
"updated_at": "2018-01-04T03:02:12Z",
"role": "admin",
"is_registered": 'true',
"id": "1122334455660",
"email": "test@abeja.asia",
"display_name": 'null',
"created_at": "2017-05-26T01:38:46Z"
},
"created_at": "2018-06-07T04:42:34.913644Z",
"updated_at": "2018-06-07T04:42:3pp4.913726Z"
}
cmd = [
'--repository_id', repository_id
]
url = "{}/registry/repositories/{}".format(
ORGANIZATION_ENDPOINT, repository_id)
mock.register_uri('GET', url, json=mock_res)
r = self.runner.invoke(describe_repository, cmd)
self.assertEqual(r.exit_code, 0)
@requests_mock.Mocker()
def test_describe_repositories(self, mock):
cmd = [
'--limit', 100,
'--offset', 10
]
mock_res = {
"offset": 0,
"limit": 10,
"has_next": 'false',
"organization_name": "test-org",
"organization_id": "1122334455667",
"created_at": "2019-05-23T05:13:13Z",
"updated_at": "2019-05-23T05:13:15Z",
"entries": [
{
"id": "1234567890123",
"organization_id": "1410000000000",
"name": "registry-repository-3",
"description": 'null',
"creator": {
"updated_at": "2018-01-04T03:02:12Z",
"role": "admin",
"is_registered": 'true',
"id": "1122334455660",
"email": "test@abeja.asia",
"display_name": 'null',
"created_at": "2017-05-26T01:38:46Z"
},
"created_at": "2018-06-07T04:42:34.913644Z",
"updated_at": "2018-06-07T04:42:34.913726Z"
}
]
}
url = "{}/registry/repositories".format(ORGANIZATION_ENDPOINT)
mock.register_uri('GET', url, json=mock_res)
r = self.runner.invoke(describe_repositories, cmd)
self.assertEqual(r.exit_code, 0)
@requests_mock.Mocker()
def test_describe_repository_tags(self, mock):
repository_id = '1234567890123'
mock_res = {
"id": "1234567890123",
"organization_id": "1410000000000",
"name": "registry-repository-3",
"description": 'null',
"creator": {
"updated_at": "2018-01-04T03:02:12Z",
"role": "admin",
"is_registered": 'true',
"id": "1122334455660",
"email": "test@abeja.asia",
"display_name": 'null',
"created_at": "2017-05-26T01:38:46Z"
},
"created_at": "2018-06-07T04:42:34.913644Z",
"updated_at": "2018-06-07T04:42:34.913726Z"
}
cmd = [
'--repository_id', repository_id,
'--limit', 100,
'--offset', 10
]
url = "{}/registry/repositories/{}/tags".format(
ORGANIZATION_ENDPOINT, repository_id)
mock.register_uri('GET', url, json=mock_res)
r = self.runner.invoke(describe_repository_tags, cmd)
self.assertEqual(r.exit_code, 0)
@requests_mock.Mocker()
def test_describe_training_models(self, mock):
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME
]
url = "{}/training/definitions/{}/models".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
mock.register_uri('GET', url, json={"dummy": "dummy"})
r = self.runner.invoke(describe_training_models, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--model_id', TRAINING_MODEL_ID
]
url = "{}/training/definitions/{}/models/{}".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME, TRAINING_MODEL_ID)
mock.register_uri('GET', url, json={"dummy": "dummy"})
r = self.runner.invoke(describe_training_models, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
def test_create_training_model(self, mock):
with tempfile.NamedTemporaryFile(suffix='.txt') as f:
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--filepath', f.name
]
url = "{}/training/definitions/{}/models".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_model, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
with tempfile.NamedTemporaryFile(suffix='.zip') as f:
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--filepath', f.name,
'--description', 'dummy',
'--user_parameters', 'DEBUG:x'
]
url = "{}/training/definitions/{}/models".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME)
mock.register_uri('POST', url, json={"dummy": "dummy"})
r = self.runner.invoke(create_training_model, cmd)
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@patch('abejacli.training.commands.api_patch')
def test_update_training_model(self, mock_api_patch):
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--model_id', TRAINING_MODEL_ID,
'--description', 'dummy'
]
data = {"dummy": "dummy"}
mock_api_patch.return_value = data
r = self.runner.invoke(update_training_model, cmd)
assert not r.exception
self.assertDictEqual(json.loads(r.output), {"dummy": "dummy"})
@requests_mock.Mocker()
def test_download_training_model(self, mock):
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--model_id', TRAINING_MODEL_ID
]
url = "{}/training/definitions/{}/models/{}/download".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME, TRAINING_MODEL_ID)
mock.register_uri('GET', url, json={"download_uri": "dummy"})
r = self.runner.invoke(download_training_model, cmd)
self.assertDictEqual(json.loads(r.output), {"download_uri": "dummy"})
@requests_mock.Mocker()
def test_archive_training_model(self, mock):
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--model_id', TRAINING_MODEL_ID
]
url = "{}/training/definitions/{}/models/{}/archive".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME, TRAINING_MODEL_ID)
mock.register_uri('POST', url, json={"message": "dummy"})
r = self.runner.invoke(archive_training_model, cmd)
self.assertDictEqual(json.loads(r.output), {"message": "dummy"})
@requests_mock.Mocker()
def test_unarchive_training_model(self, mock):
cmd = [
'--job_definition_name', DEFAULT_TRAINING_JOB_DEFINITION_NAME,
'--model_id', TRAINING_MODEL_ID
]
url = "{}/training/definitions/{}/models/{}/unarchive".format(
ORGANIZATION_ENDPOINT, DEFAULT_TRAINING_JOB_DEFINITION_NAME, TRAINING_MODEL_ID)
mock.register_uri('POST', url, json={"message": "dummy"})
r = self.runner.invoke(unarchive_training_model, cmd)
self.assertDictEqual(json.loads(r.output), {"message": "dummy"})
# --- dqc/__init__.py (repo: Jaikinator/dqc, license: Apache-2.0) ---
from dqc.system import *
from dqc.qccalc import *
from dqc.api import *
from dqc._version import get_version as _get_version
__version__ = _get_version()
| 22.142857 | 52 | 0.793548 | 24 | 155 | 4.708333 | 0.375 | 0.247788 | 0.345133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141935 | 155 | 6 | 53 | 25.833333 | 0.849624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b9f40fbb0fa9a04b0978630153a823c0a69499f4 | 61 | py | Python | python/meteoswissdata/__init__.py | allestuetsmerweh/meteoswissdata | 2684e14c38ccf99c9088e0da06f123cb7b869bd2 | [
"MIT"
] | null | null | null | python/meteoswissdata/__init__.py | allestuetsmerweh/meteoswissdata | 2684e14c38ccf99c9088e0da06f123cb7b869bd2 | [
"MIT"
] | null | null | null | python/meteoswissdata/__init__.py | allestuetsmerweh/meteoswissdata | 2684e14c38ccf99c9088e0da06f123cb7b869bd2 | [
"MIT"
] | null | null | null | from .station_measurements import StationMeasurementsFetcher
# --- test/test_pr_curve.py (repo: ameya98/pr2roc, license: MIT) ---
from __future__ import division
from pr2roc import PRCurve
import numpy as np
import pytest
from pytest import approx
def test_pr_curve_conversion():
rec_vals = [100/120, 110/120]
prec_vals = [100/140, 110/155]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 120/60, label='Test PR Curve')
roc_curve = pr_curve.to_roc()
fpr_actual = [point[0] for point in roc_curve.points()]
tpr_actual = [point[1] for point in roc_curve.points()]
fpr_expected = [40/60, 45/60]
tpr_expected = [100/120, 110/120]
assert tpr_actual == approx(tpr_expected)
assert fpr_actual == approx(fpr_expected)
def test_pr_curve_reconversion():
rec_vals = [0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
prec_vals = [0.5, 0.375, 0.318, 0.286, 0.265, 0.250]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Test PR Curve')
orig_points = pr_curve.points()
reconverted_points = pr_curve.to_roc().to_pr().points()
assert np.array(orig_points).flatten() == approx(np.array(reconverted_points).flatten())
def test_pr_curve_reconversion_edge_cases():
# Precision > 1.
with pytest.raises(ValueError):
rec_vals = [0.1, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [1.1, 0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Bad PR Curve')
# Precision < 0.
with pytest.raises(ValueError):
rec_vals = [0.1, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [1.1, 0.5, 0.375, 0.318, 0.286, 0.265, 0.250, -0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Bad PR Curve')
# Recall > 1.
with pytest.raises(ValueError):
rec_vals = [0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.1]
prec_vals = [0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Bad PR Curve')
# Recall < 0.
with pytest.raises(ValueError):
rec_vals = [-0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Bad PR Curve')
# Ratio of positive to negative samples is == 0.
with pytest.raises(ValueError):
rec_vals = [0.0, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [0.0, 0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 0, label='Bad PR Curve')
# Ratio of positive to negative samples is < 0.
with pytest.raises(ValueError):
rec_vals = [0.0, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [0.0, 0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, -10, label='Bad PR Curve')
# Precision > 0 when Recall = 0.
with pytest.raises(ValueError):
rec_vals = [0.0, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [0.9, 0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Bad PR Curve')
# Precision = 0 when Recall > 0.
with pytest.raises(ValueError):
rec_vals = [0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.0]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Bad PR Curve')
# Precision = 0 when Recall = 0. This is acceptable.
rec_vals = [0.0, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 1.0]
prec_vals = [0.0, 0.5, 0.375, 0.318, 0.286, 0.265, 0.250, 0.01]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Good PR Curve')
orig_points = pr_curve.points()
reconverted_points = pr_curve.to_roc().to_pr().points()
assert np.array(orig_points).flatten() == approx(np.array(reconverted_points).flatten())
def test_pr_curve_resample():
rec_vals = [0.25, 0.5]
prec_vals = [0.5, 0.25]
points = zip(rec_vals, prec_vals)
pr_curve = PRCurve(points, 20/2000, label='Test PR Curve')
points = pr_curve.resample(num_points=6).points()
rec_vals_sampled = [point[0] for point in points]
prec_vals_sampled = [point[1] for point in points]
rec_vals_expected = [0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
prec_vals_expected = [0.5, 0.375, 0.318, 0.286, 0.265, 0.250]
assert rec_vals_sampled == approx(rec_vals_expected, abs=1e-3)
assert prec_vals_sampled == approx(prec_vals_expected, abs=1e-3)
# --- tests/data/expression.py (repo: StarryInternet/black, license: MIT) ---
...
'some_string'
b'\\xa3'
Name
None
True
False
1
1.0
1j
True or False
True or False or None
True and False
True and False and None
(Name1 and Name2) or Name3
Name1 and Name2 or Name3
Name1 or (Name2 and Name3)
Name1 or Name2 and Name3
(Name1 and Name2) or (Name3 and Name4)
Name1 and Name2 or Name3 and Name4
Name1 or (Name2 and Name3) or Name4
Name1 or Name2 and Name3 or Name4
v1 << 2
1 >> v2
1 % finished
1 + v2 - v3 * 4 ^ 5 ** v6 / 7 // 8
((1 + v2) - (v3 * 4)) ^ (((5 ** v6) / 7) // 8)
not great
~great
+value
-1
~int and not v1 ^ 123 + v2 | True
(~int) and (not ((v1 ^ (123 + v2)) | True))
+really ** -confusing ** ~operator ** -precedence
flags & ~ select.EPOLLIN and waiters.write_task is not None
lambda arg: None
lambda a=True: a
lambda a, b, c=True: a
lambda a, b, c=True, *, d=(1 << v2), e='str': a
lambda a, b, c=True, *vararg, d=(v1 << 2), e='str', **kwargs: a + b
manylambdas = lambda x=lambda y=lambda z=1: z: y(): x()
foo = (lambda port_id, ignore_missing: {"port1": port1_resource, "port2": port2_resource}[port_id])
1 if True else 2
str or None if True else str or bytes or None
(str or None) if True else (str or bytes or None)
str or None if (1 if True else 2) else str or bytes or None
(str or None) if (1 if True else 2) else (str or bytes or None)
((super_long_variable_name or None) if (1 if super_long_test_name else 2) else (str or bytes or None))
{'2.7': dead, '3.7': (long_live or die_hard)}
{'2.7': dead, '3.7': (long_live or die_hard), **{'3.6': verygood}}
{**a, **b, **c}
{'2.7', '3.6', '3.7', '3.8', '3.9', ('4.0' if gilectomy else '3.10')}
({'a': 'b'}, (True or False), (+value), 'string', b'bytes') or None
()
(1,)
(1, 2)
(1, 2, 3)
[]
[1, 2, 3, 4, 5, 6, 7, 8, 9, (10 or A), (11 or B), (12 or C)]
[1, 2, 3,]
[*a]
[*range(10)]
[*a, 4, 5,]
[4, *a, 5,]
[this_is_a_very_long_variable_which_will_force_a_delimiter_split, element, another, *more]
{i for i in (1, 2, 3)}
{(i ** 2) for i in (1, 2, 3)}
{(i ** 2) for i, _ in ((1, 'a'), (2, 'b'), (3, 'c'))}
{((i ** 2) + j) for i in (1, 2, 3) for j in (1, 2, 3)}
[i for i in (1, 2, 3)]
[(i ** 2) for i in (1, 2, 3)]
[(i ** 2) for i, _ in ((1, 'a'), (2, 'b'), (3, 'c'))]
[((i ** 2) + j) for i in (1, 2, 3) for j in (1, 2, 3)]
{i: 0 for i in (1, 2, 3)}
{i: j for i, j in ((1, 'a'), (2, 'b'), (3, 'c'))}
{a: b * 2 for a, b in dictionary.items()}
{a: b * -2 for a, b in dictionary.items()}
{k: v for k, v in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension}
Python3 > Python2 > COBOL
Life is Life
call()
call(arg)
call(kwarg='hey')
call(arg, kwarg='hey')
call(arg, another, kwarg='hey', **kwargs)
call(this_is_a_very_long_variable_which_will_force_a_delimiter_split, arg, another, kwarg='hey', **kwargs) # note: no trailing comma pre-3.6
call(*gidgets[:2])
call(a, *gidgets[:2])
call(**self.screen_kwargs)
call(b, **self.screen_kwargs)
lukasz.langa.pl
call.me(maybe)
1 .real
1.0 .real
....__class__
list[str]
dict[str, int]
tuple[str, ...]
tuple[str, int, float, dict[str, int],]
very_long_variable_name_filters: t.List[
t.Tuple[str, t.Union[str, t.List[t.Optional[str]]]],
]
xxxx_xxxxx_xxxx_xxx: Callable[..., List[SomeClass]] = classmethod( # type: ignore
sync(async_xxxx_xxx_xxxx_xxxxx_xxxx_xxx.__func__)
)
xxxx_xxx_xxxx_xxxxx_xxxx_xxx: Callable[..., List[SomeClass]] = classmethod( # type: ignore
sync(async_xxxx_xxx_xxxx_xxxxx_xxxx_xxx.__func__)
)
xxxx_xxx_xxxx_xxxxx_xxxx_xxx: Callable[
..., List[SomeClass]
] = classmethod(sync(async_xxxx_xxx_xxxx_xxxxx_xxxx_xxx.__func__)) # type: ignore
slice[0]
slice[0:1]
slice[0:1:2]
slice[:]
slice[:-1]
slice[1:]
slice[::-1]
slice[d :: d + 1]
slice[:c, c - 1]
numpy[:, 0:1]
numpy[:, :-1]
numpy[0, :]
numpy[:, i]
numpy[0, :2]
numpy[:N, 0]
numpy[:2, :4]
numpy[2:4, 1:5]
numpy[4:, 2:]
numpy[:, (0, 1, 2, 5)]
numpy[0, [0]]
numpy[:, [i]]
numpy[1 : c + 1, c]
numpy[-(c + 1) :, d]
numpy[:, l[-2]]
numpy[:, ::-1]
numpy[np.newaxis, :]
(str or None) if (sys.version_info[0] > (3,)) else (str or bytes or None)
{'2.7': dead, '3.7': long_live or die_hard}
{'2.7', '3.6', '3.7', '3.8', '3.9', '4.0' if gilectomy else '3.10'}
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or A, 11 or B, 12 or C]
(SomeName)
SomeName
(Good, Bad, Ugly)
(i for i in (1, 2, 3))
((i ** 2) for i in (1, 2, 3))
((i ** 2) for i, _ in ((1, 'a'), (2, 'b'), (3, 'c')))
(((i ** 2) + j) for i in (1, 2, 3) for j in (1, 2, 3))
(*starred,)
{"id": "1","type": "type","started_at": now(),"ended_at": now() + timedelta(days=10),"priority": 1,"import_session_id": 1,**kwargs}
a = (1,)
b = 1,
c = 1
d = (1,) + a + (2,)
e = (1,).count(1)
f = 1, *range(10)
g = 1, *"ten"
what_is_up_with_those_new_coord_names = (coord_names + set(vars_to_create)) + set(vars_to_remove)
what_is_up_with_those_new_coord_names = (coord_names | set(vars_to_create)) - set(vars_to_remove)
result = session.query(models.Customer.id).filter(models.Customer.account_id == account_id, models.Customer.email == email_address).order_by(models.Customer.id.asc(),).all()
Ø = set()
authors.łukasz.say_thanks()
mapping = {
A: 0.25 * (10.0 / 12),
B: 0.1 * (10.0 / 12),
C: 0.1 * (10.0 / 12),
D: 0.1 * (10.0 / 12),
}
def gen():
yield from outside_of_generator
a = (yield)
b = ((yield))
c = (((yield)))
async def f():
await some.complicated[0].call(with_args=(True or (1 is not 1)))
print(* [] or [1])
print(**{1: 3} if False else {x: x for x in range(3)})
print(* lambda x: x)
assert(not Test),("Short message")
assert this is ComplexTest and not requirements.fit_in_a_single_line(force=False), "Short message"
assert(((parens is TooMany)))
for x, in (1,), (2,), (3,): ...
for y in (): ...
for z in (i for i in (1, 2, 3)): ...
for i in (call()): ...
for j in (1 + (2 + 3)): ...
while(this and that): ...
for addr_family, addr_type, addr_proto, addr_canonname, addr_sockaddr in socket.getaddrinfo('google.com', 'http'):
pass
a = aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
a = aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp not in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
a = aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp is qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
a = aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp is not qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
if (
threading.current_thread() != threading.main_thread() and
threading.current_thread() != threading.main_thread() or
signal.getsignal(signal.SIGINT) != signal.default_int_handler
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa |
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa &
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa *
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa /
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
~ aaaa.a + aaaa.b - aaaa.c * aaaa.d / aaaa.e | aaaa.f & aaaa.g % aaaa.h ^ aaaa.i << aaaa.k >> aaaa.l ** aaaa.m // aaaa.n
):
return True
if (
~ aaaaaaaa.a + aaaaaaaa.b - aaaaaaaa.c @ aaaaaaaa.d / aaaaaaaa.e | aaaaaaaa.f & aaaaaaaa.g % aaaaaaaa.h ^ aaaaaaaa.i << aaaaaaaa.k >> aaaaaaaa.l ** aaaaaaaa.m // aaaaaaaa.n
):
return True
if (
~ aaaaaaaaaaaaaaaa.a + aaaaaaaaaaaaaaaa.b - aaaaaaaaaaaaaaaa.c * aaaaaaaaaaaaaaaa.d @ aaaaaaaaaaaaaaaa.e | aaaaaaaaaaaaaaaa.f & aaaaaaaaaaaaaaaa.g % aaaaaaaaaaaaaaaa.h ^ aaaaaaaaaaaaaaaa.i << aaaaaaaaaaaaaaaa.k >> aaaaaaaaaaaaaaaa.l ** aaaaaaaaaaaaaaaa.m // aaaaaaaaaaaaaaaa.n
):
return True
last_call()
# standalone comment at ENDMARKER
# output
...
"some_string"
b"\\xa3"
Name
None
True
False
1
1.0
1j
True or False
True or False or None
True and False
True and False and None
(Name1 and Name2) or Name3
Name1 and Name2 or Name3
Name1 or (Name2 and Name3)
Name1 or Name2 and Name3
(Name1 and Name2) or (Name3 and Name4)
Name1 and Name2 or Name3 and Name4
Name1 or (Name2 and Name3) or Name4
Name1 or Name2 and Name3 or Name4
v1 << 2
1 >> v2
1 % finished
1 + v2 - v3 * 4 ^ 5 ** v6 / 7 // 8
((1 + v2) - (v3 * 4)) ^ (((5 ** v6) / 7) // 8)
not great
~great
+value
-1
~int and not v1 ^ 123 + v2 | True
(~int) and (not ((v1 ^ (123 + v2)) | True))
+(really ** -(confusing ** ~(operator ** -precedence)))
flags & ~select.EPOLLIN and waiters.write_task is not None
lambda arg: None
lambda a=True: a
lambda a, b, c=True: a
lambda a, b, c=True, *, d=(1 << v2), e="str": a
lambda a, b, c=True, *vararg, d=(v1 << 2), e="str", **kwargs: a + b
manylambdas = lambda x=lambda y=lambda z=1: z: y(): x()
foo = lambda port_id, ignore_missing: {
"port1": port1_resource,
"port2": port2_resource,
}[port_id]
1 if True else 2
str or None if True else str or bytes or None
(str or None) if True else (str or bytes or None)
str or None if (1 if True else 2) else str or bytes or None
(str or None) if (1 if True else 2) else (str or bytes or None)
(
(super_long_variable_name or None)
if (1 if super_long_test_name else 2)
else (str or bytes or None)
)
{"2.7": dead, "3.7": (long_live or die_hard)}
{"2.7": dead, "3.7": (long_live or die_hard), **{"3.6": verygood}}
{**a, **b, **c}
{"2.7", "3.6", "3.7", "3.8", "3.9", ("4.0" if gilectomy else "3.10")}
({"a": "b"}, (True or False), (+value), "string", b"bytes") or None
()
(1,)
(1, 2)
(1, 2, 3)
[]
[1, 2, 3, 4, 5, 6, 7, 8, 9, (10 or A), (11 or B), (12 or C)]
[
1,
2,
3,
]
[*a]
[*range(10)]
[
*a,
4,
5,
]
[
4,
*a,
5,
]
[
this_is_a_very_long_variable_which_will_force_a_delimiter_split,
element,
another,
*more,
]
{i for i in (1, 2, 3)}
{(i ** 2) for i in (1, 2, 3)}
{(i ** 2) for i, _ in ((1, "a"), (2, "b"), (3, "c"))}
{((i ** 2) + j) for i in (1, 2, 3) for j in (1, 2, 3)}
[i for i in (1, 2, 3)]
[(i ** 2) for i in (1, 2, 3)]
[(i ** 2) for i, _ in ((1, "a"), (2, "b"), (3, "c"))]
[((i ** 2) + j) for i in (1, 2, 3) for j in (1, 2, 3)]
{i: 0 for i in (1, 2, 3)}
{i: j for i, j in ((1, "a"), (2, "b"), (3, "c"))}
{a: b * 2 for a, b in dictionary.items()}
{a: b * -2 for a, b in dictionary.items()}
{
k: v
for k, v in this_is_a_very_long_variable_which_will_cause_a_trailing_comma_which_breaks_the_comprehension
}
Python3 > Python2 > COBOL
Life is Life
call()
call(arg)
call(kwarg="hey")
call(arg, kwarg="hey")
call(arg, another, kwarg="hey", **kwargs)
call(
this_is_a_very_long_variable_which_will_force_a_delimiter_split,
arg,
another,
kwarg="hey",
**kwargs
) # note: no trailing comma pre-3.6
call(*gidgets[:2])
call(a, *gidgets[:2])
call(**self.screen_kwargs)
call(b, **self.screen_kwargs)
lukasz.langa.pl
call.me(maybe)
1 .real
1.0 .real
....__class__
list[str]
dict[str, int]
tuple[str, ...]
tuple[
str, int, float, dict[str, int],
]
very_long_variable_name_filters: t.List[
t.Tuple[str, t.Union[str, t.List[t.Optional[str]]]],
]
xxxx_xxxxx_xxxx_xxx: Callable[..., List[SomeClass]] = classmethod( # type: ignore
sync(async_xxxx_xxx_xxxx_xxxxx_xxxx_xxx.__func__)
)
xxxx_xxx_xxxx_xxxxx_xxxx_xxx: Callable[..., List[SomeClass]] = classmethod( # type: ignore
sync(async_xxxx_xxx_xxxx_xxxxx_xxxx_xxx.__func__)
)
xxxx_xxx_xxxx_xxxxx_xxxx_xxx: Callable[..., List[SomeClass]] = classmethod(
sync(async_xxxx_xxx_xxxx_xxxxx_xxxx_xxx.__func__)
) # type: ignore
slice[0]
slice[0:1]
slice[0:1:2]
slice[:]
slice[:-1]
slice[1:]
slice[::-1]
slice[d :: d + 1]
slice[:c, c - 1]
numpy[:, 0:1]
numpy[:, :-1]
numpy[0, :]
numpy[:, i]
numpy[0, :2]
numpy[:N, 0]
numpy[:2, :4]
numpy[2:4, 1:5]
numpy[4:, 2:]
numpy[:, (0, 1, 2, 5)]
numpy[0, [0]]
numpy[:, [i]]
numpy[1 : c + 1, c]
numpy[-(c + 1) :, d]
numpy[:, l[-2]]
numpy[:, ::-1]
numpy[np.newaxis, :]
(str or None) if (sys.version_info[0] > (3,)) else (str or bytes or None)
{"2.7": dead, "3.7": long_live or die_hard}
{"2.7", "3.6", "3.7", "3.8", "3.9", "4.0" if gilectomy else "3.10"}
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or A, 11 or B, 12 or C]
(SomeName)
SomeName
(Good, Bad, Ugly)
(i for i in (1, 2, 3))
((i ** 2) for i in (1, 2, 3))
((i ** 2) for i, _ in ((1, "a"), (2, "b"), (3, "c")))
(((i ** 2) + j) for i in (1, 2, 3) for j in (1, 2, 3))
(*starred,)
{
"id": "1",
"type": "type",
"started_at": now(),
"ended_at": now() + timedelta(days=10),
"priority": 1,
"import_session_id": 1,
**kwargs,
}
a = (1,)
b = (1,)
c = 1
d = (1,) + a + (2,)
e = (1,).count(1)
f = 1, *range(10)
g = 1, *"ten"
what_is_up_with_those_new_coord_names = (coord_names + set(vars_to_create)) + set(
vars_to_remove
)
what_is_up_with_those_new_coord_names = (coord_names | set(vars_to_create)) - set(
vars_to_remove
)
result = (
session.query(models.Customer.id)
.filter(
models.Customer.account_id == account_id, models.Customer.email == email_address
)
.order_by(models.Customer.id.asc(),)
.all()
)
Ø = set()
authors.łukasz.say_thanks()
mapping = {
A: 0.25 * (10.0 / 12),
B: 0.1 * (10.0 / 12),
C: 0.1 * (10.0 / 12),
D: 0.1 * (10.0 / 12),
}
def gen():
yield from outside_of_generator
a = yield
b = yield
c = yield
async def f():
await some.complicated[0].call(with_args=(True or (1 is not 1)))
print(*[] or [1])
print(**{1: 3} if False else {x: x for x in range(3)})
print(*lambda x: x)
assert not Test, "Short message"
assert this is ComplexTest and not requirements.fit_in_a_single_line(
force=False
), "Short message"
assert parens is TooMany
for (x,) in (1,), (2,), (3,):
...
for y in ():
...
for z in (i for i in (1, 2, 3)):
...
for i in call():
...
for j in 1 + (2 + 3):
...
while this and that:
...
for (
addr_family,
addr_type,
addr_proto,
addr_canonname,
addr_sockaddr,
) in socket.getaddrinfo("google.com", "http"):
pass
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
not in qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
is qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
a = (
aaaa.bbbb.cccc.dddd.eeee.ffff.gggg.hhhh.iiii.jjjj.kkkk.llll.mmmm.nnnn.oooo.pppp
is not qqqq.rrrr.ssss.tttt.uuuu.vvvv.xxxx.yyyy.zzzz
)
if (
threading.current_thread() != threading.main_thread()
and threading.current_thread() != threading.main_thread()
or signal.getsignal(signal.SIGINT) != signal.default_int_handler
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
| aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
& aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
* aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
/ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
):
return True
if (
~aaaa.a + aaaa.b - aaaa.c * aaaa.d / aaaa.e
| aaaa.f & aaaa.g % aaaa.h ^ aaaa.i << aaaa.k >> aaaa.l ** aaaa.m // aaaa.n
):
return True
if (
~aaaaaaaa.a + aaaaaaaa.b - aaaaaaaa.c @ aaaaaaaa.d / aaaaaaaa.e
| aaaaaaaa.f & aaaaaaaa.g % aaaaaaaa.h
^ aaaaaaaa.i << aaaaaaaa.k >> aaaaaaaa.l ** aaaaaaaa.m // aaaaaaaa.n
):
return True
if (
~aaaaaaaaaaaaaaaa.a
+ aaaaaaaaaaaaaaaa.b
- aaaaaaaaaaaaaaaa.c * aaaaaaaaaaaaaaaa.d @ aaaaaaaaaaaaaaaa.e
| aaaaaaaaaaaaaaaa.f & aaaaaaaaaaaaaaaa.g % aaaaaaaaaaaaaaaa.h
^ aaaaaaaaaaaaaaaa.i
<< aaaaaaaaaaaaaaaa.k
>> aaaaaaaaaaaaaaaa.l ** aaaaaaaaaaaaaaaa.m // aaaaaaaaaaaaaaaa.n
):
return True
last_call()
# standalone comment at ENDMARKER
# Modulos/modulo_aritmetica.py (josaphatsv/EjercicioPython, MIT license)
def sumar(a, b):
    return a + b

def restar(a, b):
    return a - b

def multiplicar(a, b):
    return a * b

def dividir(a, b):
    return a / b
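A short usage sketch of the arithmetic helpers above. The inline definitions below are illustrative copies (the real ones live in `Modulos/modulo_aritmetica.py` and take the same two arguments):

```python
# Illustrative copies of the module's helpers.
def sumar(a, b):
    return a + b

def dividir(a, b):
    return a / b

print(sumar(3, 4))     # 7
print(dividir(7, 2))   # 3.5
```

`dividir` uses true division, so it always returns a float and raises `ZeroDivisionError` on a zero divisor.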
# tests/test_printf.py (theshteves/python-box, MIT license)
from printf import printf
new_var = 'value2 is muuuuuch better'
printf(new_var)
# openpaygo/simulators/__init__.py (wan5xp/OpenPAYGO-Token, Apache-2.0 license)
from .device_simulator import DeviceSimulator
from .server_simulator import SingleDeviceServerSimulator
# klusta_process_manager/database/__init__.py (tymoreau/app_launcher, BSD-3-Clause license)
from .database import Database
# dataset_handling/lisa_read.py (HaPham28/Traffic-Sign-Tracker, MIT license)
# -*- coding: utf-8 -*-
"""
Created on Wed Apr 7 19:25:38 2021
@author: Ha V. Pham
"""
import os
import shutil
from sklearn.model_selection import train_test_split
from PIL import Image, ImageEnhance
from PIL.ImageFilter import (BLUR, SHARPEN)
import numpy as np
import pandas as pd
path = os.path.dirname(os.path.realpath(__file__))
#print(path)
classes = ["stop", "yield", "pedestrianCrossing", "speedLimit15", "speedLimit25", "speedLimit30", "speedLimit35", "speedLimit40", "speedLimit45", "speedLimit50", "speedLimit55","speedLimit60", "speedLimit65"]
df = pd.read_csv("allAnnotations.csv")
df.drop(df.columns[1], axis = 1, inplace = True)
df = df.rename(columns = {df.columns[0]: "information"})
df = pd.DataFrame(df.information.str.split(';').tolist(), columns = ['path','type', 'leftX', 'leftY', 'rightX', 'rightY', 'ocluded'])
df.drop(df.columns[6], axis = 1, inplace = True)
df.type = df.type.apply(lambda y: np.nan if y not in classes else y)
df = df.dropna()
#print(df)
X = df[['path', 'leftX', 'leftY', 'rightX', 'rightY']]
Y = df[['type']]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=42)
train = pd.concat([X_train, y_train], axis=1)
test = pd.concat([X_test, y_test], axis=1)
train.to_csv("train.csv")
test.to_csv("test.csv")
trainpath = "train"
if not os.path.exists(trainpath): #create a new folder for new images if not exists
os.makedirs(trainpath)
testpath = "test"
if not os.path.exists(testpath): #create a new folder for new images if not exists
os.makedirs(testpath)
for index, row in train.iterrows():
filename = row["path"].split("/")[-1].replace(".png", ".jpg")
newpath = trainpath + "/"+ filename
#print("###", os.listdir(path + "/train"))
with Image.open(row["path"]) as im:
img_width, img_height = im.size
obj_width = int(row["rightX"]) - int(row["leftX"])
obj_height = int(row["rightY"]) - int(row["leftY"])
obj_mid_x = (int(row["rightX"]) + int(row["leftX"])) / 2.0
obj_mid_y = (int(row["rightY"]) + int(row["leftY"])) / 2.0
obj_width_rel = obj_width / img_width
obj_height_rel = obj_height / img_height
obj_mid_x_rel = obj_mid_x / img_width
obj_mid_y_rel = obj_mid_y / img_height
if row["type"] == "stop":
index = "0"
elif row["type"] == "yield":
index = "1"
with Image.open(row["path"]) as im:
enhancer = ImageEnhance.Brightness(im)
dark = enhancer.enhance(0.6)
bright = enhancer.enhance(1.4)
dark.save("train/dark_"+ filename)
bright.save("train/bright_"+ filename)
elif row["type"] == "pedestrianCrossing":
index = "2"
else:
index = "3"
if filename in os.listdir(path + "/train"):
with open(path + "/train/" + filename.replace(".jpg", ".txt"), "a") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
if index == "1" and "dark_" + filename.replace(".jpg", ".txt") not in os.listdir(path + "/train"):
with open("train.txt", "a") as train_txt:
train_txt.write("data/obj/train/dark_" + filename + "\n")
train_txt.write("data/obj/train/bright_" + filename + "\n")
shutil.copy(path + "/train/" + filename.replace(".jpg", ".txt"), path + "/train/dark_" + filename.replace(".jpg", ".txt"))
shutil.copy(path + "/train/" + filename.replace(".jpg", ".txt"), path + "/train/bright_" + filename.replace(".jpg", ".txt"))
elif filename in os.listdir(path + "/test"):
with open(path + "/test/" + filename.replace(".jpg", ".txt"), "a") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
shutil.copy(path + "/test/" + filename.replace(".jpg", ".txt"), path + "/train/" + filename.replace(".jpg", ".txt"))
if index == "1" and "dark_" + filename.replace(".jpg", ".txt") not in os.listdir(path + "/train"):
shutil.copy(path + "/test/" + filename.replace(".jpg", ".txt"), path + "/train/dark_" + filename.replace(".jpg", ".txt"))
shutil.copy(path + "/test/" + filename.replace(".jpg", ".txt"), path + "/train/bright_" + filename.replace(".jpg", ".txt"))
with open("train.txt", "a") as train_txt:
train_txt.write("data/obj/train/dark_" + filename + "\n")
train_txt.write("data/obj/train/bright_" + filename + "\n")
with open("train.txt", "a") as train_txt:
train_txt.write("data/obj/train/" + filename + "\n")
else:
if index == "1":
with open(path + "/train/dark_" + filename.replace(".jpg", ".txt"), "w+") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
with open(path + "/train/bright_" + filename.replace(".jpg", ".txt"), "w+") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
with open("train.txt", "a") as train_txt:
train_txt.write("data/obj/train/dark_" + filename + "\n")
train_txt.write("data/obj/train/bright_" + filename + "\n")
with open(path + "/train/" + filename.replace(".jpg", ".txt"), "w+") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
with open("train.txt", "a") as train_txt:
train_txt.write("data/obj/train/" + filename + "\n")
shutil.copy(row["path"], newpath)
for index, row in test.iterrows():
filename = row["path"].split("/")[-1].replace(".png", ".jpg")
newpath = testpath + "/"+ filename
#print("###", os.listdir(path + "/train"))
with Image.open(row["path"]) as im:
img_width, img_height = im.size
obj_width = int(row["rightX"]) - int(row["leftX"])
obj_height = int(row["rightY"]) - int(row["leftY"])
obj_mid_x = (int(row["rightX"]) + int(row["leftX"])) / 2.0
obj_mid_y = (int(row["rightY"]) + int(row["leftY"])) / 2.0
obj_width_rel = obj_width / img_width
obj_height_rel = obj_height / img_height
obj_mid_x_rel = obj_mid_x / img_width
obj_mid_y_rel = obj_mid_y / img_height
if row["type"] == "stop":
index = "0"
elif row["type"] == "yield":
index = "1"
with Image.open(row["path"]) as im:
enhancer = ImageEnhance.Brightness(im)
dark = enhancer.enhance(0.6)
bright = enhancer.enhance(1.4)
dark.save("test/dark_"+ filename)
bright.save("test/bright_"+ filename)
elif row["type"] == "pedestrianCrossing":
index = "2"
else:
index = "3"
if filename in os.listdir(path + "/test"):
with open(path + "/test/" + filename.replace(".jpg", ".txt"), "a") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
if index == "1" and "dark_" + filename.replace(".jpg", ".txt") not in os.listdir(path + "/test"):
with open("test.txt", "a") as train_txt:
train_txt.write("data/obj/test/dark_" + filename + "\n")
train_txt.write("data/obj/test/bright_" + filename + "\n")
shutil.copy(path + "/test/" + filename.replace(".jpg", ".txt"), path + "/test/dark_" + filename.replace(".jpg", ".txt"))
shutil.copy(path + "/test/" + filename.replace(".jpg", ".txt"), path + "/test/bright_" + filename.replace(".jpg", ".txt"))
elif filename in os.listdir(path + "/train"):
with open(path + "/train/" + filename.replace(".jpg", ".txt"), "a") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
shutil.copy(path + "/train/" + filename.replace(".jpg", ".txt"), path + "/test/" + filename.replace(".jpg", ".txt"))
if index == "1" and "dark_" + filename.replace(".jpg", ".txt") not in os.listdir(path + "/test"):
shutil.copy(path + "/train/" + filename.replace(".jpg", ".txt"), path + "/test/dark_" + filename.replace(".jpg", ".txt"))
shutil.copy(path + "/train/" + filename.replace(".jpg", ".txt"), path + "/test/bright_" + filename.replace(".jpg", ".txt"))
with open("test.txt", "a") as test_txt:
test_txt.write("data/obj/test/dark_" + filename + "\n")
test_txt.write("data/obj/test/bright_" + filename + "\n")
with open("test.txt", "a") as test_txt:
test_txt.write("data/obj/test/" + filename + "\n")
else:
if index == "1":
with open(path + "/test/dark_" + filename.replace(".jpg", ".txt"), "w+") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
with open(path + "/test/bright_" + filename.replace(".jpg", ".txt"), "w+") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
with open("test.txt", "a") as test_txt:
test_txt.write("data/obj/test/dark_" + filename + "\n")
test_txt.write("data/obj/test/bright_" + filename + "\n")
with open(path + "/test/" + filename.replace(".jpg", ".txt"), "w+") as img_txt:
img_txt.write(index + " " + " " + str(obj_mid_x_rel) + " " + str(obj_mid_y_rel) + " " + str(obj_width_rel) + " " + str(obj_height_rel) + "\n")
with open("test.txt", "a") as test_txt:
test_txt.write("data/obj/test/" + filename + "\n")
shutil.copy(row["path"], newpath) | 58.425287 | 208 | 0.569742 | 1,382 | 10,166 | 4.005789 | 0.102026 | 0.043353 | 0.110549 | 0.128974 | 0.816835 | 0.804191 | 0.803468 | 0.790101 | 0.777095 | 0.768425 | 0 | 0.009541 | 0.226736 | 10,166 | 174 | 209 | 58.425287 | 0.694695 | 0.026657 | 0 | 0.607843 | 0 | 0 | 0.168894 | 0.013062 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045752 | 0 | 0.045752 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
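Both loops above derive YOLO-style labels: the annotation's absolute corner coordinates (`leftX`, `leftY`, `rightX`, `rightY`) are converted to a normalized center point plus width and height, each relative to the image size. A standalone sketch of that conversion (the coordinates below are hypothetical):

```python
def to_yolo(left_x, left_y, right_x, right_y, img_width, img_height):
    """Convert absolute corner coordinates to YOLO-normalized
    (center_x, center_y, width, height), each in [0, 1]."""
    obj_width_rel = (right_x - left_x) / img_width
    obj_height_rel = (right_y - left_y) / img_height
    obj_mid_x_rel = (right_x + left_x) / 2.0 / img_width
    obj_mid_y_rel = (right_y + left_y) / 2.0 / img_height
    return obj_mid_x_rel, obj_mid_y_rel, obj_width_rel, obj_height_rel

# A 100x100 box at (100, 50) in a 640x480 image:
print([round(v, 6) for v in to_yolo(100, 50, 200, 150, 640, 480)])
# [0.234375, 0.208333, 0.15625, 0.208333]
```

Normalizing against the image size is what lets the darknet-style `.txt` label files written above work for any input resolution.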
# lbutils/lbutils/__init__.py (akashphaniteja/Bio-BERT, CC0-1.0 license)
from lbutils import data_utils
from lbutils import model_utils
from lbutils import utils
__all__ = [
'data_utils',
'model_utils',
'utils',
]
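The `__all__` list above controls what `from lbutils import *` exports. A minimal sketch of that mechanism using a throwaway module built on the fly (all names here are hypothetical):

```python
import types

# Throwaway module showing how __all__ gates `from pkg import *`:
# only names listed in __all__ are exported; others stay hidden.
mod = types.ModuleType("demo_pkg")
mod.data_utils = "data helpers"
mod.model_utils = "model helpers"
mod._internal = "not exported"
mod.__all__ = ["data_utils", "model_utils"]

exported = {name: getattr(mod, name) for name in mod.__all__}
print(sorted(exported))  # ['data_utils', 'model_utils']
```

Without `__all__`, a star-import falls back to every module-level name not starting with an underscore, so declaring it keeps the public surface explicit.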
# venv/lib/python3.8/site-packages/poetry/core/packages/constraints/constraint.py (MIT license)
/home/runner/.cache/pip/pool/10/15/1f/39fce55ae79d59374bc65ec62a8ec8d72846b81b1c6f2c038e5cfa02d1
1626150bd8a865bc7cfd99d75e32b4b1bea098f0 | 2,911 | py | Python | src/deduplication/tests/test_delete.py | rebinsilva/fast-near-duplicate-image-search | eed7776c423e9b870b8f3c08b3e4f56019bace10 | [
"Apache-2.0"
] | 98 | 2019-05-29T20:55:08.000Z | 2022-03-17T11:44:47.000Z | src/deduplication/tests/test_delete.py | rebinsilva/fast-near-duplicate-image-search | eed7776c423e9b870b8f3c08b3e4f56019bace10 | [
"Apache-2.0"
] | 8 | 2019-12-30T03:53:10.000Z | 2022-03-11T23:47:51.000Z | src/deduplication/tests/test_delete.py | rebinsilva/fast-near-duplicate-image-search | eed7776c423e9b870b8f3c08b3e4f56019bace10 | [
"Apache-2.0"
] | 18 | 2019-09-29T15:47:11.000Z | 2022-03-18T13:58:17.000Z | import os
import pytest
from commands.delete import delete
from tests.conftest import mkdir_output, PROJECT_DIR
@pytest.mark.parametrize(
'hash_size, distance_metric, nearest_neighbors, leaf_size, parallel, batch_size, threshold, '
'backup_keep, backup_duplicate, safe_deletion, expected',
[(8,
'manhattan',
5,
40,
False,
32,
40,
True,
True,
True,
[12, '2018-12-11-15-031193.png', '2018-12-11-15-031197.png', 414]
)
]
)
def test_delete(build_potato_dataset, tree_type, hash_size, distance_metric, nearest_neighbors, leaf_size,
parallel, batch_size, threshold, backup_keep, backup_duplicate, safe_deletion, expected):
output_path = mkdir_output(os.path.join(str(PROJECT_DIR), "outputs"))
df_dataset, img_file_list = build_potato_dataset
to_keep, to_remove = delete(df_dataset, img_file_list, output_path, hash_size, tree_type, distance_metric,
nearest_neighbors, leaf_size, parallel, batch_size, threshold, backup_keep,
backup_duplicate, safe_deletion)
assert len(to_keep) == expected[0]
assert to_keep[0].split(os.sep)[-1] == expected[1]
assert to_keep[1].split(os.sep)[-1] == expected[2]
assert len(to_remove) == expected[3]
# delete_output(output_path)
print()
@pytest.mark.parametrize(
'hash_size, distance_metric, nearest_neighbors, leaf_size, parallel, batch_size, threshold, '
'backup_keep, backup_duplicate, safe_deletion, expected',
[(8, 'manhattan', 2, 40, False, 32, 40, True, True, True,
[15, '2018-12-11-15-031193.png', '2018-12-11-15-031196.png', 15]),
(8, 'manhattan', 5, 40, False, 32, 40, True, True, True,
[4, '2018-12-11-15-031193.png', '2018-12-11-16-121735.png', 28]),
(8, 'manhattan', 10, 40, False, 32, 40, True, True, True,
[4, '2018-12-11-15-031193.png', '2018-12-11-16-121735.png', 28])
])
def test_delete_multi_folder(build_potato_multi_folder_dataset, tree_type, hash_size, distance_metric,
nearest_neighbors, leaf_size, parallel, batch_size, threshold, backup_keep,
backup_duplicate, safe_deletion, expected):
output_path = mkdir_output(os.path.join(str(PROJECT_DIR), "outputs"))
df_dataset, img_file_list = build_potato_multi_folder_dataset
to_keep, to_remove = delete(df_dataset, img_file_list, output_path, hash_size, tree_type, distance_metric,
nearest_neighbors, leaf_size, parallel, batch_size, threshold, backup_keep,
backup_duplicate, safe_deletion)
assert len(to_keep) == expected[0]
assert to_keep[0].split(os.sep)[-1] == expected[1]
assert to_keep[3].split(os.sep)[-1] == expected[2]
assert len(to_remove) == expected[3]
# delete_output(output_path)
print()
| 41 | 110 | 0.660598 | 393 | 2,911 | 4.62341 | 0.201018 | 0.035223 | 0.035223 | 0.099064 | 0.898734 | 0.880572 | 0.880572 | 0.867914 | 0.867914 | 0.867914 | 0 | 0.084137 | 0.216077 | 2,911 | 70 | 111 | 41.585714 | 0.712095 | 0.018207 | 0 | 0.5 | 0 | 0 | 0.18634 | 0.06725 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.107143 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
164ddefcc4ee17b1829675962183a3ec1cd8133d | 293 | py | Python | src/domain/validators/general_notification_validator.py | sergicollado/json_redo | 89ed514e441f7038f18fe1833f2c49a4bf08ef19 | [
"CC0-1.0"
] | null | null | null | src/domain/validators/general_notification_validator.py | sergicollado/json_redo | 89ed514e441f7038f18fe1833f2c49a4bf08ef19 | [
"CC0-1.0"
] | null | null | null | src/domain/validators/general_notification_validator.py | sergicollado/json_redo | 89ed514e441f7038f18fe1833f2c49a4bf08ef19 | [
"CC0-1.0"
] | null | null | null | from domain.validators.exceptions import BadParametersError
from domain.validators.abstract_classes import NotificationValidator
class GeneralNotificationValidator(NotificationValidator):
@property
def required_params(self):
return ["type", "name"] | 36.625 | 69 | 0.733788 | 24 | 293 | 8.875 | 0.791667 | 0.093897 | 0.187793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204778 | 293 | 8 | 70 | 36.625 | 0.914163 | 0 | 0 | 0 | 0 | 0 | 0.027211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
16500b5a92e48291a31cbb2f5cf37ddb8b4dcf9b | 45 | py | Python | views/__init__.py | ray2060/mathquiz | ebe0952f1768f382d0c4ae50c470a045a3446e0c | [
"MIT"
] | null | null | null | views/__init__.py | ray2060/mathquiz | ebe0952f1768f382d0c4ae50c470a045a3446e0c | [
"MIT"
] | null | null | null | views/__init__.py | ray2060/mathquiz | ebe0952f1768f382d0c4ae50c470a045a3446e0c | [
"MIT"
] | null | null | null | from .index import *
from .a_plus_b import *
| 15 | 23 | 0.733333 | 8 | 45 | 3.875 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 24 | 22.5 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
167c3c953b1b81631593bfc4901430adad92efdf | 2,400 | py | Python | COOL/src/main/python/VolapCharts.py | big-unibo/conversational-olap | 12749bf50b0fefd54924e75136e954fee799161f | [
"Apache-2.0"
] | 1 | 2022-02-01T15:53:21.000Z | 2022-02-01T15:53:21.000Z | COOL/src/main/python/VolapCharts.py | big-unibo/conversational-olap | 12749bf50b0fefd54924e75136e954fee799161f | [
"Apache-2.0"
] | null | null | null | COOL/src/main/python/VolapCharts.py | big-unibo/conversational-olap | 12749bf50b0fefd54924e75136e954fee799161f | [
"Apache-2.0"
] | null | null | null | import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from textwrap import wrap
path = "../../../outputs/Volap 2021-02-11_18-10-02.csv"
results = pd.read_csv(path, header=None).to_numpy()
nColumns = 3
nRows = math.ceil(len(results) / nColumns)
fig = plt.figure(figsize=(4.5 * nColumns, 6 * nRows))
plt.subplots_adjust(left=0.05, bottom=0.10, right=0.95, top=0.90, wspace=0.15, hspace=0.40)
for i in range(len(results)):
ax = fig.add_subplot(nRows, nColumns, i + 1)
ax.grid(True)
ax.set_title("\n".join(wrap(results[i][0], 45)))
ax.set_xlabel("Average Relative Error")
ax.set_ylabel("Quality")
ax.set_xlim([0, 1])
ax.set_ylim([0, 0.5])
if len(results[i]) > 7 and ~np.isnan(results[i][7]) and ~np.isnan(results[i][8]):
ax.plot(results[i][7], results[i][8], color="green", label="Original", ls="", marker="s", markersize=13)
if len(results[i]) > 7 and np.isnan(results[i][7]) and ~np.isnan(results[i][8]):
ax.axhline(y=results[i][8], color='green', label="Original", linewidth=4)
if len(results[i]) > 4 and ~np.isnan(results[i][4]) and ~np.isnan(results[i][5]):
ax.plot(results[i][4], results[i][5], color="blue", label="Base", ls="", marker="D", markersize=11)
if len(results[i]) > 1 and ~np.isnan(results[i][1]) and ~np.isnan(results[i][2]):
ax.plot(results[i][1], results[i][2], color="red", label="Complete", ls="", marker="o", markersize=14)
ax.legend()
plt.savefig(path.replace(".csv", " (1).pdf"))
fig = plt.figure(figsize=(4.5 * nColumns, 6 * nRows))
plt.subplots_adjust(left=0.05, bottom=0.10, right=0.95, top=0.90, wspace=0.15, hspace=0.40)
for i in range(len(results)):
ax = fig.add_subplot(nRows, nColumns, i + 1)
ax.grid(True)
ax.set_title("\n".join(wrap(results[i][0], 45)))
ax.set_xlabel("Average Relative Error")
ax.set_ylabel("Time (s)")
ax.set_xlim([0, 1])
ax.set_ylim([0, 50])
if len(results[i]) > 4 and ~np.isnan(results[i][4]) and ~np.isnan(results[i][6]):
ax.plot(results[i][4], results[i][6] / 1000, color="blue", label="Base", ls="", marker="D", markersize=11)
if len(results[i]) > 1 and ~np.isnan(results[i][1]) and ~np.isnan(results[i][3]):
ax.plot(results[i][1], results[i][3] / 1000, color="red", label="Complete", ls="", marker="o", markersize=14)
ax.legend()
plt.savefig(path.replace(".csv", " (2).pdf"))
| 48.979592 | 117 | 0.629583 | 421 | 2,400 | 3.548694 | 0.256532 | 0.165997 | 0.080321 | 0.136546 | 0.804552 | 0.804552 | 0.804552 | 0.700134 | 0.700134 | 0.672021 | 0 | 0.060827 | 0.14375 | 2,400 | 48 | 118 | 50 | 0.66618 | 0 | 0 | 0.4 | 0 | 0 | 0.084167 | 0.01875 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bc85a3296a9ed2b6fd40dc0dbc3d583185ddf76e | 41,584 | py | Python | tests/plugins/test_quotas.py | vincentfretin/kinto | 34ff8093ab708eae55fc4eefa237fb420e808a6a | [
"Apache-2.0"
] | 4,618 | 2015-08-06T18:17:02.000Z | 2022-03-31T10:03:07.000Z | tests/plugins/test_quotas.py | vincentfretin/kinto | 34ff8093ab708eae55fc4eefa237fb420e808a6a | [
"Apache-2.0"
] | 1,578 | 2015-07-30T09:47:41.000Z | 2022-03-31T13:12:50.000Z | tests/plugins/test_quotas.py | vincentfretin/kinto | 34ff8093ab708eae55fc4eefa237fb420e808a6a | [
"Apache-2.0"
] | 546 | 2015-07-30T10:31:35.000Z | 2022-03-31T12:44:32.000Z | import unittest
from unittest import mock
import pytest
import transaction
from pyramid import testing
from kinto import main as kinto_main
from kinto.core.errors import ERRORS
from kinto.core.storage import Sort
from kinto.core.storage.exceptions import ObjectNotFoundError
from kinto.core.testing import FormattedErrorMixin, skip_if_no_statsd, sqlalchemy
from kinto.plugins.quotas import scripts
from kinto.plugins.quotas.listener import (
BUCKET_QUOTA_OBJECT_ID,
COLLECTION_QUOTA_OBJECT_ID,
QUOTA_RESOURCE_NAME,
)
from kinto.plugins.quotas.utils import record_size
from .. import support
class PluginSetup(unittest.TestCase):
@skip_if_no_statsd
def test_a_statsd_timer_is_used_for_quotas_if_configured(self):
settings = {
"statsd_url": "udp://127.0.0.1:8125",
"includes": "kinto.plugins.quotas",
}
config = testing.setUp(settings=settings)
with mock.patch("kinto.core.statsd.Client.timer") as mocked:
kinto_main(None, config=config)
mocked.assert_called_with("plugins.quotas")
class QuotaWebTest(support.BaseWebTest, unittest.TestCase):
bucket_uri = "/buckets/test"
collection_uri = "/buckets/test/collections/col"
record_uri = "/buckets/test/collections/col/records/rec"
group_uri = "/buckets/test/groups/grp"
@classmethod
def setUpClass(cls):
if not sqlalchemy:
raise unittest.SkipTest("postgresql is not installed.")
super().setUpClass()
def create_bucket(self):
resp = self.app.put(self.bucket_uri, headers=self.headers)
self.bucket = resp.json["data"]
def create_collection(self):
resp = self.app.put(self.collection_uri, headers=self.headers)
self.collection = resp.json["data"]
def create_group(self):
body = {"data": {"members": ["elle"]}}
resp = self.app.put_json(self.group_uri, body, headers=self.headers)
self.group = resp.json["data"]
def create_record(self):
body = {"data": {"foo": 42}}
resp = self.app.put_json(self.record_uri, body, headers=self.headers)
self.record = resp.json["data"]
@classmethod
def get_app_settings(cls, extras=None):
settings = super().get_app_settings(extras)
# Setup the postgresql backend for transaction support.
settings["storage_backend"] = "kinto.core.storage.postgresql"
db = "postgresql://postgres:postgres@localhost/testdb"
settings["storage_url"] = db
settings["permission_backend"] = "kinto.core.permission.postgresql"
settings["permission_url"] = db
settings["cache_backend"] = "kinto.core.cache.memory"
settings["includes"] = "kinto.plugins.quotas"
return settings
def assertStatsEqual(self, data, stats):
for key in stats:
assert data[key] == stats[key]
class HelloViewTest(QuotaWebTest):
def test_quota_capability_if_enabled(self):
resp = self.app.get("/")
capabilities = resp.json["capabilities"]
self.assertIn("quotas", capabilities)
class QuotaListenerTest(QuotaWebTest):
#
# Bucket
#
def test_quota_tracks_bucket_creation(self):
self.create_bucket()
self.create_collection()
self.create_record()
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(self.record)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
def test_tracks_bucket_attributes_update(self):
self.create_bucket()
self.create_collection()
self.create_record()
body = {"data": {"foo": "baz"}}
resp = self.app.patch_json(self.bucket_uri, body, headers=self.headers)
storage_size = record_size(resp.json["data"])
storage_size += record_size(self.collection)
storage_size += record_size(self.record)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
def test_delete_all_buckets_destroys_all_quota_entries(self):
self.app.put("/buckets/a", headers=self.headers)
self.app.put("/buckets/b", headers=self.headers)
self.app.delete("/buckets", headers=self.headers)
stored_in_backend = self.storage.list_all(
parent_id="/buckets/*", resource_name=QUOTA_RESOURCE_NAME
)
assert len(stored_in_backend) == 0
def test_bucket_delete_destroys_its_quota_entries(self):
self.create_bucket()
self.app.delete(self.bucket_uri, headers=self.headers)
stored_in_backend = self.storage.list_all(
parent_id="/buckets/test", resource_name=QUOTA_RESOURCE_NAME
)
assert len(stored_in_backend) == 0
def test_bucket_delete_doesnt_raise_if_quota_entries_do_not_exist(self):
self.create_bucket()
self.storage.delete(
parent_id="/buckets/test",
resource_name=QUOTA_RESOURCE_NAME,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
transaction.commit()
self.app.delete(self.bucket_uri, headers=self.headers)
#
# Collection
#
def test_stats_are_not_accessible_if_collection_does_not_exist(self):
self.create_bucket()
self.app.get(self.collection_uri, headers=self.headers, status=404)
def test_quota_tracks_collection_creation(self):
self.create_bucket()
self.create_collection()
# Bucket stats
storage_size = record_size(self.bucket) + record_size(self.collection)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 0, "storage_size": storage_size}
)
# Collection stats
storage_size = record_size(self.collection)
data = self.storage.get(
QUOTA_RESOURCE_NAME, self.collection_uri, COLLECTION_QUOTA_OBJECT_ID
)
self.assertStatsEqual(data, {"record_count": 0, "storage_size": storage_size})
def test_tracks_collection_attributes_update(self):
self.create_bucket()
self.create_collection()
body = {"data": {"foo": "baz"}}
resp = self.app.patch_json(self.collection_uri, body, headers=self.headers)
# Bucket stats
storage_size = record_size(self.bucket)
storage_size += record_size(resp.json["data"])
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 0, "storage_size": storage_size}
)
# Collection stats
storage_size -= record_size(self.bucket)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.collection_uri,
object_id=COLLECTION_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(data, {"record_count": 0, "storage_size": storage_size})
def test_tracks_collection_delete(self):
self.create_bucket()
self.create_collection()
body = {"data": {"foo": "baz"}}
self.app.patch_json(self.collection_uri, body, headers=self.headers)
self.app.delete(self.collection_uri, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data,
{"collection_count": 0, "record_count": 0, "storage_size": record_size(self.bucket)},
)
def test_collection_delete_destroys_its_quota_entries(self):
self.create_bucket()
self.create_collection()
self.app.delete(self.collection_uri, headers=self.headers)
stored_in_backend = self.storage.list_all(
parent_id=self.collection_uri, resource_name=QUOTA_RESOURCE_NAME
)
assert len(stored_in_backend) == 0
def test_collection_delete_doesnt_raise_if_quota_entries_dont_exist(self):
self.create_bucket()
self.create_collection()
self.storage.delete(
parent_id=self.collection_uri,
resource_name=QUOTA_RESOURCE_NAME,
object_id=COLLECTION_QUOTA_OBJECT_ID,
)
transaction.commit()
self.app.delete(self.collection_uri, headers=self.headers)
def test_tracks_collection_delete_with_multiple_records(self):
self.create_bucket()
self.create_collection()
body = {"data": {"foo": 42}}
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.delete(self.collection_uri, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data,
{"collection_count": 0, "record_count": 0, "storage_size": record_size(self.bucket)},
)
#
# Group
#
def test_quota_tracks_group_creation(self):
self.create_bucket()
self.create_group()
storage_size = record_size(self.bucket) + record_size(self.group)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 0, "record_count": 0, "storage_size": storage_size}
)
def test_tracks_group_attributes_update(self):
self.create_bucket()
self.create_group()
body = {"data": {"foo": "baz", "members": ["lui"]}}
resp = self.app.patch_json(self.group_uri, body, headers=self.headers)
storage_size = record_size(self.bucket)
storage_size += record_size(resp.json["data"])
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 0, "record_count": 0, "storage_size": storage_size}
)
def test_tracks_group_delete(self):
self.create_bucket()
self.create_group()
self.app.delete(self.group_uri, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data,
{"collection_count": 0, "record_count": 0, "storage_size": record_size(self.bucket)},
)
#
# Record
#
def test_quota_tracks_record_creation(self):
self.create_bucket()
self.create_collection()
self.create_record()
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(self.record)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
def test_tracks_record_attributes_update(self):
self.create_bucket()
self.create_collection()
self.create_record()
resp = self.app.patch_json(self.record_uri, {"data": {"foo": "baz"}}, headers=self.headers)
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(resp.json["data"])
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
def test_tracks_record_delete(self):
self.create_bucket()
self.create_collection()
self.create_record()
self.app.delete(self.record_uri, headers=self.headers)
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 0, "storage_size": storage_size}
)
def test_tracks_records_delete_with_multiple_records(self):
self.create_bucket()
self.create_collection()
body = {"data": {"foo": 42}}
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.post_json("{}/records".format(self.collection_uri), body, headers=self.headers)
self.app.delete("{}/records".format(self.collection_uri), headers=self.headers)
storage_size = record_size(self.bucket) + record_size(self.collection)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 0, "storage_size": storage_size}
)
def test_bulk_create(self):
body = {
"defaults": {"method": "POST", "path": "{}/records".format(self.collection_uri)},
"requests": [
{"path": self.bucket_uri, "method": "PUT"},
{"path": self.collection_uri, "method": "PUT"},
{"body": {"data": {"id": "a", "attr": 1}}},
{"body": {"data": {"id": "b", "attr": 2}}},
{"body": {"data": {"id": "c", "attr": 3}}},
],
}
self.app.post_json("/batch", body, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 3, "storage_size": 232}
)
def test_bulk_update(self):
body = {
"defaults": {"method": "POST", "path": "{}/collections".format(self.bucket_uri)},
"requests": [
{"path": self.bucket_uri, "method": "PUT"},
{"body": {"data": {"id": "a", "attr": 10}}},
{"body": {"data": {"id": "b", "attr": 200}}},
{"body": {"data": {"id": "c", "attr": 3000}}},
],
}
self.app.post_json("/batch", body, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 3, "record_count": 0, "storage_size": 196}
)
body = {
"defaults": {"method": "PUT"},
"requests": [
{
"path": "{}/collections/a".format(self.bucket_uri),
"body": {"data": {"attr": 100}},
},
{
"path": "{}/collections/b".format(self.bucket_uri),
"body": {"data": {"attr": 2000}},
},
{
"path": "{}/collections/c".format(self.bucket_uri),
"body": {"data": {"attr": 30000}},
},
],
}
self.app.post_json("/batch", body, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 3, "record_count": 0, "storage_size": 199}
)
def test_bulk_delete(self):
self.create_bucket()
self.create_collection()
self.create_record()
body = {
"defaults": {"method": "POST", "path": "{}/collections".format(self.bucket_uri)},
"requests": [
{"body": {"data": {"id": "a", "attr": 1}}},
{"body": {"data": {"id": "b", "attr": 2}}},
{"body": {"data": {"id": "c", "attr": 3}}},
],
}
self.app.post_json("/batch", body, headers=self.headers)
body = {
"defaults": {"method": "DELETE"},
"requests": [
{"path": "{}/collections/a".format(self.bucket_uri)},
{"path": "{}/collections/b".format(self.bucket_uri)},
{"path": "{}/collections/c".format(self.bucket_uri)},
{"path": self.collection_uri},
],
}
self.app.post_json("/batch", body, headers=self.headers)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data,
{"collection_count": 0, "record_count": 0, "storage_size": record_size(self.bucket)},
)
with pytest.raises(ObjectNotFoundError):
self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id="{}/collections/a".format(self.bucket_uri),
object_id=COLLECTION_QUOTA_OBJECT_ID,
)
with pytest.raises(ObjectNotFoundError):
self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id="{}/collections/b".format(self.bucket_uri),
object_id=COLLECTION_QUOTA_OBJECT_ID,
)
with pytest.raises(ObjectNotFoundError):
self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id="{}/collections/c".format(self.bucket_uri),
object_id=COLLECTION_QUOTA_OBJECT_ID,
)
with pytest.raises(ObjectNotFoundError):
self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.collection_uri,
object_id=COLLECTION_QUOTA_OBJECT_ID,
)
class QuotaBucketRecordMixin:
def test_507_is_raised_if_quota_exceeded_on_record_creation(self):
self.create_bucket()
self.create_collection()
self.create_record()
body = {"data": {"foo": 42}}
resp = self.app.post_json(
"{}/records".format(self.collection_uri), body, headers=self.headers, status=507
)
# Check that the storage was not updated.
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(self.record)
self.assertFormattedError(
resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
# Check that the record wasn't created
resp = self.app.get("{}/records".format(self.collection_uri), headers=self.headers)
assert len(resp.json["data"]) == 1
class QuotaBucketUpdateMixin:
def test_507_is_raised_if_quota_exceeded_on_record_update(self):
self.create_bucket()
self.create_collection()
self.create_record()
body = {"data": {"foo": 42, "bar": "This is a very long string."}}
resp = self.app.patch_json(self.record_uri, body, headers=self.headers, status=507)
# Check that the storage was not updated.
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(self.record)
self.assertFormattedError(
resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
def test_507_is_raised_if_quota_exceeded_on_collection_update(self):
self.create_bucket()
self.create_collection()
self.create_record()
body = {"data": {"foo": 42, "bar": "This is a very long string."}}
resp = self.app.patch_json(self.collection_uri, body, headers=self.headers, status=507)
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(self.record)
self.assertFormattedError(
resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
)
def test_507_is_raised_if_quota_exceeded_on_group_update(self):
self.create_bucket()
self.create_collection()
body = {"data": {"members": []}}
resp = self.app.put_json(self.group_uri, body, headers=self.headers)
group = resp.json["data"]
body = {
"data": {"members": ["elle", "lui", "je", "tu", "il", "nous", "vous", "ils", "elles"]}
}
resp = self.app.put_json(self.group_uri, body, headers=self.headers, status=507)
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
storage_size += record_size(group)
self.assertFormattedError(
resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 0, "storage_size": storage_size}
)
def test_507_is_not_raised_if_quota_exceeded_on_record_delete(self):
self.create_bucket()
self.create_collection()
self.create_record()
self.app.delete(self.record_uri, headers=self.headers)
# Check that the storage was not updated.
storage_size = record_size(self.bucket)
storage_size += record_size(self.collection)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 1, "record_count": 0, "storage_size": storage_size}
)
def test_507_is_not_raised_if_quota_exceeded_on_collection_delete(self):
self.create_bucket()
self.create_collection()
# fake the quota to the Max
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
data["storage_size"] = 140
self.storage.update(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
obj=data,
)
transaction.commit()
self.app.delete(self.collection_uri, headers=self.headers)
storage_size = 140
storage_size -= record_size(self.collection)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 0, "record_count": 0, "storage_size": storage_size}
)
def test_507_is_raised_if_quota_exceeded_on_group_delete(self):
self.create_bucket()
body = {"data": {"members": []}}
resp = self.app.put_json(self.group_uri, body, headers=self.headers)
group = resp.json["data"]
# fake the quota to the Max
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
data["storage_size"] = 140
self.storage.update(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
obj=data,
)
transaction.commit()
self.app.delete(self.group_uri, headers=self.headers)
storage_size = 140
storage_size -= record_size(group)
data = self.storage.get(
resource_name=QUOTA_RESOURCE_NAME,
parent_id=self.bucket_uri,
object_id=BUCKET_QUOTA_OBJECT_ID,
)
self.assertStatsEqual(
data, {"collection_count": 0, "record_count": 0, "storage_size": storage_size}
)


class QuotaBucketMixin:
    def test_507_is_raised_if_quota_exceeded_on_collection_creation(self):
        self.create_bucket()
        self.create_collection()
        self.create_record()
        body = {"data": {"foo": 42}}
        resp = self.app.post_json(
            "{}/collections".format(self.bucket_uri), body, headers=self.headers, status=507
        )
        storage_size = record_size(self.bucket)
        storage_size += record_size(self.collection)
        storage_size += record_size(self.record)
        self.assertFormattedError(
            resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
        )
        data = self.storage.get(
            resource_name=QUOTA_RESOURCE_NAME,
            parent_id=self.bucket_uri,
            object_id=BUCKET_QUOTA_OBJECT_ID,
        )
        self.assertStatsEqual(
            data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
        )
        # Check that the collection wasn't created
        resp = self.app.get("{}/collections".format(self.bucket_uri), headers=self.headers)
        assert len(resp.json["data"]) == 1

    def test_507_is_raised_if_quota_exceeded_on_group_creation(self):
        self.create_bucket()
        self.create_collection()
        self.create_record()
        body = {"data": {"members": ["elle"]}}
        resp = self.app.put_json(self.group_uri, body, headers=self.headers, status=507)
        storage_size = record_size(self.bucket)
        storage_size += record_size(self.collection)
        storage_size += record_size(self.record)
        self.assertFormattedError(
            resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
        )
        data = self.storage.get(
            resource_name=QUOTA_RESOURCE_NAME,
            parent_id=self.bucket_uri,
            object_id=BUCKET_QUOTA_OBJECT_ID,
        )
        self.assertStatsEqual(
            data, {"collection_count": 1, "record_count": 1, "storage_size": storage_size}
        )
        # Check that the group wasn't created
        resp = self.app.get("{}/groups".format(self.bucket_uri), headers=self.headers)
        assert len(resp.json["data"]) == 0


class QuotaMaxBytesExceededSettingsListenerTest(
    FormattedErrorMixin,
    QuotaBucketRecordMixin,
    QuotaBucketUpdateMixin,
    QuotaBucketMixin,
    QuotaWebTest,
):
    error_message = "Bucket maximum total size exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.bucket_max_bytes"] = "150"
        return settings


class QuotaMaxBytesExceededBucketSettingsListenerTest(
    FormattedErrorMixin,
    QuotaBucketRecordMixin,
    QuotaBucketUpdateMixin,
    QuotaBucketMixin,
    QuotaWebTest,
):
    error_message = "Bucket maximum total size exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.bucket_test_max_bytes"] = "150"
        return settings


class QuotaMaxItemsExceededSettingsListenerTest(
    FormattedErrorMixin, QuotaBucketRecordMixin, QuotaWebTest
):
    error_message = "Bucket maximum number of objects exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.bucket_max_items"] = "1"
        return settings


class QuotaMaxItemsExceededBucketSettingsListenerTest(
    FormattedErrorMixin, QuotaBucketRecordMixin, QuotaWebTest
):
    error_message = "Bucket maximum number of objects exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.bucket_test_max_items"] = "1"
        return settings


class QuotaMaxBytesPerItemExceededListenerTest(
    FormattedErrorMixin,
    QuotaBucketRecordMixin,
    QuotaBucketUpdateMixin,
    QuotaBucketMixin,
    QuotaWebTest,
):
    error_message = "Maximum bytes per object exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.bucket_max_bytes_per_item"] = "55"
        return settings


class QuotaMaxBytesPerItemExceededBucketListenerTest(
    FormattedErrorMixin,
    QuotaBucketRecordMixin,
    QuotaBucketUpdateMixin,
    QuotaBucketMixin,
    QuotaWebTest,
):
    error_message = "Maximum bytes per object exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.bucket_test_max_bytes_per_item"] = "55"
        return settings
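The listener test classes above configure quotas through string-valued settings: a generic key (`quotas.bucket_max_bytes`) and a bucket-specific key (`quotas.bucket_test_max_bytes`, where `test` is the bucket id). A hedged sketch of how such INI-style settings are presumably resolved, with the specific key taking precedence — the resolution order and helper name here are inferred from the test settings, not verified against the plugin:

```python
def resolve_quota(settings, bucket_id, suffix, default=None):
    # Prefer the bucket-specific key, fall back to the generic one.
    # Settings come from an INI file, so values are strings and must be cast.
    specific = "quotas.bucket_{}_{}".format(bucket_id, suffix)
    generic = "quotas.bucket_{}".format(suffix)
    raw = settings.get(specific, settings.get(generic, default))
    return int(raw) if raw is not None else None


settings = {"quotas.bucket_test_max_bytes": "150"}
print(resolve_quota(settings, "test", "max_bytes"))
```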


class QuotaCollectionMixin:
    def test_507_is_raised_if_quota_exceeded_on_record_creation(self):
        self.create_bucket()
        self.create_collection()
        self.create_record()
        body = {"data": {"foo": 42}}
        resp = self.app.post_json(
            "{}/records".format(self.collection_uri), body, headers=self.headers, status=507
        )
        # Check that the storage was not updated.
        storage_size = record_size(self.collection)
        storage_size += record_size(self.record)
        self.assertFormattedError(
            resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
        )
        data = self.storage.get(
            QUOTA_RESOURCE_NAME, self.collection_uri, COLLECTION_QUOTA_OBJECT_ID
        )
        self.assertStatsEqual(data, {"record_count": 1, "storage_size": storage_size})


class QuotaCollectionUpdateMixin:
    def test_507_is_raised_if_quota_exceeded_on_record_update(self):
        self.create_bucket()
        self.create_collection()
        self.create_record()
        body = {"data": {"foo": 42, "bar": "This is a very long string."}}
        resp = self.app.patch_json(self.record_uri, body, headers=self.headers, status=507)
        # Check that the storage was not updated.
        storage_size = record_size(self.collection)
        storage_size += record_size(self.record)
        self.assertFormattedError(
            resp, 507, ERRORS.FORBIDDEN, "Insufficient Storage", self.error_message
        )
        data = self.storage.get(
            QUOTA_RESOURCE_NAME, self.collection_uri, COLLECTION_QUOTA_OBJECT_ID
        )
        self.assertStatsEqual(data, {"record_count": 1, "storage_size": storage_size})

    def test_507_is_not_raised_if_quota_exceeded_on_record_delete(self):
        self.create_bucket()
        self.create_collection()
        self.create_record()
        self.app.delete(self.record_uri, headers=self.headers)
        # Check that the storage was updated.
        storage_size = record_size(self.collection)
        data = self.storage.get(
            QUOTA_RESOURCE_NAME, self.collection_uri, COLLECTION_QUOTA_OBJECT_ID
        )
        self.assertStatsEqual(data, {"record_count": 0, "storage_size": storage_size})


class QuotaMaxBytesExceededCollectionSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaCollectionUpdateMixin, QuotaWebTest
):
    error_message = "Collection maximum size exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_max_bytes"] = "100"
        return settings


class QuotaMaxBytesExceededCollectionBucketSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaCollectionUpdateMixin, QuotaWebTest
):
    error_message = "Collection maximum size exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_test_max_bytes"] = "100"
        return settings


class QuotaMaxBytesExceededBucketCollectionSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaCollectionUpdateMixin, QuotaWebTest
):
    error_message = "Collection maximum size exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_test_col_max_bytes"] = "100"
        return settings


class QuotaMaxItemsExceededCollectionSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaWebTest
):
    error_message = "Collection maximum number of objects exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_max_items"] = "1"
        return settings


class QuotaMaxItemsExceededCollectionBucketSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaWebTest
):
    error_message = "Collection maximum number of objects exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_test_max_items"] = "1"
        return settings


class QuotaMaxItemsExceededBucketCollectionSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaWebTest
):
    error_message = "Collection maximum number of objects exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_test_col_max_items"] = "1"
        return settings


class QuotaMaxBytesPerItemExceededCollectionSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaWebTest
):
    error_message = "Maximum bytes per object exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_max_bytes_per_item"] = "80"
        return settings


class QuotaMaxBytesPerItemExceededCollectionBucketSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaCollectionUpdateMixin, QuotaWebTest
):
    error_message = "Maximum bytes per object exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_test_max_bytes_per_item"] = "80"
        return settings


class QuotaMaxBytesPerItemExceededBucketCollectionSettingsListenerTest(
    FormattedErrorMixin, QuotaCollectionMixin, QuotaCollectionUpdateMixin, QuotaWebTest
):
    error_message = "Maximum bytes per object exceeded "

    @classmethod
    def get_app_settings(cls, extras=None):
        settings = super().get_app_settings(extras)
        settings["quotas.collection_test_col_max_bytes_per_item"] = "80"
        return settings


class QuotasScriptsTest(unittest.TestCase):
    OLDEST_FIRST = Sort("last_modified", 1)
    BATCH_SIZE = 25

    def setUp(self):
        self.storage = mock.Mock()

    def test_rebuild_quotas_updates_records(self):
        paginated_data = [
            # get buckets
            iter([{"id": "bucket-1", "last_modified": 10}]),
            # get collections for first bucket
            iter(
                [
                    {"id": "collection-1", "last_modified": 100},
                    {"id": "collection-2", "last_modified": 200},
                ]
            ),
            # get records for first collection
            iter([{"id": "record-1", "last_modified": 110}]),
            # get records for second collection
            iter([{"id": "record-1b", "last_modified": 210}]),
        ]

        def paginated_mock(*args, **kwargs):
            return paginated_data.pop(0)

        with mock.patch("kinto.plugins.quotas.scripts.logger") as mocked_logger:
            with mock.patch(
                "kinto.plugins.quotas.scripts.paginated", side_effect=paginated_mock
            ) as mocked_paginated:
                scripts.rebuild_quotas(self.storage)

        mocked_paginated.assert_any_call(
            self.storage, resource_name="bucket", parent_id="", sorting=[self.OLDEST_FIRST]
        )
        mocked_paginated.assert_any_call(
            self.storage,
            resource_name="collection",
            parent_id="/buckets/bucket-1",
            sorting=[self.OLDEST_FIRST],
        )
        mocked_paginated.assert_any_call(
            self.storage,
            resource_name="record",
            parent_id="/buckets/bucket-1/collections/collection-1",
            sorting=[self.OLDEST_FIRST],
        )
        mocked_paginated.assert_any_call(
            self.storage,
            resource_name="record",
            parent_id="/buckets/bucket-1/collections/collection-2",
            sorting=[self.OLDEST_FIRST],
        )
        self.storage.update.assert_any_call(
            resource_name="quota",
            parent_id="/buckets/bucket-1",
            object_id="bucket_info",
            obj={"record_count": 2, "storage_size": 193, "collection_count": 2},
        )
        self.storage.update.assert_any_call(
            resource_name="quota",
            parent_id="/buckets/bucket-1/collections/collection-1",
            object_id="collection_info",
            obj={"record_count": 1, "storage_size": 78},
        )
        self.storage.update.assert_any_call(
            resource_name="quota",
            parent_id="/buckets/bucket-1/collections/collection-2",
            object_id="collection_info",
            obj={"record_count": 1, "storage_size": 79},
        )
        mocked_logger.info.assert_any_call(
            "Bucket bucket-1, collection collection-1. Final size: 1 records, 78 bytes."
        )
        mocked_logger.info.assert_any_call(
            "Bucket bucket-1, collection collection-2. Final size: 1 records, 79 bytes."
        )
        mocked_logger.info.assert_any_call(
            "Bucket bucket-1. Final size: 2 collections, 2 records, 193 bytes."
        )

    def test_rebuild_quotas_doesnt_update_if_dry_run(self):
        paginated_data = [
            # get buckets
            iter([{"id": "bucket-1", "last_modified": 10}]),
            # get collections for first bucket
            iter([{"id": "collection-1", "last_modified": 100}]),
            # get records for first collection
            iter([{"id": "record-1", "last_modified": 110}]),
        ]

        def paginated_mock(*args, **kwargs):
            return paginated_data.pop(0)

        with mock.patch("kinto.plugins.quotas.scripts.logger") as mocked:
            with mock.patch(
                "kinto.plugins.quotas.scripts.paginated", side_effect=paginated_mock
            ):
                scripts.rebuild_quotas(self.storage, dry_run=True)

        assert not self.storage.update.called
        mocked.info.assert_any_call(
            "Bucket bucket-1, collection collection-1. Final size: 1 records, 78 bytes."
        )
        mocked.info.assert_any_call(
            "Bucket bucket-1. Final size: 1 collections, 1 records, 114 bytes."
        )
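The mocks above feed `rebuild_quotas` one bucket, two collections, and one record per collection, then check the per-collection and rolled-up bucket stats. A self-contained sketch of the traversal those mocks simulate — toy data with sizes taken as JSON-serialized lengths; this is an illustration of the aggregation shape, not the plugin's actual implementation:

```python
import json


def rebuild(buckets):
    # buckets: {bucket_id: {collection_id: [record, ...]}}
    # Returns per-collection and per-bucket stats, rolled up bottom-up.
    stats = {}
    for bucket_id, collections in buckets.items():
        bucket_records = 0
        bucket_size = 0
        for collection_id, records in collections.items():
            col_size = sum(len(json.dumps(r)) for r in records)
            stats[(bucket_id, collection_id)] = {
                "record_count": len(records),
                "storage_size": col_size,
            }
            bucket_records += len(records)
            bucket_size += col_size
        stats[bucket_id] = {
            "collection_count": len(collections),
            "record_count": bucket_records,
            "storage_size": bucket_size,
        }
    return stats


print(rebuild({"bucket-1": {"collection-1": [{"id": "record-1"}]}}))
```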
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import authentication_key
import md5_authentication
import database_filter
import bfd
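Everything below is pyangbind-generated: each YANG leaf becomes a private attribute wrapped in `YANGDynClass`, exposed through `_get_*`/`_set_*`/`_unset_*` methods that enforce the YANG type restrictions. A minimal hand-written sketch of that accessor pattern, using a hypothetical class and a single leaf (not part of the generated module):

```python
class LeafSketch(object):
    # Mimics the generated pattern for a uint leaf restricted to 1..65535,
    # like the 'cost' leaf below, but without pyangbind machinery.
    def __init__(self):
        self.__cost = None

    def _get_cost(self):
        return self.__cost

    def _set_cost(self, v):
        # The generated setters raise ValueError on restriction violations.
        if not isinstance(v, int) or not 1 <= v <= 65535:
            raise ValueError("cost must be a uint in 1..65535")
        self.__cost = v

    def _unset_cost(self):
        self.__cost = None
```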
class ospf1(PybindBase):
  """
  This class was auto-generated by the PythonClass plugin for PYANG
  from YANG module brocade-common-def - based on the path /routing-system/interface/ve/ip/interface-vlan-ospf-conf/ospf1. Each member element of
  the container is represented as a class variable - with a specific
  YANG type.
  """
  __slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__area','__auth_change_wait_time','__authentication_key','__hello_interval','__dead_interval','__retransmit_interval','__transmit_delay','__md5_authentication','__cost','__network','__intf_ldp_sync','__database_filter','__mtu_ignore','__active','__passive','__priority','__bfd',)

  _yang_name = 'ospf1'
  _rest_name = 'ospf'

  _pybind_generated_by = 'container'
  def __init__(self, *args, **kwargs):

    path_helper_ = kwargs.pop("path_helper", None)
    if path_helper_ is False:
      self._path_helper = False
    elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
      self._path_helper = path_helper_
    elif hasattr(self, "_parent"):
      path_helper_ = getattr(self._parent, "_path_helper", False)
      self._path_helper = path_helper_
    else:
      self._path_helper = False

    extmethods = kwargs.pop("extmethods", None)
    if extmethods is False:
      self._extmethods = False
    elif extmethods is not None and isinstance(extmethods, dict):
      self._extmethods = extmethods
    elif hasattr(self, "_parent"):
      extmethods = getattr(self._parent, "_extmethods", None)
      self._extmethods = extmethods
    else:
      self._extmethods = False

    self.__priority = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..255']}), is_leaf=True, yang_name="priority", rest_name="priority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Router priority'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)
    self.__retransmit_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state\n advertisements'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
    self.__auth_change_wait_time = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..14400']}), is_leaf=True, yang_name="auth-change-wait-time", rest_name="auth-change-wait-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Authentication Change Wait time (MD5 and non MD5)'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
    self.__network = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'broadcast': {'value': 1}, u'non-broadcast': {'value': 2}, u'point-to-point': {'value': 3}},), is_leaf=True, yang_name="network", rest_name="network", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface type'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)
    self.__area = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'((([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.)(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){2}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5]))|(([0-9])|([1-9]([0-9]{1,8}))|([1]([0-9]{1,9}))|([2][0]([0-9]{1,8}))|([2][1][0-3]([0-9]{1,7}))|([2][1][4][0-6]([0-9]{1,6}))|([2][1][4][7][0-3]([0-9]{1,5}))|([2][1][4][7][4][0-7]([0-9]{1,4}))|([2][1][4][7][4][8][0-2]([0-9]{1,3}))|([2][1][4][7][4][8][3][0-5]([0-9]{1,2}))|([2][1][4][7][4][8][3][6][0-3][0-9])|([2][1][4][7][4][8][3][6][4][0-7]))'}), is_leaf=True, yang_name="area", rest_name="area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'OSPF areas'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='ospf-area-id', is_config=True)
    self.__mtu_ignore = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="mtu-ignore", rest_name="mtu-ignore", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'To disable OSPF MTU mismatch detection'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
    self.__md5_authentication = YANGDynClass(base=md5_authentication.md5_authentication, is_container='container', presence=False, yang_name="md5-authentication", rest_name="md5-authentication", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MD5 authentication parameters', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
    self.__intf_ldp_sync = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'enable': {'value': 1}, u'disable': {'value': 2}},), is_leaf=True, yang_name="intf-ldp-sync", rest_name="ldp-sync", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Set LDP-SYNC operation mode on this interface', u'cli-full-no': None, u'alt-name': u'ldp-sync'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)
    self.__passive = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="passive", rest_name="passive", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Passive information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
    self.__dead_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(40), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
    self.__hello_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(10), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
    self.__active = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="active", rest_name="active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Active information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
    self.__bfd = YANGDynClass(base=bfd.bfd, is_container='container', presence=False, yang_name="bfd", rest_name="bfd", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set BFD operation mode on this interface', u'hidden': u'full'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
    self.__authentication_key = YANGDynClass(base=authentication_key.authentication_key, is_container='container', presence=False, yang_name="authentication-key", rest_name="authentication-key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Authentication password (key)', u'cli-full-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
    self.__database_filter = YANGDynClass(base=database_filter.database_filter, is_container='container', presence=False, yang_name="database-filter", rest_name="database-filter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Filter OSPF LSA during synchronization and flooding', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
    self.__transmit_delay = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
    self.__cost = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="cost", rest_name="cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface cost', u'cli-trim-default': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)

    load = kwargs.pop("load", None)
    if args:
      if len(args) > 1:
        raise TypeError("cannot create a YANG container with >1 argument")
      all_attr = True
      for e in self._pyangbind_elements:
        if not hasattr(args[0], e):
          all_attr = False
          break
      if not all_attr:
        raise ValueError("Supplied object did not have the correct attributes")
      for e in self._pyangbind_elements:
        nobj = getattr(args[0], e)
        if nobj._changed() is False:
          continue
        setmethod = getattr(self, "_set_%s" % e)
        if load is None:
          setmethod(getattr(args[0], e))
        else:
          setmethod(getattr(args[0], e), load=load)
  def _path(self):
    if hasattr(self, "_parent"):
      return self._parent._path()+[self._yang_name]
    else:
      return [u'routing-system', u'interface', u've', u'ip', u'interface-vlan-ospf-conf', u'ospf1']

  def _rest_path(self):
    if hasattr(self, "_parent"):
      if self._rest_name:
        return self._parent._rest_path()+[self._rest_name]
      else:
        return self._parent._rest_path()
    else:
      return [u'interface', u'Ve', u'ip', u'ospf']
  def _get_area(self):
    """
    Getter method for area, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/area (ospf-area-id)
    """
    return self.__area

  def _set_area(self, v, load=False):
    """
    Setter method for area, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/area (ospf-area-id)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_area is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_area() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'((([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.)(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){2}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5]))|(([0-9])|([1-9]([0-9]{1,8}))|([1]([0-9]{1,9}))|([2][0]([0-9]{1,8}))|([2][1][0-3]([0-9]{1,7}))|([2][1][4][0-6]([0-9]{1,6}))|([2][1][4][7][0-3]([0-9]{1,5}))|([2][1][4][7][4][0-7]([0-9]{1,4}))|([2][1][4][7][4][8][0-2]([0-9]{1,3}))|([2][1][4][7][4][8][3][0-5]([0-9]{1,2}))|([2][1][4][7][4][8][3][6][0-3][0-9])|([2][1][4][7][4][8][3][6][4][0-7]))'}), is_leaf=True, yang_name="area", rest_name="area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'OSPF areas'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='ospf-area-id', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """area must be of a type compatible with ospf-area-id""",
          'defined-type': "brocade-ospf:ospf-area-id",
          'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'((([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.)(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){2}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5]))|(([0-9])|([1-9]([0-9]{1,8}))|([1]([0-9]{1,9}))|([2][0]([0-9]{1,8}))|([2][1][0-3]([0-9]{1,7}))|([2][1][4][0-6]([0-9]{1,6}))|([2][1][4][7][0-3]([0-9]{1,5}))|([2][1][4][7][4][0-7]([0-9]{1,4}))|([2][1][4][7][4][8][0-2]([0-9]{1,3}))|([2][1][4][7][4][8][3][0-5]([0-9]{1,2}))|([2][1][4][7][4][8][3][6][0-3][0-9])|([2][1][4][7][4][8][3][6][4][0-7]))'}), is_leaf=True, yang_name="area", rest_name="area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'OSPF areas'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='ospf-area-id', is_config=True)""",
        })

    self.__area = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_area(self):
    self.__area = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'((([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.)(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){2}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5]))|(([0-9])|([1-9]([0-9]{1,8}))|([1]([0-9]{1,9}))|([2][0]([0-9]{1,8}))|([2][1][0-3]([0-9]{1,7}))|([2][1][4][0-6]([0-9]{1,6}))|([2][1][4][7][0-3]([0-9]{1,5}))|([2][1][4][7][4][0-7]([0-9]{1,4}))|([2][1][4][7][4][8][0-2]([0-9]{1,3}))|([2][1][4][7][4][8][3][0-5]([0-9]{1,2}))|([2][1][4][7][4][8][3][6][0-3][0-9])|([2][1][4][7][4][8][3][6][4][0-7]))'}), is_leaf=True, yang_name="area", rest_name="area", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'OSPF areas'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='ospf-area-id', is_config=True)

  def _get_auth_change_wait_time(self):
    """
    Getter method for auth_change_wait_time, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/auth_change_wait_time (common-def:time-interval-sec)
    """
    return self.__auth_change_wait_time

  def _set_auth_change_wait_time(self, v, load=False):
    """
    Setter method for auth_change_wait_time, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/auth_change_wait_time (common-def:time-interval-sec)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_auth_change_wait_time is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_auth_change_wait_time() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..14400']}), is_leaf=True, yang_name="auth-change-wait-time", rest_name="auth-change-wait-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Authentication Change Wait time (MD5 and non MD5)'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """auth_change_wait_time must be of a type compatible with common-def:time-interval-sec""",
          'defined-type': "common-def:time-interval-sec",
          'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..14400']}), is_leaf=True, yang_name="auth-change-wait-time", rest_name="auth-change-wait-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Authentication Change Wait time (MD5 and non MD5)'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)""",
        })

    self.__auth_change_wait_time = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_auth_change_wait_time(self):
    self.__auth_change_wait_time = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..14400']}), is_leaf=True, yang_name="auth-change-wait-time", rest_name="auth-change-wait-time", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Authentication Change Wait time (MD5 and non MD5)'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
def _get_authentication_key(self):
"""
Getter method for authentication_key, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/authentication_key (container)
"""
return self.__authentication_key
def _set_authentication_key(self, v, load=False):
"""
Setter method for authentication_key, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/authentication_key (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_authentication_key is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_authentication_key() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=authentication_key.authentication_key, is_container='container', presence=False, yang_name="authentication-key", rest_name="authentication-key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Authentication password (key)', u'cli-full-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """authentication_key must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=authentication_key.authentication_key, is_container='container', presence=False, yang_name="authentication-key", rest_name="authentication-key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Authentication password (key)', u'cli-full-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)""",
})
self.__authentication_key = t
if hasattr(self, '_set'):
self._set()
def _unset_authentication_key(self):
self.__authentication_key = YANGDynClass(base=authentication_key.authentication_key, is_container='container', presence=False, yang_name="authentication-key", rest_name="authentication-key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Authentication password (key)', u'cli-full-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
def _get_hello_interval(self):
"""
Getter method for hello_interval, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/hello_interval (common-def:time-interval-sec)
"""
return self.__hello_interval
def _set_hello_interval(self, v, load=False):
"""
Setter method for hello_interval, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/hello_interval (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_hello_interval is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_hello_interval() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(10), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """hello_interval must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(10), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__hello_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_hello_interval(self):
self.__hello_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(10), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
def _get_dead_interval(self):
"""
Getter method for dead_interval, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/dead_interval (common-def:time-interval-sec)
"""
return self.__dead_interval
def _set_dead_interval(self, v, load=False):
"""
Setter method for dead_interval, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/dead_interval (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_dead_interval is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_dead_interval() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(40), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """dead_interval must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(40), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__dead_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_dead_interval(self):
self.__dead_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32)(40), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
def _get_retransmit_interval(self):
"""
Getter method for retransmit_interval, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/retransmit_interval (common-def:time-interval-sec)
"""
return self.__retransmit_interval
def _set_retransmit_interval(self, v, load=False):
"""
Setter method for retransmit_interval, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/retransmit_interval (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_retransmit_interval is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_retransmit_interval() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state\n advertisements'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """retransmit_interval must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state\n advertisements'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__retransmit_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_retransmit_interval(self):
self.__retransmit_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state\n advertisements'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
def _get_transmit_delay(self):
"""
Getter method for transmit_delay, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/transmit_delay (common-def:time-interval-sec)
"""
return self.__transmit_delay
def _set_transmit_delay(self, v, load=False):
"""
Setter method for transmit_delay, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/transmit_delay (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_transmit_delay is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_transmit_delay() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """transmit_delay must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__transmit_delay = t
if hasattr(self, '_set'):
self._set()
def _unset_transmit_delay(self):
self.__transmit_delay = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='common-def:time-interval-sec', is_config=True)
def _get_md5_authentication(self):
"""
Getter method for md5_authentication, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/md5_authentication (container)
"""
return self.__md5_authentication
def _set_md5_authentication(self, v, load=False):
"""
Setter method for md5_authentication, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/md5_authentication (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_md5_authentication is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_md5_authentication() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=md5_authentication.md5_authentication, is_container='container', presence=False, yang_name="md5-authentication", rest_name="md5-authentication", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MD5 authentication parameters', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """md5_authentication must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=md5_authentication.md5_authentication, is_container='container', presence=False, yang_name="md5-authentication", rest_name="md5-authentication", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MD5 authentication parameters', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)""",
})
self.__md5_authentication = t
if hasattr(self, '_set'):
self._set()
def _unset_md5_authentication(self):
self.__md5_authentication = YANGDynClass(base=md5_authentication.md5_authentication, is_container='container', presence=False, yang_name="md5-authentication", rest_name="md5-authentication", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'MD5 authentication parameters', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
def _get_cost(self):
"""
Getter method for cost, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/cost (uint32)
"""
return self.__cost
def _set_cost(self, v, load=False):
"""
Setter method for cost, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/cost (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_cost is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_cost() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="cost", rest_name="cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface cost', u'cli-trim-default': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """cost must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="cost", rest_name="cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface cost', u'cli-trim-default': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)""",
})
self.__cost = t
if hasattr(self, '_set'):
self._set()
def _unset_cost(self):
self.__cost = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="cost", rest_name="cost", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface cost', u'cli-trim-default': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)
def _get_network(self):
"""
Getter method for network, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/network (enumeration)
"""
return self.__network
def _set_network(self, v, load=False):
"""
Setter method for network, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/network (enumeration)
If this variable is read-only (config: false) in the
source YANG file, then _set_network is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_network() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'broadcast': {'value': 1}, u'non-broadcast': {'value': 2}, u'point-to-point': {'value': 3}},), is_leaf=True, yang_name="network", rest_name="network", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface type'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """network must be of a type compatible with enumeration""",
'defined-type': "brocade-ospf:enumeration",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'broadcast': {'value': 1}, u'non-broadcast': {'value': 2}, u'point-to-point': {'value': 3}},), is_leaf=True, yang_name="network", rest_name="network", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface type'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)""",
})
self.__network = t
if hasattr(self, '_set'):
self._set()
def _unset_network(self):
self.__network = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'broadcast': {'value': 1}, u'non-broadcast': {'value': 2}, u'point-to-point': {'value': 3}},), is_leaf=True, yang_name="network", rest_name="network", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interface type'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)
def _get_intf_ldp_sync(self):
"""
Getter method for intf_ldp_sync, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/intf_ldp_sync (enumeration)
"""
return self.__intf_ldp_sync
def _set_intf_ldp_sync(self, v, load=False):
"""
Setter method for intf_ldp_sync, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/intf_ldp_sync (enumeration)
If this variable is read-only (config: false) in the
source YANG file, then _set_intf_ldp_sync is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_intf_ldp_sync() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'enable': {'value': 1}, u'disable': {'value': 2}},), is_leaf=True, yang_name="intf-ldp-sync", rest_name="ldp-sync", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Set LDP-SYNC operation mode on this interface', u'cli-full-no': None, u'alt-name': u'ldp-sync'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """intf_ldp_sync must be of a type compatible with enumeration""",
'defined-type': "brocade-ospf:enumeration",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'enable': {'value': 1}, u'disable': {'value': 2}},), is_leaf=True, yang_name="intf-ldp-sync", rest_name="ldp-sync", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Set LDP-SYNC operation mode on this interface', u'cli-full-no': None, u'alt-name': u'ldp-sync'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)""",
})
self.__intf_ldp_sync = t
if hasattr(self, '_set'):
self._set()
def _unset_intf_ldp_sync(self):
self.__intf_ldp_sync = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'enable': {'value': 1}, u'disable': {'value': 2}},), is_leaf=True, yang_name="intf-ldp-sync", rest_name="ldp-sync", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Set LDP-SYNC operation mode on this interface', u'cli-full-no': None, u'alt-name': u'ldp-sync'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='enumeration', is_config=True)
def _get_database_filter(self):
"""
Getter method for database_filter, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/database_filter (container)
"""
return self.__database_filter
def _set_database_filter(self, v, load=False):
"""
Setter method for database_filter, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/database_filter (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_database_filter is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_database_filter() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=database_filter.database_filter, is_container='container', presence=False, yang_name="database-filter", rest_name="database-filter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Filter OSPF LSA during synchronization and flooding', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """database_filter must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=database_filter.database_filter, is_container='container', presence=False, yang_name="database-filter", rest_name="database-filter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Filter OSPF LSA during synchronization and flooding', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)""",
})
self.__database_filter = t
if hasattr(self, '_set'):
self._set()
def _unset_database_filter(self):
self.__database_filter = YANGDynClass(base=database_filter.database_filter, is_container='container', presence=False, yang_name="database-filter", rest_name="database-filter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Filter OSPF LSA during synchronization and flooding', u'cli-incomplete-no': None, u'cli-incomplete-command': None}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
def _get_mtu_ignore(self):
"""
Getter method for mtu_ignore, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/mtu_ignore (empty)
"""
return self.__mtu_ignore
def _set_mtu_ignore(self, v, load=False):
"""
Setter method for mtu_ignore, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/mtu_ignore (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_mtu_ignore is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_mtu_ignore() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="mtu-ignore", rest_name="mtu-ignore", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'To disable OSPF MTU mismatch detection'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """mtu_ignore must be of a type compatible with empty""",
'defined-type': "empty",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="mtu-ignore", rest_name="mtu-ignore", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'To disable OSPF MTU mismatch detection'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)""",
})
self.__mtu_ignore = t
if hasattr(self, '_set'):
self._set()
def _unset_mtu_ignore(self):
self.__mtu_ignore = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="mtu-ignore", rest_name="mtu-ignore", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'To disable OSPF MTU mismatch detection'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
def _get_active(self):
"""
Getter method for active, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/active (empty)
"""
return self.__active
def _set_active(self, v, load=False):
"""
Setter method for active, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/active (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_active is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_active() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="active", rest_name="active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Active information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """active must be of a type compatible with empty""",
'defined-type': "empty",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="active", rest_name="active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Active information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)""",
})
self.__active = t
if hasattr(self, '_set'):
self._set()
def _unset_active(self):
self.__active = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="active", rest_name="active", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Active information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
def _get_passive(self):
"""
Getter method for passive, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/passive (empty)
"""
return self.__passive
def _set_passive(self, v, load=False):
"""
Setter method for passive, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/passive (empty)
If this variable is read-only (config: false) in the
source YANG file, then _set_passive is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_passive() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="passive", rest_name="passive", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Passive information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """passive must be of a type compatible with empty""",
'defined-type': "empty",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="passive", rest_name="passive", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Passive information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)""",
})
self.__passive = t
if hasattr(self, '_set'):
self._set()
def _unset_passive(self):
self.__passive = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="passive", rest_name="passive", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Passive information'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='empty', is_config=True)
def _get_priority(self):
"""
Getter method for priority, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/priority (uint32)
"""
return self.__priority
def _set_priority(self, v, load=False):
"""
Setter method for priority, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/priority (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_priority is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_priority() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..255']}), is_leaf=True, yang_name="priority", rest_name="priority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Router priority'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """priority must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..255']}), is_leaf=True, yang_name="priority", rest_name="priority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Router priority'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)""",
})
self.__priority = t
if hasattr(self, '_set'):
self._set()
def _unset_priority(self):
self.__priority = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..255']}), is_leaf=True, yang_name="priority", rest_name="priority", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Router priority'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='uint32', is_config=True)
def _get_bfd(self):
"""
Getter method for bfd, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/bfd (container)
YANG Description: Set Bidirectional Forwarding Detection operation mode on this interface
"""
return self.__bfd
def _set_bfd(self, v, load=False):
"""
Setter method for bfd, mapped from YANG variable /routing_system/interface/ve/ip/interface_vlan_ospf_conf/ospf1/bfd (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_bfd is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_bfd() directly.
YANG Description: Set Bidirectional Forwarding Detection operation mode on this interface
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=bfd.bfd, is_container='container', presence=False, yang_name="bfd", rest_name="bfd", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set BFD operation mode on this interface', u'hidden': u'full'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """bfd must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=bfd.bfd, is_container='container', presence=False, yang_name="bfd", rest_name="bfd", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set BFD operation mode on this interface', u'hidden': u'full'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)""",
})
self.__bfd = t
if hasattr(self, '_set'):
self._set()
def _unset_bfd(self):
self.__bfd = YANGDynClass(base=bfd.bfd, is_container='container', presence=False, yang_name="bfd", rest_name="bfd", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Set BFD operation mode on this interface', u'hidden': u'full'}}, namespace='urn:brocade.com:mgmt:brocade-ospf', defining_module='brocade-ospf', yang_type='container', is_config=True)
area = __builtin__.property(_get_area, _set_area)
auth_change_wait_time = __builtin__.property(_get_auth_change_wait_time, _set_auth_change_wait_time)
authentication_key = __builtin__.property(_get_authentication_key, _set_authentication_key)
hello_interval = __builtin__.property(_get_hello_interval, _set_hello_interval)
dead_interval = __builtin__.property(_get_dead_interval, _set_dead_interval)
retransmit_interval = __builtin__.property(_get_retransmit_interval, _set_retransmit_interval)
transmit_delay = __builtin__.property(_get_transmit_delay, _set_transmit_delay)
md5_authentication = __builtin__.property(_get_md5_authentication, _set_md5_authentication)
cost = __builtin__.property(_get_cost, _set_cost)
network = __builtin__.property(_get_network, _set_network)
intf_ldp_sync = __builtin__.property(_get_intf_ldp_sync, _set_intf_ldp_sync)
database_filter = __builtin__.property(_get_database_filter, _set_database_filter)
mtu_ignore = __builtin__.property(_get_mtu_ignore, _set_mtu_ignore)
active = __builtin__.property(_get_active, _set_active)
passive = __builtin__.property(_get_passive, _set_passive)
priority = __builtin__.property(_get_priority, _set_priority)
bfd = __builtin__.property(_get_bfd, _set_bfd)
_pyangbind_elements = {'area': area, 'auth_change_wait_time': auth_change_wait_time, 'authentication_key': authentication_key, 'hello_interval': hello_interval, 'dead_interval': dead_interval, 'retransmit_interval': retransmit_interval, 'transmit_delay': transmit_delay, 'md5_authentication': md5_authentication, 'cost': cost, 'network': network, 'intf_ldp_sync': intf_ldp_sync, 'database_filter': database_filter, 'mtu_ignore': mtu_ignore, 'active': active, 'passive': passive, 'priority': priority, 'bfd': bfd, }
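Every generated leaf above repeats the same get/set/unset pattern: the setter revalidates the value by rebuilding a YANGDynClass and raises a structured ValueError on a type or range mismatch. A minimal self-contained sketch of that pattern for the priority leaf (0..255), where `RangeChecked` and `Ospf1Sketch` are illustrative stand-ins, not pyangbind's actual classes:

```python
class RangeChecked(object):
    """Illustrative stand-in for pyangbind's RestrictedClassType."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __call__(self, v):
        v = int(v)  # raises TypeError/ValueError for incompatible input
        if not (self.lo <= v <= self.hi):
            raise ValueError("value %d outside range %d..%d" % (v, self.lo, self.hi))
        return v


class Ospf1Sketch(object):
    _priority_type = RangeChecked(0, 255)  # uint32 narrowed to 0..255, as above

    def __init__(self):
        self.__priority = None

    def _get_priority(self):
        return self.__priority

    def _set_priority(self, v):
        try:
            t = self._priority_type(v)
        except (TypeError, ValueError):
            # Mirror the structured error the generated setters raise
            raise ValueError({
                'error-string': "priority must be of a type compatible with uint32",
                'defined-type': "uint32",
            })
        self.__priority = t

    def _unset_priority(self):
        self.__priority = None

    priority = property(_get_priority, _set_priority)
```

As in the generated code, reading `obj.priority` goes through `_get_priority` and assignment goes through `_set_priority`, so a value that fails validation never reaches the stored attribute.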
# at_learner_core/at_learner_core/datasets/__init__.py (hieuvecto/CASIA-SURF_CeFA, MIT)
from .dataset_manager import DatasetManager
# tests/test_utils.py (michalklym/voucherify-python-sdk, MIT)
from voucherify import utils
unit_price = 83.45
items_count = 13
base_price = unit_price * items_count
def test_shouldCalculateAmountPriceDiscount():
voucher = {
"code": "PythonVoucherTest",
"discount": {
"type": "AMOUNT",
"amount_off": 12436
},
"category": "PythonTestCategory",
"start_date": "2016-01-01T00:00:00Z",
"expiration_date": None,
"redemption": {
"quantity": None,
"redeemed_quantity": 0
},
"active": True
}
discount = utils.calculate_discount(base_price, voucher, unit_price)
price = utils.calculate_price(base_price, voucher, unit_price)
assert discount == 124.36
assert price == 960.49
def test_shouldCalculateAmountDiscountWhenGiftIsNone():
voucher = {
"code": "PythonVoucherTest",
"discount": {
"type": "AMOUNT",
"amount_off": 12436
},
"gift": None,
"category": "PythonTestCategory",
"start_date": "2016-01-01T00:00:00Z",
"expiration_date": None,
"redemption": {
"quantity": None,
"redeemed_quantity": 0
},
"active": True
}
discount = utils.calculate_discount(base_price, voucher, unit_price)
price = utils.calculate_price(base_price, voucher, unit_price)
assert discount == 124.36
assert price == 960.49
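The expected values line up because `amount_off` is expressed in the currency's minor unit (cents). A plain-arithmetic recomputation of the assertions above, independent of the voucherify utils implementation:

```python
unit_price = 83.45
items_count = 13
base_price = unit_price * items_count   # 1084.85

amount_off_cents = 12436                # AMOUNT vouchers carry cents
discount = round(amount_off_cents / 100.0, 2)   # cents -> currency units
price = round(base_price - discount, 2)

assert discount == 124.36
assert price == 960.49
```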
# tests/test_port_dpb_system.py (JafarSeyedi/sonic-swss, Apache-2.0)
import pytest
import json
from port_dpb import Port
from port_dpb import DPB
from dvslib.dvs_common import wait_for_result, PollingConfig
ARP_FLUSH_POLLING = PollingConfig(polling_interval=0.01, timeout=10, strict=True)
ROUTE_CHECK_POLLING = PollingConfig(polling_interval=0.01, timeout=5, strict=True)
"""
Below prefix should be same as the one specified for Ethernet8
in port_breakout_config_db.json in sonic-buildimage/platform/vs/docker-sonic-vs/
"""
Ethernet8_IP = "10.0.0.8/31"
Ethernet8_IPME = "10.0.0.8/32"
@pytest.mark.usefixtures('dpb_setup_fixture')
@pytest.mark.usefixtures('dvs_vlan_manager')
class TestPortDPBSystem(object):
    @staticmethod
    def _intf_table_name(interface):
        # Map an interface name to its CONFIG_DB table (shared by the
        # L3-interface and IP-address helpers below).
        if interface.startswith("PortChannel"):
            return "PORTCHANNEL_INTERFACE"
        elif interface.startswith("Vlan"):
            return "VLAN_INTERFACE"
        elif interface.startswith("Loopback"):
            return "LOOPBACK_INTERFACE"
        return "INTERFACE"

    def create_l3_intf(self, dvs, interface, vrf_name):
        dvs_asic_db = dvs.get_asic_db()
        initial_entries = set(dvs_asic_db.get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE"))
        tbl_name = self._intf_table_name(interface)
        if len(vrf_name) == 0:
            fvs = {'NULL': 'NULL'}
        else:
            fvs = {'vrf_name': vrf_name}
        dvs.get_config_db().create_entry(tbl_name, interface, fvs)
        dvs_asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", len(initial_entries)+1)
        current_entries = set(dvs_asic_db.get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE"))
        assert len(current_entries - initial_entries) == 1
        return list(current_entries - initial_entries)[0]

    def remove_l3_intf(self, dvs, interface):
        dvs.get_config_db().delete_entry(self._intf_table_name(interface), interface)

    def add_ip_address(self, dvs, interface, ip):
        dvs.get_config_db().create_entry(self._intf_table_name(interface), interface + '|' + ip, {'NULL': 'NULL'})

    def remove_ip_address(self, dvs, interface, ip):
        dvs.get_config_db().delete_entry(self._intf_table_name(interface), interface + '|' + ip)
def clear_srv_config(self, dvs):
dvs.servers[0].runcmd("ip address flush dev eth0")
dvs.servers[1].runcmd("ip address flush dev eth0")
dvs.servers[2].runcmd("ip address flush dev eth0")
dvs.servers[3].runcmd("ip address flush dev eth0")
def set_admin_status(self, dvs, interface, status):
dvs_cfg_db = dvs.get_config_db()
if interface.startswith("PortChannel"):
tbl_name = "PORTCHANNEL"
elif interface.startswith("Vlan"):
tbl_name = "VLAN"
else:
tbl_name = "PORT"
dvs_cfg_db.create_entry(tbl_name, interface, {'admin_status':status})
def verify_only_ports_exist(self, dvs, port_names):
all_port_names = ["Ethernet0", "Ethernet1", "Ethernet2", "Ethernet3"]
        for port_name in all_port_names:
            p = Port(dvs, port_name)
            if port_name in port_names:
                assert p.exists_in_config_db()
                assert p.exists_in_app_db()
                assert p.exists_in_asic_db()
            else:
                assert not p.exists_in_config_db()
                assert not p.exists_in_app_db()
                assert not p.exists_in_asic_db()
'''
|-----------------------------------------------------------------------------------------------------
| | 1X40G | 1X100G | 4X10G | 4X25G | 2X50G | 2x25G(2)+1x50G(2) | 1x50G(2)+2x25G(2) |
|-----------------------------------------------------------------------------------------------------
| 1X40G | NA | | | | | | |
|-----------------------------------------------------------------------------------------------------
| 1X100G | | NA | | P | P | P | P |
|-----------------------------------------------------------------------------------------------------
| 4X10G | | | NA | | | | |
|-----------------------------------------------------------------------------------------------------
| 4X25G | | P | | NA | P | P | P |
|-----------------------------------------------------------------------------------------------------
| 2X50G | | P | | P | NA | P | P |
|-----------------------------------------------------------------------------------------------------
| 2x25G(2)+1x50G(2) | | P | | P | P | NA | P |
|-----------------------------------------------------------------------------------------------------
| 1x50G(2)+2x25G(2) | | P | | P | P | P | NA |
|-----------------------------------------------------------------------------------------------------
NA --> Not Applicable
P --> Pass
F --> Fail
Empty --> Not Tested
'''
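The mode strings in the matrix encode how the root port's four lanes are split, and each child port name is offset from the root by the lanes consumed before it (so "2x50G" on Ethernet0 yields Ethernet0 and Ethernet2). A minimal illustrative parser of that naming scheme; `child_ports` is a hypothetical helper, not the `DPB` class's actual `get_child_ports` API, and it assumes four lanes per root port with SONiC-style `Ethernet<N>` names:

```python
import re

def child_ports(root, mode, total_lanes=4):
    """Expand a breakout-mode string into the child port names it creates.

    Groups look like "4x25G[10G]" or "2x25G(2)"; "(n)" is the lane budget of
    that group, defaulting to all lanes when the mode has a single group.
    """
    base = int(root[len("Ethernet"):])
    ports, offset = [], 0
    for group in mode.split("+"):
        m = re.match(r"(\d+)x\d+G(?:\[\d+G\])?(?:\((\d+)\))?$", group)
        count = int(m.group(1))
        group_lanes = int(m.group(2)) if m.group(2) else total_lanes
        per_port = group_lanes // count
        for i in range(count):
            ports.append("Ethernet%d" % (base + offset + i * per_port))
        offset += group_lanes
    return ports
```

For example, "2x25G(2)+1x50G(2)" on Ethernet0 expands to Ethernet0, Ethernet1, Ethernet2, matching the child-port expectations used throughout these tests.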
@pytest.mark.parametrize('root_port, breakout_mode', [
('Ethernet0', '2x50G'),
('Ethernet0', '4x25G[10G]'),
('Ethernet0', '2x50G'),
('Ethernet0', '2x25G(2)+1x50G(2)'),
('Ethernet0', '2x50G'),
('Ethernet0', '1x50G(2)+2x25G(2)'),
('Ethernet0', '2x50G'),
('Ethernet0', '1x100G[40G]'),
('Ethernet0', '4x25G[10G]'),
('Ethernet0', '2x25G(2)+1x50G(2)'),
('Ethernet0', '4x25G[10G]'),
('Ethernet0', '1x50G(2)+2x25G(2)'),
('Ethernet0', '4x25G[10G]'),
('Ethernet0', '1x100G[40G]'),
('Ethernet0', '2x25G(2)+1x50G(2)'),
('Ethernet0', '1x50G(2)+2x25G(2)'),
('Ethernet0', '2x25G(2)+1x50G(2)'),
('Ethernet0', '1x100G[40G]'),
('Ethernet0', '1x50G(2)+2x25G(2)'),
('Ethernet0', '1x100G[40G]')
], scope="function")
def test_port_breakout_simple(self, dvs, root_port, breakout_mode):
dvs.setup_db()
dpb = DPB()
dvs.change_port_breakout_mode(root_port, breakout_mode)
dpb.verify_port_breakout_mode(dvs, root_port, breakout_mode)
expected_ports = dpb.get_child_ports(root_port, breakout_mode)
self.verify_only_ports_exist(dvs, expected_ports)
def test_port_breakout_with_vlan(self, dvs):
dvs.setup_db()
dpb = DPB()
portName = "Ethernet0"
vlanID = "100"
breakoutMode1 = "1x100G[40G]"
breakoutMode2 = "4x25G[10G]"
        breakoutOption = "-f"  # Force breakout by deleting dependencies
# Create VLAN
self.dvs_vlan.create_vlan(vlanID)
# Verify VLAN is created
self.dvs_vlan.get_and_verify_vlan_ids(1)
# Add port to VLAN
self.dvs_vlan.create_vlan_member(vlanID, portName)
# Verify VLAN member is created
self.dvs_vlan.get_and_verify_vlan_member_ids(1)
# Breakout port from 1x100G[40G] --> 4x25G[10G]
dpb.verify_port_breakout_mode(dvs, "Ethernet0", breakoutMode1)
dvs.change_port_breakout_mode("Ethernet0", breakoutMode2, breakoutOption)
# Verify DPB is successful
dpb.verify_port_breakout_mode(dvs, "Ethernet0", breakoutMode2)
# Verify port is removed from VLAN
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
# Delete VLAN
self.dvs_vlan.remove_vlan(vlanID)
# Verify VLAN is deleted
self.dvs_vlan.get_and_verify_vlan_ids(0)
# Breakout port from 4x25G[10G] --> 1x100G[40G]
dvs.change_port_breakout_mode("Ethernet0", breakoutMode1)
# Verify DPB is successful
dpb.verify_port_breakout_mode(dvs, "Ethernet0", breakoutMode1)
@pytest.mark.skip(reason="This test is not stable enough")
def test_port_breakout_with_acl(self, dvs, dvs_acl):
dvs.setup_db()
dpb = DPB()
# Create ACL table "test" and bind it to Ethernet0
bind_ports = ["Ethernet0"]
dvs_acl.create_acl_table("test", "L3", bind_ports)
# Verify ACL table is created
dvs_acl.verify_acl_table_count(1)
# Verify that ACL group OID is created.
# Just FYI: Usually one ACL group OID is created per port,
# even when port is bound to multiple ACL tables
dvs_acl.verify_acl_table_groups(1)
# Verify that port is correctly bound to table by looking into
# ACL member table, which binds ACL group OID of a port and
# ACL table OID.
acl_table_ids = dvs_acl.get_acl_table_ids(1)
dvs_acl.verify_acl_table_port_binding(acl_table_ids[0], bind_ports, 1)
# Verify current breakout mode, perform breakout without force dependency
# delete option
dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")
dvs.change_port_breakout_mode("Ethernet0", "4x25G[10G]")
# Verify that breakout did NOT succeed
dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")
# Do breakout with force option, and verify that it succeeds
dvs.change_port_breakout_mode("Ethernet0", "4x25G[10G]", "-f")
dpb.verify_port_breakout_mode(dvs, "Ethernet0", "4x25G[10G]")
# Verify port is removed from ACL table
dvs_acl.verify_acl_table_count(1)
dvs_acl.verify_acl_table_groups(0)
# Verify child ports are created.
self.verify_only_ports_exist(dvs, ["Ethernet0", "Ethernet1", "Ethernet2", "Ethernet3"])
# Move back to 1x100G[40G] mode and verify current mode
dvs.change_port_breakout_mode("Ethernet0", "1x100G[40G]", "-f")
dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")
# Remove ACL table and verify the same
dvs_acl.remove_acl_table("test")
dvs_acl.verify_acl_table_count(0)
@pytest.mark.skip("DEBUG: When we have more than one child ports, operation status of all does NOT go down")
def test_cli_command_with_force_option(self, dvs, dvs_acl):
dvs.setup_db()
dpb = DPB()
portGroup = ["Ethernet0", "Ethernet1", "Ethernet2", "Ethernet3"]
rootPortName = portGroup[0]
vlanID = "100"
aclTableName = "DPB_ACL_TBL_1"
breakoutMode1x = "1x100G[40G]"
breakoutMode2x = "2x50G"
breakoutMode4x = "4x25G[10G]"
        breakoutOption = "-f"  # Force breakout by deleting dependencies
# Breakout port with no dependency using "-f" option
dvs.change_port_breakout_mode(rootPortName, breakoutMode4x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
# Breakout port with VLAN and ACL dependency
# Create ACL table and bind port
dvs_acl.verify_acl_table_groups(0)
bind_ports = []
bind_ports.append(rootPortName)
dvs_acl.create_acl_table(aclTableName, "L3", bind_ports)
dvs_acl.verify_acl_table_groups(1)
# Create VLAN and add port to VLAN
self.dvs_vlan.create_vlan(vlanID)
self.dvs_vlan.create_vlan_member(vlanID, rootPortName)
self.dvs_vlan.get_and_verify_vlan_member_ids(1)
# Breakout port and make sure it succeeds and associations are removed
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode4x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs_acl.verify_acl_table_groups(0)
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
# Add all ports to ACL and VLAN tables
dvs_acl.update_acl_table_port_list(aclTableName, portGroup)
for p in portGroup:
self.dvs_vlan.create_vlan_member(vlanID, p)
dvs_acl.verify_acl_table_groups(len(portGroup))
self.dvs_vlan.get_and_verify_vlan_member_ids(len(portGroup))
# Breakout with "-f" option and ensure it succeeds and associations are removed
dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs_acl.verify_acl_table_groups(0)
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
# Cleanup
# Remove ACL and VLAN tables
dvs_acl.remove_acl_table(aclTableName)
self.dvs_vlan.remove_vlan(vlanID)
# Verify cleanup
dvs_acl.verify_acl_table_count(0)
self.dvs_vlan.get_and_verify_vlan_ids(0)
# check ASIC router interface database
# one loopback router interface
dvs.get_asic_db().wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", 1)
# Bring up port
self.set_admin_status(dvs, "Ethernet8", "up")
# Create L3 interface
        self.create_l3_intf(dvs, "Ethernet8", "")
# Configure IPv4 address on Ethernet8
self.add_ip_address(dvs, "Ethernet8", Ethernet8_IP)
# one loopback router interface and one port based router interface
dvs.get_asic_db().wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", 2)
def _check_route_present():
routes = dvs.get_asic_db().get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY")
subnet_found = False
ip2me_found = False
for route in routes:
rt = json.loads(route)
if rt["dest"] == Ethernet8_IP:
subnet_found = True
if rt["dest"] == Ethernet8_IPME:
ip2me_found = True
return ((subnet_found and ip2me_found), routes)
# check ASIC route database
status, result = wait_for_result(_check_route_present, ROUTE_CHECK_POLLING)
assert status == True
# Breakout Ethernet8 WITH "-f" option and ensure cleanup happened
dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode1x)
dvs.change_port_breakout_mode("Ethernet8", breakoutMode2x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode2x)
# check ASIC router interface database
# one loopback router interface
dvs.get_asic_db().wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", 1)
def _check_route_absent():
routes = dvs.get_asic_db().get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY")
for route in routes:
rt = json.loads(route)
if rt["dest"] == Ethernet8_IP or \
rt["dest"] == Ethernet8_IPME:
return (False, route)
return (True, routes)
# check ASIC database
status, result = wait_for_result(_check_route_absent, ROUTE_CHECK_POLLING)
assert status == True
dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode2x)
dvs.change_port_breakout_mode("Ethernet8", breakoutMode1x)
dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode1x)
@pytest.mark.skip("DEBUG: When we have more than one child ports, operation status of all does NOT go down")
def test_cli_command_with_load_port_breakout_config_option(self, dvs, dvs_acl):
dvs.setup_db()
dpb = DPB()
# Note below definitions are dependent on port_breakout_config_db.json
# That is vlanIDs, aclTableNames are all should match with
# VLANs and ACL tables in port_breakout_config_db.json
portGroup = ["Ethernet0", "Ethernet1", "Ethernet2", "Ethernet3"]
rootPortName = portGroup[0]
vlanIDs = ["100", "101"]
aclTableNames = ["DPB_ACL_TBL_1", "DPB_ACL_TBL_2"]
breakoutMode1x = "1x100G[40G]"
breakoutMode2x = "2x50G"
breakoutMode4x = "4x25G[10G]"
breakoutOption = "-l"
# Lets create ACL and VLAN tables
bind_ports = []
for aclTableName in aclTableNames:
dvs_acl.create_acl_table(aclTableName, "L3", bind_ports)
for vlanID in vlanIDs:
self.dvs_vlan.create_vlan(vlanID)
# Breakout port and expect that newly created ports are
# automatically added to VLANs and ACL tables as per
# port_breakout_config_db.json
dvs_acl.verify_acl_table_groups(0)
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode4x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs_acl.verify_acl_table_groups(len(portGroup))
self.dvs_vlan.get_and_verify_vlan_member_ids(len(portGroup))
# Breakout port and expect that root port remains in VLAN and ACL tables
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, breakoutOption + " -f")
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs_acl.verify_acl_table_groups(1)
self.dvs_vlan.get_and_verify_vlan_member_ids(1)
# Breakout port with "-f" and WITHOUT "-l" and expect that
# breakout succeeds and root port gets removed from
# VLAN and ACL table
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode4x, "-f")
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs_acl.verify_acl_table_groups(0)
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
        #--------------------------------------------------------------------
        # Exercise a port group spanned across different VLAN and ACL tables
        #--------------------------------------------------------------------
portGroup = ["Ethernet4", "Ethernet5", "Ethernet6", "Ethernet7"]
rootPortName = portGroup[0]
breakoutMode2x = "2x50G"
# Breakout port and expect that newly created ports are
# automatically added to VLANs and ACL tables as per
# port_breakout_config_db.json
dvs_acl.verify_acl_table_groups(0)
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode4x, breakoutOption)
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs_acl.verify_acl_table_groups(len(portGroup))
self.dvs_vlan.get_and_verify_vlan_member_ids(len(portGroup))
# Breakout port and expect that Ethernet4 and Ethernet6 remain in
# ACL and VLAN where as Ethernet5 and Ethernet7 get removed from
# ACL and VLAN table
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode2x, breakoutOption + " -f")
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode2x)
dvs_acl.verify_acl_table_groups(2)
self.dvs_vlan.get_and_verify_vlan_member_ids(2)
# Breakout again and verify that only root port (Ethernet4) remains in
# in VLAN and ACL and Ethernet6 gets removed.
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode2x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, breakoutOption + " -f")
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs_acl.verify_acl_table_groups(1)
self.dvs_vlan.get_and_verify_vlan_member_ids(1)
# Breakout port without "-l" option and ensure that root port
# gets removed from VLAN and ACL
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
dvs.change_port_breakout_mode(rootPortName, breakoutMode2x, "-f")
dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode2x)
dvs_acl.verify_acl_table_groups(0)
self.dvs_vlan.get_and_verify_vlan_member_ids(0)
        # Cleanup
# Move both Ethernet0 and Ethernet4 back to default mode
dvs.change_port_breakout_mode("Ethernet0", breakoutMode1x)
dpb.verify_port_breakout_mode(dvs, "Ethernet0", breakoutMode1x)
dvs.change_port_breakout_mode("Ethernet4", breakoutMode1x)
dpb.verify_port_breakout_mode(dvs, "Ethernet4", breakoutMode1x)
# Delete VLANs and ACL tables
bind_ports = []
for aclTableName in aclTableNames:
dvs_acl.remove_acl_table(aclTableName)
for vlanID in vlanIDs:
self.dvs_vlan.remove_vlan(vlanID)
# Verify cleanup
dvs_acl.verify_acl_table_count(0)
self.dvs_vlan.get_and_verify_vlan_ids(0)
        ##### Interface dependency test ############

        # Check the ASIC router interface database:
        # one loopback router interface
        dvs.get_asic_db().wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", 1)

        # Break out Ethernet8 WITH the "-l" option and ensure the
        # IP address gets configured as per port_breakout_config_db.json
        dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode1x)
        dvs.change_port_breakout_mode("Ethernet8", breakoutMode2x, breakoutOption)
        dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode2x)

        # one loopback router interface and one port-based router interface
        dvs.get_asic_db().wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", 2)

        def _check_route_present():
            routes = dvs.get_asic_db().get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY")
            subnet_found = False
            ip2me_found = False
            for route in routes:
                rt = json.loads(route)
                if rt["dest"] == Ethernet8_IP:
                    subnet_found = True
                if rt["dest"] == Ethernet8_IPME:
                    ip2me_found = True
            return ((subnet_found and ip2me_found), routes)

        # Check the ASIC route database
        status, result = wait_for_result(_check_route_present, ROUTE_CHECK_POLLING)
        assert status == True

        # Break out Ethernet8 WITH the "-f" option and ensure cleanup happened
        dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode2x)
        dvs.change_port_breakout_mode("Ethernet8", breakoutMode1x, "-f")
        dpb.verify_port_breakout_mode(dvs, "Ethernet8", breakoutMode1x)

        # Check the ASIC router interface database:
        # one loopback router interface
        dvs.get_asic_db().wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTER_INTERFACE", 1)

        def _check_route_absent():
            routes = dvs.get_asic_db().get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY")
            for route in routes:
                rt = json.loads(route)
                if rt["dest"] == Ethernet8_IP or \
                   rt["dest"] == Ethernet8_IPME:
                    return (False, route)
            return (True, routes)

        # Check the ASIC database
        status, result = wait_for_result(_check_route_absent, ROUTE_CHECK_POLLING)
        assert status == True
    @pytest.mark.skip(reason="This test is not stable enough")
    def test_cli_command_negative(self, dvs, dvs_acl):
        dvs.setup_db()
        dpb = DPB()

        portGroup = ["Ethernet0", "Ethernet1", "Ethernet2", "Ethernet3"]
        rootPortName = portGroup[0]
        vlanIDs = ["100", "101"]
        aclTableNames = ["DPB_ACL_TBL_1", "DPB_ACL_TBL_2"]
        breakoutMode1x = "1x100G[40G]"
        breakoutMode4x = "4x25G[10G]"

        # Create only one ACL table and one VLAN table
        bind_ports = []
        dvs_acl.create_acl_table(aclTableNames[0], "L3", bind_ports)
        self.dvs_vlan.create_vlan(vlanIDs[0])

        # Add the root port to the ACL and VLAN tables
        bind_ports = [rootPortName]
        dvs_acl.update_acl_table_port_list(aclTableNames[0], bind_ports)
        self.dvs_vlan.create_vlan_member(vlanIDs[0], rootPortName)

        # Break out the port WITHOUT the "-f" option when dependencies exist
        # TBD: Verify the list of dependencies returned by the CLI command
        dvs_acl.verify_acl_table_groups(1)
        self.dvs_vlan.get_and_verify_vlan_member_ids(1)
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
        dvs.change_port_breakout_mode(rootPortName, breakoutMode4x)
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
        dvs_acl.verify_acl_table_groups(1)
        self.dvs_vlan.get_and_verify_vlan_member_ids(1)

        # Break out the port WITH the "-f" option, and WITHOUT the "-l" option
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
        dvs.change_port_breakout_mode(rootPortName, breakoutMode4x, "-f")
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
        dvs_acl.verify_acl_table_groups(0)
        self.dvs_vlan.get_and_verify_vlan_member_ids(0)

        # Delete the VLAN table, ensure breakout WITH "-l" fails
        self.dvs_vlan.remove_vlan(vlanIDs[0])
        self.dvs_vlan.get_and_verify_vlan_ids(0)
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
        dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, "-l")
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
        dvs_acl.verify_acl_table_groups(0)
        self.dvs_vlan.get_and_verify_vlan_member_ids(0)

        # Delete the ACL table, add back the VLAN table, and
        # ensure breakout WITH "-l" fails
        dvs_acl.remove_acl_table(aclTableNames[0])
        dvs_acl.verify_acl_table_count(0)
        self.dvs_vlan.create_vlan(vlanIDs[0])
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
        dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, "-l")
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
        dvs_acl.verify_acl_table_groups(0)
        self.dvs_vlan.get_and_verify_vlan_member_ids(0)

        # Create both ACL tables (as per port_breakout_config_db.json,
        # Ethernet0 is in both ACL tables and one VLAN table)
        # and ensure breakout succeeds
        bind_ports = []
        dvs_acl.create_acl_table(aclTableNames[0], "L3", bind_ports)
        dvs_acl.create_acl_table(aclTableNames[1], "L3", bind_ports)
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode4x)
        dvs.change_port_breakout_mode(rootPortName, breakoutMode1x, "-l")
        dpb.verify_port_breakout_mode(dvs, rootPortName, breakoutMode1x)
        dvs_acl.verify_acl_table_groups(1)
        self.dvs_vlan.get_and_verify_vlan_member_ids(1)

        # Delete the ACL and VLAN tables
        self.dvs_vlan.remove_vlan_member(vlanIDs[0], rootPortName)
        self.dvs_vlan.remove_vlan(vlanIDs[0])
        dvs_acl.remove_acl_table(aclTableNames[0])
        dvs_acl.remove_acl_table(aclTableNames[1])

        # TBD: Provide "-l" option without port_breakout_config_db.json file

        # Verify cleanup
        dvs_acl.verify_acl_table_count(0)
        self.dvs_vlan.get_and_verify_vlan_ids(0)
    @pytest.mark.skip(reason="This test is not stable enough")
    def test_dpb_arp_flush(self, dvs):
        dvs.setup_db()
        dvs_asic_db = dvs.get_asic_db()
        dpb = DPB()

        portName = "Ethernet0"
        vrfName = ""
        ipAddress = "10.0.0.0/31"
        srv0MAC = "00:00:00:00:01:11"

        self.clear_srv_config(dvs)

        # Create L3 interface
        rif_oid = self.create_l3_intf(dvs, portName, vrfName)

        # Set IP address
        self.add_ip_address(dvs, portName, ipAddress)

        # Bring up interface
        self.set_admin_status(dvs, portName, "up")

        # Set server MAC, IP address, and default route
        cmd = "ip link set eth0 address " + srv0MAC
        dvs.servers[0].runcmd(cmd)
        dvs.servers[0].runcmd("ip address add 10.0.0.1/31 dev eth0")
        dvs.servers[0].runcmd("ip route add default via 10.0.0.0")

        # Get neighbor and ARP entry
        dvs.servers[0].runcmd("ping -c 3 10.0.0.0")
        intf_entries = dvs_asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", 1)
        route = json.loads(intf_entries[0])
        assert route["ip"] == "10.0.0.1"
        assert route["rif"] == rif_oid
        dvs_asic_db.wait_for_exact_match(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY",
            intf_entries[0],
            {"SAI_NEIGHBOR_ENTRY_ATTR_DST_MAC_ADDRESS": srv0MAC},
        )

        # Break out the port and make sure the NEIGHBOR entry is removed
        dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")
        dvs.change_port_breakout_mode("Ethernet0", "4x25G[10G]", "-f")
        dpb.verify_port_breakout_mode(dvs, "Ethernet0", "4x25G[10G]")

        # Verify the ARP/neighbor entry is removed
        dvs_asic_db.wait_for_deleted_entry(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", intf_entries[0], ARP_FLUSH_POLLING
        )

        dvs.change_port_breakout_mode("Ethernet0", "1x100G[40G]")
        dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")
    @pytest.mark.skip(reason="This test is not stable enough")
    def test_dpb_arp_flush_vlan(self, dvs):
        dvs.setup_db()
        dvs_asic_db = dvs.get_asic_db()
        dpb = DPB()
        self.clear_srv_config(dvs)

        vlanID = "100"
        portName = "Ethernet0"
        vlanName = "Vlan" + str(vlanID)
        vrfName = ""
        ipAddress = "10.0.0.0/31"
        srv0MAC = "00:00:00:00:01:11"

        self.dvs_vlan.create_vlan(vlanID)
        self.dvs_vlan.create_vlan_member(vlanID, portName)

        # Bring up interfaces
        self.set_admin_status(dvs, portName, "up")
        self.set_admin_status(dvs, vlanName, "up")

        # Create VLAN interface
        rif_oid = self.create_l3_intf(dvs, vlanName, vrfName)

        # Assign IP to interface
        self.add_ip_address(dvs, vlanName, ipAddress)

        # Set server MAC, IP address, and default route
        cmd = "ip link set eth0 address " + srv0MAC
        dvs.servers[0].runcmd(cmd)
        dvs.servers[0].runcmd("ip address add 10.0.0.1/31 dev eth0")
        dvs.servers[0].runcmd("ip route add default via 10.0.0.0")

        # Get neighbor and ARP entry
        dvs.servers[0].runcmd("ping -c 1 10.0.0.0")
        intf_entries = dvs_asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", 1)
        route = json.loads(intf_entries[0])
        assert route["ip"] == "10.0.0.1"
        assert route["rif"] == rif_oid
        dvs_asic_db.wait_for_exact_match(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY",
            intf_entries[0],
            {"SAI_NEIGHBOR_ENTRY_ATTR_DST_MAC_ADDRESS": srv0MAC},
        )

        # Break out the port and make sure the NEIGHBOR entry is removed
        dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")
        dvs.change_port_breakout_mode("Ethernet0", "4x25G[10G]", "-f")
        dpb.verify_port_breakout_mode(dvs, "Ethernet0", "4x25G[10G]")

        # Verify the ARP/neighbor entry is removed
        dvs_asic_db.wait_for_deleted_entry(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", intf_entries[0], ARP_FLUSH_POLLING
        )

        dvs.change_port_breakout_mode("Ethernet0", "1x100G[40G]")
        dpb.verify_port_breakout_mode(dvs, "Ethernet0", "1x100G[40G]")

        # Remove IP from the interface, and then remove the interface
        self.remove_ip_address(dvs, vlanName, ipAddress)
        self.remove_l3_intf(dvs, vlanName)

        # Remove VLAN (note that the member was removed during port breakout)
        self.dvs_vlan.remove_vlan(vlanID)
    @pytest.mark.skip(reason="This test is not stable enough")
    def test_dpb_arp_flush_on_port_oper_shut(self, dvs):
        dvs.setup_db()
        dvs_asic_db = dvs.get_asic_db()
        dpb = DPB()
        self.clear_srv_config(dvs)

        vlanID = "100"
        portName = "Ethernet0"
        vlanName = "Vlan" + str(vlanID)
        vrfName = ""
        ipAddress = "10.0.0.0/31"
        srv0MAC = "00:00:00:00:01:11"

        self.dvs_vlan.create_vlan(vlanID)
        self.dvs_vlan.create_vlan_member(vlanID, portName)

        # Bring up interfaces
        self.set_admin_status(dvs, portName, "up")
        self.set_admin_status(dvs, vlanName, "up")

        # Create VLAN interface
        rif_oid = self.create_l3_intf(dvs, vlanName, vrfName)

        # Assign IP to interface
        self.add_ip_address(dvs, vlanName, ipAddress)

        # Set server MAC, IP address, and default route
        cmd = "ip link set eth0 address " + srv0MAC
        dvs.servers[0].runcmd(cmd)
        dvs.servers[0].runcmd("ip address add 10.0.0.1/31 dev eth0")
        dvs.servers[0].runcmd("ip route add default via 10.0.0.0")

        # Get neighbor and ARP entry
        dvs.servers[0].runcmd("ping -c 3 10.0.0.0")
        intf_entries = dvs_asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", 1)
        route = json.loads(intf_entries[0])
        assert route["ip"] == "10.0.0.1"
        assert route["rif"] == rif_oid
        dvs_asic_db.wait_for_exact_match(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY",
            intf_entries[0],
            {"SAI_NEIGHBOR_ENTRY_ATTR_DST_MAC_ADDRESS": srv0MAC},
        )

        # Bring the link operational state down
        self.set_admin_status(dvs, portName, "down")
        dvs.servers[0].runcmd("ip link set dev eth0 down")

        # Verify the ARP/neighbor entry is removed
        dvs_asic_db.wait_for_deleted_entry(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", intf_entries[0], ARP_FLUSH_POLLING
        )

        # Bring the link operational state up
        self.set_admin_status(dvs, portName, "up")
        dvs.servers[0].runcmd("ip link set dev eth0 up")

        # Remove IP from the interface, and then remove the interface
        self.remove_ip_address(dvs, vlanName, ipAddress)
        self.remove_l3_intf(dvs, vlanName)

        # Remove VLAN member and VLAN
        self.dvs_vlan.remove_vlan_member(vlanID, portName)
        self.dvs_vlan.remove_vlan(vlanID)
    @pytest.mark.skip(reason="This test is not stable enough")
    def test_dpb_arp_flush_on_vlan_member_remove(self, dvs):
        dvs.setup_db()
        dvs_asic_db = dvs.get_asic_db()
        dpb = DPB()
        self.clear_srv_config(dvs)

        vlanID = "100"
        portName = "Ethernet0"
        vlanName = "Vlan" + str(vlanID)
        vrfName = ""
        ipAddress = "10.0.0.0/31"
        srv0MAC = "00:00:00:00:01:11"

        self.dvs_vlan.create_vlan(vlanID)
        self.dvs_vlan.create_vlan_member(vlanID, portName)

        # Bring up interfaces
        self.set_admin_status(dvs, portName, "up")
        self.set_admin_status(dvs, vlanName, "up")

        # Create VLAN interface
        rif_oid = self.create_l3_intf(dvs, vlanName, vrfName)

        # Assign IP to interface
        self.add_ip_address(dvs, vlanName, ipAddress)

        # Set server MAC, IP address, and default route
        cmd = "ip link set eth0 address " + srv0MAC
        dvs.servers[0].runcmd(cmd)
        dvs.servers[0].runcmd("ip address add 10.0.0.1/31 dev eth0")
        dvs.servers[0].runcmd("ip route add default via 10.0.0.0")

        # Get neighbor and ARP entry
        dvs.servers[0].runcmd("ping -c 1 10.0.0.0")
        intf_entries = dvs_asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", 1)
        route = json.loads(intf_entries[0])
        assert route["ip"] == "10.0.0.1"
        assert route["rif"] == rif_oid
        dvs_asic_db.wait_for_exact_match(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY",
            intf_entries[0],
            {"SAI_NEIGHBOR_ENTRY_ATTR_DST_MAC_ADDRESS": srv0MAC},
        )

        # Remove the port from the VLAN
        self.dvs_vlan.remove_vlan_member(vlanID, portName)

        # Verify the ARP/neighbor entry is removed
        dvs_asic_db.wait_for_deleted_entry(
            "ASIC_STATE:SAI_OBJECT_TYPE_NEIGHBOR_ENTRY", intf_entries[0], ARP_FLUSH_POLLING
        )

        # Remove IP from the interface, and then remove the interface
        self.remove_ip_address(dvs, vlanName, ipAddress)
        self.remove_l3_intf(dvs, vlanName)

        # Remove VLAN
        self.dvs_vlan.remove_vlan(vlanID)
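The `_check_route_present` / `_check_route_absent` checkers in the tests above are driven by a `wait_for_result` poller imported elsewhere in this repository. A minimal stdlib sketch of that polling pattern, under the simplifying assumption that the polling config is just an `(interval_seconds, timeout_seconds)` pair rather than the real `PollingConfig` object:

```python
import time


def wait_for_result(check, polling_config):
    """Poll ``check`` until it reports success or the timeout expires.

    ``check`` returns a ``(done, result)`` tuple; ``polling_config`` is
    simplified here to an ``(interval_seconds, timeout_seconds)`` pair.
    """
    interval, timeout_s = polling_config
    deadline = time.monotonic() + timeout_s
    while True:
        done, result = check()
        if done or time.monotonic() >= deadline:
            # Return the last observed result either way, so callers can
            # assert on it or log it when the condition never became true.
            return done, result
        time.sleep(interval)
```

The same two-tuple contract is what the nested `_check_route_*` helpers above return, which is why they can be passed straight into the poller.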
# File: instasent/__init__.py | repo: instasent/instasent-python-lib | license: MIT

from instasent.client import Client
# File: test/test_align_template.py | repo: IbhzHazem/commitArt | license: MIT

from nose2.tools import params
from test import GithubBoardTestCase
from github_board import align_template


class TestAlignTemplate(GithubBoardTestCase):
    @params(
        ([[1]], [[0], [0], [0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]),
        ([[1, 2]], [[0], [0], [0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2]]),
        ([[1], [2]], [
            [0],
            [0],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2],
        ]),
        ([[1], [2, 3]], [
            [0],
            [0],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 3],
        ]),
    )
    def test(self, template, expected_result):
        self.assertListEqual(expected_result, align_template(template, "center"))
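The `align_template` function under test lives in `github_board` and is not shown here. A hypothetical reconstruction that is consistent with every expectation above, assuming a 7-day by 52-week contribution grid where "center" pads the block down to the middle weekday rows, starts the block so its widest row ends at the grid's middle column, and omits trailing empty cells and rows:

```python
def align_template_center(template, days=7, weeks=52):
    # Reconstruction sketch (the real align_template is in github_board):
    # vertical: (days - height) // 2 empty rows above the block, trailing rows dropped;
    # horizontal: rows are left-aligned at the column where the widest row
    # would end at the grid's middle column, trailing zeros dropped.
    height = len(template)
    width = max(len(row) for row in template)
    top = (days - height) // 2
    left = weeks // 2 - width
    return [[0] for _ in range(top)] + [[0] * left + row for row in template]
```

For example, a single-cell template lands at row index 3 of 7 and column index 25 of 52, matching the first parameter set in the test.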
# File: manage.py | repo: pdiogenes/got_listing | license: MIT

from flask import render_template
from got import app
from got.controllers import battles, character_deaths, character_predictions
app.register_blueprint(battles)
app.register_blueprint(character_deaths)
app.register_blueprint(character_predictions)
app.run()
# File: rest_framework_bulk/__init__.py | repo: 101Loop/drf-bulk | license: MIT

__version__ = "0.2.1"
__author__ = "101 Loop"
try:
    from .generics import *  # noqa
    from .mixins import *  # noqa
    from .serializers import *  # noqa
except Exception:
    pass
# File: cottonformation/res/kms.py | repo: gitter-badger/cottonformation-project | license: BSD-2-Clause

# -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing

from ..core.model import (
    Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta

#--- Property declaration ---

#--- Resource declaration ---
@attr.s
class Key(Resource):
    """
    AWS Object Type = "AWS::KMS::Key"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html

    Property Document:

    - ``rp_KeyPolicy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-keypolicy
    - ``p_Description``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-description
    - ``p_EnableKeyRotation``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-enablekeyrotation
    - ``p_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-enabled
    - ``p_KeySpec``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-keyspec
    - ``p_KeyUsage``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-keyusage
    - ``p_MultiRegion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-multiregion
    - ``p_PendingWindowInDays``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-pendingwindowindays
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-tags
    """
    AWS_OBJECT_TYPE = "AWS::KMS::Key"

    rp_KeyPolicy: dict = attr.ib(
        default=None,
        validator=attr.validators.instance_of(dict),
        metadata={AttrMeta.PROPERTY_NAME: "KeyPolicy"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-keypolicy"""

    p_Description: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Description"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-description"""

    p_EnableKeyRotation: bool = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(bool)),
        metadata={AttrMeta.PROPERTY_NAME: "EnableKeyRotation"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-enablekeyrotation"""

    p_Enabled: bool = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(bool)),
        metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-enabled"""

    p_KeySpec: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "KeySpec"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-keyspec"""

    p_KeyUsage: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "KeyUsage"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-keyusage"""

    p_MultiRegion: bool = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(bool)),
        metadata={AttrMeta.PROPERTY_NAME: "MultiRegion"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-multiregion"""

    p_PendingWindowInDays: int = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(int)),
        metadata={AttrMeta.PROPERTY_NAME: "PendingWindowInDays"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-pendingwindowindays"""

    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#aws-resource-kms-key-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

    @property
    def rv_KeyId(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#aws-resource-kms-key-return-values"""
        return GetAtt(resource=self, attr_name="KeyId")
@attr.s
class ReplicaKey(Resource):
    """
    AWS Object Type = "AWS::KMS::ReplicaKey"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html

    Property Document:

    - ``rp_KeyPolicy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-keypolicy
    - ``rp_PrimaryKeyArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-primarykeyarn
    - ``p_Description``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-description
    - ``p_Enabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-enabled
    - ``p_PendingWindowInDays``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-pendingwindowindays
    - ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-tags
    """
    AWS_OBJECT_TYPE = "AWS::KMS::ReplicaKey"

    rp_KeyPolicy: dict = attr.ib(
        default=None,
        validator=attr.validators.instance_of(dict),
        metadata={AttrMeta.PROPERTY_NAME: "KeyPolicy"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-keypolicy"""

    rp_PrimaryKeyArn: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "PrimaryKeyArn"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-primarykeyarn"""

    p_Description: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
        metadata={AttrMeta.PROPERTY_NAME: "Description"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-description"""

    p_Enabled: bool = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(bool)),
        metadata={AttrMeta.PROPERTY_NAME: "Enabled"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-enabled"""

    p_PendingWindowInDays: int = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(int)),
        metadata={AttrMeta.PROPERTY_NAME: "PendingWindowInDays"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-pendingwindowindays"""

    p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
        default=None,
        converter=Tag.from_list,
        validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
        metadata={AttrMeta.PROPERTY_NAME: "Tags"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#cfn-kms-replicakey-tags"""

    @property
    def rv_Arn(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#aws-resource-kms-replicakey-return-values"""
        return GetAtt(resource=self, attr_name="Arn")

    @property
    def rv_KeyId(self) -> GetAtt:
        """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-replicakey.html#aws-resource-kms-replicakey-return-values"""
        return GetAtt(resource=self, attr_name="KeyId")
@attr.s
class Alias(Resource):
    """
    AWS Object Type = "AWS::KMS::Alias"

    Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-alias.html

    Property Document:

    - ``rp_AliasName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-alias.html#cfn-kms-alias-aliasname
    - ``rp_TargetKeyId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-alias.html#cfn-kms-alias-targetkeyid
    """
    AWS_OBJECT_TYPE = "AWS::KMS::Alias"

    rp_AliasName: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "AliasName"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-alias.html#cfn-kms-alias-aliasname"""

    rp_TargetKeyId: TypeHint.intrinsic_str = attr.ib(
        default=None,
        validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
        metadata={AttrMeta.PROPERTY_NAME: "TargetKeyId"},
    )
    """Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-alias.html#cfn-kms-alias-targetkeyid"""
# File: tests/integ/test_transformer.py | repo: AmirS2/sagemaker-python-sdk | license: Apache-2.0

# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from __future__ import absolute_import
import gzip
import os
import pickle
import sys
import time
import pytest
from sagemaker import KMeans
from sagemaker.mxnet import MXNet
from sagemaker.transformer import Transformer
from sagemaker.estimator import Estimator
from sagemaker.utils import unique_name_from_base
from tests.integ import (
    DATA_DIR,
    TRAINING_DEFAULT_TIMEOUT_MINUTES,
    TRANSFORM_DEFAULT_TIMEOUT_MINUTES,
)
from tests.integ.kms_utils import get_or_create_kms_key
from tests.integ.timeout import timeout, timeout_and_delete_model_with_transformer
from tests.integ.vpc_test_utils import get_or_create_vpc_resources
@pytest.mark.canary_quick
def test_transform_mxnet(sagemaker_session, mxnet_full_version, cpu_instance_type):
    data_path = os.path.join(DATA_DIR, "mxnet_mnist")
    script_path = os.path.join(data_path, "mnist.py")

    mx = MXNet(
        entry_point=script_path,
        role="SageMakerRole",
        train_instance_count=1,
        train_instance_type=cpu_instance_type,
        sagemaker_session=sagemaker_session,
        framework_version=mxnet_full_version,
    )

    train_input = mx.sagemaker_session.upload_data(
        path=os.path.join(data_path, "train"), key_prefix="integ-test-data/mxnet_mnist/train"
    )
    test_input = mx.sagemaker_session.upload_data(
        path=os.path.join(data_path, "test"), key_prefix="integ-test-data/mxnet_mnist/test"
    )
    job_name = unique_name_from_base("test-mxnet-transform")

    with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
        mx.fit({"train": train_input, "test": test_input}, job_name=job_name)

    transform_input_path = os.path.join(data_path, "transform", "data.csv")
    transform_input_key_prefix = "integ-test-data/mxnet_mnist/transform"
    transform_input = mx.sagemaker_session.upload_data(
        path=transform_input_path, key_prefix=transform_input_key_prefix
    )

    kms_key_arn = get_or_create_kms_key(sagemaker_session)
    output_filter = "$"
    input_filter = "$"

    transformer = _create_transformer_and_transform_job(
        mx,
        transform_input,
        cpu_instance_type,
        kms_key_arn,
        input_filter=input_filter,
        output_filter=output_filter,
        join_source=None,
    )
    with timeout_and_delete_model_with_transformer(
        transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
    ):
        transformer.wait()

        job_desc = transformer.sagemaker_session.sagemaker_client.describe_transform_job(
            TransformJobName=transformer.latest_transform_job.name
        )
        assert kms_key_arn == job_desc["TransformResources"]["VolumeKmsKeyId"]
        assert output_filter == job_desc["DataProcessing"]["OutputFilter"]
        assert input_filter == job_desc["DataProcessing"]["InputFilter"]
@pytest.mark.canary_quick
def test_attach_transform_kmeans(sagemaker_session, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "one_p_mnist")
pickle_args = {} if sys.version_info.major == 2 else {"encoding": "latin1"}
# Load the data into memory as numpy arrays
train_set_path = os.path.join(data_path, "mnist.pkl.gz")
with gzip.open(train_set_path, "rb") as f:
train_set, _, _ = pickle.load(f, **pickle_args)
kmeans = KMeans(
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
k=10,
sagemaker_session=sagemaker_session,
output_path="s3://{}/".format(sagemaker_session.default_bucket()),
)
    # set KMeans-specific hyperparameters
    kmeans.init_method = "random"
    kmeans.max_iterations = 1
kmeans.tol = 1
kmeans.num_trials = 1
kmeans.local_init_method = "kmeans++"
kmeans.half_life_time_size = 1
kmeans.epochs = 1
records = kmeans.record_set(train_set[0][:100])
job_name = unique_name_from_base("test-kmeans-attach")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
kmeans.fit(records, job_name=job_name)
transform_input_path = os.path.join(data_path, "transform_input.csv")
transform_input_key_prefix = "integ-test-data/one_p_mnist/transform"
transform_input = kmeans.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
transformer = _create_transformer_and_transform_job(kmeans, transform_input, cpu_instance_type)
attached_transformer = Transformer.attach(
transformer.latest_transform_job.name, sagemaker_session=sagemaker_session
)
with timeout_and_delete_model_with_transformer(
transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
):
attached_transformer.wait()
def test_transform_mxnet_vpc(sagemaker_session, mxnet_full_version, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "mxnet_mnist")
script_path = os.path.join(data_path, "mnist.py")
ec2_client = sagemaker_session.boto_session.client("ec2")
subnet_ids, security_group_id = get_or_create_vpc_resources(ec2_client)
mx = MXNet(
entry_point=script_path,
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
sagemaker_session=sagemaker_session,
framework_version=mxnet_full_version,
subnets=subnet_ids,
security_group_ids=[security_group_id],
)
train_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "train"), key_prefix="integ-test-data/mxnet_mnist/train"
)
test_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "test"), key_prefix="integ-test-data/mxnet_mnist/test"
)
job_name = unique_name_from_base("test-mxnet-vpc")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
mx.fit({"train": train_input, "test": test_input}, job_name=job_name)
job_desc = sagemaker_session.sagemaker_client.describe_training_job(
TrainingJobName=mx.latest_training_job.name
)
assert set(subnet_ids) == set(job_desc["VpcConfig"]["Subnets"])
assert [security_group_id] == job_desc["VpcConfig"]["SecurityGroupIds"]
transform_input_path = os.path.join(data_path, "transform", "data.csv")
transform_input_key_prefix = "integ-test-data/mxnet_mnist/transform"
transform_input = mx.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
transformer = _create_transformer_and_transform_job(mx, transform_input, cpu_instance_type)
with timeout_and_delete_model_with_transformer(
transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
):
transformer.wait()
model_desc = sagemaker_session.sagemaker_client.describe_model(
ModelName=transformer.model_name
)
assert set(subnet_ids) == set(model_desc["VpcConfig"]["Subnets"])
assert [security_group_id] == model_desc["VpcConfig"]["SecurityGroupIds"]
def test_transform_mxnet_tags(sagemaker_session, mxnet_full_version, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "mxnet_mnist")
script_path = os.path.join(data_path, "mnist.py")
tags = [{"Key": "some-tag", "Value": "value-for-tag"}]
mx = MXNet(
entry_point=script_path,
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
sagemaker_session=sagemaker_session,
framework_version=mxnet_full_version,
)
train_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "train"), key_prefix="integ-test-data/mxnet_mnist/train"
)
test_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "test"), key_prefix="integ-test-data/mxnet_mnist/test"
)
job_name = unique_name_from_base("test-mxnet-transform")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
mx.fit({"train": train_input, "test": test_input}, job_name=job_name)
transform_input_path = os.path.join(data_path, "transform", "data.csv")
transform_input_key_prefix = "integ-test-data/mxnet_mnist/transform"
transform_input = mx.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
transformer = mx.transformer(1, cpu_instance_type, tags=tags)
transformer.transform(transform_input, content_type="text/csv")
with timeout_and_delete_model_with_transformer(
transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
):
transformer.wait()
model_desc = sagemaker_session.sagemaker_client.describe_model(
ModelName=transformer.model_name
)
model_tags = sagemaker_session.sagemaker_client.list_tags(
ResourceArn=model_desc["ModelArn"]
)["Tags"]
assert tags == model_tags
def test_transform_byo_estimator(sagemaker_session, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "one_p_mnist")
pickle_args = {} if sys.version_info.major == 2 else {"encoding": "latin1"}
tags = [{"Key": "some-tag", "Value": "value-for-tag"}]
# Load the data into memory as numpy arrays
train_set_path = os.path.join(data_path, "mnist.pkl.gz")
with gzip.open(train_set_path, "rb") as f:
train_set, _, _ = pickle.load(f, **pickle_args)
kmeans = KMeans(
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
k=10,
sagemaker_session=sagemaker_session,
output_path="s3://{}/".format(sagemaker_session.default_bucket()),
)
    # set KMeans-specific hyperparameters
    kmeans.init_method = "random"
    kmeans.max_iterations = 1
kmeans.tol = 1
kmeans.num_trials = 1
kmeans.local_init_method = "kmeans++"
kmeans.half_life_time_size = 1
kmeans.epochs = 1
records = kmeans.record_set(train_set[0][:100])
job_name = unique_name_from_base("test-kmeans-attach")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
kmeans.fit(records, job_name=job_name)
transform_input_path = os.path.join(data_path, "transform_input.csv")
transform_input_key_prefix = "integ-test-data/one_p_mnist/transform"
transform_input = kmeans.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
estimator = Estimator.attach(training_job_name=job_name, sagemaker_session=sagemaker_session)
transformer = estimator.transformer(1, cpu_instance_type, tags=tags)
transformer.transform(transform_input, content_type="text/csv")
with timeout_and_delete_model_with_transformer(
transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
):
transformer.wait()
model_desc = sagemaker_session.sagemaker_client.describe_model(
ModelName=transformer.model_name
)
model_tags = sagemaker_session.sagemaker_client.list_tags(
ResourceArn=model_desc["ModelArn"]
)["Tags"]
assert tags == model_tags
def test_single_transformer_multiple_jobs(sagemaker_session, mxnet_full_version, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "mxnet_mnist")
script_path = os.path.join(data_path, "mnist.py")
mx = MXNet(
entry_point=script_path,
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
sagemaker_session=sagemaker_session,
framework_version=mxnet_full_version,
)
train_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "train"), key_prefix="integ-test-data/mxnet_mnist/train"
)
test_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "test"), key_prefix="integ-test-data/mxnet_mnist/test"
)
job_name = unique_name_from_base("test-mxnet-transform")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
mx.fit({"train": train_input, "test": test_input}, job_name=job_name)
transform_input_path = os.path.join(data_path, "transform", "data.csv")
transform_input_key_prefix = "integ-test-data/mxnet_mnist/transform"
transform_input = mx.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
transformer = mx.transformer(1, cpu_instance_type)
job_name = unique_name_from_base("test-mxnet-transform")
transformer.transform(transform_input, content_type="text/csv", job_name=job_name)
with timeout_and_delete_model_with_transformer(
transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
):
assert transformer.output_path == "s3://{}/{}".format(
sagemaker_session.default_bucket(), job_name
)
job_name = unique_name_from_base("test-mxnet-transform")
transformer.transform(transform_input, content_type="text/csv", job_name=job_name)
assert transformer.output_path == "s3://{}/{}".format(
sagemaker_session.default_bucket(), job_name
)
def test_stop_transform_job(sagemaker_session, mxnet_full_version, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "mxnet_mnist")
script_path = os.path.join(data_path, "mnist.py")
tags = [{"Key": "some-tag", "Value": "value-for-tag"}]
mx = MXNet(
entry_point=script_path,
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
sagemaker_session=sagemaker_session,
framework_version=mxnet_full_version,
)
train_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "train"), key_prefix="integ-test-data/mxnet_mnist/train"
)
test_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "test"), key_prefix="integ-test-data/mxnet_mnist/test"
)
job_name = unique_name_from_base("test-mxnet-transform")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
mx.fit({"train": train_input, "test": test_input}, job_name=job_name)
transform_input_path = os.path.join(data_path, "transform", "data.csv")
transform_input_key_prefix = "integ-test-data/mxnet_mnist/transform"
transform_input = mx.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
transformer = mx.transformer(1, cpu_instance_type, tags=tags)
transformer.transform(transform_input, content_type="text/csv")
time.sleep(15)
latest_transform_job_name = transformer.latest_transform_job.name
print("Attempting to stop {}".format(latest_transform_job_name))
transformer.stop_transform_job()
desc = transformer.latest_transform_job.sagemaker_session.sagemaker_client.describe_transform_job(
TransformJobName=latest_transform_job_name
)
assert desc["TransformJobStatus"] == "Stopped"
def test_transform_mxnet_logs(sagemaker_session, mxnet_full_version, cpu_instance_type):
data_path = os.path.join(DATA_DIR, "mxnet_mnist")
script_path = os.path.join(data_path, "mnist.py")
mx = MXNet(
entry_point=script_path,
role="SageMakerRole",
train_instance_count=1,
train_instance_type=cpu_instance_type,
sagemaker_session=sagemaker_session,
framework_version=mxnet_full_version,
)
train_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "train"), key_prefix="integ-test-data/mxnet_mnist/train"
)
test_input = mx.sagemaker_session.upload_data(
path=os.path.join(data_path, "test"), key_prefix="integ-test-data/mxnet_mnist/test"
)
job_name = unique_name_from_base("test-mxnet-transform")
with timeout(minutes=TRAINING_DEFAULT_TIMEOUT_MINUTES):
mx.fit({"train": train_input, "test": test_input}, job_name=job_name)
transform_input_path = os.path.join(data_path, "transform", "data.csv")
transform_input_key_prefix = "integ-test-data/mxnet_mnist/transform"
transform_input = mx.sagemaker_session.upload_data(
path=transform_input_path, key_prefix=transform_input_key_prefix
)
with timeout(minutes=45):
transformer = _create_transformer_and_transform_job(
mx, transform_input, cpu_instance_type, wait=True, logs=True
)
with timeout_and_delete_model_with_transformer(
transformer, sagemaker_session, minutes=TRANSFORM_DEFAULT_TIMEOUT_MINUTES
):
transformer.wait()
def _create_transformer_and_transform_job(
estimator,
transform_input,
instance_type,
volume_kms_key=None,
input_filter=None,
output_filter=None,
join_source=None,
wait=False,
logs=False,
):
transformer = estimator.transformer(1, instance_type, volume_kms_key=volume_kms_key)
transformer.transform(
transform_input,
content_type="text/csv",
input_filter=input_filter,
output_filter=output_filter,
join_source=join_source,
wait=wait,
logs=logs,
)
return transformer
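The `input_filter`/`output_filter` arguments threaded through the helper above are JSONPath expressions that select parts of each record before and after inference; `"$"` selects the whole record. The actual filtering happens service-side in SageMaker — the following is only a toy pure-Python sketch of the selection semantics for the two simplest expression forms:

```python
def apply_jsonpath_filter(expression, record):
    """Toy JSONPath filter supporting only "$" (whole record)
    and "$[i]" (a single index into a list-shaped record)."""
    if expression == "$":
        return record
    if expression.startswith("$[") and expression.endswith("]"):
        return record[int(expression[2:-1])]
    raise ValueError("unsupported expression: %s" % expression)

assert apply_jsonpath_filter("$", [1, 2, 3]) == [1, 2, 3]
assert apply_jsonpath_filter("$[1]", [1, 2, 3]) == 2
```

This is why the test can assert `output_filter == job_desc["DataProcessing"]["OutputFilter"]` with `"$"`: the job description echoes back the expression, and `"$"` is a pass-through.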
# fastlmm/pyplink/__init__.py
from .snpset import *
from .snpreader import *
# snippets/python/django/models/get_models.py
from django import apps
apps.apps.get_models()
# cfgov/legacy/templatetags/staticversions.py
from django import template
from django.conf import settings
register = template.Library()
def get_static_version():
return '%s%s' % ('?ver=', settings.STATIC_VERSION)
register.simple_tag(get_static_version)
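In a template this tag appends a cache-busting query string to static asset URLs. A minimal standalone sketch of the value it produces, assuming a hypothetical `STATIC_VERSION` of `"abc123"` (in the real code this comes from `django.conf.settings`):

```python
# Hypothetical stand-in for settings.STATIC_VERSION.
STATIC_VERSION = "abc123"

def get_static_version_demo():
    # Mirrors the tag body: a "?ver=" prefix plus the configured version.
    return "%s%s" % ("?ver=", STATIC_VERSION)

assert get_static_version_demo() == "?ver=abc123"
```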
# tests/test_schedule_compute.py (hj424/heterocl)
import heterocl as hcl
import numpy as np
def test_pipeline():
hcl.init()
initiation_interval = 4
a = hcl.placeholder((10, 20))
b = hcl.placeholder((10, 20))
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j])
s = hcl.create_schedule([a, b, c])
s[c].pipeline(c.axis[0], initiation_interval)
ir = hcl.lower(s)
pipeline_hint_str = "\"initiation_interval\"="+str(initiation_interval)
assert pipeline_hint_str in str(ir)
def test_pipeline_num_axis():
hcl.init()
initiation_interval = 4
a = hcl.placeholder((10, 20))
b = hcl.placeholder((10, 20))
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j])
s = hcl.create_schedule([a, b, c])
s[c].pipeline(0, initiation_interval)
ir = hcl.lower(s)
pipeline_hint_str = "\"initiation_interval\"="+str(initiation_interval)
assert pipeline_hint_str in str(ir)
def test_unroll():
hcl.init()
factor = 4
a = hcl.placeholder((10, 20))
b = hcl.placeholder((10, 20))
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j])
s = hcl.create_schedule([a, b, c])
s[c].unroll(c.axis[0], factor=factor)
ir = hcl.lower(s)
unroll_hint_str = "\"factor\"="+str(factor)
assert unroll_hint_str in str(ir)
def test_unroll_num_axis():
hcl.init()
factor = 4
a = hcl.placeholder((10, 20))
b = hcl.placeholder((10, 20))
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j])
s = hcl.create_schedule([a, b, c])
s[c].unroll(0, factor=factor)
ir = hcl.lower(s)
unroll_hint_str = "\"factor\"="+str(factor)
assert unroll_hint_str in str(ir)
def test_fuse():
hcl.init()
a = hcl.placeholder((10, 20, 30, 40))
b = hcl.placeholder((10, 20, 30, 40))
c = hcl.compute(a.shape, lambda i, j, k, l: a[i, j, k, l] + b[i, j, k, l])
s = hcl.create_schedule([a, b, c])
s[c].fuse(c.axis[1], c.axis[2])
ir = hcl.lower(s)
assert "j.k.fused" in str(ir)
def test_fuse_num_axis():
hcl.init()
a = hcl.placeholder((10, 20, 30, 40))
b = hcl.placeholder((10, 20, 30, 40))
c = hcl.compute(a.shape, lambda i, j, k, l: a[i, j, k, l] + b[i, j, k, l])
s = hcl.create_schedule([a, b, c])
s[c].fuse(1, 2)
ir = hcl.lower(s)
assert "j.k.fused" in str(ir)
def test_reorder():
hcl.init()
a = hcl.placeholder((10, 20, 30, 40), name="a")
b = hcl.placeholder((10, 20, 30, 40), name="b")
c = hcl.compute(a.shape, lambda i, j, k, l: a[i, j, k, l] + b[i, j, k, l], name="c")
# axes are consecutive
def test_case_1():
s = hcl.create_schedule([a, b, c])
s[c].reorder(c.axis[2], c.axis[1])
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (i, 0, 10)")
assert str(ir.body.body.body).startswith("for (k, 0, 30)")
assert str(ir.body.body.body.body).startswith("for (j, 0, 20)")
assert str(ir.body.body.body.body.body).startswith("for (l, 0, 40)")
# axes are not consecutive
def test_case_2():
s = hcl.create_schedule([a, b, c])
s[c].reorder(c.axis[3], c.axis[0])
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (l, 0, 40)")
assert str(ir.body.body.body).startswith("for (j, 0, 20)")
assert str(ir.body.body.body.body).startswith("for (k, 0, 30)")
assert str(ir.body.body.body.body.body).startswith("for (i, 0, 10)")
test_case_1()
test_case_2()
def test_reorder_num_axis():
hcl.init()
a = hcl.placeholder((10, 20, 30, 40), name="a")
b = hcl.placeholder((10, 20, 30, 40), name="b")
c = hcl.compute(a.shape, lambda i, j, k, l: a[i, j, k, l] + b[i, j, k, l], name="c")
s = hcl.create_schedule([a, b, c])
s[c].reorder(2, 1)
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (i, 0, 10)")
assert str(ir.body.body.body).startswith("for (k, 0, 30)")
assert str(ir.body.body.body.body).startswith("for (j, 0, 20)")
assert str(ir.body.body.body.body.body).startswith("for (l, 0, 40)")
def test_split():
hcl.init()
a = hcl.placeholder((10, 20), name="a")
b = hcl.placeholder((10, 20), name="b")
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j], name="c")
# without if condition
def test_transform_mode_1():
s = hcl.create_schedule([a, b, c])
s[c].split(c.axis[1], factor=4, mode="transform")
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (i, 0, 10)")
assert str(ir.body.body.body).startswith("for (j.outer, 0, 5)")
assert str(ir.body.body.body.body).startswith("for (j.inner, 0, 4)")
assert str(ir.body.body.body.body.body).startswith("c[")
# with if condition
def test_transform_mode_2():
s = hcl.create_schedule([a, b, c])
s[c].split(c.axis[1], factor=3, mode="transform")
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (i, 0, 10)")
assert str(ir.body.body.body).startswith("for (j.outer, 0, 7)")
assert str(ir.body.body.body.body).startswith("for (j.inner, 0, 3)")
assert str(ir.body.body.body.body.body).startswith(
"if ((j.inner < (20 - (j.outer*3))))")
def test_annotate_mode():
split_factor = 3
s = hcl.create_schedule([a, b, c])
s[c].split(c.axis[1], factor=split_factor, mode="annotate")
split_hint_str = "\"split_factor\"="+str(split_factor)
ir = hcl.lower(s)
assert split_hint_str in str(ir)
test_transform_mode_1()
test_transform_mode_2()
test_annotate_mode()
def test_split_num_axis():
hcl.init()
a = hcl.placeholder((10, 20), name="a")
b = hcl.placeholder((10, 20), name="b")
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j], name="c")
s = hcl.create_schedule([a, b, c])
s[c].split(1, factor=4, mode="transform")
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (i, 0, 10)")
assert str(ir.body.body.body).startswith("for (j.outer, 0, 5)")
assert str(ir.body.body.body.body).startswith("for (j.inner, 0, 4)")
assert str(ir.body.body.body.body.body).startswith("c[")
def test_split_reorder():
hcl.init()
a = hcl.placeholder((10, 20), name="a")
b = hcl.placeholder((10, 20), name="b")
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j], name="c")
def test_case_1():
s = hcl.create_schedule([a, b, c])
xo, xi = s[c].split(c.axis[0], factor=2, mode="transform")
yo, yi = s[c].split(c.axis[1], factor=5, mode="transform")
s[c].reorder(yo, xo, yi, xi)
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (j.outer, 0, 4)")
assert str(ir.body.body.body).startswith("for (i.outer, 0, 5)")
assert str(ir.body.body.body.body).startswith("for (j.inner, 0, 5)")
assert str(ir.body.body.body.body.body).startswith("for (i.inner, 0, 2)")
def test_case_2():
s = hcl.create_schedule([a, b, c])
xo, xi = s[c].split(c.axis[0], factor=3, mode="transform")
yo, yi = s[c].split(c.axis[1], factor=3, mode="transform")
s[c].reorder(yi, xi, yo, xo)
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (j.inner, 0, 3)")
assert str(ir.body.body.body).startswith("for (i.inner, 0, 3)")
assert str(ir.body.body.body.body).startswith("for (j.outer, 0, 7)")
assert str(ir.body.body.body.body.body).startswith("for (i.outer, 0, 4)")
assert str(ir.body.body.body.body.body.body).startswith(
"if ((j.inner < (20 - (j.outer*3))))")
assert str(ir.body.body.body.body.body.body.then_case).startswith(
"if ((i.inner < (10 - (i.outer*3)))")
test_case_1()
test_case_2()
def test_split_reorder_num_axis():
    # note that this is not the recommended way
hcl.init()
a = hcl.placeholder((10, 20), name="a")
b = hcl.placeholder((10, 20), name="b")
c = hcl.compute(a.shape, lambda i, j: a[i, j] + b[i, j], name="c")
s = hcl.create_schedule([a, b, c])
xo, xi = s[c].split(0, factor=2, mode="transform")
yo, yi = s[c].split(2, factor=5, mode="transform")
s[c].reorder(2, 0, 3, 1)
ir = hcl.lower(s)
assert str(ir.body.body).startswith("for (j.outer, 0, 4)")
assert str(ir.body.body.body).startswith("for (i.outer, 0, 5)")
assert str(ir.body.body.body.body).startswith("for (j.inner, 0, 5)")
assert str(ir.body.body.body.body.body).startswith("for (i.inner, 0, 2)")
def test_compute_at():
def _build_kernel():
hcl.init()
A = hcl.placeholder((10, 20, 30), name="A")
B = hcl.compute(A.shape, lambda i, j, m: A[i, j, m] * 2, name="B")
C = hcl.compute(B.shape, lambda ii, jj, mm: B[ii, jj, mm] + 1, name="C")
return A, B, C
def _verify_build(sch):
f = hcl.build(sch)
a_np = np.random.randint(low=0, high=100, size=(10, 20, 30))
a_hcl = hcl.asarray(a_np)
c_hcl = hcl.asarray(np.zeros(a_np.shape), dtype="int32")
f(a_hcl, c_hcl)
c_np = a_np * 2 + 1
np.testing.assert_allclose(c_np, c_hcl.asnumpy())
def test_case_1():
# axis 0
A, B, C = _build_kernel()
s0 = hcl.create_schedule([A, C])
s0[B].compute_at(s0[C], C.axis[0])
ir0 = hcl.lower(s0)
assert "allocate B[int32 * 1 * 20 * 30]" in str(ir0)
_verify_build(s0)
# axis 1
A, B, C = _build_kernel()
s1 = hcl.create_schedule([A, C])
s1[B].compute_at(s1[C], C.axis[1])
ir1 = hcl.lower(s1)
assert "allocate B[int32 * 1 * 1 * 30]" in str(ir1)
_verify_build(s1)
# axis 2
A, B, C = _build_kernel()
s2 = hcl.create_schedule([A, C])
s2[B].compute_at(s2[C], C.axis[2])
ir2 = hcl.lower(s2)
assert "allocate B[int32 * 1 * 1 * 1]" in str(ir2)
_verify_build(s2)
def test_case_2():
A, B, C = _build_kernel()
s = hcl.create_schedule([A, C])
s[B].compute_at(s[C], C.axis[2])
s[C].fuse(C.axis[0], C.axis[1])
ir = hcl.lower(s)
assert "allocate B[int32 * 1 * 1 * 1]" in str(ir)
_verify_build(s)
def test_case_3():
A, B, C = _build_kernel()
s = hcl.create_schedule([A, C])
s[B].compute_at(s[C], C.axis[2])
s[C].split(C.axis[0], factor=3)
s[C].split(C.axis[1], factor=3)
ir = hcl.lower(s)
assert "allocate B[int32 * 1 * 1 * 1]" in str(ir)
_verify_build(s)
# compute_at and reorder, compute at an axis that is not reordered
# check both directions of reorder and compute_at
def test_case_4():
A, B, C = _build_kernel()
s0 = hcl.create_schedule([A, C])
s0[B].compute_at(s0[C], C.axis[2])
s0[C].reorder(C.axis[1], C.axis[0])
ir0 = hcl.lower(s0)
assert "allocate B[int32 * 1 * 1 * 1]" in str(ir0)
_verify_build(s0)
# compute_at and reorder, compute at an axis that has been reordered
# note that the results will be different
def test_case_5():
A, B, C = _build_kernel()
s0 = hcl.create_schedule([A, C])
s0[B].compute_at(s0[C], C.axis[1])
s0[C].reorder(C.axis[1], C.axis[0])
ir0 = hcl.lower(s0)
assert "allocate B[int32 * 1 * 1 * 30]" in str(ir0)
_verify_build(s0)
def test_case_6():
A, B, C = _build_kernel()
s = hcl.create_schedule([A, C])
s[B].compute_at(s[C], C.axis[2])
yo, yi = s[C].split(C.axis[0], factor=3)
xo, xi = s[C].split(C.axis[1], factor=3)
s[C].reorder(yo, xo, yi, xi)
ir = hcl.lower(s)
assert "allocate B[int32 * 1 * 1 * 1]" in str(ir)
_verify_build(s)
test_case_1()
test_case_2()
test_case_3()
test_case_4()
test_case_5()
test_case_6()
def test_compute_at_complex():
hcl.init()
A = hcl.placeholder((10, 20, 30), name="A")
B = hcl.compute(A.shape, lambda i, j, m: A[i, j, m] * 2, name="B")
C = hcl.compute(B.shape, lambda ii, jj, mm: B[ii, jj, mm] + 1, name="C")
D = hcl.compute(C.shape, lambda iii, jjj, mmm: C[iii, jjj, mmm] % 3, name="D")
s = hcl.create_schedule([A, D])
s[B].compute_at(s[C], C.axis[1])
s[C].compute_at(s[D], D.axis[2])
ir = hcl.lower(s)
assert "allocate B[int32 * 1 * 1 * 30]" in str(ir)
assert "allocate C[int32 * 1 * 1 * 1]" in str(ir)
f = hcl.build(s)
a_np = np.random.randint(low=0, high=100, size=A.shape)
a_hcl = hcl.asarray(a_np)
d_hcl = hcl.asarray(np.zeros(D.shape), dtype="int32")
f(a_hcl, d_hcl)
d_np = (a_np * 2 + 1) % 3
np.testing.assert_allclose(d_np, d_hcl.asnumpy())
def test_compute_at_complex_num_axis():
hcl.init()
A = hcl.placeholder((10, 20, 30), name="A")
B = hcl.compute(A.shape, lambda i, j, m: A[i, j, m] * 2, name="B")
C = hcl.compute(B.shape, lambda ii, jj, mm: B[ii, jj, mm] + 1, name="C")
D = hcl.compute(C.shape, lambda iii, jjj, mmm: C[iii, jjj, mmm] % 3, name="D")
s = hcl.create_schedule([A, D])
s[B].compute_at(s[C], 1)
s[C].compute_at(s[D], 2)
ir = hcl.lower(s)
assert "allocate B[int32 * 1 * 1 * 30]" in str(ir)
assert "allocate C[int32 * 1 * 1 * 1]" in str(ir)
f = hcl.build(s)
a_np = np.random.randint(low=0, high=100, size=A.shape)
a_hcl = hcl.asarray(a_np)
d_hcl = hcl.asarray(np.zeros(D.shape), dtype="int32")
f(a_hcl, d_hcl)
d_np = (a_np * 2 + 1) % 3
np.testing.assert_allclose(d_np, d_hcl.asnumpy())
def test_compute_at_with_reuse_1D():
hcl.init()
A = hcl.compute((10, 10), lambda y, x: x + y, "A")
B = hcl.compute((10, 8), lambda y, x: A[y, x] + A[y, x+1] + A[y, x+2], "B")
s = hcl.create_schedule([B])
s[A].compute_at(s[B], B.axis[1])
ir = hcl.lower(s)
assert "allocate A[int32 * 1 * 3]" in str(ir)
f = hcl.build(s)
a_np = np.fromfunction(lambda i, j: i + j, A.shape, dtype="int")
b_np = np.zeros(B.shape, dtype="int")
c_np = np.zeros(B.shape, dtype="int")
for y in range(0, 10):
for x in range(0, 8):
c_np[y][x] = a_np[y][x] + a_np[y][x+1] + a_np[y][x+2]
b_hcl = hcl.asarray(b_np)
f(b_hcl)
np.testing.assert_array_equal(c_np, b_hcl.asnumpy())
def test_compute_at_with_reuse_2D():
hcl.init()
A = hcl.compute((10, 10), lambda y, x: x + y, "A")
B = hcl.compute((8, 8), lambda y, x: A[y, x] + A[y+1, x+1] + A[y+2, x+2], "B")
s = hcl.create_schedule([B])
s[A].compute_at(s[B], B.axis[1])
ir = hcl.lower(s)
assert "allocate A[int32 * 3 * 3]" in str(ir)
f = hcl.build(s)
a_np = np.fromfunction(lambda i, j: i + j, A.shape, dtype="int")
b_np = np.zeros(B.shape, dtype="int")
c_np = np.zeros(B.shape, dtype="int")
for y in range(0, 8):
for x in range(0, 8):
c_np[y][x] = a_np[y][x] + a_np[y+1][x+1] + a_np[y+2][x+2]
b_hcl = hcl.asarray(b_np)
f(b_hcl)
np.testing.assert_array_equal(c_np, b_hcl.asnumpy())
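The `allocate A[int32 * 3 * 3]` assertion checks that `compute_at` shrinks the intermediate buffer down to the stencil's reuse window. As a plain-Python sanity sketch (not HeteroCL code), the window extent along each axis is just the span of offsets the consumer reads:

```python
def window_extent(offsets):
    # Reuse-window size along one axis for a stencil reading these offsets.
    return max(offsets) - min(offsets) + 1

# 2D case: B reads A[y, x], A[y+1, x+1], A[y+2, x+2] -> a 3x3 window suffices.
assert (window_extent([0, 1, 2]), window_extent([0, 1, 2])) == (3, 3)
# 1D case above: B reads A[y, x..x+2] -> a 1x3 window, matching A[int32 * 1 * 3].
assert window_extent([0, 1, 2]) == 3
```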
def test_compute_at_with_reuse_2D_complex():
hcl.init()
A = hcl.compute((10, 10), lambda y, x: x + y, "A")
B = hcl.compute((8, 8), lambda y, x: A[y, x] + A[y+1, x+1] + A[y+2, x+2], "B")
s = hcl.create_schedule([B])
s[A].compute_at(s[B], B.axis[1])
s[B].split(B.axis[1], 4)
ir = hcl.lower(s)
assert "allocate A[int32 * 3 * 3]" in str(ir)
f = hcl.build(s)
a_np = np.fromfunction(lambda i, j: i + j, A.shape, dtype="int")
b_np = np.zeros(B.shape, dtype="int")
c_np = np.zeros(B.shape, dtype="int")
for y in range(0, 8):
for x in range(0, 8):
c_np[y][x] = a_np[y][x] + a_np[y+1][x+1] + a_np[y+2][x+2]
b_hcl = hcl.asarray(b_np)
f(b_hcl)
np.testing.assert_array_equal(c_np, b_hcl.asnumpy())
def test_compute_at_no_dep():
hcl.init()
A = hcl.compute((10, 10), lambda y, x: y + x, "A")
B = hcl.compute((10, 10), lambda y, x: y - x, "B")
s = hcl.create_schedule([A, B])
s[A].compute_at(s[B], B.axis[1])
f = hcl.build(s)
a_hcl = hcl.asarray(np.zeros(A.shape, dtype="int"))
b_hcl = hcl.asarray(np.zeros(B.shape, dtype="int"))
f(a_hcl, b_hcl)
a_np = np.fromfunction(lambda i, j: i + j, A.shape, dtype="int")
b_np = np.fromfunction(lambda i, j: i - j, B.shape, dtype="int")
np.testing.assert_array_equal(a_np, a_hcl.asnumpy())
np.testing.assert_array_equal(b_np, b_hcl.asnumpy())
def test_compute_at_no_dep_diff_shape_smaller():
hcl.init()
A = hcl.compute((8, 8), lambda y, x: y + x, "A")
B = hcl.compute((10, 10), lambda y, x: y - x, "B")
s = hcl.create_schedule([A, B])
s[A].compute_at(s[B], B.axis[1])
f = hcl.build(s)
a_hcl = hcl.asarray(np.zeros(A.shape, dtype="int"))
b_hcl = hcl.asarray(np.zeros(B.shape, dtype="int"))
f(a_hcl, b_hcl)
a_np = np.fromfunction(lambda i, j: i + j, A.shape, dtype="int")
b_np = np.fromfunction(lambda i, j: i - j, B.shape, dtype="int")
np.testing.assert_array_equal(a_np, a_hcl.asnumpy())
np.testing.assert_array_equal(b_np, b_hcl.asnumpy())
def test_compute_at_no_dep_diff_shape_larger():
    hcl.init()
    A = hcl.compute((12, 12), lambda y, x: y + x, "A")
    B = hcl.compute((10, 10), lambda y, x: y - x, "B")
    s = hcl.create_schedule([A, B])
    # A is larger than B, so the part of A outside B's 10x10 iteration
    # domain is truncated: it is never computed and stays zero.
    s[A].compute_at(s[B], B.axis[1])
    f = hcl.build(s)
    a_hcl = hcl.asarray(np.zeros(A.shape, dtype="int"))
    b_hcl = hcl.asarray(np.zeros(B.shape, dtype="int"))
    f(a_hcl, b_hcl)
    a_np = np.fromfunction(lambda i, j: i + j, A.shape, dtype="int")
    b_np = np.fromfunction(lambda i, j: i - j, B.shape, dtype="int")
    for i in range(0, 12):
        for j in range(0, 12):
            if (i >= 10 or j >= 10):
                a_np[i][j] = 0
    np.testing.assert_array_equal(a_np, a_hcl.asnumpy())
    np.testing.assert_array_equal(b_np, b_hcl.asnumpy())
def test_multi_stage():
    hcl.init()

    def test(A):
        r = hcl.reduce_axis(0, 10)
        B = hcl.compute((10,), lambda x: hcl.sum(A[x, r], axis=r), "B")
        return B

    A = hcl.placeholder((10, 10))
    s = hcl.create_schedule([A], test)
    s[test.B].split(test.B.axis[0], 5)
    f = hcl.build(s)
    a_np = np.random.randint(0, 10, size=(10, 10))
    b_np = np.zeros(shape=(10,), dtype="int")
    a_hcl = hcl.asarray(a_np)
    b_hcl = hcl.asarray(b_np)
    f(a_hcl, b_hcl)
    d_np = np.sum(a_np, axis=1)
    np.testing.assert_array_equal(d_np, b_hcl.asnumpy())
| 38.020661 | 88 | 0.574122 | 3,308 | 18,402 | 3.077388 | 0.047158 | 0.078585 | 0.073084 | 0.055992 | 0.884774 | 0.860511 | 0.841356 | 0.825246 | 0.807564 | 0.782024 | 0 | 0.042727 | 0.233072 | 18,402 | 483 | 89 | 38.099379 | 0.678594 | 0.021682 | 0 | 0.683841 | 0 | 0 | 0.073878 | 0 | 0 | 0 | 0 | 0 | 0.17096 | 1 | 0.088993 | false | 0 | 0.004684 | 0 | 0.098361 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc954e66cc81631df6abb49e249ba480a129738b | 48 | py | Python | SOSAT/samplers/__init__.py | pnnl/SOSAT | 610f99e0bb80f2f5e7836e7e3b6b816e029838bb | [
"BSD-3-Clause"
] | null | null | null | SOSAT/samplers/__init__.py | pnnl/SOSAT | 610f99e0bb80f2f5e7836e7e3b6b816e029838bb | [
"BSD-3-Clause"
] | 1 | 2021-03-22T18:59:05.000Z | 2021-03-22T18:59:05.000Z | SOSAT/samplers/__init__.py | pnnl/SOSAT | 610f99e0bb80f2f5e7836e7e3b6b816e029838bb | [
"BSD-3-Clause"
] | null | null | null | from .rejection_sampler import RejectionSampler
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d77b36e95af5aeee8f2ce48c26139e5be3f597f | 5 | py | Python | test_data/parse_retree/expected/start_and_stop_symbols/start_symbol_at_the_beginning/source.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 5 | 2021-12-29T12:55:34.000Z | 2022-03-01T17:57:21.000Z | test_data/parse_retree/expected/start_and_stop_symbols/start_symbol_at_the_beginning/source.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 10 | 2021-12-29T02:15:55.000Z | 2022-03-09T11:04:22.000Z | test_data/parse_retree/expected/start_and_stop_symbols/start_symbol_at_the_beginning/source.py | aas-core-works/aas-core-codegen | afec2cf363b6cb69816e7724a2b58626e2165869 | [
"MIT"
] | 2 | 2021-12-29T01:42:12.000Z | 2022-02-15T13:46:33.000Z | "^a"
| 2.5 | 4 | 0.2 | 1 | 5 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 5 | 1 | 5 | 5 | 0.25 | 0.4 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d8eb32106e7b03e94957c15cfb90baae9c6e346 | 90 | py | Python | c3dev/galmocks/data_loaders/__init__.py | aphearin/c3dev | d36d083c9eb688640670dbe066bf299777a78ba7 | [
"BSD-3-Clause"
] | 2 | 2020-09-23T00:47:06.000Z | 2022-02-08T18:41:00.000Z | c3dev/galmocks/data_loaders/__init__.py | aphearin/c3dev | d36d083c9eb688640670dbe066bf299777a78ba7 | [
"BSD-3-Clause"
] | 2 | 2022-01-24T15:45:08.000Z | 2022-02-07T20:58:40.000Z | c3dev/galmocks/data_loaders/__init__.py | aphearin/c3dev | d36d083c9eb688640670dbe066bf299777a78ba7 | [
"BSD-3-Clause"
] | 5 | 2018-03-27T17:21:06.000Z | 2022-03-11T19:45:30.000Z | """
"""
from .load_unit_sims import read_unit_sim
from .load_gumbo import read_gumbo_mock
| 18 | 41 | 0.8 | 15 | 90 | 4.333333 | 0.6 | 0.246154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 90 | 4 | 42 | 22.5 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5dbca933e3a55af593b08c8dfdf119188d1406e7 | 124 | py | Python | pylogrus/__init__.py | StationA/pylogrus | 1189c9510207c4f21dfa83c574973ca59079577c | [
"MIT"
] | 13 | 2018-01-08T10:29:32.000Z | 2021-12-16T15:21:43.000Z | pylogrus/__init__.py | StationA/pylogrus | 1189c9510207c4f21dfa83c574973ca59079577c | [
"MIT"
] | 1 | 2020-01-29T16:32:48.000Z | 2020-01-29T21:40:37.000Z | pylogrus/__init__.py | StationA/pylogrus | 1189c9510207c4f21dfa83c574973ca59079577c | [
"MIT"
] | 4 | 2019-06-12T13:47:09.000Z | 2021-07-02T12:34:42.000Z | # -*- coding: utf-8 -*-
from .base import PyLogrus
from .json_formatter import JsonFormatter
from .text_formatter import *
| 20.666667 | 41 | 0.75 | 16 | 124 | 5.6875 | 0.6875 | 0.32967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0.145161 | 124 | 5 | 42 | 24.8 | 0.849057 | 0.169355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5dc4aeeff87be72b0afcbaeed907adfd1cce8e20 | 70 | py | Python | modules/unit_tests/core/model/__init__.py | nursix/DRKCM | 09328289ff721c416494398aa751ff99906327cb | [
"MIT"
] | 3 | 2022-01-26T08:07:54.000Z | 2022-03-21T21:53:52.000Z | modules/unit_tests/core/model/__init__.py | nursix/eden-asp | e49f46cb6488918f8d5a163dcd5a900cd686978c | [
"MIT"
] | null | null | null | modules/unit_tests/core/model/__init__.py | nursix/eden-asp | e49f46cb6488918f8d5a163dcd5a900cd686978c | [
"MIT"
] | 1 | 2017-10-03T13:03:47.000Z | 2017-10-03T13:03:47.000Z | from .datamodel import *
from .dynamic import *
from .fields import *
| 17.5 | 24 | 0.742857 | 9 | 70 | 5.777778 | 0.555556 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 70 | 3 | 25 | 23.333333 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8e5ad6431c278aad757989813b38a2df7ca2677 | 128 | py | Python | vicarui/src/vicarui/support/__init__.py | joniumGit/moons | f5f8b7e23e707c8cf7e1081c4a1c0fcc22182d85 | [
"MIT"
] | 1 | 2021-07-16T06:30:37.000Z | 2021-07-16T06:30:37.000Z | vicarui/src/vicarui/support/__init__.py | joniumGit/moons | f5f8b7e23e707c8cf7e1081c4a1c0fcc22182d85 | [
"MIT"
] | null | null | null | vicarui/src/vicarui/support/__init__.py | joniumGit/moons | f5f8b7e23e707c8cf7e1081c4a1c0fcc22182d85 | [
"MIT"
] | 1 | 2021-05-26T03:53:41.000Z | 2021-05-26T03:53:41.000Z | from .concurrent import *
from .pipeline import *
from .ui import *
from .tasks import *
from .tex import *
from .misc import *
| 18.285714 | 25 | 0.71875 | 18 | 128 | 5.111111 | 0.444444 | 0.543478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 128 | 6 | 26 | 21.333333 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d1f3b4255f4c107a41365b8796311f8758b6f66 | 3,686 | py | Python | zippy/benchmarks/src/benchmarks/whoosh/tests/test_flexible.py | lucapele/pele-c | ff6d06794a171f8e1b08fc6246446d9777116f56 | [
"BSD-3-Clause"
] | 319 | 2016-09-22T15:54:48.000Z | 2022-03-18T02:36:58.000Z | zippy/benchmarks/src/benchmarks/whoosh/tests/test_flexible.py | lucapele/pele-c | ff6d06794a171f8e1b08fc6246446d9777116f56 | [
"BSD-3-Clause"
] | 9 | 2016-11-03T21:56:41.000Z | 2020-08-09T19:27:37.000Z | zippy/benchmarks/src/benchmarks/whoosh/tests/test_flexible.py | lucapele/pele-c | ff6d06794a171f8e1b08fc6246446d9777116f56 | [
"BSD-3-Clause"
] | 27 | 2016-10-06T16:05:32.000Z | 2022-03-18T02:37:00.000Z | from __future__ import with_statement
from whoosh import fields
from whoosh.compat import u, b
from whoosh.util.testing import TempIndex
def test_addfield():
    schema = fields.Schema(id=fields.ID(stored=True), content=fields.TEXT)
    with TempIndex(schema, "addfield") as ix:
        w = ix.writer()
        w.add_document(id=u("a"), content=u("alfa"))
        w.add_document(id=u("b"), content=u("bravo"))
        w.add_document(id=u("c"), content=u("charlie"))
        w.commit()

        ix.add_field("added", fields.KEYWORD(stored=True))
        w = ix.writer()
        w.add_document(id=u("d"), content=u("delta"), added=u("fourth"))
        w.add_document(id=u("e"), content=u("echo"), added=u("fifth"))
        w.commit(merge=False)

        with ix.searcher() as s:
            assert ("id", "d") in s.reader()
            assert s.document(id="d") == {"id": "d", "added": "fourth"}
            assert s.document(id="b") == {"id": "b"}


def test_addfield_spelling():
    schema = fields.Schema(id=fields.ID(stored=True), content=fields.TEXT)
    with TempIndex(schema, "addfield") as ix:
        w = ix.writer()
        w.add_document(id=u("a"), content=u("alfa"))
        w.add_document(id=u("b"), content=u("bravo"))
        w.add_document(id=u("c"), content=u("charlie"))
        w.commit()

        ix.add_field("added", fields.KEYWORD(stored=True, spelling=True))
        w = ix.writer()
        w.add_document(id=u("d"), content=u("delta"), added=u("fourth"))
        w.add_document(id=u("e"), content=u("echo"), added=u("fifth"))
        w.commit(merge=False)

        with ix.searcher() as s:
            assert s.document(id=u("d")) == {"id": "d", "added": "fourth"}
            assert s.document(id=u("b")) == {"id": "b"}


def test_removefield():
    schema = fields.Schema(id=fields.ID(stored=True),
                           content=fields.TEXT,
                           city=fields.KEYWORD(stored=True))
    with TempIndex(schema, "removefield") as ix:
        w = ix.writer()
        w.add_document(id=u("b"), content=u("bravo"), city=u("baghdad"))
        w.add_document(id=u("c"), content=u("charlie"), city=u("cairo"))
        w.add_document(id=u("d"), content=u("delta"), city=u("dakar"))
        w.commit()

        with ix.searcher() as s:
            assert s.document(id=u("c")) == {"id": "c", "city": "cairo"}

        w = ix.writer()
        w.remove_field("content")
        w.remove_field("city")
        w.commit()

        ixschema = ix._current_schema()
        assert ixschema.names() == ["id"]
        assert ixschema.stored_names() == ["id"]

        with ix.searcher() as s:
            assert ("content", b("charlie")) not in s.reader()
            assert s.document(id=u("c")) == {"id": u("c")}


def test_optimize_away():
    schema = fields.Schema(id=fields.ID(stored=True),
                           content=fields.TEXT,
                           city=fields.KEYWORD(stored=True))
    with TempIndex(schema, "optimizeaway") as ix:
        w = ix.writer()
        w.add_document(id=u("b"), content=u("bravo"), city=u("baghdad"))
        w.add_document(id=u("c"), content=u("charlie"), city=u("cairo"))
        w.add_document(id=u("d"), content=u("delta"), city=u("dakar"))
        w.commit()

        with ix.searcher() as s:
            assert s.document(id=u("c")) == {"id": "c", "city": "cairo"}

        w = ix.writer()
        w.remove_field("content")
        w.remove_field("city")
        w.commit(optimize=True)

        with ix.searcher() as s:
            assert ("content", u("charlie")) not in s.reader()
            assert s.document(id=u("c")) == {"id": u("c")}


if __name__ == "__main__":
    test_addfield()
| 35.104762 | 74 | 0.549919 | 509 | 3,686 | 3.89391 | 0.131631 | 0.12109 | 0.122099 | 0.113017 | 0.842079 | 0.830474 | 0.830474 | 0.787084 | 0.787084 | 0.754793 | 0 | 0 | 0.251492 | 3,686 | 104 | 75 | 35.442308 | 0.718376 | 0 | 0 | 0.6875 | 0 | 0 | 0.092784 | 0 | 0 | 0 | 0 | 0 | 0.1625 | 1 | 0.05 | false | 0 | 0.05 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d2cc4ce3ec8b5b636e180da88f3d4d9d8343da5 | 1,661 | py | Python | examples/bp_network_example.py | Yangruipis/simple_ml | 09657f6b017b973a5201aa611774d6ac8f0fc0a2 | [
"MIT"
] | 25 | 2018-04-17T04:38:51.000Z | 2021-10-09T04:07:53.000Z | examples/bp_network_example.py | Yangruipis/simple_ml | 09657f6b017b973a5201aa611774d6ac8f0fc0a2 | [
"MIT"
] | null | null | null | examples/bp_network_example.py | Yangruipis/simple_ml | 09657f6b017b973a5201aa611774d6ac8f0fc0a2 | [
"MIT"
] | 5 | 2018-04-17T05:27:00.000Z | 2020-12-01T02:55:15.000Z | # -*- coding:utf-8 -*-
from simple_ml.classify_data import *
from simple_ml.neural_network import *
from simple_ml.data_handle import train_test_split
from simple_ml.base.base_enum import *
def wine_example():
    x, y = get_wine()
    x = x[(y == 0) | (y == 1)]
    y = y[(y == 0) | (y == 1)]
    x_train, y_train, x_test, y_test = train_test_split(x, y, 0.3, 918)
    nn = NeuralNetwork(alpha=0.5, cost_func=CostFunction.square)
    nn.clear_all()
    nn.add_some_layers(2, 3, active_func=ActiveFunction.relu)
    nn.fit(x_train, y_train)
    print(nn.predict_prob(x_test))
    nn.classify_plot(x_test, y_test)
    nn.auc_plot(x_test, y_test)


def moon_example():
    x, y = get_moon()
    x = x[(y == 0) | (y == 1)]
    y = y[(y == 0) | (y == 1)]
    x_train, y_train, x_test, y_test = train_test_split(x, y, 0.3, 918)
    nn = NeuralNetwork(alpha=0.5, cost_func=CostFunction.square)
    nn.clear_all()
    nn.add_some_layers(2, 3, active_func=ActiveFunction.relu)
    nn.fit(x_train, y_train)
    print(nn.predict_prob(x_test))
    nn.classify_plot(x_test, y_test)
    nn.auc_plot(x_test, y_test)


def multi_class_example():
    x, y = get_wine()
    x_train, y_train, x_test, y_test = train_test_split(x, y, 0.3, 918)
    nn = NeuralNetwork(alpha=0.5, cost_func=CostFunction.square)
    nn.clear_all()
    nn.add_some_layers(2, 3, active_func=ActiveFunction.relu)
    nn.fit(x_train, y_train)
    # print(nn.predict_prob(x_test))  # raises an error for multi-class output
    nn.classify_plot(x_test, y_test)
    # nn.auc_plot(x_test, y_test)  # raises an error for multi-class output


if __name__ == '__main__':
    # wine_example()
    # moon_example()
    multi_class_example() | 28.152542 | 71 | 0.661048 | 285 | 1,661 | 3.529825 | 0.207018 | 0.059642 | 0.053678 | 0.089463 | 0.744533 | 0.744533 | 0.712724 | 0.712724 | 0.712724 | 0.712724 | 0 | 0.026946 | 0.195665 | 1,661 | 59 | 72 | 28.152542 | 0.726048 | 0.092715 | 0 | 0.736842 | 0 | 0 | 0.005333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.105263 | 0 | 0.184211 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
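The `add_some_layers(2, 3, ...)` calls in the example above build two hidden layers of width 3. A hand-rolled NumPy forward pass sketches the same shape of network: two ReLU hidden layers feeding a sigmoid probability output. This is not simple_ml's implementation; all names and the output activation are illustrative assumptions:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights, biases):
    # hidden layers use ReLU (ActiveFunction.relu); the last layer is
    # squashed through a sigmoid to yield a probability
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ w + b)
    z = h @ weights[-1] + biases[-1]
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(918)
n_features = 13  # the wine dataset has 13 features
weights = [rng.normal(size=(n_features, 3)),
           rng.normal(size=(3, 3)),
           rng.normal(size=(3, 1))]
biases = [np.zeros(3), np.zeros(3), np.zeros(1)]

probs = forward(rng.normal(size=(5, n_features)), weights, biases)
assert probs.shape == (5, 1)
assert float(probs.min()) >= 0.0 and float(probs.max()) <= 1.0
```

Training would then adjust `weights`/`biases` by backpropagating the squared-error cost (`CostFunction.square`), which is what `nn.fit` does internally.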
5d3baa0db2bce0b7980f8b8f7bdd7a9aa8ca1cb4 | 101 | py | Python | pg_methods/objectives/__init__.py | zafarali/policy-gradient-methods | f0d83a80ddc772dcad0c851aac9bfd41d436c274 | [
"MIT"
] | 28 | 2018-06-12T21:37:20.000Z | 2021-12-27T15:13:14.000Z | pg_methods/objectives/__init__.py | zafarali/policy-gradient-methods | f0d83a80ddc772dcad0c851aac9bfd41d436c274 | [
"MIT"
] | 3 | 2018-05-10T16:33:05.000Z | 2018-06-19T18:17:37.000Z | pg_methods/objectives/__init__.py | zafarali/policy-gradient-methods | f0d83a80ddc772dcad0c851aac9bfd41d436c274 | [
"MIT"
] | 7 | 2018-05-08T04:13:21.000Z | 2021-04-02T12:31:55.000Z | from pg_methods.objectives.objectives import PolicyGradientObjective, NaturalPolicyGradientObjective | 101 | 101 | 0.920792 | 8 | 101 | 11.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049505 | 101 | 1 | 101 | 101 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5d70db3e31aa797d90b5190140187b5cb6c080b1 | 13,048 | py | Python | tests-python/test_waveform_from_file.py | babycat-io/babycat | 39ecba8469e698a990bc9dc52e5de9ae78492a60 | [
"MIT"
] | 8 | 2021-05-10T23:12:14.000Z | 2022-02-23T06:54:31.000Z | tests-python/test_waveform_from_file.py | babycat-io/babycat | 39ecba8469e698a990bc9dc52e5de9ae78492a60 | [
"MIT"
] | 13 | 2021-06-01T05:31:17.000Z | 2022-03-25T22:24:18.000Z | tests-python/test_waveform_from_file.py | babycat-io/babycat | 39ecba8469e698a990bc9dc52e5de9ae78492a60 | [
"MIT"
] | 1 | 2021-06-01T05:24:52.000Z | 2021-06-01T05:24:52.000Z | """
Tests loading waveform from file.
These tests mirror the ones in ``../tests/test_waveform_from_file.rs``
"""
import pytest
from fixtures import *
import babycat
Waveform = babycat.Waveform
bexc = babycat.exceptions
def test_circus_of_freaks_default_1():
    waveform = Waveform.from_file(COF_FILENAME)
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == COF_NUM_FRAMES
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_wrong_time_offset_1():
    with pytest.raises(bexc.WrongTimeOffset):
        Waveform.from_file(
            COF_FILENAME,
            start_time_milliseconds=1000,
            end_time_milliseconds=999,
        )


def test_circus_of_freaks_wrong_time_offset_2():
    with pytest.raises(bexc.WrongTimeOffset):
        Waveform.from_file(
            COF_FILENAME,
            start_time_milliseconds=1000,
            end_time_milliseconds=1000,
        )


def test_circus_of_freaks_invalid_end_time_milliseconds_zero_pad_ending_1():
    with pytest.raises(bexc.CannotZeroPadWithoutSpecifiedLength):
        Waveform.from_file(
            COF_FILENAME,
            start_time_milliseconds=5,
            end_time_milliseconds=0,
            zero_pad_ending=True,
        )


def test_circus_of_freaks_get_channels_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        num_channels=1,
    )
    assert waveform.num_channels == 1
    assert waveform.num_frames == COF_NUM_FRAMES
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_get_channels_2():
    waveform = Waveform.from_file(
        COF_FILENAME,
        num_channels=2,
    )
    assert waveform.num_channels == 2
    assert waveform.num_frames == COF_NUM_FRAMES
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_get_channels_too_many_1():
    with pytest.raises(bexc.WrongNumChannels):
        Waveform.from_file(
            COF_FILENAME,
            num_channels=3,
        )


def test_circus_of_freaks_convert_to_mono_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        num_channels=2,
        convert_to_mono=True,
    )
    assert waveform.num_channels == 1
    assert waveform.num_frames == COF_NUM_FRAMES
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_convert_to_mono_2():
    waveform = Waveform.from_file(
        COF_FILENAME,
        convert_to_mono=True,
    )
    assert waveform.num_channels == 1
    assert waveform.num_frames == COF_NUM_FRAMES
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_convert_to_mono_invalid_1():
    with pytest.raises(bexc.WrongNumChannelsAndMono):
        Waveform.from_file(
            COF_FILENAME,
            num_channels=1,
            convert_to_mono=True,
        )


def test_circus_of_freaks_start_end_milliseconds_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=1,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 44
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_2():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=10,
        end_time_milliseconds=11,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 44
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_3():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=30000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1323000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_4():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=15000,
        end_time_milliseconds=45000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1323000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_5():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=30000,
        end_time_milliseconds=60000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1168776
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=1,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 44
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_2():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=10,
        end_time_milliseconds=11,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 44
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_3():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=30000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1323000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_4():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=15000,
        end_time_milliseconds=45000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1323000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_5():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=30000,
        end_time_milliseconds=60000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1323000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_6():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=60000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 2646000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_7():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=90000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 3969000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_start_end_milliseconds_zero_pad_ending_8():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=30000,
        end_time_milliseconds=90000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 2646000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_end_milliseconds_zero_pad_ending_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        end_time_milliseconds=90000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 3969000
    assert waveform.frame_rate_hz == COF_FRAME_RATE_HZ


def test_circus_of_freaks_invalid_resample_1():
    with pytest.raises(bexc.WrongFrameRateRatio):
        Waveform.from_file(
            COF_FILENAME,
            frame_rate_hz=1,
        )


def test_circus_of_freaks_invalid_resample_2():
    with pytest.raises(bexc.WrongFrameRateRatio):
        Waveform.from_file(
            COF_FILENAME,
            frame_rate_hz=20,
        )


def test_circus_of_freaks_invalid_resample_3():
    with pytest.raises(bexc.WrongFrameRateRatio):
        Waveform.from_file(
            COF_FILENAME,
            frame_rate_hz=172,
        )


def test_circus_of_freaks_resample_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=22050,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1245888
    assert waveform.frame_rate_hz == 22050


def test_circus_of_freaks_resample_2():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=11025,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 622944
    assert waveform.frame_rate_hz == 11025


def test_circus_of_freaks_resample_3():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=88200,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 4983552
    assert waveform.frame_rate_hz == 88200


def test_circus_of_freaks_resample_4():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=4410,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 249178
    assert waveform.frame_rate_hz == 4410


def test_circus_of_freaks_resample_5():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=44099,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 2491720
    assert waveform.frame_rate_hz == 44099


def test_circus_of_freaks_resample_6():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=48000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 2712138
    assert waveform.frame_rate_hz == 48000


def test_circus_of_freaks_resample_7():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=60000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 3390172
    assert waveform.frame_rate_hz == 60000


def test_circus_of_freaks_resample_8():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=88200,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 4983552
    assert waveform.frame_rate_hz == 88200


def test_circus_of_freaks_resample_9():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=96000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 5424275
    assert waveform.frame_rate_hz == 96000


def test_circus_of_freaks_resample_10():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=200,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 11301
    assert waveform.frame_rate_hz == 200


def test_circus_of_freaks_resample_11():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=2000,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 113006
    assert waveform.frame_rate_hz == 2000


def test_circus_of_freaks_resample_12():
    waveform = Waveform.from_file(
        COF_FILENAME,
        frame_rate_hz=173,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 9775
    assert waveform.frame_rate_hz == 173


def test_circus_of_freaks_start_end_milliseconds_resample_zero_pad_ending_1():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=60000,
        frame_rate_hz=48000,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 2880000
    assert waveform.frame_rate_hz == 48000


def test_circus_of_freaks_start_end_milliseconds_resample_zero_pad_ending_2():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=60000,
        frame_rate_hz=44099,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 2645940
    assert waveform.frame_rate_hz == 44099


def test_circus_of_freaks_start_end_milliseconds_resample_zero_pad_ending_3():
    waveform = Waveform.from_file(
        COF_FILENAME,
        start_time_milliseconds=0,
        end_time_milliseconds=60000,
        frame_rate_hz=22050,
        zero_pad_ending=True,
    )
    assert waveform.num_channels == COF_NUM_CHANNELS
    assert waveform.num_frames == 1323000
    assert waveform.frame_rate_hz == 22050
| 28.931264 | 78 | 0.722946 | 1,683 | 13,048 | 5.124777 | 0.063577 | 0.165565 | 0.090551 | 0.073043 | 0.941565 | 0.908174 | 0.868174 | 0.840232 | 0.81542 | 0.745623 | 0 | 0.048527 | 0.21191 | 13,048 | 450 | 79 | 28.995556 | 0.790236 | 0.008047 | 0 | 0.643454 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.284123 | 1 | 0.116992 | false | 0 | 0.008357 | 0 | 0.125348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
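The frame counts asserted in the full-file resample tests above are all consistent with a source of 2491776 frames at 44100 Hz and a resampled length rounded up. The sketch below reproduces that arithmetic; it is an observation about the test fixtures, not babycat's documented rounding rule, and `COF_NUM_FRAMES = 2491776` is inferred from the assertions rather than stated in this file:

```python
COF_NUM_FRAMES = 2491776   # length implied by the resample assertions
COF_FRAME_RATE_HZ = 44100

def expected_frames(new_hz):
    # ceil(frames * new_rate / old_rate), done in exact integer arithmetic
    return -(-(COF_NUM_FRAMES * new_hz) // COF_FRAME_RATE_HZ)

# These match test_circus_of_freaks_resample_1, _5, _6, and _12.
assert expected_frames(22050) == 1245888
assert expected_frames(44099) == 2491720
assert expected_frames(48000) == 2712138
assert expected_frames(173) == 9775
```

The tests that combine trimming with `zero_pad_ending=True` follow a different rule: the length is duration-based, e.g. 60000 ms at 48000 Hz gives exactly 2880000 frames.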
53783b77c5bbb1c282e971de6c290c04b5b47458 | 40,606 | py | Python | tools/grammar-analysis/ANTLRv4Lexer.py | sanyaade-teachings/spresensedroplet | 8dffe6d07a744288870aa5f37a179aa5db22f8d2 | [
"MIT"
] | 3 | 2017-05-19T15:07:19.000Z | 2018-10-25T12:29:27.000Z | tools/grammar-analysis/ANTLRv4Lexer.py | sanyaade-teachings/spresensedroplet | 8dffe6d07a744288870aa5f37a179aa5db22f8d2 | [
"MIT"
] | 1 | 2018-08-30T19:37:37.000Z | 2018-08-30T19:37:37.000Z | tools/grammar-analysis/ANTLRv4Lexer.py | sanyaade-teachings/spresensedroplet | 8dffe6d07a744288870aa5f37a179aa5db22f8d2 | [
"MIT"
] | 7 | 2017-04-07T14:11:01.000Z | 2021-11-11T19:32:01.000Z | # Generated from java-escape by ANTLR 4.5
from antlr4 import *
from io import StringIO
from LexerAdaptor import LexerAdaptor
def serializedATN():
    with StringIO() as buf:
        buf.write("\3\u0430\ud6d1\u8206\uad2d\u4417\uaef1\u8d80\uaadd\2?")
        buf.write("\u03c6\b\1\b\1\b\1\b\1\b\1\b\1\b\1\4\2\t\2\4\3\t\3\4\4")
        buf.write("\t\4\4\5\t\5\4\6\t\6\4\7\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4")
        buf.write("\13\t\13\4\f\t\f\4\r\t\r\4\16\t\16\4\17\t\17\4\20\t\20")
        buf.write("\4\21\t\21\4\22\t\22\4\23\t\23\4\24\t\24\4\25\t\25\4\26")
        buf.write("\t\26\4\27\t\27\4\30\t\30\4\31\t\31\4\32\t\32\4\33\t\33")
        buf.write("\4\34\t\34\4\35\t\35\4\36\t\36\4\37\t\37\4 \t \4!\t!\4")
        buf.write("\"\t\"\4#\t#\4$\t$\4%\t%\4&\t&\4\'\t\'\4(\t(\4)\t)\4*")
        buf.write("\t*\4+\t+\4,\t,\4-\t-\4.\t.\4/\t/\4\60\t\60\4\61\t\61")
        buf.write("\4\62\t\62\4\63\t\63\4\64\t\64\4\65\t\65\4\66\t\66\4\67")
        buf.write("\t\67\48\t8\49\t9\4:\t:\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?")
        buf.write("\4@\t@\4A\tA\4B\tB\4C\tC\4D\tD\4E\tE\4F\tF\4G\tG\4H\t")
        buf.write("H\4I\tI\4J\tJ\4K\tK\4L\tL\4M\tM\4N\tN\4O\tO\4P\tP\4Q\t")
        buf.write("Q\4R\tR\4S\tS\4T\tT\4U\tU\4V\tV\4W\tW\4X\tX\4Y\tY\4Z\t")
        buf.write("Z\4[\t[\4\\\t\\\4]\t]\4^\t^\4_\t_\4`\t`\4a\ta\4b\tb\4")
        buf.write("c\tc\4d\td\4e\te\4f\tf\4g\tg\4h\th\4i\ti\4j\tj\4k\tk\4")
        buf.write("l\tl\4m\tm\4n\tn\4o\to\4p\tp\4q\tq\4r\tr\4s\ts\4t\tt\4")
        buf.write("u\tu\4v\tv\4w\tw\4x\tx\4y\ty\4z\tz\4{\t{\4|\t|\4}\t}\4")
        buf.write("~\t~\4\177\t\177\4\u0080\t\u0080\4\u0081\t\u0081\4\u0082")
        buf.write("\t\u0082\4\u0083\t\u0083\4\u0084\t\u0084\4\u0085\t\u0085")
        buf.write("\4\u0086\t\u0086\4\u0087\t\u0087\4\u0088\t\u0088\4\u0089")
        buf.write("\t\u0089\4\u008a\t\u008a\4\u008b\t\u008b\4\u008c\t\u008c")
        buf.write("\4\u008d\t\u008d\4\u008e\t\u008e\4\u008f\t\u008f\4\u0090")
        buf.write("\t\u0090\4\u0091\t\u0091\4\u0092\t\u0092\4\u0093\t\u0093")
        buf.write("\4\u0094\t\u0094\4\u0095\t\u0095\4\u0096\t\u0096\4\u0097")
        buf.write("\t\u0097\4\u0098\t\u0098\4\u0099\t\u0099\3\2\3\2\3\3\3")
        buf.write("\3\3\3\3\3\3\4\3\4\3\4\3\4\3\5\3\5\3\6\3\6\3\7\3\7\3\b")
        buf.write("\3\b\3\b\3\t\3\t\3\t\3\t\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3")
        buf.write("\n\3\n\3\n\3\13\3\13\3\13\3\13\3\13\3\13\3\13\3\13\3\13")
        buf.write("\3\f\3\f\3\f\3\f\3\f\3\f\3\f\3\f\3\f\3\f\3\f\3\r\3\r\3")
        buf.write("\r\3\r\3\r\3\r\3\r\3\16\3\16\3\16\3\16\3\16\3\16\3\16")
        buf.write("\3\16\3\16\3\17\3\17\3\17\3\17\3\17\3\17\3\20\3\20\3\20")
        buf.write("\3\20\3\20\3\20\3\20\3\21\3\21\3\21\3\21\3\21\3\21\3\21")
        buf.write("\3\21\3\22\3\22\3\22\3\22\3\22\3\22\3\22\3\22\3\22\3\22")
        buf.write("\3\23\3\23\3\23\3\23\3\23\3\23\3\23\3\24\3\24\3\24\3\24")
        buf.write("\3\24\3\24\3\24\3\24\3\25\3\25\3\25\3\25\3\25\3\25\3\25")
        buf.write("\3\25\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\27\3\27\3\27")
        buf.write("\3\27\3\27\3\27\3\27\3\30\3\30\3\30\3\30\3\30\3\30\3\31")
        buf.write("\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\32\3\32\3\32\3\32")
        buf.write("\3\32\3\33\3\33\3\34\3\34\3\35\3\35\3\36\3\36\3\37\3\37")
        buf.write("\3 \3 \3!\3!\3\"\3\"\3#\3#\3$\3$\3%\3%\3&\3&\3\'\3\'\3")
        buf.write("(\3(\3)\3)\3*\3*\3+\3+\3,\3,\3-\3-\3.\3.\3/\3/\3\60\3")
        buf.write("\60\3\61\3\61\3\62\3\62\3\63\6\63\u0207\n\63\r\63\16\63")
        buf.write("\u0208\3\63\3\63\3\64\3\64\3\64\3\64\3\65\3\65\5\65\u0213")
        buf.write("\n\65\3\66\3\66\3\67\3\67\38\38\38\38\78\u021d\n8\f8\16")
        buf.write("8\u0220\138\38\38\38\58\u0225\n8\39\39\39\39\39\79\u022c")
        buf.write("\n9\f9\169\u022f\139\39\39\39\59\u0234\n9\3:\3:\3:\3:")
        buf.write("\7:\u023a\n:\f:\16:\u023d\13:\3;\3;\3;\3;\3;\5;\u0244")
        buf.write("\n;\3<\3<\3<\3=\3=\3=\3=\3=\5=\u024e\n=\5=\u0250\n=\5")
        buf.write("=\u0252\n=\5=\u0254\n=\3>\3>\3>\7>\u0259\n>\f>\16>\u025c")
        buf.write("\13>\5>\u025e\n>\3?\3?\3@\3@\3A\3A\3A\3A\3A\3A\3A\3A\3")
        buf.write("A\5A\u026d\nA\3B\3B\3B\5B\u0272\nB\3B\3B\3C\3C\3C\7C\u0279")
        buf.write("\nC\fC\16C\u027c\13C\3C\3C\3D\3D\3D\7D\u0283\nD\fD\16")
        buf.write("D\u0286\13D\3D\3D\3E\3E\3E\7E\u028d\nE\fE\16E\u0290\13")
        buf.write("E\3F\3F\3F\3F\5F\u0296\nF\3G\3G\3H\3H\3H\3H\3I\3I\3J\3")
        buf.write("J\3K\3K\3K\3L\3L\3M\3M\3N\3N\3O\3O\3P\3P\3Q\3Q\3R\3R\3")
        buf.write("S\3S\3T\3T\3T\3U\3U\3V\3V\3W\3W\3X\3X\3Y\3Y\3Z\3Z\3[\3")
        buf.write("[\3[\3\\\3\\\3]\3]\3^\3^\3_\3_\3`\3`\3a\3a\3b\3b\3b\3")
        buf.write("c\3c\3d\3d\3e\3e\3f\3f\3f\3f\3f\3g\3g\3g\3g\3h\3h\3h\3")
        buf.write("h\3i\3i\3i\3i\3j\3j\3j\3k\3k\3k\3k\3l\3l\3m\3m\3m\3m\3")
        buf.write("m\3n\3n\3n\3n\3o\3o\3o\3o\3p\3p\3p\3p\3q\3q\3q\3q\3r\3")
        buf.write("r\3r\3r\3s\3s\3s\3s\3t\3t\3t\3u\3u\3u\3u\3v\3v\3w\3w\3")
        buf.write("w\3w\3w\3x\3x\3x\3x\3x\3y\3y\3y\3y\3y\3z\3z\3z\3z\3{\3")
        buf.write("{\3{\3{\3{\3|\3|\3|\3|\3}\3}\3}\3}\3~\3~\3~\3~\3\177\3")
        buf.write("\177\3\177\3\177\3\u0080\3\u0080\3\u0080\3\u0080\3\u0081")
        buf.write("\3\u0081\3\u0081\3\u0081\3\u0082\3\u0082\3\u0082\3\u0082")
        buf.write("\3\u0083\6\u0083\u0351\n\u0083\r\u0083\16\u0083\u0352")
        buf.write("\3\u0083\3\u0083\3\u0083\3\u0084\3\u0084\3\u0084\3\u0084")
        buf.write("\3\u0084\3\u0085\3\u0085\3\u0085\3\u0085\3\u0085\3\u0086")
        buf.write("\3\u0086\3\u0086\3\u0086\3\u0086\3\u0087\3\u0087\3\u0087")
        buf.write("\3\u0087\3\u0088\3\u0088\3\u0088\3\u0088\3\u0088\3\u0089")
        buf.write("\3\u0089\3\u0089\3\u0089\3\u008a\3\u008a\3\u008a\3\u008a")
        buf.write("\3\u008b\3\u008b\3\u008b\3\u008b\3\u008c\6\u008c\u037d")
        buf.write("\n\u008c\r\u008c\16\u008c\u037e\3\u008c\3\u008c\3\u008c")
buf.write("\3\u008d\3\u008d\3\u008d\3\u008d\3\u008d\3\u008e\3\u008e")
buf.write("\3\u008e\3\u008e\3\u008e\3\u008f\3\u008f\3\u008f\3\u008f")
buf.write("\3\u008f\3\u0090\3\u0090\3\u0090\3\u0090\3\u0091\3\u0091")
buf.write("\3\u0091\3\u0091\3\u0091\3\u0092\3\u0092\3\u0092\3\u0092")
buf.write("\3\u0093\3\u0093\3\u0093\3\u0093\3\u0094\3\u0094\3\u0094")
buf.write("\3\u0094\3\u0095\6\u0095\u03a9\n\u0095\r\u0095\16\u0095")
buf.write("\u03aa\3\u0095\3\u0095\3\u0095\3\u0096\3\u0096\6\u0096")
buf.write("\u03b2\n\u0096\r\u0096\16\u0096\u03b3\3\u0096\3\u0096")
buf.write("\3\u0097\3\u0097\3\u0097\3\u0097\3\u0098\3\u0098\3\u0098")
buf.write("\3\u0098\3\u0099\3\u0099\7\u0099\u03c2\n\u0099\f\u0099")
buf.write("\16\u0099\u03c5\13\u0099\4\u021e\u022d\2\u009a\t\6\13")
buf.write("\7\r\b\17\t\21\n\23\13\25\f\27\r\31\16\33\17\35\20\37")
buf.write("\21!\22#\23%\24\'\25)\26+\27-\30/\31\61\32\63\33\65\34")
buf.write("\67\359\36;\37= ?!A\"C#E$G%I&K\'M(O)Q*S+U,W-Y.[/]\60_")
buf.write("\61a\62c\63e\64g\65i\66k\67m8o\2q\2s\2u\2w\2y\2{\2}\2")
buf.write("\177\2\u0081\2\u0083\2\u0085\2\u0087\2\u0089\2\u008b\2")
buf.write("\u008d\2\u008f\2\u0091\2\u0093\2\u0095\2\u0097\2\u0099")
buf.write("\2\u009b\2\u009d\2\u009f\2\u00a1\2\u00a3\2\u00a5\2\u00a7")
buf.write("\2\u00a9\2\u00ab\2\u00ad\2\u00af\2\u00b1\2\u00b3\2\u00b5")
buf.write("\2\u00b7\2\u00b9\2\u00bb\2\u00bd\2\u00bf\2\u00c1\2\u00c3")
buf.write("\2\u00c5\2\u00c7\2\u00c9\2\u00cb\2\u00cd\2\u00cf\2\u00d1")
buf.write("\2\u00d3\2\u00d5\2\u00d7\2\u00d99\u00db:\u00dd;\u00df")
buf.write("\2\u00e1\2\u00e3\2\u00e5\2\u00e7\2\u00e9\2\u00eb\2\u00ed")
buf.write("<\u00ef=\u00f1>\u00f3\2\u00f5\2\u00f7\2\u00f9\2\u00fb")
buf.write("\2\u00fd\2\u00ff\2\u0101\2\u0103\2\u0105\2\u0107\2\u0109")
buf.write("\2\u010b\2\u010d\2\u010f\2\u0111\2\u0113\2\u0115\2\u0117")
buf.write("\2\u0119\2\u011b\2\u011d\2\u011f\2\u0121\2\u0123\2\u0125")
buf.write("\2\u0127\2\u0129\2\u012b\2\u012d\2\u012f\2\u0131\2\u0133")
buf.write("\5\u0135?\u0137\2\t\2\3\4\5\6\7\b\16\4\2\13\13\"\"\4\2")
buf.write("\f\f\16\17\4\2\f\f\17\17\n\2$$))^^ddhhppttvv\3\2\63;\5")
buf.write("\2\62;CHch\3\2\62;\6\2\f\f\17\17))^^\6\2\f\f\17\17$$^")
buf.write("^\5\2\u00b9\u00b9\u0302\u0371\u2041\u2042\17\2C\\c|\u00c2")
buf.write("\u00d8\u00da\u00f8\u00fa\u0301\u0372\u037f\u0381\u2001")
buf.write("\u200e\u200f\u2072\u2191\u2c02\u2ff1\u3003\ud801\uf902")
buf.write("\ufdd1\ufdf2\uffff\3\2^_\u03ae\2\t\3\2\2\2\2\13\3\2\2")
buf.write("\2\2\r\3\2\2\2\2\17\3\2\2\2\2\21\3\2\2\2\2\23\3\2\2\2")
buf.write("\2\25\3\2\2\2\2\27\3\2\2\2\2\31\3\2\2\2\2\33\3\2\2\2\2")
buf.write("\35\3\2\2\2\2\37\3\2\2\2\2!\3\2\2\2\2#\3\2\2\2\2%\3\2")
buf.write("\2\2\2\'\3\2\2\2\2)\3\2\2\2\2+\3\2\2\2\2-\3\2\2\2\2/\3")
buf.write("\2\2\2\2\61\3\2\2\2\2\63\3\2\2\2\2\65\3\2\2\2\2\67\3\2")
buf.write("\2\2\29\3\2\2\2\2;\3\2\2\2\2=\3\2\2\2\2?\3\2\2\2\2A\3")
buf.write("\2\2\2\2C\3\2\2\2\2E\3\2\2\2\2G\3\2\2\2\2I\3\2\2\2\2K")
buf.write("\3\2\2\2\2M\3\2\2\2\2O\3\2\2\2\2Q\3\2\2\2\2S\3\2\2\2\2")
buf.write("U\3\2\2\2\2W\3\2\2\2\2Y\3\2\2\2\2[\3\2\2\2\2]\3\2\2\2")
buf.write("\2_\3\2\2\2\2a\3\2\2\2\2c\3\2\2\2\2e\3\2\2\2\2g\3\2\2")
buf.write("\2\2i\3\2\2\2\2k\3\2\2\2\2m\3\2\2\2\3\u00d1\3\2\2\2\3")
buf.write("\u00d3\3\2\2\2\3\u00d5\3\2\2\2\3\u00d7\3\2\2\2\3\u00d9")
buf.write("\3\2\2\2\3\u00db\3\2\2\2\3\u00dd\3\2\2\2\4\u00df\3\2\2")
buf.write("\2\4\u00e1\3\2\2\2\4\u00e3\3\2\2\2\4\u00e5\3\2\2\2\4\u00e7")
buf.write("\3\2\2\2\4\u00e9\3\2\2\2\4\u00eb\3\2\2\2\4\u00ed\3\2\2")
buf.write("\2\4\u00ef\3\2\2\2\4\u00f1\3\2\2\2\5\u00f3\3\2\2\2\5\u00f5")
buf.write("\3\2\2\2\5\u00f7\3\2\2\2\5\u00f9\3\2\2\2\5\u00fb\3\2\2")
buf.write("\2\5\u00fd\3\2\2\2\5\u00ff\3\2\2\2\5\u0101\3\2\2\2\5\u0103")
buf.write("\3\2\2\2\5\u0105\3\2\2\2\5\u0107\3\2\2\2\5\u0109\3\2\2")
buf.write("\2\5\u010b\3\2\2\2\6\u010d\3\2\2\2\6\u010f\3\2\2\2\6\u0111")
buf.write("\3\2\2\2\6\u0113\3\2\2\2\6\u0115\3\2\2\2\6\u0117\3\2\2")
buf.write("\2\6\u0119\3\2\2\2\6\u011b\3\2\2\2\6\u011d\3\2\2\2\7\u011f")
buf.write("\3\2\2\2\7\u0121\3\2\2\2\7\u0123\3\2\2\2\7\u0125\3\2\2")
buf.write("\2\7\u0127\3\2\2\2\7\u0129\3\2\2\2\7\u012b\3\2\2\2\7\u012d")
buf.write("\3\2\2\2\7\u012f\3\2\2\2\b\u0131\3\2\2\2\b\u0133\3\2\2")
buf.write("\2\b\u0135\3\2\2\2\t\u0139\3\2\2\2\13\u013b\3\2\2\2\r")
buf.write("\u013f\3\2\2\2\17\u0143\3\2\2\2\21\u0145\3\2\2\2\23\u0147")
buf.write("\3\2\2\2\25\u0149\3\2\2\2\27\u014c\3\2\2\2\31\u0150\3")
buf.write("\2\2\2\33\u015a\3\2\2\2\35\u0163\3\2\2\2\37\u016e\3\2")
buf.write("\2\2!\u0175\3\2\2\2#\u017e\3\2\2\2%\u0184\3\2\2\2\'\u018b")
buf.write("\3\2\2\2)\u0193\3\2\2\2+\u019d\3\2\2\2-\u01a4\3\2\2\2")
buf.write("/\u01ac\3\2\2\2\61\u01b4\3\2\2\2\63\u01bb\3\2\2\2\65\u01c2")
buf.write("\3\2\2\2\67\u01c8\3\2\2\29\u01d0\3\2\2\2;\u01d5\3\2\2")
buf.write("\2=\u01d7\3\2\2\2?\u01d9\3\2\2\2A\u01db\3\2\2\2C\u01dd")
buf.write("\3\2\2\2E\u01df\3\2\2\2G\u01e1\3\2\2\2I\u01e3\3\2\2\2")
buf.write("K\u01e5\3\2\2\2M\u01e7\3\2\2\2O\u01e9\3\2\2\2Q\u01eb\3")
buf.write("\2\2\2S\u01ed\3\2\2\2U\u01ef\3\2\2\2W\u01f1\3\2\2\2Y\u01f3")
buf.write("\3\2\2\2[\u01f5\3\2\2\2]\u01f7\3\2\2\2_\u01f9\3\2\2\2")
buf.write("a\u01fb\3\2\2\2c\u01fd\3\2\2\2e\u01ff\3\2\2\2g\u0201\3")
buf.write("\2\2\2i\u0203\3\2\2\2k\u0206\3\2\2\2m\u020c\3\2\2\2o\u0212")
buf.write("\3\2\2\2q\u0214\3\2\2\2s\u0216\3\2\2\2u\u0218\3\2\2\2")
buf.write("w\u0226\3\2\2\2y\u0235\3\2\2\2{\u023e\3\2\2\2}\u0245\3")
buf.write("\2\2\2\177\u0248\3\2\2\2\u0081\u025d\3\2\2\2\u0083\u025f")
buf.write("\3\2\2\2\u0085\u0261\3\2\2\2\u0087\u026c\3\2\2\2\u0089")
buf.write("\u026e\3\2\2\2\u008b\u0275\3\2\2\2\u008d\u027f\3\2\2\2")
buf.write("\u008f\u0289\3\2\2\2\u0091\u0295\3\2\2\2\u0093\u0297\3")
buf.write("\2\2\2\u0095\u0299\3\2\2\2\u0097\u029d\3\2\2\2\u0099\u029f")
buf.write("\3\2\2\2\u009b\u02a1\3\2\2\2\u009d\u02a4\3\2\2\2\u009f")
buf.write("\u02a6\3\2\2\2\u00a1\u02a8\3\2\2\2\u00a3\u02aa\3\2\2\2")
buf.write("\u00a5\u02ac\3\2\2\2\u00a7\u02ae\3\2\2\2\u00a9\u02b0\3")
buf.write("\2\2\2\u00ab\u02b2\3\2\2\2\u00ad\u02b4\3\2\2\2\u00af\u02b7")
buf.write("\3\2\2\2\u00b1\u02b9\3\2\2\2\u00b3\u02bb\3\2\2\2\u00b5")
buf.write("\u02bd\3\2\2\2\u00b7\u02bf\3\2\2\2\u00b9\u02c1\3\2\2\2")
buf.write("\u00bb\u02c3\3\2\2\2\u00bd\u02c6\3\2\2\2\u00bf\u02c8\3")
buf.write("\2\2\2\u00c1\u02ca\3\2\2\2\u00c3\u02cc\3\2\2\2\u00c5\u02ce")
buf.write("\3\2\2\2\u00c7\u02d0\3\2\2\2\u00c9\u02d2\3\2\2\2\u00cb")
buf.write("\u02d5\3\2\2\2\u00cd\u02d7\3\2\2\2\u00cf\u02d9\3\2\2\2")
buf.write("\u00d1\u02db\3\2\2\2\u00d3\u02e0\3\2\2\2\u00d5\u02e4\3")
buf.write("\2\2\2\u00d7\u02e8\3\2\2\2\u00d9\u02ec\3\2\2\2\u00db\u02ef")
buf.write("\3\2\2\2\u00dd\u02f3\3\2\2\2\u00df\u02f5\3\2\2\2\u00e1")
buf.write("\u02fa\3\2\2\2\u00e3\u02fe\3\2\2\2\u00e5\u0302\3\2\2\2")
buf.write("\u00e7\u0306\3\2\2\2\u00e9\u030a\3\2\2\2\u00eb\u030e\3")
buf.write("\2\2\2\u00ed\u0312\3\2\2\2\u00ef\u0315\3\2\2\2\u00f1\u0319")
buf.write("\3\2\2\2\u00f3\u031b\3\2\2\2\u00f5\u0320\3\2\2\2\u00f7")
buf.write("\u0325\3\2\2\2\u00f9\u032a\3\2\2\2\u00fb\u032e\3\2\2\2")
buf.write("\u00fd\u0333\3\2\2\2\u00ff\u0337\3\2\2\2\u0101\u033b\3")
buf.write("\2\2\2\u0103\u033f\3\2\2\2\u0105\u0343\3\2\2\2\u0107\u0347")
buf.write("\3\2\2\2\u0109\u034b\3\2\2\2\u010b\u0350\3\2\2\2\u010d")
buf.write("\u0357\3\2\2\2\u010f\u035c\3\2\2\2\u0111\u0361\3\2\2\2")
buf.write("\u0113\u0366\3\2\2\2\u0115\u036a\3\2\2\2\u0117\u036f\3")
buf.write("\2\2\2\u0119\u0373\3\2\2\2\u011b\u0377\3\2\2\2\u011d\u037c")
buf.write("\3\2\2\2\u011f\u0383\3\2\2\2\u0121\u0388\3\2\2\2\u0123")
buf.write("\u038d\3\2\2\2\u0125\u0392\3\2\2\2\u0127\u0396\3\2\2\2")
buf.write("\u0129\u039b\3\2\2\2\u012b\u039f\3\2\2\2\u012d\u03a3\3")
buf.write("\2\2\2\u012f\u03a8\3\2\2\2\u0131\u03b1\3\2\2\2\u0133\u03b7")
buf.write("\3\2\2\2\u0135\u03bb\3\2\2\2\u0137\u03bf\3\2\2\2\u0139")
buf.write("\u013a\5w9\2\u013a\n\3\2\2\2\u013b\u013c\5u8\2\u013c\u013d")
buf.write("\3\2\2\2\u013d\u013e\b\3\2\2\u013e\f\3\2\2\2\u013f\u0140")
buf.write("\5y:\2\u0140\u0141\3\2\2\2\u0141\u0142\b\4\2\2\u0142\16")
buf.write("\3\2\2\2\u0143\u0144\5\u0081>\2\u0144\20\3\2\2\2\u0145")
buf.write("\u0146\5\u008bC\2\u0146\22\3\2\2\2\u0147\u0148\5\u008f")
buf.write("E\2\u0148\24\3\2\2\2\u0149\u014a\5\u00a9R\2\u014a\u014b")
buf.write("\b\b\3\2\u014b\26\3\2\2\2\u014c\u014d\5\u00a5P\2\u014d")
buf.write("\u014e\3\2\2\2\u014e\u014f\b\t\4\2\u014f\30\3\2\2\2\u0150")
buf.write("\u0151\7q\2\2\u0151\u0152\7r\2\2\u0152\u0153\7v\2\2\u0153")
buf.write("\u0154\7k\2\2\u0154\u0155\7q\2\2\u0155\u0156\7p\2\2\u0156")
buf.write("\u0157\7u\2\2\u0157\u0158\3\2\2\2\u0158\u0159\b\n\5\2")
buf.write("\u0159\32\3\2\2\2\u015a\u015b\7v\2\2\u015b\u015c\7q\2")
buf.write("\2\u015c\u015d\7m\2\2\u015d\u015e\7g\2\2\u015e\u015f\7")
buf.write("p\2\2\u015f\u0160\7u\2\2\u0160\u0161\3\2\2\2\u0161\u0162")
buf.write("\b\13\6\2\u0162\34\3\2\2\2\u0163\u0164\7e\2\2\u0164\u0165")
buf.write("\7j\2\2\u0165\u0166\7c\2\2\u0166\u0167\7p\2\2\u0167\u0168")
buf.write("\7p\2\2\u0168\u0169\7g\2\2\u0169\u016a\7n\2\2\u016a\u016b")
buf.write("\7u\2\2\u016b\u016c\3\2\2\2\u016c\u016d\b\f\7\2\u016d")
buf.write("\36\3\2\2\2\u016e\u016f\7k\2\2\u016f\u0170\7o\2\2\u0170")
buf.write("\u0171\7r\2\2\u0171\u0172\7q\2\2\u0172\u0173\7t\2\2\u0173")
buf.write("\u0174\7v\2\2\u0174 \3\2\2\2\u0175\u0176\7h\2\2\u0176")
buf.write("\u0177\7t\2\2\u0177\u0178\7c\2\2\u0178\u0179\7i\2\2\u0179")
buf.write("\u017a\7o\2\2\u017a\u017b\7g\2\2\u017b\u017c\7p\2\2\u017c")
buf.write("\u017d\7v\2\2\u017d\"\3\2\2\2\u017e\u017f\7n\2\2\u017f")
buf.write("\u0180\7g\2\2\u0180\u0181\7z\2\2\u0181\u0182\7g\2\2\u0182")
buf.write("\u0183\7t\2\2\u0183$\3\2\2\2\u0184\u0185\7r\2\2\u0185")
buf.write("\u0186\7c\2\2\u0186\u0187\7t\2\2\u0187\u0188\7u\2\2\u0188")
buf.write("\u0189\7g\2\2\u0189\u018a\7t\2\2\u018a&\3\2\2\2\u018b")
buf.write("\u018c\7i\2\2\u018c\u018d\7t\2\2\u018d\u018e\7c\2\2\u018e")
buf.write("\u018f\7o\2\2\u018f\u0190\7o\2\2\u0190\u0191\7c\2\2\u0191")
buf.write("\u0192\7t\2\2\u0192(\3\2\2\2\u0193\u0194\7r\2\2\u0194")
buf.write("\u0195\7t\2\2\u0195\u0196\7q\2\2\u0196\u0197\7v\2\2\u0197")
buf.write("\u0198\7g\2\2\u0198\u0199\7e\2\2\u0199\u019a\7v\2\2\u019a")
buf.write("\u019b\7g\2\2\u019b\u019c\7f\2\2\u019c*\3\2\2\2\u019d")
buf.write("\u019e\7r\2\2\u019e\u019f\7w\2\2\u019f\u01a0\7d\2\2\u01a0")
buf.write("\u01a1\7n\2\2\u01a1\u01a2\7k\2\2\u01a2\u01a3\7e\2\2\u01a3")
buf.write(",\3\2\2\2\u01a4\u01a5\7r\2\2\u01a5\u01a6\7t\2\2\u01a6")
buf.write("\u01a7\7k\2\2\u01a7\u01a8\7x\2\2\u01a8\u01a9\7c\2\2\u01a9")
buf.write("\u01aa\7v\2\2\u01aa\u01ab\7g\2\2\u01ab.\3\2\2\2\u01ac")
buf.write("\u01ad\7t\2\2\u01ad\u01ae\7g\2\2\u01ae\u01af\7v\2\2\u01af")
buf.write("\u01b0\7w\2\2\u01b0\u01b1\7t\2\2\u01b1\u01b2\7p\2\2\u01b2")
buf.write("\u01b3\7u\2\2\u01b3\60\3\2\2\2\u01b4\u01b5\7n\2\2\u01b5")
buf.write("\u01b6\7q\2\2\u01b6\u01b7\7e\2\2\u01b7\u01b8\7c\2\2\u01b8")
buf.write("\u01b9\7n\2\2\u01b9\u01ba\7u\2\2\u01ba\62\3\2\2\2\u01bb")
buf.write("\u01bc\7v\2\2\u01bc\u01bd\7j\2\2\u01bd\u01be\7t\2\2\u01be")
buf.write("\u01bf\7q\2\2\u01bf\u01c0\7y\2\2\u01c0\u01c1\7u\2\2\u01c1")
buf.write("\64\3\2\2\2\u01c2\u01c3\7e\2\2\u01c3\u01c4\7c\2\2\u01c4")
buf.write("\u01c5\7v\2\2\u01c5\u01c6\7e\2\2\u01c6\u01c7\7j\2\2\u01c7")
buf.write("\66\3\2\2\2\u01c8\u01c9\7h\2\2\u01c9\u01ca\7k\2\2\u01ca")
buf.write("\u01cb\7p\2\2\u01cb\u01cc\7c\2\2\u01cc\u01cd\7n\2\2\u01cd")
buf.write("\u01ce\7n\2\2\u01ce\u01cf\7{\2\2\u01cf8\3\2\2\2\u01d0")
buf.write("\u01d1\7o\2\2\u01d1\u01d2\7q\2\2\u01d2\u01d3\7f\2\2\u01d3")
buf.write("\u01d4\7g\2\2\u01d4:\3\2\2\2\u01d5\u01d6\5\u0099J\2\u01d6")
buf.write("<\3\2\2\2\u01d7\u01d8\5\u009bK\2\u01d8>\3\2\2\2\u01d9")
buf.write("\u01da\5\u00c3_\2\u01da@\3\2\2\2\u01db\u01dc\5\u00c5`")
buf.write("\2\u01dcB\3\2\2\2\u01dd\u01de\5\u00a1N\2\u01deD\3\2\2")
buf.write("\2\u01df\u01e0\5\u00a3O\2\u01e0F\3\2\2\2\u01e1\u01e2\5")
buf.write("\u00a5P\2\u01e2H\3\2\2\2\u01e3\u01e4\5\u00a7Q\2\u01e4")
buf.write("J\3\2\2\2\u01e5\u01e6\5\u00adT\2\u01e6L\3\2\2\2\u01e7")
buf.write("\u01e8\5\u00afU\2\u01e8N\3\2\2\2\u01e9\u01ea\5\u00b1V")
buf.write("\2\u01eaP\3\2\2\2\u01eb\u01ec\5\u00b3W\2\u01ecR\3\2\2")
buf.write("\2\u01ed\u01ee\5\u00b5X\2\u01eeT\3\2\2\2\u01ef\u01f0\5")
buf.write("\u00b7Y\2\u01f0V\3\2\2\2\u01f1\u01f2\5\u00bb[\2\u01f2")
buf.write("X\3\2\2\2\u01f3\u01f4\5\u00b9Z\2\u01f4Z\3\2\2\2\u01f5")
buf.write("\u01f6\5\u00bf]\2\u01f6\\\3\2\2\2\u01f7\u01f8\5\u00c1")
buf.write("^\2\u01f8^\3\2\2\2\u01f9\u01fa\5\u00c9b\2\u01fa`\3\2\2")
buf.write("\2\u01fb\u01fc\5\u00c7a\2\u01fcb\3\2\2\2\u01fd\u01fe\5")
buf.write("\u00cbc\2\u01fed\3\2\2\2\u01ff\u0200\5\u00cdd\2\u0200")
buf.write("f\3\2\2\2\u0201\u0202\5\u00cfe\2\u0202h\3\2\2\2\u0203")
buf.write("\u0204\5\u0137\u0099\2\u0204j\3\2\2\2\u0205\u0207\5o\65")
buf.write("\2\u0206\u0205\3\2\2\2\u0207\u0208\3\2\2\2\u0208\u0206")
buf.write("\3\2\2\2\u0208\u0209\3\2\2\2\u0209\u020a\3\2\2\2\u020a")
buf.write("\u020b\b\63\2\2\u020bl\3\2\2\2\u020c\u020d\13\2\2\2\u020d")
buf.write("\u020e\3\2\2\2\u020e\u020f\b\64\b\2\u020fn\3\2\2\2\u0210")
buf.write("\u0213\5q\66\2\u0211\u0213\5s\67\2\u0212\u0210\3\2\2\2")
buf.write("\u0212\u0211\3\2\2\2\u0213p\3\2\2\2\u0214\u0215\t\2\2")
buf.write("\2\u0215r\3\2\2\2\u0216\u0217\t\3\2\2\u0217t\3\2\2\2\u0218")
buf.write("\u0219\7\61\2\2\u0219\u021a\7,\2\2\u021a\u021e\3\2\2\2")
buf.write("\u021b\u021d\13\2\2\2\u021c\u021b\3\2\2\2\u021d\u0220")
buf.write("\3\2\2\2\u021e\u021f\3\2\2\2\u021e\u021c\3\2\2\2\u021f")
buf.write("\u0224\3\2\2\2\u0220\u021e\3\2\2\2\u0221\u0222\7,\2\2")
buf.write("\u0222\u0225\7\61\2\2\u0223\u0225\7\2\2\3\u0224\u0221")
buf.write("\3\2\2\2\u0224\u0223\3\2\2\2\u0225v\3\2\2\2\u0226\u0227")
buf.write("\7\61\2\2\u0227\u0228\7,\2\2\u0228\u0229\7,\2\2\u0229")
buf.write("\u022d\3\2\2\2\u022a\u022c\13\2\2\2\u022b\u022a\3\2\2")
buf.write("\2\u022c\u022f\3\2\2\2\u022d\u022e\3\2\2\2\u022d\u022b")
buf.write("\3\2\2\2\u022e\u0233\3\2\2\2\u022f\u022d\3\2\2\2\u0230")
buf.write("\u0231\7,\2\2\u0231\u0234\7\61\2\2\u0232\u0234\7\2\2\3")
buf.write("\u0233\u0230\3\2\2\2\u0233\u0232\3\2\2\2\u0234x\3\2\2")
buf.write("\2\u0235\u0236\7\61\2\2\u0236\u0237\7\61\2\2\u0237\u023b")
buf.write("\3\2\2\2\u0238\u023a\n\4\2\2\u0239\u0238\3\2\2\2\u023a")
buf.write("\u023d\3\2\2\2\u023b\u0239\3\2\2\2\u023b\u023c\3\2\2\2")
buf.write("\u023cz\3\2\2\2\u023d\u023b\3\2\2\2\u023e\u0243\5\u0097")
buf.write("I\2\u023f\u0244\t\5\2\2\u0240\u0244\5\177=\2\u0241\u0244")
buf.write("\13\2\2\2\u0242\u0244\7\2\2\3\u0243\u023f\3\2\2\2\u0243")
buf.write("\u0240\3\2\2\2\u0243\u0241\3\2\2\2\u0243\u0242\3\2\2\2")
buf.write("\u0244|\3\2\2\2\u0245\u0246\5\u0097I\2\u0246\u0247\13")
buf.write("\2\2\2\u0247~\3\2\2\2\u0248\u0253\7w\2\2\u0249\u0251\5")
buf.write("\u0083?\2\u024a\u024f\5\u0083?\2\u024b\u024d\5\u0083?")
buf.write("\2\u024c\u024e\5\u0083?\2\u024d\u024c\3\2\2\2\u024d\u024e")
buf.write("\3\2\2\2\u024e\u0250\3\2\2\2\u024f\u024b\3\2\2\2\u024f")
buf.write("\u0250\3\2\2\2\u0250\u0252\3\2\2\2\u0251\u024a\3\2\2\2")
buf.write("\u0251\u0252\3\2\2\2\u0252\u0254\3\2\2\2\u0253\u0249\3")
buf.write("\2\2\2\u0253\u0254\3\2\2\2\u0254\u0080\3\2\2\2\u0255\u025e")
buf.write("\7\62\2\2\u0256\u025a\t\6\2\2\u0257\u0259\5\u0085@\2\u0258")
buf.write("\u0257\3\2\2\2\u0259\u025c\3\2\2\2\u025a\u0258\3\2\2\2")
buf.write("\u025a\u025b\3\2\2\2\u025b\u025e\3\2\2\2\u025c\u025a\3")
buf.write("\2\2\2\u025d\u0255\3\2\2\2\u025d\u0256\3\2\2\2\u025e\u0082")
buf.write("\3\2\2\2\u025f\u0260\t\7\2\2\u0260\u0084\3\2\2\2\u0261")
buf.write("\u0262\t\b\2\2\u0262\u0086\3\2\2\2\u0263\u0264\7v\2\2")
buf.write("\u0264\u0265\7t\2\2\u0265\u0266\7w\2\2\u0266\u026d\7g")
buf.write("\2\2\u0267\u0268\7h\2\2\u0268\u0269\7c\2\2\u0269\u026a")
buf.write("\7n\2\2\u026a\u026b\7u\2\2\u026b\u026d\7g\2\2\u026c\u0263")
buf.write("\3\2\2\2\u026c\u0267\3\2\2\2\u026d\u0088\3\2\2\2\u026e")
buf.write("\u0271\5\u009dL\2\u026f\u0272\5{;\2\u0270\u0272\n\t\2")
buf.write("\2\u0271\u026f\3\2\2\2\u0271\u0270\3\2\2\2\u0272\u0273")
buf.write("\3\2\2\2\u0273\u0274\5\u009dL\2\u0274\u008a\3\2\2\2\u0275")
buf.write("\u027a\5\u009dL\2\u0276\u0279\5{;\2\u0277\u0279\n\t\2")
buf.write("\2\u0278\u0276\3\2\2\2\u0278\u0277\3\2\2\2\u0279\u027c")
buf.write("\3\2\2\2\u027a\u0278\3\2\2\2\u027a\u027b\3\2\2\2\u027b")
buf.write("\u027d\3\2\2\2\u027c\u027a\3\2\2\2\u027d\u027e\5\u009d")
buf.write("L\2\u027e\u008c\3\2\2\2\u027f\u0284\5\u009fM\2\u0280\u0283")
buf.write("\5{;\2\u0281\u0283\n\n\2\2\u0282\u0280\3\2\2\2\u0282\u0281")
buf.write("\3\2\2\2\u0283\u0286\3\2\2\2\u0284\u0282\3\2\2\2\u0284")
buf.write("\u0285\3\2\2\2\u0285\u0287\3\2\2\2\u0286\u0284\3\2\2\2")
buf.write("\u0287\u0288\5\u009fM\2\u0288\u008e\3\2\2\2\u0289\u028e")
buf.write("\5\u009dL\2\u028a\u028d\5{;\2\u028b\u028d\n\t\2\2\u028c")
buf.write("\u028a\3\2\2\2\u028c\u028b\3\2\2\2\u028d\u0290\3\2\2\2")
buf.write("\u028e\u028c\3\2\2\2\u028e\u028f\3\2\2\2\u028f\u0090\3")
buf.write("\2\2\2\u0290\u028e\3\2\2\2\u0291\u0296\5\u0093G\2\u0292")
buf.write("\u0296\4\62;\2\u0293\u0296\5\u00bd\\\2\u0294\u0296\t\13")
buf.write("\2\2\u0295\u0291\3\2\2\2\u0295\u0292\3\2\2\2\u0295\u0293")
buf.write("\3\2\2\2\u0295\u0294\3\2\2\2\u0296\u0092\3\2\2\2\u0297")
buf.write("\u0298\t\f\2\2\u0298\u0094\3\2\2\2\u0299\u029a\7k\2\2")
buf.write("\u029a\u029b\7p\2\2\u029b\u029c\7v\2\2\u029c\u0096\3\2")
buf.write("\2\2\u029d\u029e\7^\2\2\u029e\u0098\3\2\2\2\u029f\u02a0")
buf.write("\7<\2\2\u02a0\u009a\3\2\2\2\u02a1\u02a2\7<\2\2\u02a2\u02a3")
buf.write("\7<\2\2\u02a3\u009c\3\2\2\2\u02a4\u02a5\7)\2\2\u02a5\u009e")
buf.write("\3\2\2\2\u02a6\u02a7\7$\2\2\u02a7\u00a0\3\2\2\2\u02a8")
buf.write("\u02a9\7*\2\2\u02a9\u00a2\3\2\2\2\u02aa\u02ab\7+\2\2\u02ab")
buf.write("\u00a4\3\2\2\2\u02ac\u02ad\7}\2\2\u02ad\u00a6\3\2\2\2")
buf.write("\u02ae\u02af\7\177\2\2\u02af\u00a8\3\2\2\2\u02b0\u02b1")
buf.write("\7]\2\2\u02b1\u00aa\3\2\2\2\u02b2\u02b3\7_\2\2\u02b3\u00ac")
buf.write("\3\2\2\2\u02b4\u02b5\7/\2\2\u02b5\u02b6\7@\2\2\u02b6\u00ae")
buf.write("\3\2\2\2\u02b7\u02b8\7>\2\2\u02b8\u00b0\3\2\2\2\u02b9")
buf.write("\u02ba\7@\2\2\u02ba\u00b2\3\2\2\2\u02bb\u02bc\7?\2\2\u02bc")
buf.write("\u00b4\3\2\2\2\u02bd\u02be\7A\2\2\u02be\u00b6\3\2\2\2")
buf.write("\u02bf\u02c0\7,\2\2\u02c0\u00b8\3\2\2\2\u02c1\u02c2\7")
buf.write("-\2\2\u02c2\u00ba\3\2\2\2\u02c3\u02c4\7-\2\2\u02c4\u02c5")
buf.write("\7?\2\2\u02c5\u00bc\3\2\2\2\u02c6\u02c7\7a\2\2\u02c7\u00be")
buf.write("\3\2\2\2\u02c8\u02c9\7~\2\2\u02c9\u00c0\3\2\2\2\u02ca")
buf.write("\u02cb\7&\2\2\u02cb\u00c2\3\2\2\2\u02cc\u02cd\7.\2\2\u02cd")
buf.write("\u00c4\3\2\2\2\u02ce\u02cf\7=\2\2\u02cf\u00c6\3\2\2\2")
buf.write("\u02d0\u02d1\7\60\2\2\u02d1\u00c8\3\2\2\2\u02d2\u02d3")
buf.write("\7\60\2\2\u02d3\u02d4\7\60\2\2\u02d4\u00ca\3\2\2\2\u02d5")
buf.write("\u02d6\7B\2\2\u02d6\u00cc\3\2\2\2\u02d7\u02d8\7%\2\2\u02d8")
buf.write("\u00ce\3\2\2\2\u02d9\u02da\7\u0080\2\2\u02da\u00d0\3\2")
buf.write("\2\2\u02db\u02dc\5\u00a9R\2\u02dc\u02dd\3\2\2\2\u02dd")
buf.write("\u02de\bf\t\2\u02de\u02df\bf\n\2\u02df\u00d2\3\2\2\2\u02e0")
buf.write("\u02e1\5}<\2\u02e1\u02e2\3\2\2\2\u02e2\u02e3\bg\t\2\u02e3")
buf.write("\u00d4\3\2\2\2\u02e4\u02e5\5\u008dD\2\u02e5\u02e6\3\2")
buf.write("\2\2\u02e6\u02e7\bh\t\2\u02e7\u00d6\3\2\2\2\u02e8\u02e9")
buf.write("\5\u008bC\2\u02e9\u02ea\3\2\2\2\u02ea\u02eb\bi\t\2\u02eb")
buf.write("\u00d8\3\2\2\2\u02ec\u02ed\5\u00abS\2\u02ed\u02ee\bj\13")
buf.write("\2\u02ee\u00da\3\2\2\2\u02ef\u02f0\7\2\2\3\u02f0\u02f1")
buf.write("\3\2\2\2\u02f1\u02f2\bk\f\2\u02f2\u00dc\3\2\2\2\u02f3")
buf.write("\u02f4\13\2\2\2\u02f4\u00de\3\2\2\2\u02f5\u02f6\5\u00a5")
buf.write("P\2\u02f6\u02f7\3\2\2\2\u02f7\u02f8\bm\r\2\u02f8\u02f9")
buf.write("\bm\4\2\u02f9\u00e0\3\2\2\2\u02fa\u02fb\5}<\2\u02fb\u02fc")
buf.write("\3\2\2\2\u02fc\u02fd\bn\r\2\u02fd\u00e2\3\2\2\2\u02fe")
buf.write("\u02ff\5\u008dD\2\u02ff\u0300\3\2\2\2\u0300\u0301\bo\r")
buf.write("\2\u0301\u00e4\3\2\2\2\u0302\u0303\5\u008bC\2\u0303\u0304")
buf.write("\3\2\2\2\u0304\u0305\bp\r\2\u0305\u00e6\3\2\2\2\u0306")
buf.write("\u0307\5w9\2\u0307\u0308\3\2\2\2\u0308\u0309\bq\r\2\u0309")
buf.write("\u00e8\3\2\2\2\u030a\u030b\5u8\2\u030b\u030c\3\2\2\2\u030c")
buf.write("\u030d\br\r\2\u030d\u00ea\3\2\2\2\u030e\u030f\5y:\2\u030f")
buf.write("\u0310\3\2\2\2\u0310\u0311\bs\r\2\u0311\u00ec\3\2\2\2")
buf.write("\u0312\u0313\5\u00a7Q\2\u0313\u0314\bt\16\2\u0314\u00ee")
buf.write("\3\2\2\2\u0315\u0316\7\2\2\3\u0316\u0317\3\2\2\2\u0317")
buf.write("\u0318\bu\f\2\u0318\u00f0\3\2\2\2\u0319\u031a\13\2\2\2")
buf.write("\u031a\u00f2\3\2\2\2\u031b\u031c\5w9\2\u031c\u031d\3\2")
buf.write("\2\2\u031d\u031e\bw\17\2\u031e\u031f\bw\2\2\u031f\u00f4")
buf.write("\3\2\2\2\u0320\u0321\5u8\2\u0321\u0322\3\2\2\2\u0322\u0323")
buf.write("\bx\20\2\u0323\u0324\bx\2\2\u0324\u00f6\3\2\2\2\u0325")
buf.write("\u0326\5y:\2\u0326\u0327\3\2\2\2\u0327\u0328\by\21\2\u0328")
buf.write("\u0329\by\2\2\u0329\u00f8\3\2\2\2\u032a\u032b\5\u00a5")
buf.write("P\2\u032b\u032c\3\2\2\2\u032c\u032d\bz\22\2\u032d\u00fa")
buf.write("\3\2\2\2\u032e\u032f\5\u00a7Q\2\u032f\u0330\3\2\2\2\u0330")
buf.write("\u0331\b{\23\2\u0331\u0332\b{\f\2\u0332\u00fc\3\2\2\2")
buf.write("\u0333\u0334\5\u0137\u0099\2\u0334\u0335\3\2\2\2\u0335")
buf.write("\u0336\b|\24\2\u0336\u00fe\3\2\2\2\u0337\u0338\5\u00c7")
buf.write("a\2\u0338\u0339\3\2\2\2\u0339\u033a\b}\25\2\u033a\u0100")
buf.write("\3\2\2\2\u033b\u033c\5\u00b3W\2\u033c\u033d\3\2\2\2\u033d")
buf.write("\u033e\b~\26\2\u033e\u0102\3\2\2\2\u033f\u0340\5\u008b")
buf.write("C\2\u0340\u0341\3\2\2\2\u0341\u0342\b\177\27\2\u0342\u0104")
buf.write("\3\2\2\2\u0343\u0344\5\u0095H\2\u0344\u0345\3\2\2\2\u0345")
buf.write("\u0346\b\u0080\30\2\u0346\u0106\3\2\2\2\u0347\u0348\5")
buf.write("\u00b7Y\2\u0348\u0349\3\2\2\2\u0349\u034a\b\u0081\31\2")
buf.write("\u034a\u0108\3\2\2\2\u034b\u034c\5\u00c5`\2\u034c\u034d")
buf.write("\3\2\2\2\u034d\u034e\b\u0082\32\2\u034e\u010a\3\2\2\2")
buf.write("\u034f\u0351\5o\65\2\u0350\u034f\3\2\2\2\u0351\u0352\3")
buf.write("\2\2\2\u0352\u0350\3\2\2\2\u0352\u0353\3\2\2\2\u0353\u0354")
buf.write("\3\2\2\2\u0354\u0355\b\u0083\33\2\u0355\u0356\b\u0083")
buf.write("\2\2\u0356\u010c\3\2\2\2\u0357\u0358\5w9\2\u0358\u0359")
buf.write("\3\2\2\2\u0359\u035a\b\u0084\17\2\u035a\u035b\b\u0084")
buf.write("\2\2\u035b\u010e\3\2\2\2\u035c\u035d\5u8\2\u035d\u035e")
buf.write("\3\2\2\2\u035e\u035f\b\u0085\20\2\u035f\u0360\b\u0085")
buf.write("\2\2\u0360\u0110\3\2\2\2\u0361\u0362\5y:\2\u0362\u0363")
buf.write("\3\2\2\2\u0363\u0364\b\u0086\21\2\u0364\u0365\b\u0086")
buf.write("\2\2\u0365\u0112\3\2\2\2\u0366\u0367\5\u00a5P\2\u0367")
buf.write("\u0368\3\2\2\2\u0368\u0369\b\u0087\22\2\u0369\u0114\3")
buf.write("\2\2\2\u036a\u036b\5\u00a7Q\2\u036b\u036c\3\2\2\2\u036c")
buf.write("\u036d\b\u0088\23\2\u036d\u036e\b\u0088\f\2\u036e\u0116")
buf.write("\3\2\2\2\u036f\u0370\5\u0137\u0099\2\u0370\u0371\3\2\2")
buf.write("\2\u0371\u0372\b\u0089\24\2\u0372\u0118\3\2\2\2\u0373")
buf.write("\u0374\5\u00c7a\2\u0374\u0375\3\2\2\2\u0375\u0376\b\u008a")
buf.write("\25\2\u0376\u011a\3\2\2\2\u0377\u0378\5\u00c3_\2\u0378")
buf.write("\u0379\3\2\2\2\u0379\u037a\b\u008b\34\2\u037a\u011c\3")
buf.write("\2\2\2\u037b\u037d\5o\65\2\u037c\u037b\3\2\2\2\u037d\u037e")
buf.write("\3\2\2\2\u037e\u037c\3\2\2\2\u037e\u037f\3\2\2\2\u037f")
buf.write("\u0380\3\2\2\2\u0380\u0381\b\u008c\33\2\u0381\u0382\b")
buf.write("\u008c\2\2\u0382\u011e\3\2\2\2\u0383\u0384\5w9\2\u0384")
buf.write("\u0385\3\2\2\2\u0385\u0386\b\u008d\17\2\u0386\u0387\b")
buf.write("\u008d\2\2\u0387\u0120\3\2\2\2\u0388\u0389\5u8\2\u0389")
buf.write("\u038a\3\2\2\2\u038a\u038b\b\u008e\20\2\u038b\u038c\b")
buf.write("\u008e\2\2\u038c\u0122\3\2\2\2\u038d\u038e\5y:\2\u038e")
buf.write("\u038f\3\2\2\2\u038f\u0390\b\u008f\21\2\u0390\u0391\b")
buf.write("\u008f\2\2\u0391\u0124\3\2\2\2\u0392\u0393\5\u00a5P\2")
buf.write("\u0393\u0394\3\2\2\2\u0394\u0395\b\u0090\22\2\u0395\u0126")
buf.write("\3\2\2\2\u0396\u0397\5\u00a7Q\2\u0397\u0398\3\2\2\2\u0398")
buf.write("\u0399\b\u0091\23\2\u0399\u039a\b\u0091\f\2\u039a\u0128")
buf.write("\3\2\2\2\u039b\u039c\5\u0137\u0099\2\u039c\u039d\3\2\2")
buf.write("\2\u039d\u039e\b\u0092\24\2\u039e\u012a\3\2\2\2\u039f")
buf.write("\u03a0\5\u00c7a\2\u03a0\u03a1\3\2\2\2\u03a1\u03a2\b\u0093")
buf.write("\25\2\u03a2\u012c\3\2\2\2\u03a3\u03a4\5\u00c3_\2\u03a4")
buf.write("\u03a5\3\2\2\2\u03a5\u03a6\b\u0094\34\2\u03a6\u012e\3")
buf.write("\2\2\2\u03a7\u03a9\5o\65\2\u03a8\u03a7\3\2\2\2\u03a9\u03aa")
buf.write("\3\2\2\2\u03aa\u03a8\3\2\2\2\u03aa\u03ab\3\2\2\2\u03ab")
buf.write("\u03ac\3\2\2\2\u03ac\u03ad\b\u0095\33\2\u03ad\u03ae\b")
buf.write("\u0095\2\2\u03ae\u0130\3\2\2\2\u03af\u03b2\n\r\2\2\u03b0")
buf.write("\u03b2\5}<\2\u03b1\u03af\3\2\2\2\u03b1\u03b0\3\2\2\2\u03b2")
buf.write("\u03b3\3\2\2\2\u03b3\u03b1\3\2\2\2\u03b3\u03b4\3\2\2\2")
buf.write("\u03b4\u03b5\3\2\2\2\u03b5\u03b6\b\u0096\35\2\u03b6\u0132")
buf.write("\3\2\2\2\u03b7\u03b8\5\u00abS\2\u03b8\u03b9\3\2\2\2\u03b9")
buf.write("\u03ba\b\u0097\f\2\u03ba\u0134\3\2\2\2\u03bb\u03bc\7\2")
buf.write("\2\3\u03bc\u03bd\3\2\2\2\u03bd\u03be\b\u0098\f\2\u03be")
buf.write("\u0136\3\2\2\2\u03bf\u03c3\5\u0093G\2\u03c0\u03c2\5\u0091")
buf.write("F\2\u03c1\u03c0\3\2\2\2\u03c2\u03c5\3\2\2\2\u03c3\u03c1")
buf.write("\3\2\2\2\u03c3\u03c4\3\2\2\2\u03c4\u0138\3\2\2\2\u03c5")
buf.write("\u03c3\3\2\2\2&\2\3\4\5\6\7\b\u0208\u0212\u021e\u0224")
buf.write("\u022d\u0233\u023b\u0243\u024d\u024f\u0251\u0253\u025a")
buf.write("\u025d\u026c\u0271\u0278\u027a\u0282\u0284\u028c\u028e")
buf.write("\u0295\u0352\u037e\u03aa\u03b1\u03b3\u03c3\36\2\4\2\3")
buf.write("\b\2\7\4\2\7\5\2\7\6\2\7\7\2\2\3\2\t;\2\7\3\2\3j\3\6\2")
buf.write("\2\t>\2\3t\4\t\6\2\t\7\2\t\b\2\t%\2\t&\2\t\66\2\t\62\2")
buf.write("\t*\2\t\n\2\t\t\2\t,\2\t\"\2\t\67\2\t!\2\5\2\2")
return buf.getvalue()
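The `buf.write` calls above accumulate the lexer's serialized ATN as one long string of escaped characters, which `ATNDeserializer` turns back into integers in the class below. As a rough illustration (this helper is my own sketch, not part of the generated file), the ANTLR Python runtime of this era keeps the first character (the serialization version) as-is and subtracts the 2 that was added to every later value when the string was generated:

```python
def decode_serialized_atn(serialized: str) -> list:
    """Turn the escaped-character blob back into 16-bit integer values."""
    data = [ord(c) for c in serialized]
    # The first value (the serialization version) is stored unshifted;
    # every later value was shifted up by 2 during serialization.
    for i in range(1, len(data)):
        data[i] = (data[i] - 2) & 0xFFFF
    return data

# e.g. the fragment "\3\2\2\2" decodes to version 3 followed by zeros
print(decode_serialized_atn("\3\2\2\2"))  # [3, 0, 0, 0]
```

The offset-by-2 scheme exists so the serialized string never contains a NUL character; the real deserializer then walks these values to rebuild ATN states and transitions.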
class ANTLRv4Lexer(LexerAdaptor):
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
Argument = 1
Action = 2
Options = 3
Tokens = 4
Channels = 5
LexerCharSet = 6
TOKEN_REF = 1
RULE_REF = 2
LEXER_CHAR_SET = 3
DOC_COMMENT = 4
BLOCK_COMMENT = 5
LINE_COMMENT = 6
INT = 7
STRING_LITERAL = 8
UNTERMINATED_STRING_LITERAL = 9
BEGIN_ARGUMENT = 10
BEGIN_ACTION = 11
OPTIONS = 12
TOKENS = 13
CHANNELS = 14
IMPORT = 15
FRAGMENT = 16
LEXER = 17
PARSER = 18
GRAMMAR = 19
PROTECTED = 20
PUBLIC = 21
PRIVATE = 22
RETURNS = 23
LOCALS = 24
THROWS = 25
CATCH = 26
FINALLY = 27
MODE = 28
COLON = 29
COLONCOLON = 30
COMMA = 31
SEMI = 32
LPAREN = 33
RPAREN = 34
LBRACE = 35
RBRACE = 36
RARROW = 37
LT = 38
GT = 39
ASSIGN = 40
QUESTION = 41
STAR = 42
PLUS_ASSIGN = 43
PLUS = 44
OR = 45
DOLLAR = 46
RANGE = 47
DOT = 48
AT = 49
POUND = 50
NOT = 51
ID = 52
WS = 53
ERRCHAR = 54
END_ARGUMENT = 55
UNTERMINATED_ARGUMENT = 56
ARGUMENT_CONTENT = 57
END_ACTION = 58
UNTERMINATED_ACTION = 59
ACTION_CONTENT = 60
UNTERMINATED_CHAR_SET = 61
modeNames = [ u"DEFAULT_MODE", u"Argument", u"Action", u"Options", u"Tokens",
u"Channels", u"LexerCharSet" ]
literalNames = [ u"<INVALID>",
"'options'", "'tokens'", "'channels'", "'import'", "'fragment'",
"'lexer'", "'parser'", "'grammar'", "'protected'", "'public'",
"'private'", "'returns'", "'locals'", "'throws'", "'catch'",
"'finally'", "'mode'" ]
symbolicNames = [ u"<INVALID>",
"TOKEN_REF", "RULE_REF", "LEXER_CHAR_SET", "DOC_COMMENT", "BLOCK_COMMENT",
"LINE_COMMENT", "INT", "STRING_LITERAL", "UNTERMINATED_STRING_LITERAL",
"BEGIN_ARGUMENT", "BEGIN_ACTION", "OPTIONS", "TOKENS", "CHANNELS",
"IMPORT", "FRAGMENT", "LEXER", "PARSER", "GRAMMAR", "PROTECTED",
"PUBLIC", "PRIVATE", "RETURNS", "LOCALS", "THROWS", "CATCH",
"FINALLY", "MODE", "COLON", "COLONCOLON", "COMMA", "SEMI", "LPAREN",
"RPAREN", "LBRACE", "RBRACE", "RARROW", "LT", "GT", "ASSIGN",
"QUESTION", "STAR", "PLUS_ASSIGN", "PLUS", "OR", "DOLLAR", "RANGE",
"DOT", "AT", "POUND", "NOT", "ID", "WS", "ERRCHAR", "END_ARGUMENT",
"UNTERMINATED_ARGUMENT", "ARGUMENT_CONTENT", "END_ACTION", "UNTERMINATED_ACTION",
"ACTION_CONTENT", "UNTERMINATED_CHAR_SET" ]
ruleNames = [ "DOC_COMMENT", "BLOCK_COMMENT", "LINE_COMMENT", "INT",
"STRING_LITERAL", "UNTERMINATED_STRING_LITERAL", "BEGIN_ARGUMENT",
"BEGIN_ACTION", "OPTIONS", "TOKENS", "CHANNELS", "IMPORT",
"FRAGMENT", "LEXER", "PARSER", "GRAMMAR", "PROTECTED",
"PUBLIC", "PRIVATE", "RETURNS", "LOCALS", "THROWS", "CATCH",
"FINALLY", "MODE", "COLON", "COLONCOLON", "COMMA", "SEMI",
"LPAREN", "RPAREN", "LBRACE", "RBRACE", "RARROW", "LT",
"GT", "ASSIGN", "QUESTION", "STAR", "PLUS_ASSIGN", "PLUS",
"OR", "DOLLAR", "RANGE", "DOT", "AT", "POUND", "NOT",
"ID", "WS", "ERRCHAR", "Ws", "Hws", "Vws", "BlockComment",
"DocComment", "LineComment", "EscSeq", "EscAny", "UnicodeEsc",
"DecimalNumeral", "HexDigit", "DecDigit", "BoolLiteral",
"CharLiteral", "SQuoteLiteral", "DQuoteLiteral", "USQuoteLiteral",
"NameChar", "NameStartChar", "Int", "Esc", "Colon", "DColon",
"SQuote", "DQuote", "LParen", "RParen", "LBrace", "RBrace",
"LBrack", "RBrack", "RArrow", "Lt", "Gt", "Equal", "Question",
"Star", "Plus", "PlusAssign", "Underscore", "Pipe", "Dollar",
"Comma", "Semi", "Dot", "Range", "At", "Pound", "Tilde",
"NESTED_ARGUMENT", "ARGUMENT_ESCAPE", "ARGUMENT_STRING_LITERAL",
"ARGUMENT_CHAR_LITERAL", "END_ARGUMENT", "UNTERMINATED_ARGUMENT",
"ARGUMENT_CONTENT", "NESTED_ACTION", "ACTION_ESCAPE",
"ACTION_STRING_LITERAL", "ACTION_CHAR_LITERAL", "ACTION_DOC_COMMENT",
"ACTION_BLOCK_COMMENT", "ACTION_LINE_COMMENT", "END_ACTION",
"UNTERMINATED_ACTION", "ACTION_CONTENT", "OPT_DOC_COMMENT",
"OPT_BLOCK_COMMENT", "OPT_LINE_COMMENT", "OPT_LBRACE",
"OPT_RBRACE", "OPT_ID", "OPT_DOT", "OPT_ASSIGN", "OPT_STRING_LITERAL",
"OPT_INT", "OPT_STAR", "OPT_SEMI", "OPT_WS", "TOK_DOC_COMMENT",
"TOK_BLOCK_COMMENT", "TOK_LINE_COMMENT", "TOK_LBRACE",
"TOK_RBRACE", "TOK_ID", "TOK_DOT", "TOK_COMMA", "TOK_WS",
"CHN_DOC_COMMENT", "CHN_BLOCK_COMMENT", "CHN_LINE_COMMENT",
"CHN_LBRACE", "CHN_RBRACE", "CHN_ID", "CHN_DOT", "CHN_COMMA",
"CHN_WS", "LEXER_CHAR_SET_BODY", "LEXER_CHAR_SET", "UNTERMINATED_CHAR_SET",
"Id" ]
grammarFileName = "ANTLRv4Lexer.g4"
def __init__(self, input=None):
super().__init__(input)
self.checkVersion("4.5")
self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
self._actions = None
self._predicates = None
def action(self, localctx:RuleContext, ruleIndex:int, actionIndex:int):
if self._actions is None:
actions = dict()
actions[6] = self.BEGIN_ARGUMENT_action
actions[104] = self.END_ARGUMENT_action
actions[114] = self.END_ACTION_action
self._actions = actions
action = self._actions.get(ruleIndex, None)
if action is not None:
action(localctx, actionIndex)
else:
raise Exception("No registered action for: " + str(ruleIndex))
def BEGIN_ARGUMENT_action(self, localctx:RuleContext , actionIndex:int):
if actionIndex == 0:
self.handleBeginArgument()
def END_ARGUMENT_action(self, localctx:RuleContext , actionIndex:int):
if actionIndex == 1:
self.handleEndArgument()
def END_ACTION_action(self, localctx:RuleContext , actionIndex:int):
if actionIndex == 2:
self.handleEndAction()
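The generated `action` dispatcher above lazily builds a rule-index-to-handler table on first use and raises for unknown rule indices. A self-contained sketch of that dispatch pattern, with illustrative handler names (this is not the generated ANTLR code, just the shape of it):

```python
class ActionDispatcher:
    """Lazily builds a rule-index -> handler table, mirroring the generated
    Lexer.action() above. Handler names and indices are illustrative."""

    def __init__(self):
        self._actions = None  # built on first dispatch, like the lexer
        self.log = []

    def begin_argument(self, action_index):
        self.log.append(("begin_argument", action_index))

    def end_argument(self, action_index):
        self.log.append(("end_argument", action_index))

    def action(self, rule_index, action_index):
        if self._actions is None:
            # Same lazy-init idiom as the generated code
            self._actions = {6: self.begin_argument, 104: self.end_argument}
        handler = self._actions.get(rule_index)
        if handler is None:
            raise Exception("No registered action for: " + str(rule_index))
        handler(action_index)


d = ActionDispatcher()
d.action(6, 0)
print(d.log)  # [('begin_argument', 0)]
```

The lazy table avoids building bound-method references at construction time; lookups after the first call are plain dict gets.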
| 64.9696 | 103 | 0.590972 | 8,741 | 40,606 | 2.726805 | 0.154101 | 0.111433 | 0.069604 | 0.083575 | 0.230208 | 0.181456 | 0.097042 | 0.077113 | 0.073086 | 0.062975 | 0 | 0.350241 | 0.145964 | 40,606 | 624 | 104 | 65.073718 | 0.337063 | 0.00096 | 0 | 0.00335 | 1 | 0.695142 | 0.644578 | 0.595493 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01005 | false | 0 | 0.011725 | 0 | 0.149079 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
53908df17744b007a000eb44e93d24e59b001022 | 114 | py | Python | officers/api/permissions.py | elishaking/i-witness | 09fe9f6db04fb64440c306e714a5233db31db23e | [
"Apache-2.0"
] | null | null | null | officers/api/permissions.py | elishaking/i-witness | 09fe9f6db04fb64440c306e714a5233db31db23e | [
"Apache-2.0"
] | 2 | 2021-06-08T20:53:14.000Z | 2021-06-10T22:31:47.000Z | witness/api/permissions.py | elishaking/i-witness | 09fe9f6db04fb64440c306e714a5233db31db23e | [
"Apache-2.0"
] | null | null | null | from rest_framework.permissions import BasePermission
class IsOwnerOrReadOnly(BasePermission):
"""Object-level permission placeholder: intended to allow reads to everyone and writes only to the object's owner (not yet implemented)."""
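`IsOwnerOrReadOnly` above carries no behavior yet. A common DRF-style implementation looks like the sketch below; the `owner` attribute and the local `BasePermission` stand-in are assumptions for illustration, not taken from this repo:

```python
from types import SimpleNamespace

SAFE_METHODS = ("GET", "HEAD", "OPTIONS")  # mirrors rest_framework.permissions

class BasePermission:  # stand-in so the sketch runs without DRF installed
    def has_object_permission(self, request, view, obj):
        return True

class IsOwnerOrReadOnly(BasePermission):
    """Allow reads to everyone; writes only to the object's owner."""

    def has_object_permission(self, request, view, obj):
        if request.method in SAFE_METHODS:
            return True                       # reads are open to everyone
        return obj.owner == request.user      # `owner` field is assumed

perm = IsOwnerOrReadOnly()
post = SimpleNamespace(owner="alice")
print(perm.has_object_permission(SimpleNamespace(method="GET", user="bob"), None, post))     # True
print(perm.has_object_permission(SimpleNamespace(method="DELETE", user="bob"), None, post))  # False
```

With real DRF, `SAFE_METHODS` and `BasePermission` come from `rest_framework.permissions` and the framework calls `has_object_permission` per object.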
| 14.25 | 53 | 0.780702 | 10 | 114 | 8.8 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 114 | 7 | 54 | 16.285714 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
53cdda992d7608ef9d742049fd920771d0396112 | 226 | py | Python | lib/passwords/__init__.py | sschwetz/network_tech | fc65166e71bfdb5a0e99ca7e7ce9f7814b92869b | [
"Apache-2.0"
] | 73 | 2017-05-04T06:35:20.000Z | 2022-02-03T13:57:00.000Z | lib/passwords/__init__.py | sschwetz/network_tech | fc65166e71bfdb5a0e99ca7e7ce9f7814b92869b | [
"Apache-2.0"
] | 35 | 2017-11-09T16:28:48.000Z | 2022-01-12T08:15:48.000Z | lib/passwords/__init__.py | sschwetz/network_tech | fc65166e71bfdb5a0e99ca7e7ce9f7814b92869b | [
"Apache-2.0"
] | 20 | 2017-11-08T05:07:59.000Z | 2021-12-09T17:41:06.000Z | # Copyright 2017 Glen Harmon
# from .listener import PasswordDecryptListener, PasswordDecryptViewListener # noqa
# from .listener import PasswordDecryptViewListener # noqa
from .commands import DecodePasswordCommand # noqa | 45.2 | 84 | 0.827434 | 20 | 226 | 9.35 | 0.6 | 0.128342 | 0.192513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020305 | 0.128319 | 226 | 5 | 85 | 45.2 | 0.928934 | 0.752212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
54e27ce77eae15f33dde0abe6f50a2065a1352f8 | 18,959 | py | Python | omega_miya/utils/Omega_Base/model/user.py | 58563528/omega-miyabot | d0c04bf06c193d90ad968973c924b01b7bdd735d | [
"MIT"
] | null | null | null | omega_miya/utils/Omega_Base/model/user.py | 58563528/omega-miyabot | d0c04bf06c193d90ad968973c924b01b7bdd735d | [
"MIT"
] | null | null | null | omega_miya/utils/Omega_Base/model/user.py | 58563528/omega-miyabot | d0c04bf06c193d90ad968973c924b01b7bdd735d | [
"MIT"
] | null | null | null | from omega_miya.utils.Omega_Base.database import NBdb
from omega_miya.utils.Omega_Base.class_result import Result
from omega_miya.utils.Omega_Base.tables import User, UserGroup, Skill, UserSkill, UserSub, Vocation, AuthUser
from .skill import DBSkill
from datetime import datetime
from sqlalchemy.future import select
from sqlalchemy.orm.exc import NoResultFound, MultipleResultsFound
class DBUser(object):
def __init__(self, user_id: int):
self.qq = user_id
async def id(self) -> Result.IntResult:
async_session = NBdb().get_async_session()
async with async_session() as session:
async with session.begin():
try:
session_result = await session.execute(
select(User.id).where(User.qq == self.qq)
)
user_table_id = session_result.scalar_one()
result = Result.IntResult(error=False, info='Success', result=user_table_id)
except NoResultFound:
result = Result.IntResult(error=True, info='NoResultFound', result=-1)
except MultipleResultsFound:
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def exist(self) -> bool:
result = await self.id()
return result.success()
async def nickname(self) -> Result.TextResult:
async_session = NBdb().get_async_session()
async with async_session() as session:
async with session.begin():
try:
session_result = await session.execute(
select(User.nickname).where(User.qq == self.qq)
)
user_nickname = session_result.scalar_one()
result = Result.TextResult(error=False, info='Success', result=user_nickname)
except NoResultFound:
result = Result.TextResult(error=True, info='NoResultFound', result='')
except MultipleResultsFound:
result = Result.TextResult(error=True, info='MultipleResultsFound', result='')
except Exception as e:
result = Result.TextResult(error=True, info=repr(e), result='')
return result
async def add(self, nickname: str, is_friend: int = 0, aliasname: str = None) -> Result.IntResult:
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
try:
# If the user already exists, update the nickname in the user table
session_result = await session.execute(
select(User).where(User.qq == self.qq)
)
exist_user = session_result.scalar_one()
if exist_user.nickname == nickname:
result = Result.IntResult(error=False, info='Nickname not changed', result=0)
else:
exist_user.nickname = nickname
exist_user.is_friend = is_friend
exist_user.aliasname = aliasname
exist_user.updated_at = datetime.now()
result = Result.IntResult(error=False, info='Success upgraded', result=0)
except NoResultFound:
# Otherwise add the new user to the user table
new_user = User(qq=self.qq, nickname=nickname, is_friend=is_friend,
aliasname=aliasname, created_at=datetime.now())
session.add(new_user)
result = Result.IntResult(error=False, info='Success added', result=0)
await session.commit()
except MultipleResultsFound:
await session.rollback()
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def delete(self) -> Result.IntResult:
id_result = await self.id()
if id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
# Clear all auth nodes for this user
session_result = await session.execute(
select(AuthUser).where(AuthUser.user_id == id_result.result)
)
for exist_auth_node in session_result.scalars().all():
await session.delete(exist_auth_node)
# Clear skills
session_result = await session.execute(
select(UserSkill).where(UserSkill.user_id == id_result.result)
)
for exist_skill in session_result.scalars().all():
await session.delete(exist_skill)
# Delete status and vacation records
session_result = await session.execute(
select(Vocation).where(Vocation.user_id == id_result.result)
)
exist_status = session_result.scalar_one()
await session.delete(exist_status)
# Clear subscriptions
session_result = await session.execute(
select(UserSub).where(UserSub.user_id == id_result.result)
)
for exist_user_sub in session_result.scalars().all():
await session.delete(exist_user_sub)
# Remove this user from the group membership table
session_result = await session.execute(
select(UserGroup).where(UserGroup.user_id == id_result.result)
)
for exist_user in session_result.scalars().all():
await session.delete(exist_user)
# Delete the user from the user table
session_result = await session.execute(
select(User).where(User.qq == self.qq)
)
exist_user = session_result.scalar_one()
await session.delete(exist_user)
await session.commit()
result = Result.IntResult(error=False, info='Success Delete', result=0)
except NoResultFound:
await session.rollback()
result = Result.IntResult(error=True, info='NoResultFound', result=-1)
except MultipleResultsFound:
await session.rollback()
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def skill_list(self) -> Result.ListResult:
id_result = await self.id()
if id_result.error:
return Result.ListResult(error=True, info='User not exist', result=[])
async_session = NBdb().get_async_session()
async with async_session() as session:
async with session.begin():
try:
session_result = await session.execute(
select(Skill.name, UserSkill.skill_level).
join(UserSkill).
where(Skill.id == UserSkill.skill_id).
where(UserSkill.user_id == id_result.result)
)
res = [(x[0], x[1]) for x in session_result.all()]
result = Result.ListResult(error=False, info='Success', result=res)
except Exception as e:
result = Result.ListResult(error=True, info=repr(e), result=[])
return result
async def skill_add(self, skill: DBSkill, skill_level: int) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
skill_id_result = await skill.id()
if skill_id_result.error:
return Result.IntResult(error=True, info='Skill not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
# Check the user's existing skills
try:
# Skill already exists, update its level
session_result = await session.execute(
select(UserSkill).
where(UserSkill.skill_id == skill_id_result.result).
where(UserSkill.user_id == user_id_result.result)
)
exist_skill = session_result.scalar_one()
exist_skill.skill_level = skill_level
exist_skill.updated_at = datetime.now()
result = Result.IntResult(error=False, info='Success upgraded', result=0)
except NoResultFound:
new_skill = UserSkill(user_id=user_id_result.result, skill_id=skill_id_result.result,
skill_level=skill_level, created_at=datetime.now())
session.add(new_skill)
result = Result.IntResult(error=False, info='Success added', result=0)
await session.commit()
except MultipleResultsFound:
await session.rollback()
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def skill_del(self, skill: DBSkill) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
skill_id_result = await skill.id()
if skill_id_result.error:
return Result.IntResult(error=True, info='Skill not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
session_result = await session.execute(
select(UserSkill).
where(UserSkill.skill_id == skill_id_result.result).
where(UserSkill.user_id == user_id_result.result)
)
exist_skill = session_result.scalar_one()
await session.delete(exist_skill)
await session.commit()
result = Result.IntResult(error=False, info='Success', result=0)
except NoResultFound:
await session.rollback()
result = Result.IntResult(error=True, info='NoResultFound', result=-1)
except MultipleResultsFound:
await session.rollback()
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def skill_clear(self) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
session_result = await session.execute(
select(UserSkill).where(UserSkill.user_id == user_id_result.result)
)
for exist_skill in session_result.scalars().all():
await session.delete(exist_skill)
await session.commit()
result = Result.IntResult(error=False, info='Success', result=0)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def status(self) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
async with session.begin():
try:
session_result = await session.execute(
select(Vocation.status).where(Vocation.user_id == user_id_result.result)
)
res = session_result.scalar_one()
result = Result.IntResult(error=False, info='Success', result=res)
except Exception as e:
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def vocation_status(self) -> Result.ListResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.ListResult(error=True, info='User not exist', result=[-1, None])
async_session = NBdb().get_async_session()
async with async_session() as session:
async with session.begin():
try:
session_result = await session.execute(
select(Vocation.status, Vocation.stop_at).
where(Vocation.user_id == user_id_result.result)
)
res = session_result.one()
result = Result.ListResult(error=False, info='Success', result=[res[0], res[1]])
except Exception as e:
result = Result.ListResult(error=True, info=repr(e), result=[-1, None])
return result
async def status_set(self, status: int) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
try:
session_result = await session.execute(
select(Vocation).where(Vocation.user_id == user_id_result.result)
)
exist_status = session_result.scalar_one()
exist_status.status = status
exist_status.stop_at = None
exist_status.reason = None
exist_status.updated_at = datetime.now()
result = Result.IntResult(error=False, info='Success upgraded', result=0)
except NoResultFound:
new_status = Vocation(user_id=user_id_result.result, status=status, created_at=datetime.now())
session.add(new_status)
result = Result.IntResult(error=False, info='Success set', result=0)
await session.commit()
except MultipleResultsFound:
await session.rollback()
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def vocation_set(self, stop_time: datetime, reason: str = None) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
try:
session_result = await session.execute(
select(Vocation).where(Vocation.user_id == user_id_result.result)
)
exist_status = session_result.scalar_one()
exist_status.status = 1
exist_status.stop_at = stop_time
exist_status.reason = reason
exist_status.updated_at = datetime.now()
result = Result.IntResult(error=False, info='Success upgraded', result=0)
except NoResultFound:
new_status = Vocation(user_id=user_id_result.result, status=1,
stop_at=stop_time, reason=reason, created_at=datetime.now())
session.add(new_status)
result = Result.IntResult(error=False, info='Success set', result=0)
await session.commit()
except MultipleResultsFound:
await session.rollback()
result = Result.IntResult(error=True, info='MultipleResultsFound', result=-1)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
async def status_del(self) -> Result.IntResult:
user_id_result = await self.id()
if user_id_result.error:
return Result.IntResult(error=True, info='User not exist', result=-1)
async_session = NBdb().get_async_session()
async with async_session() as session:
try:
async with session.begin():
session_result = await session.execute(
select(Vocation).where(Vocation.user_id == user_id_result.result)
)
exist_status = session_result.scalar_one()
await session.delete(exist_status)
await session.commit()
result = Result.IntResult(error=False, info='Success', result=0)
except Exception as e:
await session.rollback()
result = Result.IntResult(error=True, info=repr(e), result=-1)
return result
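Every method in `DBUser` funnels its outcome through `Result.IntResult(error=..., info=..., result=...)` (or the Text/List variants) and callers branch on `.error` or `.success()`. A stdlib-only sketch of that wrapper pattern, with field names taken from the usage above (the real `class_result` module may differ in detail):

```python
from dataclasses import dataclass

@dataclass
class IntResult:
    """Minimal stand-in for Result.IntResult as used above."""
    error: bool
    info: str
    result: int

    def success(self) -> bool:
        return not self.error

def fetch_user_id(table, qq):
    # Mirrors the shape of DBUser.id(): translate lookup failures into a
    # Result value instead of letting the exception escape.
    if qq in table:
        return IntResult(error=False, info='Success', result=table[qq])
    return IntResult(error=True, info='NoResultFound', result=-1)

print(fetch_user_id({12345: 1}, 12345).result)  # 1
print(fetch_user_id({}, 12345).info)            # NoResultFound
```

The payoff is that callers like `skill_add` can short-circuit with `if user_id_result.error: return ...` instead of wrapping every call site in try/except.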
| 48.989664 | 118 | 0.547234 | 1,908 | 18,959 | 5.280398 | 0.061321 | 0.075037 | 0.08933 | 0.090323 | 0.856675 | 0.848238 | 0.800893 | 0.784715 | 0.775583 | 0.763474 | 0 | 0.004319 | 0.364945 | 18,959 | 386 | 119 | 49.11658 | 0.832475 | 0.005011 | 0 | 0.690962 | 0 | 0 | 0.030923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002915 | false | 0 | 0.020408 | 0 | 0.102041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54f2842473ce68bb9c2b46027f1497b87f4be9a0 | 195 | py | Python | app/admin/views.py | techouse/nordend | 9129c5dc75f338ba0b4fc6c6a8b6bfdc334264d4 | [
"MIT"
] | null | null | null | app/admin/views.py | techouse/nordend | 9129c5dc75f338ba0b4fc6c6a8b6bfdc334264d4 | [
"MIT"
] | 1 | 2020-03-03T07:58:56.000Z | 2020-03-03T07:58:56.000Z | app/admin/views.py | techouse/nordend | 9129c5dc75f338ba0b4fc6c6a8b6bfdc334264d4 | [
"MIT"
] | null | null | null | from flask import render_template
from . import admin
@admin.route("/", defaults={"path": ""})
@admin.route("/<path:path>")
def catch_all(path):
return render_template("admin/index.html")
| 19.5 | 46 | 0.697436 | 26 | 195 | 5.115385 | 0.576923 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117949 | 195 | 9 | 47 | 21.666667 | 0.773256 | 0 | 0 | 0 | 0 | 0 | 0.169231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
54f3cbaff33cfa40e353223cacbfe120b14adf94 | 27 | py | Python | src/py/checks/sanitycheck.py | Birch-san/gmusicapi-node | 0d32a55c9ad2eec763b1bf8d15c740601b68fda3 | [
"MIT"
] | 5 | 2015-11-09T18:33:48.000Z | 2017-05-20T13:33:26.000Z | src/py/checks/sanitycheck.py | Birch-san/gmusicapi-node | 0d32a55c9ad2eec763b1bf8d15c740601b68fda3 | [
"MIT"
] | null | null | null | src/py/checks/sanitycheck.py | Birch-san/gmusicapi-node | 0d32a55c9ad2eec763b1bf8d15c740601b68fda3 | [
"MIT"
] | null | null | null | import sys
print('sup, yo')
07334920324c58a821dafc954bdea3f219b43980 | 388 | py | Python | pyNastran/utils/test/all_tests.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | pyNastran/utils/test/all_tests.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | pyNastran/utils/test/all_tests.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | """tests for ``pyNastran.utils``"""
from __future__ import print_function
from pyNastran.utils.test.test_log import TestLog
from pyNastran.utils.test.test_utils import TestUtils
from pyNastran.utils.test.test_atmosphere import TestAtm
from pyNastran.utils.test.test_dict_to_h5py import TestDictToH5
if __name__ == '__main__': # pragma: no cover
import unittest
unittest.main()
| 32.333333 | 63 | 0.798969 | 53 | 388 | 5.490566 | 0.490566 | 0.24055 | 0.247423 | 0.302406 | 0.357388 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005831 | 0.115979 | 388 | 11 | 64 | 35.272727 | 0.842566 | 0.121134 | 0 | 0 | 0 | 0 | 0.023881 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0.125 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab488be7d33b3f753d429be32a0a729648109126 | 3,807 | py | Python | tests/test_single_inst.py | danielstumpp/tomasulo-simulator | 54fd896adc6a7bcb098ad51cc35e7127ef7911b5 | [
"MIT"
] | null | null | null | tests/test_single_inst.py | danielstumpp/tomasulo-simulator | 54fd896adc6a7bcb098ad51cc35e7127ef7911b5 | [
"MIT"
] | 6 | 2021-11-13T03:26:46.000Z | 2021-11-17T15:20:10.000Z | tests/test_single_inst.py | danielstumpp/tomasulo-simulator | 54fd896adc6a7bcb098ad51cc35e7127ef7911b5 | [
"MIT"
] | null | null | null | from simulator import simulator
from simulator.modules.state import State
from simulator.modules.timing_table import TimingTable
root = 'tests/inputs/test_single_inst/'
def test_addi_one_inst():
state = simulator.run(root + 'test_addi_one_inst/addi_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_addi_one_inst/addi_single.tt')
assert tt_gold == tt_test
assert state.registers['R5'] == 100
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_add_one_inst():
state = simulator.run(root + 'test_add_one_inst/add_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_add_one_inst/add_single.tt')
assert tt_gold == tt_test
assert state.registers['R3'] == 30
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_add_d_one_inst():
state = simulator.run(root + 'test_add_d_one_inst/add_d_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_add_d_one_inst/add_d_single.tt')
assert tt_gold == tt_test
assert state.registers['F5'] == 6.5
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_sub_one_inst():
state = simulator.run(root + 'test_sub_one_inst/sub_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_sub_one_inst/sub_single.tt')
assert tt_gold == tt_test
assert state.registers['R3'] == -10
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_sub_d_one_inst():
state = simulator.run(root + 'test_sub_d_one_inst/sub_d_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_sub_d_one_inst/sub_d_single.tt')
assert tt_gold == tt_test
assert state.registers['F5'] == 2.5
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_mult_d_one_inst():
state = simulator.run(root + 'test_mult_d_one_inst/mult_d_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_mult_d_one_inst/mult_d_single.tt')
assert tt_gold == tt_test
assert state.registers['F5'] == 9.0
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_ld_one_inst():
state = simulator.run(root + 'test_ld_one_inst/ld_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_ld_one_inst/ld_single.tt')
assert tt_gold == tt_test
assert state.registers['F31'] == 3.4
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_sd_one_inst():
state = simulator.run(root + 'test_sd_one_inst/sd_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_sd_one_inst/sd_single.tt')
assert len(state.LSU.RS) == 0
assert tt_gold == tt_test
assert state.memory[4] == 4.5
assert list(state.RAT.values()) == list(state.RAT.keys())
def test_nop_one_inst():
state = simulator.run(root + 'test_nop_one_inst/nop_single.yml')
tt_test = TimingTable()
tt_gold = TimingTable()
tt_test.load_from_state(state)
tt_gold.load_from_file(root + 'test_nop_one_inst/nop_single.tt')
assert len(state.LSU.RS) == 0
assert state.registers['R0'] == 0
assert state.registers['R1'] == 10
assert list(state.RAT.values()) == list(state.RAT.keys())
assert tt_gold == tt_test
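The tests above all repeat the same load/compare/assert shape, varying only the instruction name, the register checked, and the expected value. A hedged refactor sketch using stdlib `unittest.subTest`, with a hypothetical `fake_run` standing in for `simulator.run` so the sketch is self-contained; under pytest, `@pytest.mark.parametrize` gives the same effect:

```python
import unittest

# Each tuple: (instruction name, register to check, expected value) — a
# hypothetical subset of the cases spelled out one-by-one above.
CASES = [
    ("addi", "R5", 100),
    ("add", "R3", 30),
    ("sub", "R3", -10),
]

def fake_run(name):
    # Stand-in for simulator.run(); returns canned register files.
    return {"addi": {"R5": 100}, "add": {"R3": 30}, "sub": {"R3": -10}}[name]

class TestSingleInst(unittest.TestCase):
    def test_all_cases(self):
        for name, reg, expected in CASES:
            with self.subTest(inst=name):  # failures report which case broke
                self.assertEqual(fake_run(name)[reg], expected)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSingleInst))
```

Each `subTest` failure is reported independently, so one bad instruction does not mask the rest of the table.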
| 35.25 | 74 | 0.704754 | 600 | 3,807 | 4.126667 | 0.093333 | 0.076333 | 0.087237 | 0.076333 | 0.910339 | 0.901454 | 0.901454 | 0.783926 | 0.725363 | 0.655089 | 0 | 0.010364 | 0.163646 | 3,807 | 107 | 75 | 35.579439 | 0.767274 | 0 | 0 | 0.534091 | 0 | 0 | 0.168111 | 0.163121 | 0 | 0 | 0 | 0 | 0.340909 | 1 | 0.102273 | false | 0 | 0.034091 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab59f2911db95f131927409fd48fadf44c6b7ddb | 144 | py | Python | src/hypeshed/common.py | underyx/hypeshed | 64037920f5bc8cef7d7075a2a6a32f561a5e9f6b | [
"MIT"
] | null | null | null | src/hypeshed/common.py | underyx/hypeshed | 64037920f5bc8cef7d7075a2a6a32f561a5e9f6b | [
"MIT"
] | null | null | null | src/hypeshed/common.py | underyx/hypeshed | 64037920f5bc8cef7d7075a2a6a32f561a5e9f6b | [
"MIT"
] | null | null | null | from typing import Annotated
iso_datetime = Annotated[str, "ISO-8601 formatted datetime"]
iso_date = Annotated[str, "ISO-8601 formatted date"]
| 28.8 | 60 | 0.784722 | 20 | 144 | 5.55 | 0.5 | 0.216216 | 0.27027 | 0.342342 | 0.504505 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062992 | 0.118056 | 144 | 4 | 61 | 36 | 0.811024 | 0 | 0 | 0 | 0 | 0 | 0.347222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ab5a2de633dd4576e85358ef851929109bacccf2 | 79 | py | Python | app/api/user_view.py | ryzencool/flask_arch | 432a426ea81dce1830881315d64b5f6691c94fdd | [
"Apache-2.0"
] | null | null | null | app/api/user_view.py | ryzencool/flask_arch | 432a426ea81dce1830881315d64b5f6691c94fdd | [
"Apache-2.0"
] | null | null | null | app/api/user_view.py | ryzencool/flask_arch | 432a426ea81dce1830881315d64b5f6691c94fdd | [
"Apache-2.0"
] | null | null | null | from app.api import user
@user.route("/hello")
def hello():
return "hello" | 15.8 | 24 | 0.670886 | 12 | 79 | 4.416667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 79 | 5 | 25 | 15.8 | 0.80303 | 0 | 0 | 0 | 0 | 0 | 0.1375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ab5aec227a6bf8eba114444ee59ee3fcbb9cbaed | 1,020 | py | Python | toil_lib/style.py | oorahduc/toil | 7da6214b15b352817b3a4f30ab148b1f1cfd62ec | [
"MIT"
] | 1 | 2020-03-20T05:47:11.000Z | 2020-03-20T05:47:11.000Z | toil_lib/style.py | oorahduc/toil | 7da6214b15b352817b3a4f30ab148b1f1cfd62ec | [
"MIT"
] | null | null | null | toil_lib/style.py | oorahduc/toil | 7da6214b15b352817b3a4f30ab148b1f1cfd62ec | [
"MIT"
] | null | null | null | # styles
def underline(string):
return "\033[4m" + string + "\033[0m"
def bright(string):
return "\033[1m" + string + "\033[0m"
def dim(string):
return "\033[2m" + string + "\033[0m"
# colors
def dark(string):
return "\033[30m" + string + "\033[0m"
def red(string):
return "\033[31m" + string + "\033[0m"
def green(string):
return "\033[32m" + string + "\033[0m"
def yellow(string):
return "\033[33m" + string + "\033[0m"
def blue(string):
return "\033[34m" + string + "\033[0m"
def cyan(string):
return "\033[36m" + string + "\033[0m"
def white(string):
return "\033[37m" + string + "\033[0m"
# brights
def brightgreen(string):
return "\033[32;1m" + string + "\033[0m"
def brightyellow(string):
return "\033[33;1m" + string + "\033[0m"
def brightred(string):
return "\033[91" + string + "\033[0m"
def brightblue(string):
return "\033[94" + string + "\033[0m"
# custom
def tagline(string):
return "\033[36;4m" + string + "\033[0m"
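Each helper wraps its argument in an ANSI SGR escape and closes with the reset code `\033[0m`, so styles compose by nesting (the inner reset is harmless here because the text sits inside both escapes). A runnable sketch, re-declaring two of the helpers so it stands alone:

```python
RESET = "\033[0m"

def green(s):
    return "\033[32m" + s + RESET   # same escape codes as green() above

def bright(s):
    return "\033[1m" + s + RESET    # same escape codes as bright() above

msg = bright(green("ok"))
print(repr(msg))  # '\x1b[1m\x1b[32mok\x1b[0m\x1b[0m'
```

On an ANSI-capable terminal `print(msg)` renders "ok" in bold green; `repr` shows the raw escape bytes.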
| 16.190476 | 44 | 0.589216 | 142 | 1,020 | 4.232394 | 0.274648 | 0.299501 | 0.374376 | 0.25624 | 0.079867 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16812 | 0.212745 | 1,020 | 62 | 45 | 16.451613 | 0.580324 | 0.027451 | 0 | 0 | 0 | 0 | 0.22999 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
db507d3c1bca60d6afd09d4d832943cc14b6a485 | 4,517 | py | Python | plugins/sdm/lib/test_util.py | camposer/accessbot | 6f7e48cb962ed878c720ad0c82bb0780da7bfa33 | [
"Apache-2.0"
] | null | null | null | plugins/sdm/lib/test_util.py | camposer/accessbot | 6f7e48cb962ed878c720ad0c82bb0780da7bfa33 | [
"Apache-2.0"
] | null | null | null | plugins/sdm/lib/test_util.py | camposer/accessbot | 6f7e48cb962ed878c720ad0c82bb0780da7bfa33 | [
"Apache-2.0"
] | 3 | 2021-08-16T22:34:05.000Z | 2021-09-22T02:51:13.000Z | # pylint: disable=invalid-name
from unittest.mock import MagicMock
from strongdm import Postgres, Role
from .util import is_hidden, can_auto_approve_by_tag, HiddenTagEnum
class Test_is_hidden_resource:
def test_hide_resource_when_tag_true(self):
config = {'HIDE_RESOURCE_TAG': 'hide-resource'}
sdm_resource = MagicMock(spec = Postgres)
sdm_resource.tags = {'hide-resource': 'true'}
assert is_hidden(config, HiddenTagEnum.RESOURCE, sdm_resource)
def test_dont_hide_resource_when_tag_false(self):
config = {'HIDE_RESOURCE_TAG': 'hide-resource'}
sdm_resource = MagicMock(spec = Postgres)
sdm_resource.tags = {'hide-resource': 'false'}
assert is_hidden(config, HiddenTagEnum.RESOURCE, sdm_resource) is False
def test_hide_resource_when_tag_have_no_value(self):
config = {'HIDE_RESOURCE_TAG': 'hide-resource'}
sdm_resource = MagicMock(spec = Postgres)
sdm_resource.tags = {'hide-resource': None}
assert is_hidden(config, HiddenTagEnum.RESOURCE, sdm_resource)
def test_hide_resource_when_tag_have_unexpected_value(self):
config = {'HIDE_RESOURCE_TAG': 'hide-resource'}
sdm_resource = MagicMock(spec = Postgres)
sdm_resource.tags = {'hide-resource': 'not-a-boolean'}
assert is_hidden(config, HiddenTagEnum.RESOURCE, sdm_resource)
def test_dont_hide_resource_when_tag_doesnt_exist(self):
config = {'HIDE_RESOURCE_TAG': 'another-tag'}
sdm_resource = MagicMock(spec = Postgres)
sdm_resource.tags = {'hide-resource': 'true'}
assert is_hidden(config, HiddenTagEnum.RESOURCE, sdm_resource) is False


class Test_is_hidden_role:
    def test_hide_role_when_tag_true(self):
        config = {'HIDE_ROLE_TAG': 'hide-role'}
        sdm_role = MagicMock(spec=Role)
        sdm_role.tags = {'hide-role': 'true'}
        assert is_hidden(config, HiddenTagEnum.ROLE, sdm_role)

    def test_dont_hide_role_when_tag_false(self):
        config = {'HIDE_ROLE_TAG': 'hide-role'}
        sdm_role = MagicMock(spec=Role)
        sdm_role.tags = {'hide-role': 'false'}
        assert not is_hidden(config, HiddenTagEnum.ROLE, sdm_role)

    def test_hide_role_when_tag_has_no_value(self):
        config = {'HIDE_ROLE_TAG': 'hide-role'}
        sdm_role = MagicMock(spec=Role)
        sdm_role.tags = {'hide-role': None}
        assert is_hidden(config, HiddenTagEnum.ROLE, sdm_role)

    def test_hide_role_when_tag_has_unexpected_value(self):
        config = {'HIDE_ROLE_TAG': 'hide-role'}
        sdm_role = MagicMock(spec=Role)
        sdm_role.tags = {'hide-role': 'not-a-boolean'}
        assert is_hidden(config, HiddenTagEnum.ROLE, sdm_role)

    def test_dont_hide_role_when_tag_doesnt_exist(self):
        config = {'HIDE_ROLE_TAG': 'hide-role'}
        sdm_role = MagicMock(spec=Role)
        sdm_role.tags = {'another-tag': 'true'}
        assert not is_hidden(config, HiddenTagEnum.ROLE, sdm_role)


class Test_can_auto_approve_by_tag:
    def test_auto_approve_when_tag_true(self):
        config = {'AUTO_APPROVE_TAG': 'auto-approve'}
        sdm_resource = MagicMock(spec=Postgres)
        sdm_resource.tags = {'auto-approve': 'true'}
        assert can_auto_approve_by_tag(config, sdm_resource, 'AUTO_APPROVE_TAG')

    def test_dont_auto_approve_when_tag_false(self):
        config = {'AUTO_APPROVE_TAG': 'auto-approve'}
        sdm_resource = MagicMock(spec=Postgres)
        sdm_resource.tags = {'auto-approve': 'false'}
        assert can_auto_approve_by_tag(config, sdm_resource, 'AUTO_APPROVE_TAG') is False

    def test_auto_approve_when_tag_has_no_value(self):
        config = {'AUTO_APPROVE_TAG': 'auto-approve'}
        sdm_resource = MagicMock(spec=Postgres)
        sdm_resource.tags = {'auto-approve': None}
        assert can_auto_approve_by_tag(config, sdm_resource, 'AUTO_APPROVE_TAG')

    def test_auto_approve_when_tag_has_unexpected_value(self):
        config = {'AUTO_APPROVE_TAG': 'auto-approve'}
        sdm_resource = MagicMock(spec=Postgres)
        sdm_resource.tags = {'auto-approve': 'not-a-boolean'}
        assert can_auto_approve_by_tag(config, sdm_resource, 'AUTO_APPROVE_TAG')

    def test_dont_auto_approve_when_tag_doesnt_exist(self):
        config = {'AUTO_APPROVE_TAG': 'another-tag'}
        sdm_resource = MagicMock(spec=Postgres)
        sdm_resource.tags = {'auto-approve': 'true'}
        assert can_auto_approve_by_tag(config, sdm_resource, 'AUTO_APPROVE_TAG') is False
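Across all three test classes the expectations encode one rule: an object is hidden (or auto-approvable) exactly when the configured tag key is present on the object and its value is anything other than the string 'false'. The real `is_hidden`/`can_auto_approve_by_tag` implementations are not part of this excerpt; the function below is a hypothetical, self-contained sketch of that rule.

```python
# Hypothetical re-implementation of the tag rule the tests above encode:
# truthy iff the configured tag key exists and its value is not 'false'.
def is_hidden_sketch(config, config_key, tags):
    tag_name = config[config_key]
    if tag_name not in tags:
        return False
    return tags[tag_name] != 'false'


# Mirrors the cases exercised by the tests:
assert is_hidden_sketch({'HIDE_RESOURCE_TAG': 'hide-resource'},
                        'HIDE_RESOURCE_TAG', {'hide-resource': 'true'})
assert not is_hidden_sketch({'HIDE_RESOURCE_TAG': 'hide-resource'},
                            'HIDE_RESOURCE_TAG', {'hide-resource': 'false'})
assert is_hidden_sketch({'HIDE_RESOURCE_TAG': 'hide-resource'},
                        'HIDE_RESOURCE_TAG', {'hide-resource': None})
assert not is_hidden_sketch({'HIDE_RESOURCE_TAG': 'another-tag'},
                            'HIDE_RESOURCE_TAG', {'hide-resource': 'true'})
```

Note that a tag with value `None` or any unexpected string still hides the object; only the literal string 'false' opts out, which is what the "unexpected value" tests pin down.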
| 45.17 | 89 | 0.699358 | 594 | 4,517 | 4.936027 | 0.080808 | 0.116303 | 0.056276 | 0.081855 | 0.9294 | 0.90382 | 0.886426 | 0.800477 | 0.792974 | 0.773874 | 0 | 0 | 0.193934 | 4,517 | 99 | 90 | 45.626263 | 0.805273 | 0.006199 | 0 | 0.580247 | 0 | 0 | 0.162024 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 1 | 0.185185 | false | 0 | 0.037037 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db7681ba006c5a88c4b29c9f9cf61b1a97595bf3 | 108 | py | Python | donkeykong/__init__.py | maurosilber/donkeykong | e137a55b370961b9bfecbcd797f5155eb8ba4248 | [
"MIT"
] | null | null | null | donkeykong/__init__.py | maurosilber/donkeykong | e137a55b370961b9bfecbcd797f5155eb8ba4248 | [
"MIT"
] | null | null | null | donkeykong/__init__.py | maurosilber/donkeykong | e137a55b370961b9bfecbcd797f5155eb8ba4248 | [
"MIT"
] | null | null | null | # Monkey-patch luigi
from . import monkey_patching
del monkey_patching  # imported only for its side effects; drop the name from the package namespace
from . import target, invalidation
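The import-then-`del` pair above is the standard idiom for running a module purely for its side effects without re-exporting its name from the package. A stdlib-only sketch of the same idiom (the module name "side_effects" is made up here):

```python
import sys
import types

# Fake module whose import has an observable side effect
# (stand-in for something like monkey_patching).
mod = types.ModuleType("side_effects")
mod.APPLIED = []
sys.modules["side_effects"] = mod

import side_effects
side_effects.APPLIED.append("patch")   # pretend the patch ran on import
del side_effects                       # drop the name, keep the effect
assert "side_effects" not in globals()
assert sys.modules["side_effects"].APPLIED == ["patch"]
```

The effect survives in `sys.modules` even though the importing namespace no longer exposes the name.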
| 15.428571 | 34 | 0.805556 | 14 | 108 | 6.071429 | 0.642857 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 108 | 6 | 35 | 18 | 0.923913 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
db7dd8aa595f507ff5320d585e26a2944ca4f5e5 | 3,646 | py | Python | dominio/suamesa/serializers.py | MinisterioPublicoRJ/api-cadg | a8998c4c234a65192f1dca8ea9a17a1d4a496556 | [
"MIT"
] | 6 | 2020-02-11T18:45:58.000Z | 2020-05-26T12:37:28.000Z | dominio/suamesa/serializers.py | MinisterioPublicoRJ/api-cadg | a8998c4c234a65192f1dca8ea9a17a1d4a496556 | [
"MIT"
] | 120 | 2019-07-01T14:45:32.000Z | 2022-01-25T19:10:16.000Z | dominio/suamesa/serializers.py | MinisterioPublicoRJ/apimpmapas | 196ad25a4922448b8ae7a66012a2843c7b7194ad | [
"MIT"
] | null | null | null | from rest_framework import serializers


class MetricsDetalheDocumentoOrgaoCPFSerializer(serializers.Serializer):
    tipo_detalhe = serializers.CharField()
    intervalo = serializers.CharField()
    orgao_id = serializers.IntegerField()
    cpf = serializers.CharField()
    nr_documentos_distintos_atual = serializers.IntegerField(min_value=0)
    nr_aberturas_vista_atual = serializers.IntegerField(min_value=0)
    nr_aproveitamentos_atual = serializers.IntegerField(min_value=0)
    nr_instaurados_atual = serializers.IntegerField(min_value=0)
    nr_documentos_distintos_anterior = serializers.IntegerField(min_value=0)
    nr_aberturas_vista_anterior = serializers.IntegerField(min_value=0)
    nr_aproveitamentos_anterior = serializers.IntegerField(min_value=0)
    nr_instaurados_anterior = serializers.IntegerField(min_value=0)
    variacao_documentos_distintos = serializers.FloatField()
    variacao_aberturas_vista = serializers.FloatField()
    variacao_aproveitamentos = serializers.FloatField()
    variacao_instaurados = serializers.FloatField()


class MetricsDetalheDocumentoOrgaoSerializer(serializers.Serializer):
    tipo_detalhe = serializers.CharField()
    intervalo = serializers.CharField()
    nm_orgao = serializers.CharField()
    cod_pacote = serializers.IntegerField()
    orgao_id = serializers.IntegerField()
    nr_documentos_distintos_atual = serializers.IntegerField(min_value=0)
    nr_aberturas_vista_atual = serializers.IntegerField(min_value=0)
    nr_aproveitamentos_atual = serializers.IntegerField(min_value=0)
    nr_instaurados_atual = serializers.IntegerField(min_value=0)
    acervo_anterior = serializers.IntegerField(min_value=0)
    acervo_atual = serializers.IntegerField(min_value=0)
    variacao_acervo = serializers.FloatField()
    nr_documentos_distintos_anterior = serializers.IntegerField(min_value=0)
    nr_aberturas_vista_anterior = serializers.IntegerField(min_value=0)
    nr_aproveitamentos_anterior = serializers.IntegerField(min_value=0)
    nr_instaurados_anterior = serializers.IntegerField(min_value=0)
    variacao_documentos_distintos = serializers.FloatField()
    variacao_aberturas_vista = serializers.FloatField()
    variacao_aproveitamentos = serializers.FloatField()
    variacao_instaurados = serializers.FloatField()


class RankingSerializer(serializers.Serializer):
    nm_orgao = serializers.CharField()
    valor = serializers.IntegerField()


class RankingFloatSerializer(serializers.Serializer):
    nm_orgao = serializers.CharField()
    valor = serializers.FloatField()


class RankingPercentageSerializer(serializers.Serializer):
    nm_orgao = serializers.CharField()
    valor_percentual = serializers.FloatField()


class SuaMesaDetalheAISPSerializer(serializers.Serializer):
    acervo_inicio = serializers.IntegerField(min_value=0)
    acervo_fim = serializers.IntegerField(min_value=0)
    variacao_acervo = serializers.FloatField()


class SuaMesaDetalheTutelaProcessosSerializer(serializers.Serializer):
    orgao_id = serializers.IntegerField()
    nm_orgao = serializers.CharField()
    nr_acoes_12_meses_anterior = serializers.IntegerField(min_value=0)
    nr_acoes_12_meses_atual = serializers.IntegerField(min_value=0)
    variacao_12_meses = serializers.FloatField()
    nr_acoes_60_dias_anterior = serializers.IntegerField(min_value=0)
    nr_acoes_ultimos_60_dias = serializers.IntegerField(min_value=0)
    variacao_60_dias = serializers.FloatField()
    nr_acoes_30_dias_anterior = serializers.IntegerField(min_value=0)
    nr_acoes_ultimos_30_dias = serializers.IntegerField(min_value=0)
    variacao_30_dias = serializers.FloatField()
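Several of these serializers pair an `*_anterior`/`*_atual` integer with a `variacao_*` float. The serializers only validate shape; a plausible upstream computation for the `variacao_*` values (an assumption -- the query layer that produces these payloads is not part of this file) is a relative change between the two periods:

```python
def variacao(anterior, atual):
    """Relative change between two periods; 0.0 when there is no baseline."""
    if anterior == 0:
        return 0.0
    return (atual - anterior) / anterior


# Hypothetical payload fragment shaped like the acervo fields above.
payload = {
    "nm_orgao": "1a Promotoria",
    "acervo_anterior": 200,
    "acervo_atual": 250,
}
payload["variacao_acervo"] = variacao(
    payload["acervo_anterior"], payload["acervo_atual"]
)
assert payload["variacao_acervo"] == 0.25
```

Whether the real pipeline emits a fraction (0.25) or a percentage (25.0) is not visible here; `RankingPercentageSerializer`'s `valor_percentual` field suggests both conventions exist in the API.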
| 46.151899 | 76 | 0.80746 | 379 | 3,646 | 7.432718 | 0.134565 | 0.253106 | 0.239972 | 0.28612 | 0.749734 | 0.749734 | 0.717785 | 0.649982 | 0.587859 | 0.482783 | 0 | 0.013656 | 0.116292 | 3,646 | 78 | 77 | 46.74359 | 0.860646 | 0 | 0 | 0.59375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015625 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
db99091ad223d7bd9be87504f5c5495c7cea3840 | 198 | py | Python | arctor/dwt/__init__.py | exowanderer/arctor | 1525b1d2b679da1e64bc00e2850d85076cada842 | [
"BSD-3-Clause"
] | 1 | 2021-02-26T23:42:34.000Z | 2021-02-26T23:42:34.000Z | arctor/dwt/__init__.py | exowanderer/arctor | 1525b1d2b679da1e64bc00e2850d85076cada842 | [
"BSD-3-Clause"
] | 1 | 2020-10-14T04:08:57.000Z | 2021-03-30T20:43:19.000Z | arctor/dwt/__init__.py | exowanderer/arctor | 1525b1d2b679da1e64bc00e2850d85076cada842 | [
"BSD-3-Clause"
] | 1 | 2022-03-02T22:11:48.000Z | 2022-03-02T22:11:48.000Z | # Copyright (c) 2015-2019 Patricio Cubillos and contributors.
# MC3 is open-source software under the MIT license (see LICENSE).
# Source: https://github.com/pcubillos/mc3
from .dwt_chisq import *
| 33 | 66 | 0.762626 | 29 | 198 | 5.172414 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05848 | 0.136364 | 198 | 5 | 67 | 39.6 | 0.818713 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbc61b27b42657813e6beb34e9714ec8a6da4c14 | 601 | py | Python | src/som/interpreter/send.py | SOM-st/PySOM | 65ef72f44252439b724a7429408dac7f8d1b1d98 | [
"MIT"
] | 22 | 2015-10-29T05:11:06.000Z | 2022-03-01T11:18:45.000Z | src/som/interpreter/send.py | smarr/PySOM | 65ef72f44252439b724a7429408dac7f8d1b1d98 | [
"MIT"
] | 16 | 2021-03-07T22:09:33.000Z | 2021-08-24T12:36:15.000Z | src/som/interpreter/send.py | SOM-st/PySOM | 65ef72f44252439b724a7429408dac7f8d1b1d98 | [
"MIT"
] | 5 | 2015-01-02T03:51:29.000Z | 2020-10-02T07:05:46.000Z | from som.vm.symbols import symbol_for
def lookup_and_send_2(receiver, arg, selector_string):
    # Local import, presumably to avoid an import cycle at module load time.
    from som.vm.current import current_universe

    selector = symbol_for(selector_string)
    invokable = receiver.get_class(current_universe).lookup_invokable(selector)
    return invokable.invoke_2(receiver, arg)


def lookup_and_send_3(receiver, arg1, arg2, selector_string):
    # Local import, presumably to avoid an import cycle at module load time.
    from som.vm.current import current_universe

    selector = symbol_for(selector_string)
    invokable = receiver.get_class(current_universe).lookup_invokable(selector)
    return invokable.invoke_3(receiver, arg1, arg2)
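Both helpers follow the same dispatch pattern: resolve the selector string to a symbol, look the invokable up on the receiver's class, and call it with the matching arity. A self-contained mirror of that pattern in plain Python (`Clazz`/`Obj` are hypothetical stand-ins; the real SOM object model is not reproduced here):

```python
# Minimal stand-ins for the object model used by lookup_and_send_*.
class Clazz:
    def __init__(self, methods):
        self._methods = methods  # selector -> callable

    def lookup_invokable(self, selector):
        return self._methods[selector]


class Obj:
    def __init__(self, clazz):
        self._clazz = clazz

    def get_class(self):
        return self._clazz


def lookup_and_send_2_sketch(receiver, arg, selector):
    # Same three steps as above: (symbol resolution elided), class lookup, invoke.
    invokable = receiver.get_class().lookup_invokable(selector)
    return invokable(receiver, arg)


integer = Clazz({"+": lambda self_, other: ("sum", other)})
assert lookup_and_send_2_sketch(Obj(integer), 41, "+") == ("sum", 41)
```

The arity is baked into the function name (`invoke_2` vs `invoke_3`) because the underlying interpreter dispatches on fixed argument counts rather than `*args`.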
| 31.631579 | 79 | 0.792013 | 81 | 601 | 5.592593 | 0.320988 | 0.12362 | 0.059603 | 0.07064 | 0.732892 | 0.732892 | 0.732892 | 0.732892 | 0.732892 | 0.732892 | 0 | 0.015326 | 0.131448 | 601 | 18 | 80 | 33.388889 | 0.85249 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
dbd720cddb94c9106d985b65e7e2a971574e096d | 36,889 | py | Python | tests/__init__.py | kislenko-artem/dataclasses-ujson | 6113d8b3db0f45be0b9ade846e408e3b50979bb2 | [
"Apache-2.0"
] | 3 | 2018-07-02T05:38:13.000Z | 2018-10-06T22:15:48.000Z | tests/__init__.py | kislenko-artem/dataclasses-ujson | 6113d8b3db0f45be0b9ade846e408e3b50979bb2 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | kislenko-artem/dataclasses-ujson | 6113d8b3db0f45be0b9ade846e408e3b50979bb2 | [
"Apache-2.0"
] | null | null | null | from typing import List, Dict, Union, Optional
from dataclasses import dataclass
from dataclasses_ujson.dataclasses_ujson import UJsonMixin
JSON_SIMPLE = '{"x": 1}'
JSON_SIMPLE_OPTIONAL = '{"x": 1, "y": null}'
JSON_LIST = '{"x": [1]}'
JSON_DICT = '{"x": {"d": 1}}'
JSON_NESTED = '{{"a": {simple}, "b": {list}, "c": {dict}}}'.format(
simple=JSON_SIMPLE, list=JSON_LIST, dict=JSON_DICT)
JSON_UNION_V1 = '{{"a": 1, "b": {list}, "c": {list}}}'.format(list=JSON_LIST)
JSON_UNION_V2 = '{{"a": "s", "b": {dict}, "c": {dict}}}'.format(dict=JSON_DICT)
JSON_SIMPLE_LIST = '[{"x": 1}]'
JSON_NESTED_LIST = '{{"a": {simple}, "b": [{list}], "c": [{dict}]}}'.format(
simple=JSON_SIMPLE, list=JSON_LIST, dict=JSON_DICT)
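The nested fixtures are built with `str.format`, so the doubled braces `{{`/`}}` render as literal JSON braces. As a stdlib-only sanity check (using `json` rather than `ujson` so the check has no third-party dependency; the UJsonMixin decoding API itself is exercised elsewhere in the suite):

```python
import json

# Reproduce the fixture construction from the module above.
JSON_SIMPLE = '{"x": 1}'
JSON_LIST = '{"x": [1]}'
JSON_DICT = '{"x": {"d": 1}}'
JSON_NESTED = '{{"a": {simple}, "b": {list}, "c": {dict}}}'.format(
    simple=JSON_SIMPLE, list=JSON_LIST, dict=JSON_DICT)

nested = json.loads(JSON_NESTED)
assert nested == {"a": {"x": 1}, "b": {"x": [1]}, "c": {"x": {"d": 1}}}
```

Each escaped `{{ ... }}` pair in the template survives formatting as a single brace, which is why the composed string is still valid JSON.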
JSON_VK = '''
{"count": 578, "items": [{"id": 45668, "from_id": -61006621, "owner_id": -61006621, "date": 1531139873, "marked_as_ads": 0, "post_type": "post", "text": "\u0424\u0438\u0448\u043a\u0438 \u0432 \u041b\u0435\u043d\u0442\u0443 \u0434\u043e 15.07\uff0c\u0448\u043a\u0430\u0444 \u0441\u0430\u043c\u043e\u0432\u044b\u0432\u043e\u0437 \u0441 \u0441\u0430\u0434\u043e\u0432, \u043f\u0430\u0440\u0430 \u043c\u0443\u0436 \u0434\u0436\u0438\u043d\u0441 \u043d\u0430 \u043f\u043e\u0434\u0435\u043b\u043a\u0438.", "signer_id": 216078137, "attachments": [{"type": "photo", "photo": {"id": 456250864, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845421/v845421539/9e5fa/NQewHCexzHk.jpg", "photo_130": "https://pp.userapi.com/c845421/v845421539/9e5fb/WrUWkOz_XDM.jpg", "photo_604": "https://pp.userapi.com/c845421/v845421539/9e5fc/h5mBywiKzAk.jpg", "photo_807": "https://pp.userapi.com/c845421/v845421539/9e5fd/NJbEm_B-zdY.jpg", "photo_1280": "https://pp.userapi.com/c845421/v845421539/9e5fe/VGlxZk972sQ.jpg", "photo_2560": "https://pp.userapi.com/c845421/v845421539/9e5ff/UW2jTjG24RI.jpg", "width": 2560, "height": 1898, "text": "", "date": 1531135427, "lat": 56.224976, "long": 93.507524, "access_key": "6fce7dad27ffab5f22"}}, {"type": "photo", "photo": {"id": 456250865, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845421/v845421539/9e604/xzlP-Qnp6Sc.jpg", "photo_130": "https://pp.userapi.com/c845421/v845421539/9e605/LVIv8pKjhPI.jpg", "photo_604": "https://pp.userapi.com/c845421/v845421539/9e606/BtqsTukMQl0.jpg", "photo_807": "https://pp.userapi.com/c845421/v845421539/9e607/Zun4wBSvxc4.jpg", "photo_1280": "https://pp.userapi.com/c845421/v845421539/9e608/8usdAi-iyeI.jpg", "photo_2560": "https://pp.userapi.com/c845421/v845421539/9e609/ByhtbHifBUI.jpg", "width": 2560, "height": 1898, "text": "", "date": 1531135427, "access_key": "c08a351f8692724df8"}}, {"type": "photo", "photo": {"id": 456250866, "album_id": 
-7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845421/v845421539/9e60e/WHs4Rdu2_2g.jpg", "photo_130": "https://pp.userapi.com/c845421/v845421539/9e60f/gsZ3UAMC9SM.jpg", "photo_604": "https://pp.userapi.com/c845421/v845421539/9e610/OrBiRn7R0Rg.jpg", "photo_807": "https://pp.userapi.com/c845421/v845421539/9e611/jHbOoChmYEA.jpg", "photo_1280": "https://pp.userapi.com/c845421/v845421539/9e612/8SPi8ucKbC4.jpg", "photo_2560": "https://pp.userapi.com/c845421/v845421539/9e613/S8i6busxOhU.jpg", "width": 1602, "height": 2160, "text": "", "date": 1531135427, "lat": 56.194023, "long": 93.546361, "access_key": "d623ce300e8afa0afe"}}, {"type": "photo", "photo": {"id": 456250867, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845421/v845421539/9e618/22Jztd-SO_Q.jpg", "photo_130": "https://pp.userapi.com/c845421/v845421539/9e619/ebwSW_pcR4M.jpg", "photo_604": "https://pp.userapi.com/c845421/v845421539/9e61a/BMkUdxJkZkQ.jpg", "photo_807": "https://pp.userapi.com/c845421/v845421539/9e61b/5oUOSAyty3Q.jpg", "photo_1280": "https://pp.userapi.com/c845421/v845421539/9e61c/mYyUd0TrzUM.jpg", "photo_2560": "https://pp.userapi.com/c845421/v845421539/9e61d/WgYzxnsKlLU.jpg", "width": 2560, "height": 1898, "text": "", "date": 1531135427, "access_key": "fac24d55138e807a74"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 369}}, {"id": 45666, "from_id": -61006621, "owner_id": -61006621, "date": 1531139850, "marked_as_ads": 0, "post_type": "post", "text": "\u043e\u0442\u0434\u0430\u043c \u0434\u0432\u0430 \u043f\u0430\u043a\u0435\u0442\u0430..\u043e\u0434\u0435\u0436\u0434\u0430 \u043d\u0430 \u0434\u0435\u0432\u043e\u0447\u043a\u0443..\u043e\u0442 8\u043c\u0435\u0441-1.6 \u043b\u0435\u0442. 
\u0441\u043e\u0441\u0442\u043e\u044f\u043d\u0438\u0435 \u0440\u0430\u0437\u043d\u043e\u0435.. \u0441\u0440\u0435\u0434\u043d\u0435\u0435..\u0431\u043e\u043b\u044c\u0448\u0430\u044f \u0447\u0430\u0441\u0442\u044c \u0434\u043b\u044f \u0434\u043e\u043c\u0430..\u0444\u0443\u0442\u0431\u043e\u043b\u043e\u0447\u043a\u0438 .\u043a\u043e\u0444\u0442\u043e\u0447\u043a\u0438 \u0438 \u0442.\u0434..\u043f\u0438\u0441\u0430\u0442\u044c \u0432 \u043b.\u0441.", "signer_id": 84611251, "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 371}}, {"id": 45661, "from_id": -61006621, "owner_id": -61006621, "date": 1531135087, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c!\u043e\u0442 3-6 \u043c\u0435\u0441\u044f\u0446\u0435\u0432,\u0440.68-74!\u0428\u043e\u043a\u043e\u043b\u0430\u0434\u043a\u0430 \u043f\u0440\u0438\u0432\u0435\u0442\u0441\u0442\u0432\u0443\u0435\u0442\u0441\u044f!)", "signer_id": 47838728, "attachments": [{"type": "photo", "photo": {"id": 456250861, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845020/v845020910/98d3b/N8Z_5QTFGk0.jpg", "photo_130": "https://pp.userapi.com/c845020/v845020910/98d3c/gnawEjCLYI8.jpg", "photo_604": "https://pp.userapi.com/c845020/v845020910/98d3d/RAwo94rKjio.jpg", "photo_807": "https://pp.userapi.com/c845020/v845020910/98d3e/onqAOVtw_7o.jpg", "photo_1280": "https://pp.userapi.com/c845020/v845020910/98d3f/tbTD7hNiT94.jpg", "photo_2560": "https://pp.userapi.com/c845020/v845020910/98d40/Ldt2bZXpb38.jpg", "width": 2560, "height": 1920, "text": "", "date": 1531128438, "post_id": 45653, "access_key": "c5d0a0016dcb6abc20"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 3, 
"user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 496}}, {"id": 45660, "from_id": -61006621, "owner_id": -61006621, "date": 1531135084, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u043a\u043e\u0441\u0442\u044e\u043c \u0432\u0435\u0441\u043d\u0430-\u043e\u0441\u0435\u043d\u044c,\u0440.74!", "signer_id": 47838728, "attachments": [{"type": "photo", "photo": {"id": 456250862, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://sun9-9.userapi.com/c834403/v834403910/182b70/SB0MCQoPIqQ.jpg", "photo_130": "https://sun9-5.userapi.com/c834403/v834403910/182b71/y5O5CH64hg4.jpg", "photo_604": "https://sun9-6.userapi.com/c834403/v834403910/182b72/IEaWYrxg9DU.jpg", "photo_807": "https://sun9-8.userapi.com/c834403/v834403910/182b73/APO1C-FSej4.jpg", "photo_1280": "https://sun9-8.userapi.com/c834403/v834403910/182b74/-36jOR5Wpkw.jpg", "photo_2560": "https://sun9-4.userapi.com/c834403/v834403910/182b75/3ZIuny8Dy1k.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531128515, "post_id": 45654, "access_key": "a5f71d5c162231ed55"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 1, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 428}}, {"id": 45659, "from_id": -61006621, "owner_id": -61006621, "date": 1531135078, "marked_as_ads": 0, "post_type": "post", "text": "\u0421\u043f\u043e\u0440\u0442\u0438\u0432\u043d\u044b\u0439 \u043a\u0443\u043f\u0430\u043b\u044c\u043d\u0438\u043a \u043d\u0430 \u0440\u043e\u0441\u0442 146 \u0444\u0438\u0440\u043c\u044b Demix.", "signer_id": 37549991, "attachments": [{"type": "photo", "photo": {"id": 456250863, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c834304/v834304814/ce6c9/LtkZdzGoyeA.jpg", "photo_130": 
"https://pp.userapi.com/c834304/v834304814/ce6ca/nCl25-FGgDQ.jpg", "photo_604": "https://pp.userapi.com/c834304/v834304814/ce6cb/Zy-R9JjWw_4.jpg", "photo_807": "https://pp.userapi.com/c834304/v834304814/ce6cc/aBrliiTmKFk.jpg", "photo_1280": "https://pp.userapi.com/c834304/v834304814/ce6cd/W50ecv28sgA.jpg", "photo_2560": "https://pp.userapi.com/c834304/v834304814/ce6ce/XygDn1mDs9I.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531134738, "access_key": "554cfe1fc2b3eaa3ab"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 1, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 399}}, {"id": 45645, "from_id": -61006621, "owner_id": -61006621, "date": 1531116940, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u0443\u0447\u0435\u0431\u043d\u0438\u043a\u0438, \u0436\u0443\u0440\u043d\u0430\u043b\u044b \u0430\u043d\u0433\u043b.", "signer_id": 6356076, "attachments": [{"type": "photo", "photo": {"id": 456250854, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c834202/v834202371/14297c/EH9E6jjDWas.jpg", "photo_130": "https://pp.userapi.com/c834202/v834202371/14297d/g-V-N4Qjt1o.jpg", "photo_604": "https://pp.userapi.com/c834202/v834202371/14297e/ZO2QMPYVBQ8.jpg", "photo_807": "https://pp.userapi.com/c834202/v834202371/14297f/cfBEUQYdnrY.jpg", "photo_1280": "https://pp.userapi.com/c834202/v834202371/142980/cMLlD7GnY1g.jpg", "photo_2560": "https://pp.userapi.com/c834202/v834202371/142981/620fwmsQ4K8.jpg", "width": 1632, "height": 1224, "text": "", "date": 1531105397, "access_key": "bade112b8e2b47f16d"}}, {"type": "photo", "photo": {"id": 456250855, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c850036/v850036371/22e3d/hk-lrMGShJg.jpg", "photo_130": 
"https://pp.userapi.com/c850036/v850036371/22e3e/OcJ2AVaFnvA.jpg", "photo_604": "https://pp.userapi.com/c850036/v850036371/22e3f/TeiFgYG_WzI.jpg", "photo_807": "https://pp.userapi.com/c850036/v850036371/22e40/DjClsiFkq7g.jpg", "photo_1280": "https://pp.userapi.com/c850036/v850036371/22e41/hPrmGsctPZc.jpg", "photo_2560": "https://pp.userapi.com/c850036/v850036371/22e42/koNBQdjAldc.jpg", "width": 1632, "height": 1224, "text": "", "date": 1531105397, "access_key": "b5a56d3d997bc65c91"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 698}}, {"id": 45642, "from_id": -61006621, "owner_id": -61006621, "date": 1531116929, "marked_as_ads": 0, "post_type": "post", "text": "\u0411\u043e\u0442\u0438\u043d\u043a\u0438 \u0434\u043b\u044f \u0434\u0435\u0432\u043e\u0447\u043a\u0438, 35 \u0440\u0430\u0437\u043c\u0435\u0440", "signer_id": 37549991, "attachments": [{"type": "photo", "photo": {"id": 456250856, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c830308/v830308680/13effe/_YAKCI49lic.jpg", "photo_130": "https://pp.userapi.com/c830308/v830308680/13efff/Af9PmtokkV8.jpg", "photo_604": "https://pp.userapi.com/c830308/v830308680/13f000/zkRe0ZyqhJ8.jpg", "photo_807": "https://pp.userapi.com/c830308/v830308680/13f001/h_809C5-6Xc.jpg", "photo_1280": "https://pp.userapi.com/c830308/v830308680/13f002/q5okz0hoK0Y.jpg", "photo_2560": "https://pp.userapi.com/c830308/v830308680/13f003/g2N-vpugcIE.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531110685, "lat": 56.229736, "long": 93.538349, "access_key": "5777cd0f6964f4965f"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 2, "groups_can_post": true, "can_post": 1}, "likes": {"count": 1, "user_likes": 0, "can_like": 1, "can_publish": 1}, 
"reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 619}}, {"id": 45640, "from_id": -61006621, "owner_id": -61006621, "date": 1531116923, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u043d\u0435\u043c\u043d\u043e\u0433\u043e \u0432\u0435\u0449\u0435\u0439 \u043d\u0430 \u043c\u0430\u043b\u044c\u0447\u0438\u043a\u0430 1,5-3 \u0433\u043e\u0434\u0430. \u0414\u043b\u044f \u0434\u043e\u043c\u0430 \u0438 \u0434\u0430\u0447\u0438. \u0417\u0430\u0431\u0438\u0440\u0430\u0442\u044c \u0436\u0435\u043b\u0430\u0442\u0435\u043b\u044c\u043d\u043e \u0441\u0435\u0433\u043e\u0434\u043d\u044f", "signer_id": 67389960, "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 1, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 591}}, {"id": 45639, "from_id": -61006621, "owner_id": -61006621, "date": 1531116918, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u0441\u0430\u043d\u0434\u0430\u043b\u0438,\u0441\u043e\u0432\u0435\u0442\u0441\u043a\u0438\u0435 12 \u0441\u043c \u043f\u043e \u0441\u0442\u0435\u043b\u044c\u043a\u0435,\u0441\u043e\u0432\u0435\u043d\u043e\u043a \u0440.14", "signer_id": 47838728, "attachments": [{"type": "photo", "photo": {"id": 456250858, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c846017/v846017897/919fb/mEUn60_V7nI.jpg", "photo_130": "https://pp.userapi.com/c846017/v846017897/919fc/8DUS3WLw-Ik.jpg", "photo_604": "https://pp.userapi.com/c846017/v846017897/919fd/-rd4iujfRTc.jpg", "photo_807": "https://pp.userapi.com/c846017/v846017897/919fe/2XD5uYjmmhU.jpg", "photo_1280": "https://pp.userapi.com/c846017/v846017897/919ff/rhLk44hJ_Fc.jpg", "photo_2560": "https://pp.userapi.com/c846017/v846017897/91a00/M4v92nH-C5M.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531116622, 
"access_key": "56a533234b8547be0e"}}, {"type": "photo", "photo": {"id": 456250859, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c846218/v846218897/972f2/A1G2JkzmKcg.jpg", "photo_130": "https://pp.userapi.com/c846218/v846218897/972f3/hgo7roXETSM.jpg", "photo_604": "https://pp.userapi.com/c846218/v846218897/972f4/XCujFkniwdk.jpg", "photo_807": "https://pp.userapi.com/c846218/v846218897/972f5/TiXoQMJ7xq4.jpg", "photo_1280": "https://pp.userapi.com/c846218/v846218897/972f6/dZRg7WG4m6Y.jpg", "photo_2560": "https://pp.userapi.com/c846218/v846218897/972f7/rdlO368kqnM.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531116622, "access_key": "ae0917d26fcf558fa0"}}, {"type": "photo", "photo": {"id": 456250860, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845420/v845420897/97a17/Yb5JTTSfpbo.jpg", "photo_130": "https://pp.userapi.com/c845420/v845420897/97a18/QCtOTSw2U1c.jpg", "photo_604": "https://pp.userapi.com/c845420/v845420897/97a19/ENoVrWzFPf4.jpg", "photo_807": "https://pp.userapi.com/c845420/v845420897/97a1a/mw1dNAHMct8.jpg", "photo_1280": "https://pp.userapi.com/c845420/v845420897/97a1b/cqx2gkTvGjQ.jpg", "photo_2560": "https://pp.userapi.com/c845420/v845420897/97a1c/m9dKL5ocGUQ.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531116622, "access_key": "92e935d641a8de785e"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 562}}, {"id": 45626, "from_id": -61006621, "owner_id": -61006621, "date": 1531095010, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u0434\u0435\u0442\u0441\u043a\u0438\u0435 \u0432\u0435\u0449\u0438 \u0441 \u0440\u043e\u0436\u0434\u0435\u043d\u0438\u044f. 
\u0421\u043e\u0441\u0442\u043e\u044f\u043d\u0438\u0435 \u0440\u0430\u0437\u043d\u043e\u0435", "signer_id": 68650358, "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 824}}, {"id": 45625, "from_id": -61006621, "owner_id": -61006621, "date": 1531095006, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c, \u0440\u0430\u0431\u043e\u0442\u0430\u0435\u0442, \u043d\u043e \u0441\u0431\u0438\u0432\u0430\u044e\u0442\u0441\u044f \u043a\u0430\u043d\u0430\u043b\u044b. \u0411\u0435\u0437 \u043f\u0443\u043b\u044c\u0442\u0430.", "signer_id": 160831686, "attachments": [{"type": "photo", "photo": {"id": 456250848, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c846520/v846520490/971cd/4CqM5F1gaiw.jpg", "photo_130": "https://pp.userapi.com/c846520/v846520490/971ce/bCnsryiN3aU.jpg", "photo_604": "https://pp.userapi.com/c846520/v846520490/971cf/2glPMkC-DhA.jpg", "photo_807": "https://pp.userapi.com/c846520/v846520490/971d0/nFEpAGlzhSw.jpg", "photo_1280": "https://pp.userapi.com/c846520/v846520490/971d1/qLJB7cI5ke8.jpg", "photo_2560": "https://pp.userapi.com/c846520/v846520490/971d2/MhCKfHBpVJc.jpg", "width": 720, "height": 1280, "text": "", "date": 1531073714, "access_key": "bcd976249a8d9fcd82"}}, {"type": "photo", "photo": {"id": 456250849, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c846220/v846220490/96511/0jNBsB9TRLs.jpg", "photo_130": "https://pp.userapi.com/c846220/v846220490/96512/KNKiZDhQ8GU.jpg", "photo_604": "https://pp.userapi.com/c846220/v846220490/96513/ElOzb7al-GQ.jpg", "photo_807": "https://pp.userapi.com/c846220/v846220490/96514/Px5DAtkgGOo.jpg", "photo_1280": "https://pp.userapi.com/c846220/v846220490/96515/62TA99YB8Dg.jpg", "photo_2560": 
"https://pp.userapi.com/c846220/v846220490/96516/5ieJUEv0tFI.jpg", "width": 720, "height": 1280, "text": "", "date": 1531073714, "access_key": "051ab6f0a045d2bf5d"}}, {"type": "photo", "photo": {"id": 456250850, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c845221/v845221490/95981/vx-jVHV37W8.jpg", "photo_130": "https://pp.userapi.com/c845221/v845221490/95982/4at8rG1jArk.jpg", "photo_604": "https://pp.userapi.com/c845221/v845221490/95983/ZAe95ihABDg.jpg", "photo_807": "https://pp.userapi.com/c845221/v845221490/95984/DVY7tUlB_Iw.jpg", "photo_1280": "https://pp.userapi.com/c845221/v845221490/95985/hhdgV9mjCg4.jpg", "width": 1280, "height": 720, "text": "", "date": 1531073714, "access_key": "b17747f96efd11602c"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 3, "groups_can_post": true, "can_post": 1}, "likes": {"count": 3, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 1013}}, {"id": 45621, "from_id": -61006621, "owner_id": -61006621, "date": 1531064418, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u043c\u044f\u0433\u043a\u0438\u0435 \u0438\u0433\u0440\u0443\u0448\u043a\u0438,\u043e\u0447\u0435\u043d\u044c \u043c\u043d\u043e\u0433\u043e!:)", "signer_id": 320139240, "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 2, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 911}}, {"id": 45619, "from_id": -61006621, "owner_id": -61006621, "date": 1531064411, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c. 
\u0412\u043e\u043f\u0440\u043e\u0441\u044b \u0432 \u043b\u0441.", "signer_id": 461064610, "attachments": [{"type": "photo", "photo": {"id": 456250842, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c830609/v830609641/13b4cb/z86q6mtyV0M.jpg", "photo_130": "https://pp.userapi.com/c830609/v830609641/13b4cc/a4ZXBqyl1Yw.jpg", "photo_604": "https://pp.userapi.com/c830609/v830609641/13b4cd/FtOaYsb__3g.jpg", "photo_807": "https://pp.userapi.com/c830609/v830609641/13b4ce/0XhH_CbZD5M.jpg", "photo_1280": "https://pp.userapi.com/c830609/v830609641/13b4cf/QLD5SA5SDDg.jpg", "width": 960, "height": 720, "text": "", "date": 1531057446, "lat": 56.22605, "long": 93.514991, "access_key": "b54dbf970766adc03d"}}, {"type": "photo", "photo": {"id": 456250843, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c849228/v849228641/22d06/deNl8fUY4GQ.jpg", "photo_130": "https://pp.userapi.com/c849228/v849228641/22d07/HgLrR2qWvKs.jpg", "photo_604": "https://pp.userapi.com/c849228/v849228641/22d08/n6tdt5Ov8Tw.jpg", "photo_807": "https://pp.userapi.com/c849228/v849228641/22d09/guMiRAsADx4.jpg", "photo_1280": "https://pp.userapi.com/c849228/v849228641/22d0a/JCY5zZrWKqM.jpg", "width": 847, "height": 960, "text": "", "date": 1531057446, "lat": 56.226, "long": 93.514986, "access_key": "f8af1007c75bdc8c75"}}, {"type": "photo", "photo": {"id": 456250844, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c834202/v834202641/14f159/IWc1_CnMKb8.jpg", "photo_130": "https://pp.userapi.com/c834202/v834202641/14f15a/71bwlYQGXYI.jpg", "photo_604": "https://pp.userapi.com/c834202/v834202641/14f15b/o7bsr8TIBr8.jpg", "photo_807": "https://pp.userapi.com/c834202/v834202641/14f15c/5UEryIqh8G8.jpg", "photo_1280": "https://pp.userapi.com/c834202/v834202641/14f15d/DzRHh6mjjmg.jpg", "width": 960, "height": 720, "text": "", "date": 1531057446, "lat": 56.226076, "long": 93.514949, "access_key": 
"d7ee797a1d164fb91b"}}, {"type": "photo", "photo": {"id": 456250845, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c834202/v834202641/14f162/Zb2KN8us58k.jpg", "photo_130": "https://pp.userapi.com/c834202/v834202641/14f163/0cHG7gneUBY.jpg", "photo_604": "https://pp.userapi.com/c834202/v834202641/14f164/5P9XOg8Ex7g.jpg", "photo_807": "https://pp.userapi.com/c834202/v834202641/14f165/3SoNiUNuH0s.jpg", "photo_1280": "https://pp.userapi.com/c834202/v834202641/14f166/_e7_LGa4O90.jpg", "width": 720, "height": 960, "text": "", "date": 1531057446, "lat": 56.226076, "long": 93.514949, "access_key": "b1d6b67390bd4aea34"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 1, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 843}}, {"id": 45618, "from_id": -61006621, "owner_id": -61006621, "date": 1531064403, "marked_as_ads": 0, "post_type": "post", "text": "\u043e\u0442\u0434\u0430\u043c 26\u0440", "signer_id": 296641548, "attachments": [{"type": "photo", "photo": {"id": 456250846, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c849428/v849428910/2391a/1m9yd4uc9K8.jpg", "photo_130": "https://pp.userapi.com/c849428/v849428910/2391b/dnQgYrxQBxY.jpg", "photo_604": "https://pp.userapi.com/c849428/v849428910/2391c/jXn_2Nu-5pQ.jpg", "photo_807": "https://pp.userapi.com/c849428/v849428910/2391d/6NlqC4426kQ.jpg", "photo_1280": "https://pp.userapi.com/c849428/v849428910/2391e/9C5Sb0j72-o.jpg", "photo_2560": "https://pp.userapi.com/c849428/v849428910/2391f/a4yPnuBy3gE.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531059357, "lat": 56.241253, "long": 93.545269, "access_key": "704e2ea525e09472a4"}}, {"type": "photo", "photo": {"id": 456250847, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": 
"https://pp.userapi.com/c845123/v845123910/94a2e/Jd1klrCPvJg.jpg", "photo_130": "https://pp.userapi.com/c845123/v845123910/94a2f/BivE5PGJMdw.jpg", "photo_604": "https://pp.userapi.com/c845123/v845123910/94a30/YCjkIJ4q1_Q.jpg", "photo_807": "https://pp.userapi.com/c845123/v845123910/94a31/LFlnlMd0ImI.jpg", "photo_1280": "https://pp.userapi.com/c845123/v845123910/94a32/RtK0I3LatgQ.jpg", "photo_2560": "https://pp.userapi.com/c845123/v845123910/94a33/9VcLNJ8bb-4.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531059357, "lat": 56.241253, "long": 93.545269, "access_key": "5e55d348f3916f7168"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 784}}, {"id": 45605, "from_id": -61006621, "owner_id": -61006621, "date": 1531031760, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u043c\u043e\u0436\u0435\u0442 \u043d\u0430\u0434\u043e \u043a\u043e\u043c\u0443", "signer_id": 34210140, "attachments": [{"type": "photo", "photo": {"id": 456250834, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c846524/v846524785/953a8/ZDgaFl_oLrE.jpg", "photo_130": "https://pp.userapi.com/c846524/v846524785/953a9/2KuTkingSSY.jpg", "photo_604": "https://pp.userapi.com/c846524/v846524785/953aa/N6hC580sBLI.jpg", "photo_807": "https://pp.userapi.com/c846524/v846524785/953ab/0v2uICqNf94.jpg", "photo_1280": "https://pp.userapi.com/c846524/v846524785/953ac/glDpOYMJoLM.jpg", "photo_2560": "https://pp.userapi.com/c846524/v846524785/953ad/O-4-VTx8HH0.jpg", "width": 1620, "height": 2160, "text": "", "date": 1531023767, "access_key": "bcb325c626702caf95"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 4, "user_likes": 0, 
"can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 1277}}, {"id": 45604, "from_id": -61006621, "owner_id": -61006621, "date": 1531031755, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u0432\u0435\u0449\u0438. \u041f\u043e \u0432\u043e\u043f\u0440\u043e\u0441\u0430\u043c \u043e\u0431\u0440\u0430\u0449\u0430\u0442\u044c\u0441\u044f \u0432 \u041b\u0421.", "signer_id": 28024669, "attachments": [{"type": "photo", "photo": {"id": 456250835, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c844722/v844722474/9b70b/xa5XTmhUEaI.jpg", "photo_130": "https://pp.userapi.com/c844722/v844722474/9b70c/s24zvEIeE5Q.jpg", "photo_604": "https://pp.userapi.com/c844722/v844722474/9b70d/vtYaFUo01FA.jpg", "photo_807": "https://pp.userapi.com/c844722/v844722474/9b70e/PQCPUV94OP4.jpg", "photo_1280": "https://pp.userapi.com/c844722/v844722474/9b70f/N_XIwY3KhLI.jpg", "photo_2560": "https://pp.userapi.com/c844722/v844722474/9b710/ZL7Mk8E5CHQ.jpg", "width": 2560, "height": 1920, "text": "\u0420\u0430\u0437\u043c\u0435\u0440 42-44", "date": 1531027304, "access_key": "a80b13adbb3cd5b0a7"}}, {"type": "photo", "photo": {"id": 456250836, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c844722/v844722474/9b701/VygAKhgTZ2M.jpg", "photo_130": "https://pp.userapi.com/c844722/v844722474/9b702/F3zMOtQuXIM.jpg", "photo_604": "https://pp.userapi.com/c844722/v844722474/9b703/cbrdIsjpCe4.jpg", "photo_807": "https://pp.userapi.com/c844722/v844722474/9b704/4qyKw6NYZpw.jpg", "photo_1280": "https://pp.userapi.com/c844722/v844722474/9b705/UFvoy9E5PzA.jpg", "photo_2560": "https://pp.userapi.com/c844722/v844722474/9b706/rfIsGb2v4Dg.jpg", "width": 2560, "height": 1920, "text": "\u0420\u0430\u0437\u043c\u0435\u0440 42-44", "date": 1531027304, "access_key": "25817f585bc5d6aa24"}}, {"type": "photo", "photo": {"id": 456250837, "album_id": -7, "owner_id": 
-61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c844722/v844722474/9b6f7/bDPixDDiGPw.jpg", "photo_130": "https://pp.userapi.com/c844722/v844722474/9b6f8/8dGHW4HiU_4.jpg", "photo_604": "https://pp.userapi.com/c844722/v844722474/9b6f9/4536MNgVTuA.jpg", "photo_807": "https://pp.userapi.com/c844722/v844722474/9b6fa/ZVPTTPE-JYU.jpg", "photo_1280": "https://pp.userapi.com/c844722/v844722474/9b6fb/qfBvAiIvR5c.jpg", "photo_2560": "https://pp.userapi.com/c844722/v844722474/9b6fc/W9A14HdgBrg.jpg", "width": 2560, "height": 1920, "text": "\u0428\u0442\u0430\u043d\u044b \u0443\u0442\u0435\u043f\u043b\u0435\u043d\u043d\u044b\u0435 \u043d\u0430 \u0444\u043b\u0438\u0441\u0435. \u0420\u0430\u0437\u043c\u0435\u0440 38-40.\n\u041a\u043e\u0444\u0442\u0430 - \u0440\u0430\u0437\u043c\u0435\u0440 42.", "date": 1531027304, "access_key": "538e08d48e491a09e9"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 1, "groups_can_post": true, "can_post": 1}, "likes": {"count": 2, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 1073}}, {"id": 45603, "from_id": -61006621, "owner_id": -61006621, "date": 1531031751, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u0434\u0430\u0440\u043e\u043c,\u0441\u0442\u0430\u0440\u043e\u0435 \u043a\u0440\u0435\u0441\u043b\u043e \u043d\u0430 \u0434\u0430\u0447\u0443,\u0441\u0430\u043c\u043e\u0432\u044b\u0432\u043e\u0437 ,\u0432\u0441\u0435 \u0432\u043e\u043f\u0440\u043e\u0441\u044b \u043f\u043e \u0442 .89232191096.", "signer_id": 317325265, "attachments": [{"type": "photo", "photo": {"id": 456250838, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c824411/v824411812/184278/7EjzpR5KT78.jpg", "photo_130": "https://pp.userapi.com/c824411/v824411812/184279/Km_7WnAG7tY.jpg", "photo_604": "https://pp.userapi.com/c824411/v824411812/18427a/dR03T4mHrkQ.jpg", "photo_807": 
"https://pp.userapi.com/c824411/v824411812/18427b/UkI2o8WjSAs.jpg", "photo_1280": "https://pp.userapi.com/c824411/v824411812/18427c/vLevI9JBxgw.jpg", "photo_2560": "https://pp.userapi.com/c824411/v824411812/18427d/iOYcGFCJOq4.jpg", "width": 1080, "height": 1800, "text": "", "date": 1531031609, "access_key": "010d91fecccc8ba85a"}}, {"type": "photo", "photo": {"id": 456250839, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://sun9-9.userapi.com/c834204/v834204812/186ce3/h1a76A_nuEo.jpg", "photo_130": "https://sun9-7.userapi.com/c834204/v834204812/186ce4/PducVEcTtXo.jpg", "photo_604": "https://sun9-4.userapi.com/c834204/v834204812/186ce5/_OAusqCr5eM.jpg", "photo_807": "https://sun9-8.userapi.com/c834204/v834204812/186ce6/OauVC_pnF_o.jpg", "photo_1280": "https://sun9-9.userapi.com/c834204/v834204812/186ce7/r4brGN_izkM.jpg", "photo_2560": "https://sun9-3.userapi.com/c834204/v834204812/186ce8/YcLe2h090Xc.jpg", "width": 1080, "height": 1800, "text": "", "date": 1531031609, "access_key": "404c5587d3d97b3411"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 1, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 997}}, {"id": 45580, "from_id": -61006621, "owner_id": -61006621, "date": 1530963156, "marked_as_ads": 0, "post_type": "post", "text": "39 \u0440\u0430\u0437\u043c\u0435\u0440", "signer_id": 266234913, "attachments": [{"type": "photo", "photo": {"id": 456250830, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c844521/v844521324/7c836/QLPbU1IYt1U.jpg", "photo_130": "https://pp.userapi.com/c844521/v844521324/7c837/Np98xmLfPUY.jpg", "photo_604": "https://pp.userapi.com/c844521/v844521324/7c838/o8xQBkuVvoY.jpg", "photo_807": "https://pp.userapi.com/c844521/v844521324/7c839/2QC84ndQExI.jpg", "photo_1280": 
"https://pp.userapi.com/c844521/v844521324/7c83a/RLBbDKy0cog.jpg", "photo_2560": "https://pp.userapi.com/c844521/v844521324/7c83b/d0hz9zqdofo.jpg", "width": 1620, "height": 2160, "text": "", "date": 1530962533, "post_id": 45576, "access_key": "1a14111b0eedb7b4dd"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 2, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 1318}}, {"id": 45572, "from_id": -61006621, "owner_id": -61006621, "date": 1530962339, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c \u043a\u0440\u0435\u0441\u043b\u043e. \u0441.\u0442. 8 902 942 5999 (\u043f\u043e\u0441\u043b\u0435 14-00 )", "signer_id": 187411503, "attachments": [{"type": "photo", "photo": {"id": 456250828, "album_id": -7, "owner_id": -61006621, "user_id": 100, "photo_75": "https://pp.userapi.com/c834104/v834104811/188d65/gwiWye3RSoA.jpg", "photo_130": "https://pp.userapi.com/c834104/v834104811/188d66/4JesGGL0MLI.jpg", "photo_604": "https://pp.userapi.com/c834104/v834104811/188d67/fSqT82mHc10.jpg", "photo_807": "https://pp.userapi.com/c834104/v834104811/188d68/qKepXy7x_fs.jpg", "photo_1280": "https://pp.userapi.com/c834104/v834104811/188d69/ea57CuCFE88.jpg", "photo_2560": "https://pp.userapi.com/c834104/v834104811/188d6a/BHF171Dmado.jpg", "width": 1620, "height": 2160, "text": "", "date": 1530961046, "access_key": "dd6b200698ce9a57dc"}}], "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 1173}}, {"id": 45564, "from_id": -61006621, "owner_id": -61006621, "date": 1530936269, "marked_as_ads": 0, "post_type": "post", "text": "\u041e\u0442\u0434\u0430\u043c 
\u0442\u0440\u043e\u0439\u043d\u0438\u043a \u0441 \u0445\u0432\u043e\u0441\u0442\u043e\u043c \u043c\u0435\u0442\u0440\u0430 \u0434\u0432\u0430. \n\u041a\u0440\u043e\u043b\u0438\u043a \u043f\u0435\u0440\u0435\u0433\u0440\u044b\u0437 \u043f\u0440\u043e\u0432\u043e\u0434, \u043f\u0440\u0438\u0448\u043b\u043e\u0441\u044c \u043e\u0442\u0440\u0435\u0437\u0430\u0442\u044c \u0432\u0438\u043b\u043a\u0443. :)))\n\u0423\u043c\u0435\u043b\u044c\u0446\u044b \u043b\u0435\u0433\u043a\u043e \u043f\u0440\u0438\u043a\u0440\u0443\u0442\u044f\u0442 \u0432\u0438\u043b\u043a\u0443 \u0438 \u043c\u043e\u0436\u043d\u043e \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u044c\u0441\u044f.", "signer_id": 227037173, "post_source": {"type": "api", "platform": "android"}, "comments": {"count": 0, "groups_can_post": true, "can_post": 1}, "likes": {"count": 0, "user_likes": 0, "can_like": 1, "can_publish": 1}, "reposts": {"count": 0, "user_reposted": 0}, "views": {"count": 1267}}]}
'''
@dataclass(frozen=True)
class JsonList(UJsonMixin):
x: List[int]
@dataclass(frozen=True)
class JsonSimple(UJsonMixin):
x: int
@dataclass(frozen=True)
class JsonDict(UJsonMixin):
x: Dict[str, int]
@dataclass(frozen=True)
class JsonNoTypingDict(UJsonMixin):
x: dict
@dataclass(frozen=True)
class JsonNoTypingList(UJsonMixin):
x: list
@dataclass(frozen=True)
class JsonNested(UJsonMixin):
a: JsonSimple
b: JsonList
c: Optional[JsonNoTypingDict]
@dataclass(frozen=True)
class JsonUnion(UJsonMixin):
a: Union[str, int]
b: Union[dict, list]
c: Union[Dict[str, int], List[int]]
@dataclass(frozen=True)
class JsonListNested(UJsonMixin):
a: JsonSimple
b: List[JsonList]
c: List[JsonNoTypingDict]
@dataclass(frozen=True)
class JsonSimpleOptional(UJsonMixin):
x: int
y: Optional[int]
@dataclass(frozen=True)
class JsonSimpleNotOptional(UJsonMixin):
x: int
y: int
@dataclass(frozen=True)
class WallAttachmentPhoto(UJsonMixin):
id: int
album_id: int
owner_id: int
user_id: int
width: int
height: int
text: str
date: int
access_key: str
photo_75: str
photo_130: str
photo_604: str
photo_807: str
photo_1280: str
@dataclass(frozen=True)
class WallAttachment(UJsonMixin):
type: str
photo: WallAttachmentPhoto
@dataclass(frozen=True)
class WallItem(UJsonMixin):
id: int
from_id: int
owner_id: int
date: int
marked_as_ads: int
post_type: str
text: str
can_pin: Optional[bool]
attachments: List[WallAttachment]
@dataclass(frozen=True)
class WallResponse(UJsonMixin):
count: int
items: List[WallItem]
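The model classes above rely on `UJsonMixin` (defined elsewhere in this test suite) for JSON round-tripping. As a rough standalone sketch, plain `dataclasses` can already serialize the same nested shapes via `asdict`; the `Simple` and `Nested` names below are illustrative stand-ins, not part of the suite:

```python
from dataclasses import dataclass, asdict
from typing import Dict, List


@dataclass(frozen=True)
class Simple:
    x: int


@dataclass(frozen=True)
class Nested:
    a: Simple
    b: List[int]
    c: Dict[str, int]


# asdict recursively converts nested dataclasses into plain dicts,
# which is the shape a JSON mixin would dump.
n = Nested(a=Simple(x=1), b=[2, 3], c={'k': 4})
d = asdict(n)
```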
| 302.368852 | 34,466 | 0.707121 | 5,125 | 36,889 | 4.962927 | 0.154927 | 0.068803 | 0.089719 | 0.108944 | 0.644034 | 0.608178 | 0.515471 | 0.378376 | 0.351523 | 0.27128 | 0 | 0.289052 | 0.068259 | 36,889 | 121 | 34,467 | 304.867769 | 0.450962 | 0 | 0 | 0.322222 | 0 | 0.011111 | 0.940497 | 0.144813 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.688889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
916592bb8806dd0e52465d627037ff9315a33498 | 99 | py | Python | test/unit/jobs/test_rules/20_instance.py | blankenberg/galaxy-data-resource | ca32a1aafd64948f489a4e5cf88096f32391b1d9 | [
"CC-BY-3.0"
] | null | null | null | test/unit/jobs/test_rules/20_instance.py | blankenberg/galaxy-data-resource | ca32a1aafd64948f489a4e5cf88096f32391b1d9 | [
"CC-BY-3.0"
] | 1 | 2015-02-21T18:48:19.000Z | 2015-02-27T15:50:32.000Z | test/unit/jobs/test_rules/20_instance.py | blankenberg/galaxy-data-resource | ca32a1aafd64948f489a4e5cf88096f32391b1d9 | [
"CC-BY-3.0"
] | 3 | 2015-02-22T13:34:16.000Z | 2020-10-01T01:28:04.000Z |
def tophat():
# This should override definition in 10_site.py
return 'instance_dest_id'
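The comment above relies on rule files being collected in lexical filename order, so a later-numbered module's `tophat` shadows the one from `10_site.py`. A minimal sketch of that override mechanic; the dicts below are hypothetical stand-ins for the loaded rule modules, not Galaxy's actual loader:

```python
# Definitions are merged in filename order, so the last module wins.
rules = {}
for module_funcs in (
        {'tophat': lambda: 'site_dest_id'},      # stand-in for 10_site.py
        {'tophat': lambda: 'instance_dest_id'},  # stand-in for 20_instance.py
):
    rules.update(module_funcs)

dest = rules['tophat']()
```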
| 14.142857 | 51 | 0.707071 | 14 | 99 | 4.785714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025974 | 0.222222 | 99 | 6 | 52 | 16.5 | 0.844156 | 0.454545 | 0 | 0 | 0 | 0 | 0.326531 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
91714cc04e9e330b6d7b00399bec5048c2c801fb | 197 | py | Python | stories/admin.py | HamdaniFatima/WorkShop | cd3f682a4d8b1abf3b03f80e96aa50ce5b48062e | [
"bzip2-1.0.6"
] | null | null | null | stories/admin.py | HamdaniFatima/WorkShop | cd3f682a4d8b1abf3b03f80e96aa50ce5b48062e | [
"bzip2-1.0.6"
] | null | null | null | stories/admin.py | HamdaniFatima/WorkShop | cd3f682a4d8b1abf3b03f80e96aa50ce5b48062e | [
"bzip2-1.0.6"
] | null | null | null |
from django.contrib import admin
from stories.models import Post, registration
# Register your models here.
admin.site.register(registration)
admin.site.register(Post) | 19.7 | 39 | 0.822335 | 27 | 197 | 6 | 0.481481 | 0.135802 | 0.209877 | 0.283951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111675 | 197 | 10 | 40 | 19.7 | 0.925714 | 0.13198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
91a105c9abd20235f47bb11aa40b9c9273c413c4 | 119 | py | Python | webware/MiddleKit/Core/AnyDateTimeAttr.py | PeaceWorksTechnologySolutions/w4py3-middlekit | a9554e20c47010e7b0c0deee63e1786482c59a1c | [
"MIT"
] | 2 | 2020-10-31T09:12:58.000Z | 2021-02-20T13:52:14.000Z | webware/MiddleKit/Core/AnyDateTimeAttr.py | WebwareForPython/w4py3-middlekit | f740e2d2d3a5c225d6b8f9eb27ac08f8deed47e6 | [
"MIT"
] | 2 | 2020-01-07T15:24:09.000Z | 2020-01-08T15:39:57.000Z | webware/MiddleKit/Core/AnyDateTimeAttr.py | PeaceWorksTechnologySolutions/w4py3-middlekit | a9554e20c47010e7b0c0deee63e1786482c59a1c | [
"MIT"
] | 1 | 2021-09-27T21:04:18.000Z | 2021-09-27T21:04:18.000Z |
from .Attr import Attr
class AnyDateTimeAttr(Attr):
def __init__(self, attr):
Attr.__init__(self, attr)
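The explicit `Attr.__init__(self, attr)` call predates the zero-argument `super()` idiom; the delegation is equivalent. A minimal sketch, where the `Attr` stand-in below is illustrative and not MiddleKit's real class:

```python
class Attr:
    def __init__(self, attr):
        # stand-in: the real MiddleKit Attr consumes an attribute spec
        self.attr = attr


class AnyDateTimeAttr(Attr):
    def __init__(self, attr):
        super().__init__(attr)  # same effect as Attr.__init__(self, attr)


a = AnyDateTimeAttr({'Name': 'when'})
```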
| 14.875 | 33 | 0.680672 | 15 | 119 | 4.866667 | 0.533333 | 0.219178 | 0.328767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218487 | 119 | 7 | 34 | 17 | 0.784946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
91bfa0de0d867a8ad4d7d17894fb55a9c83258ff | 49,852 | py | Python | tests/test_blobxfer_operations_upload.py | temporaer/blobxfer | 8602006192c0f8f7bb078e3d6da20396c07f302a | [
"MIT"
] | null | null | null | tests/test_blobxfer_operations_upload.py | temporaer/blobxfer | 8602006192c0f8f7bb078e3d6da20396c07f302a | [
"MIT"
] | null | null | null | tests/test_blobxfer_operations_upload.py | temporaer/blobxfer | 8602006192c0f8f7bb078e3d6da20396c07f302a | [
"MIT"
] | null | null | null |
# coding=utf-8
"""Tests for upload operations"""
# stdlib imports
import datetime
try:
import unittest.mock as mock
except ImportError: # noqa
import mock
try:
import pathlib2 as pathlib
except ImportError: # noqa
import pathlib
# non-stdlib imports
import azure.storage.blob
import pytest
# local imports
import blobxfer.models.azure as azmodels
import blobxfer.models.upload as models
import blobxfer.util as util
# module under test
import blobxfer.operations.upload as ops
def test_termination_check():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
assert not u.termination_check
assert not u.termination_check_md5
def test_create_unique_id():
src = mock.MagicMock()
src.absolute_path = 'abspath'
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
id = ops.Uploader.create_unique_id(src, ase)
assert id == 'abspath;ep;asepath'
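The asserted format suggests the unique id is simply the source's absolute path, the client's primary endpoint, and the remote path joined with semicolons. A standalone sketch of that scheme, not the actual `Uploader` implementation:

```python
def create_unique_id(absolute_path, endpoint, remote_path):
    # mirrors the 'abspath;ep;asepath' shape asserted in the test above
    return ';'.join((absolute_path, endpoint, remote_path))


uid = create_unique_id('abspath', 'ep', 'asepath')
```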
def test_create_unique_transfer_id():
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
offsets = mock.MagicMock()
offsets.range_start = 10
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
id = ops.Uploader.create_unique_transfer_id(lp, ase, offsets)
assert id == 'lpabspath;ep;asepath;0;10'
def test_create_destination_id():
client = mock.MagicMock()
client.primary_endpoint = 'ep'
id = ops.Uploader.create_destination_id(client, 'cont', 'name')
assert id == 'ep;cont;name'
def test_append_slice_suffix_to_name():
name = ops.Uploader.append_slice_suffix_to_name('name', 0)
assert name == 'name.bxslice-0'
def test_update_progress_bar():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
with mock.patch(
'blobxfer.operations.progress.update_progress_bar') as patched_upb:
u._all_files_processed = False
u._update_progress_bar()
assert patched_upb.call_count == 0
u._all_files_processed = True
u._update_progress_bar()
assert patched_upb.call_count == 1
def test_pre_md5_skip_on_check():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
src = mock.MagicMock()
src.absolute_path = 'abspath'
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
u._md5_offload = mock.MagicMock()
u._pre_md5_skip_on_check(src, ase)
assert len(u._md5_map) == 1
assert u._md5_offload.add_localfile_for_md5_check.call_count == 1
def test_post_md5_skip_on_check():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
src = mock.MagicMock()
src.absolute_path = 'abspath'
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
id = ops.Uploader.create_unique_id(src, ase)
u._md5_map[id] = (src, ase, 'md5')
u._upload_set.add(id)
u._upload_total += 1
u._general_options.dry_run = True
u._post_md5_skip_on_check(id, 'md5', True)
assert len(u._md5_map) == 0
assert id not in u._upload_set
assert u._upload_total == 0
u._general_options.dry_run = False
u._md5_map[id] = (src, ase, 'md5')
u._upload_set.add(id)
u._upload_total += 1
u._add_to_upload_queue = mock.MagicMock()
u._post_md5_skip_on_check(id, 'lmd5', False)
assert len(u._md5_map) == 0
assert id in u._upload_set
assert u._upload_total == 1
assert u._add_to_upload_queue.call_count == 1
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = True
u._md5_map[id] = (src, ase, 'md5')
u._upload_set.add(id)
u._upload_total += 1
u._add_to_upload_queue = mock.MagicMock()
u._post_md5_skip_on_check(id, 'lmd5', False)
assert len(u._md5_map) == 0
assert id not in u._upload_set
assert u._upload_total == 0
def test_check_for_uploads_from_md5():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._md5_offload = mock.MagicMock()
u._post_md5_skip_on_check = mock.MagicMock()
with mock.patch(
'blobxfer.operations.upload.Uploader.termination_check_md5',
new_callable=mock.PropertyMock) as patched_tcm:
patched_tcm.side_effect = [False, False, False, True, True]
u._md5_offload.pop_done_queue.side_effect = [
None, mock.MagicMock(), None
]
u._check_for_uploads_from_md5()
assert u._post_md5_skip_on_check.call_count == 1
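Several tests in this file drive polling loops by patching a read-only property with `mock.PropertyMock` and feeding it a `side_effect` sequence, so each attribute access pops the next value. A self-contained illustration of the pattern; the `Worker` class below is hypothetical:

```python
import unittest.mock as mock


class Worker:
    @property
    def termination_check(self):
        return False

    def run(self):
        # poll until the termination property flips to True
        polls = 0
        while not self.termination_check:
            polls += 1
        return polls


# False, False, then True: the loop body executes exactly twice.
with mock.patch.object(
        Worker, 'termination_check',
        new_callable=mock.PropertyMock) as patched:
    patched.side_effect = [False, False, True]
    result = Worker().run()
```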
def test_add_to_upload_queue():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.chunk_size_bytes = 32
src = mock.MagicMock()
src.absolute_path = 'abspath'
src.size = 32
src.use_stdin = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.encryption_metadata.symmetric_key = 'abc'
id = ops.Uploader.create_unique_id(src, ase)
u._add_to_upload_queue(src, ase, id)
assert len(u._ud_map) == 1
assert u._upload_queue.qsize() == 1
assert u._upload_start_time is not None
def test_initialize_disk_threads():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
try:
u._initialize_disk_threads()
assert len(u._disk_threads) == 1
finally:
u._wait_for_disk_threads(True)
for thr in u._disk_threads:
assert not thr.is_alive()
def test_initialize_transfer_threads():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
try:
u._initialize_transfer_threads()
assert len(u._transfer_threads) == 1
finally:
u._wait_for_transfer_threads(True)
for thr in u._transfer_threads:
assert not thr.is_alive()
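The two initialization tests above exercise a start/terminate/join lifecycle and then assert `is_alive()` is false. A minimal sketch of that pattern with stdlib threading; this is not blobxfer's actual implementation:

```python
import threading

stop = threading.Event()


def worker():
    # stand-in for a work loop that honors a termination signal
    stop.wait()


threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

stop.set()        # signal termination
for t in threads:
    t.join()      # wait for threads to exit

alive = any(t.is_alive() for t in threads)
```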
def test_worker_thread_transfer():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._transfer_queue.put(
(mock.MagicMock, mock.MagicMock, mock.MagicMock, mock.MagicMock)
)
u._transfer_queue.put(
(mock.MagicMock, mock.MagicMock, mock.MagicMock, mock.MagicMock)
)
u._process_transfer = mock.MagicMock()
u._process_transfer.side_effect = [None, Exception()]
with mock.patch(
'blobxfer.operations.upload.Uploader.termination_check',
new_callable=mock.PropertyMock) as patched_tc:
patched_tc.side_effect = [False, False, True]
u._worker_thread_transfer()
assert u._process_transfer.call_count == 2
assert len(u._exceptions) == 1
def test_process_transfer():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._put_data = mock.MagicMock()
u._update_progress_bar = mock.MagicMock()
offsets = mock.MagicMock()
offsets.chunk_num = 0
offsets.num_bytes = 1
offsets.range_start = 10
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = True
ud = mock.MagicMock()
ud.entity.mode = azmodels.StorageModes.Append
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
id = ops.Uploader.create_unique_transfer_id(lp, ase, offsets)
u._transfer_set.add(id)
u._process_transfer(ud, ase, offsets, mock.MagicMock())
assert u._upload_bytes_total == 1
assert u._upload_bytes_sofar == 1
assert len(u._transfer_set) == 0
assert ud.complete_offset_upload.call_count == 1
assert u._upload_queue.qsize() == 1
assert u._update_progress_bar.call_count == 1
lp.use_stdin = False
u._transfer_set.add(id)
u._process_transfer(ud, ase, offsets, mock.MagicMock())
assert u._upload_bytes_total == 11
assert u._upload_bytes_sofar == 2
assert len(u._transfer_set) == 0
assert ud.complete_offset_upload.call_count == 2
assert u._upload_queue.qsize() == 2
assert u._update_progress_bar.call_count == 2
@mock.patch('blobxfer.operations.azure.blob.append.append_block')
@mock.patch('blobxfer.operations.azure.blob.block.create_blob')
@mock.patch('blobxfer.operations.azure.blob.block.put_block')
@mock.patch('blobxfer.operations.azure.file.put_file_range')
@mock.patch('blobxfer.operations.azure.blob.page.put_page')
def test_put_data(pp, pfr, pb, cb, ab):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
offsets = mock.MagicMock()
offsets.chunk_num = 0
offsets.num_bytes = 1
offsets.range_start = 10
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = True
ud = mock.MagicMock()
ud.entity.mode = azmodels.StorageModes.Append
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ase.mode = azmodels.StorageModes.Append
u._put_data(ud, ase, offsets, b'\0')
assert ab.call_count == 1
ase.mode = azmodels.StorageModes.Block
ud.is_one_shot_block_blob = True
ud.entity.is_encrypted = False
ud.must_compute_md5 = True
ud.md5.digest.return_value = b'md5'
u._put_data(ud, ase, offsets, b'\0')
assert cb.call_count == 1
ud.must_compute_md5 = False
u._put_data(ud, ase, offsets, b'\0')
assert cb.call_count == 2
ud.is_one_shot_block_blob = False
u._put_data(ud, ase, offsets, b'\0')
assert pb.call_count == 1
ase.mode = azmodels.StorageModes.File
u._put_data(ud, ase, offsets, b'\0')
assert pfr.call_count == 1
ase.mode = azmodels.StorageModes.Page
u._put_data(ud, ase, offsets, None)
assert pp.call_count == 0
ase.mode = azmodels.StorageModes.Page
u._put_data(ud, ase, offsets, b'\0')
assert pp.call_count == 0
ase.mode = azmodels.StorageModes.Page
u._put_data(ud, ase, offsets, b'1')
assert pp.call_count == 1
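The assertions above exercise a dispatch on the entity's storage mode, with page uploads skipped for empty or all-zero chunks (hence `pp.call_count` stays 0 until real data arrives). A hypothetical mirror of that dispatch, not blobxfer's actual `_put_data`:

```python
import enum


class StorageModes(enum.Enum):
    Append = 1
    Block = 2
    File = 3
    Page = 4


def pick_put_operation(mode, data):
    # page blobs skip empty or all-zero chunks, as the test drives
    if mode is StorageModes.Page and (
            data is None or data == b'\0' * len(data)):
        return 'skip'
    return {
        StorageModes.Append: 'append_block',
        StorageModes.Block: 'put_block',
        StorageModes.File: 'put_file_range',
        StorageModes.Page: 'put_page',
    }[mode]


op = pick_put_operation(StorageModes.Page, b'\0')
```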
@mock.patch('time.sleep', return_value=None)
def test_worker_thread_upload(ts):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._general_options.concurrency.transfer_threads = 1
u._transfer_set = mock.MagicMock()
u._transfer_set.__len__.side_effect = [5, 0, 0, 0]
u._upload_queue.put(mock.MagicMock)
u._upload_queue.put(mock.MagicMock)
u._process_upload_descriptor = mock.MagicMock()
u._process_upload_descriptor.side_effect = [None, Exception()]
with mock.patch(
'blobxfer.operations.upload.Uploader.termination_check',
new_callable=mock.PropertyMock) as patched_tc:
patched_tc.side_effect = [False, False, False, False, True]
u._worker_thread_upload()
assert u._process_upload_descriptor.call_count == 2
assert len(u._exceptions) == 1
@mock.patch('blobxfer.operations.azure.blob.create_container')
@mock.patch('blobxfer.operations.azure.blob.append.create_blob')
@mock.patch('blobxfer.operations.azure.file.create_share')
@mock.patch('blobxfer.operations.azure.file.create_all_parent_directories')
@mock.patch('blobxfer.operations.azure.file.create_file')
@mock.patch('blobxfer.operations.azure.blob.page.create_blob')
def test_prepare_upload(page_cb, cf, capd, cs, append_cb, cc):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Append
ase.append_create = True
u._prepare_upload(ase)
assert cc.call_count == 1
assert append_cb.call_count == 1
ase.mode = azmodels.StorageModes.Block
ase.append_create = False
u._prepare_upload(ase)
assert cc.call_count == 2
ase.mode = azmodels.StorageModes.File
u._prepare_upload(ase)
assert cs.call_count == 1
assert capd.call_count == 1
assert cf.call_count == 1
ase.mode = azmodels.StorageModes.Page
u._prepare_upload(ase)
assert cc.call_count == 3
assert page_cb.call_count == 1
def test_process_upload_descriptor():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = True
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ud.next_offsets.return_value = (None, 1)
ud.all_operations_completed = True
ud.unique_id = 'uid'
u._finalize_upload = mock.MagicMock()
u._ud_map['uid'] = 0
u._upload_set.add('uid')
# test resume and completed
u._process_upload_descriptor(ud)
assert u._upload_bytes_total == 10
assert u._upload_bytes_sofar == 1
assert u._finalize_upload.call_count == 1
assert len(u._ud_map) == 0
assert len(u._upload_set) == 0
assert u._upload_sofar == 1
# test nothing
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ud.all_operations_completed = False
ud.next_offsets.return_value = (None, None)
u._process_upload_descriptor(ud)
assert u._upload_queue.qsize() == 1
# test encrypted
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
offsets = mock.MagicMock()
offsets.chunk_num = 0
offsets.num_bytes = 1
offsets.range_start = 10
ud.next_offsets.return_value = (offsets, None)
u._prepare_upload = mock.MagicMock()
ase2 = mock.MagicMock()
ase2._client.primary_endpoint = 'ep'
ase2.path = 'asepath2'
ase2.size = 10
ase2.mode = azmodels.StorageModes.Block
ase.replica_targets = [ase2]
ase.is_encrypted = True
ud.read_data.return_value = (b'\0', None)
with mock.patch(
'blobxfer.operations.crypto.aes_cbc_encrypt_data',
return_value=b'\0' * 16):
u._process_upload_descriptor(ud)
assert u._upload_queue.qsize() == 1
assert u._prepare_upload.call_count == 2
assert ud.hmac_data.call_count == 2
assert u._transfer_queue.qsize() == 2
assert len(u._transfer_set) == 2
# test stdin
ase.is_encrypted = False
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = True
ud.local_path = lp
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._prepare_upload = mock.MagicMock()
ud.read_data.return_value = (False, offsets)
u._process_upload_descriptor(ud)
assert u._upload_queue.qsize() == 1
assert u._transfer_queue.qsize() == 0
assert len(u._transfer_set) == 0
@mock.patch('blobxfer.operations.azure.blob.block.put_block_list')
def test_finalize_block_blob(pbl):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ud.unique_id = 'uid'
ud.must_compute_md5 = True
ud.md5.digest.return_value = b'md5'
u._finalize_block_blob(ud, mock.MagicMock())
assert pbl.call_count == 2
ud.must_compute_md5 = False
ase.replica_targets = []
u._finalize_block_blob(ud, mock.MagicMock())
assert pbl.call_count == 3
@mock.patch('blobxfer.operations.azure.blob.set_blob_properties')
def test_set_blob_properties(sbp):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ud.unique_id = 'uid'
ud.must_compute_md5 = True
ud.md5.digest.return_value = b'md5'
u._set_blob_properties(ud)
assert sbp.call_count == 2
ud.requires_non_encrypted_md5_put = False
ud.must_compute_md5 = False
ase.cache_control = 'cc'
u._set_blob_properties(ud)
assert sbp.call_count == 4
@mock.patch('blobxfer.operations.azure.blob.set_blob_metadata')
def test_set_blob_metadata(sbm):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ud.unique_id = 'uid'
u._set_blob_metadata(ud, mock.MagicMock())
assert sbm.call_count == 2
@mock.patch('blobxfer.operations.azure.blob.page.resize_blob')
def test_resize_blob(rb):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = True
ud = mock.MagicMock()
ud.entity = ase
ud.local_path = lp
ud.unique_id = 'uid'
u._resize_blob(ud, 512)
assert rb.call_count == 2
def test_finalize_nonblock_blob():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.local_path = lp
ud.unique_id = 'uid'
ud.requires_non_encrypted_md5_put = True
ud.requires_resize.return_value = (False, None)
u._set_blob_properties = mock.MagicMock()
u._set_blob_metadata = mock.MagicMock()
u._resize_blob = mock.MagicMock()
u._finalize_nonblock_blob(ud, {'a': 0})
assert u._set_blob_properties.call_count == 1
assert u._set_blob_metadata.call_count == 1
assert u._resize_blob.call_count == 0
# resize required
ud.requires_resize.return_value = (True, 512)
u._finalize_nonblock_blob(ud, {'a': 0})
assert u._resize_blob.call_count == 1
@mock.patch('blobxfer.operations.azure.file.set_file_properties')
@mock.patch('blobxfer.operations.azure.file.set_file_metadata')
def test_finalize_azure_file(sfmeta, sfp):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.File
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ud.unique_id = 'uid'
ud.must_compute_md5 = True
ud.md5.digest.return_value = b'md5'
ud.requires_non_encrypted_md5_put = True
u._finalize_azure_file(ud, {'a': 0})
assert sfp.call_count == 2
assert sfmeta.call_count == 2
ud.requires_non_encrypted_md5_put = False
ud.must_compute_md5 = False
ase.cache_control = 'cc'
u._finalize_azure_file(ud, {'a': 0})
assert sfp.call_count == 4
def test_finalize_upload():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = [ase]
lp = mock.MagicMock()
lp.absolute_path = 'lpabspath'
lp.view.fd_start = 0
lp.use_stdin = False
ud = mock.MagicMock()
ud.entity = ase
ud.complete_offset_upload = mock.MagicMock()
ud.local_path = lp
ud.unique_id = 'uid'
ud.requires_put_block_list = True
u._finalize_block_blob = mock.MagicMock()
u._finalize_upload(ud)
assert u._finalize_block_blob.call_count == 1
ud.requires_put_block_list = False
ud.remote_is_page_blob = True
u._finalize_nonblock_blob = mock.MagicMock()
u._finalize_upload(ud)
assert u._finalize_nonblock_blob.call_count == 1
ud.remote_is_page_blob = False
ud.remote_is_append_blob = False
ud.remote_is_file = True
u._finalize_azure_file = mock.MagicMock()
u._finalize_upload(ud)
assert u._finalize_azure_file.call_count == 1
def test_get_destination_paths():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
paths = mock.MagicMock()
paths.paths = [pathlib.Path('a/b')]
u._spec.destinations = [paths]
sa, cont, dir, dpath = next(u._get_destination_paths())
assert cont == 'a'
assert dir == 'b'
assert dpath == pathlib.Path('a/b')
@mock.patch('blobxfer.operations.azure.file.list_all_files')
@mock.patch('blobxfer.operations.azure.file.delete_file')
@mock.patch('blobxfer.operations.azure.blob.list_all_blobs')
@mock.patch('blobxfer.operations.azure.blob.delete_blob')
def test_delete_extraneous_files(db, lab, df, laf):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
# test no delete
u._spec.options.delete_extraneous_destination = False
u._get_destination_paths = mock.MagicMock()
u._delete_extraneous_files()
assert u._get_destination_paths.call_count == 0
# test file delete
u._spec.options.delete_extraneous_destination = True
u._spec.options.mode = azmodels.StorageModes.File
sa1 = mock.MagicMock()
sa1.name = 'name'
sa1.endpoint = 'ep'
sa1.file_client.primary_endpoint = 'ep'
laf.return_value = ['filename']
# test relative path failure
u._get_destination_paths = mock.MagicMock()
u._get_destination_paths.return_value = [
(sa1, 'cont', 'vpath', ''),
]
u._delete_extraneous_files()
assert laf.call_count == 1
assert df.call_count == 0
# test actual delete
u._get_destination_paths = mock.MagicMock()
u._get_destination_paths.return_value = [
(sa1, 'cont', '', ''),
(sa1, 'cont', '', ''),
]
u._general_options.dry_run = True
u._delete_extraneous_files()
assert laf.call_count == 2
assert df.call_count == 0
u._general_options.dry_run = False
u._delete_extraneous_files()
assert laf.call_count == 3
assert df.call_count == 1
# test blob delete
u._spec.options.delete_extraneous_destination = True
u._spec.options.mode = azmodels.StorageModes.Block
sa1 = mock.MagicMock()
sa1.name = 'name'
sa1.endpoint = 'ep'
sa1.block_blob_client.primary_endpoint = 'ep'
blob = mock.MagicMock()
blob.name = 'blobname'
lab.return_value = [blob]
# test relative path failure
u._get_destination_paths = mock.MagicMock()
u._get_destination_paths.return_value = [
(sa1, 'cont', 'vpath', ''),
]
u._delete_extraneous_files()
assert lab.call_count == 1
assert db.call_count == 0
# test actual delete
u._get_destination_paths = mock.MagicMock()
u._get_destination_paths.return_value = [
(sa1, 'cont', '', ''),
]
u._general_options.dry_run = True
u._delete_extraneous_files()
assert lab.call_count == 2
assert db.call_count == 0
u._general_options.dry_run = False
u._delete_extraneous_files()
assert lab.call_count == 3
assert db.call_count == 1
@mock.patch('blobxfer.models.metadata.get_md5_from_metadata')
def test_check_upload_conditions(gmfm):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
ase = mock.MagicMock()
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.size = 10
ase.mode = azmodels.StorageModes.Block
ase.append_create = True
ase.is_encrypted = False
ase.from_local = False
lp = mock.MagicMock()
lp.absolute_path = pathlib.Path('lpabspath')
lp.view.fd_start = 0
lp.use_stdin = False
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Skip
lp.use_stdin = True
assert u._check_upload_conditions(lp, None) == ops.UploadAction.Upload
u._spec.options.overwrite = False
ase.mode = azmodels.StorageModes.Append
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Upload
assert not ase.append_create
ase.mode = azmodels.StorageModes.Block
ase.append_create = True
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Skip
assert ase.append_create
u._spec.options.overwrite = True
u._spec.skip_on.md5_match = True
gmfm.return_value = 'md5'
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.CheckMd5
u._spec.skip_on.md5_match = False
u._spec.skip_on.filesize_match = False
u._spec.skip_on.lmt_ge = False
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Upload
# size mismatch, page
u._spec.skip_on.filesize_match = True
ase.mode = azmodels.StorageModes.Page
lp.size = 1
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Upload
# size match
u._spec.skip_on.filesize_match = True
ase.mode = azmodels.StorageModes.Block
lp.size = ase.size
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Skip
# lmt match
u._spec.skip_on.filesize_match = False
u._spec.skip_on.lmt_ge = True
ase.lmt = 0
with mock.patch('blobxfer.util.datetime_from_timestamp') as patched_dft:
patched_dft.return_value = 0
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Skip
# lmt mismatch
u._spec.skip_on.lmt_ge = True
ase.lmt = 0
with mock.patch('blobxfer.util.datetime_from_timestamp') as patched_dft:
patched_dft.return_value = 1
assert u._check_upload_conditions(lp, ase) == ops.UploadAction.Upload
@mock.patch('blobxfer.operations.azure.file.get_file_properties')
@mock.patch('blobxfer.operations.azure.blob.get_blob_properties')
def test_check_for_existing_remote(gbp, gfp):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = True
sa = mock.MagicMock()
sa.name = 'name'
sa.endpoint = 'ep'
sa.can_read_object = False
u._spec.options.mode = azmodels.StorageModes.File
gfp.return_value = None
assert u._check_for_existing_remote(sa, 'cont', 'name') is None
sa.can_read_object = True
u._spec.skip_on.filesize_match = False
u._spec.skip_on.lmt_ge = False
u._spec.skip_on.md5_match = False
assert u._check_for_existing_remote(sa, 'cont', 'name') is None
u._spec.options.overwrite = False
gfp.return_value = None
assert u._check_for_existing_remote(sa, 'cont', 'name') is None
with mock.patch(
'blobxfer.models.crypto.EncryptionMetadata.'
'encryption_metadata_exists', return_value=False):
gfp.return_value = mock.MagicMock()
assert u._check_for_existing_remote(sa, 'cont', 'name') is not None
with mock.patch(
'blobxfer.models.crypto.EncryptionMetadata.'
'encryption_metadata_exists', return_value=True):
with mock.patch(
'blobxfer.models.crypto.EncryptionMetadata.convert_from_json'):
gfp.return_value = mock.MagicMock()
assert u._check_for_existing_remote(sa, 'cont', 'name') is not None
u._spec.options.mode = azmodels.StorageModes.Block
gbp.return_value = None
assert u._check_for_existing_remote(sa, 'cont', 'name') is None
with mock.patch(
'blobxfer.models.crypto.EncryptionMetadata.'
'encryption_metadata_exists', return_value=False):
gbp.return_value = mock.MagicMock()
assert u._check_for_existing_remote(sa, 'cont', 'name') is not None
# check access tiers
with mock.patch(
'blobxfer.models.crypto.EncryptionMetadata.'
'encryption_metadata_exists', return_value=False):
gbp.return_value = mock.MagicMock()
gbp.return_value.properties.blob_type = \
azure.storage.blob.models._BlobTypes.BlockBlob
gbp.return_value.properties.blob_tier = None
u._spec.options.access_tier = None
ase = u._check_for_existing_remote(sa, 'cont', 'name')
assert ase is not None
assert ase.access_tier is None
gbp.return_value.properties.blob_tier = 'Cool'
ase = u._check_for_existing_remote(sa, 'cont', 'name')
assert ase is not None
assert ase.access_tier is None
u._spec.options.access_tier = 'Hot'
ase = u._check_for_existing_remote(sa, 'cont', 'name')
assert ase is not None
assert ase.access_tier == 'Hot'
def test_generate_destination_for_source():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = False
u._check_for_existing_remote = mock.MagicMock()
lp = mock.MagicMock()
lp.relative_path = pathlib.Path('a/b/c/d')
lp.absolute_path = pathlib.Path('abs/rel/a/b/c/d')
lp.view.fd_start = 0
lp.use_stdin = False
sa = mock.MagicMock()
sa.name = 'name'
sa.endpoint = 'ep'
u._spec.options.strip_components = 1
u._spec.options.rename = True
u._get_destination_paths = mock.MagicMock()
u._get_destination_paths.return_value = [
(sa, 'cont', '', 'dpath'),
]
with pytest.raises(ValueError):
next(u._generate_destination_for_source(lp))
lp.relative_path = pathlib.Path('rel/a')
lp.absolute_path = pathlib.Path('abs/rel/a')
u._spec.options.strip_components = 0
u._spec.options.rename = False
u._get_destination_paths.return_value = [
(sa, 'cont', 'name', 'dpath'),
]
u._spec.options.mode = azmodels.StorageModes.Block
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Stripe
a, b = next(u._generate_destination_for_source(lp))
assert a == sa
assert b is not None
assert u._check_for_existing_remote.call_count == 0
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Disabled
a, b = next(u._generate_destination_for_source(lp))
assert a == sa
assert b is not None
assert u._check_for_existing_remote.call_count == 1
# check no-read permission
sa.can_read_object = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Disabled
a, b = next(u._generate_destination_for_source(lp))
assert a == sa
assert b is not None
assert u._check_for_existing_remote.call_count == 2
def test_vectorize_and_bind():
ase = mock.MagicMock()
ase.client.primary_endpoint = 'ep'
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = None
ase.container = 'cont'
ase.name = 'name'
ase2 = mock.MagicMock()
ase2.client.primary_endpoint = 'ep2'
ase2._client.primary_endpoint = 'ep2'
ase2.path = 'asepath2'
ase2.mode = azmodels.StorageModes.Block
ase2.is_encrypted = False
ase2.replica_targets = None
ase2.container = 'cont2'
ase2.name = 'name2'
sa = mock.MagicMock()
sa.name = 'name'
sa.endpoint = 'ep'
sa.block_blob_client.primary_endpoint = 'pep'
lp = mock.MagicMock()
lp.relative_path = pathlib.Path('rel/a')
lp.absolute_path = pathlib.Path('abs/rel/a')
lp.view.fd_start = 0
lp.use_stdin = False
lp.total_size = 9
# no vectorization
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Disabled
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Upload
dest = [(sa, ase)]
a, b, c = next(u._vectorize_and_bind(lp, dest))
assert a == ops.UploadAction.Upload
assert b == lp
assert c == ase
# sub-test no object write
sa.can_write_object = False
dest = [(sa, ase)]
with pytest.raises(RuntimeError):
a, b, c = next(u._vectorize_and_bind(lp, dest))
sa.can_write_object = True
# stripe vectorization 1 slice
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = False
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Upload
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Stripe
u._spec.options.vectored_io.stripe_chunk_size_bytes = 10
dest = [(sa, ase), (sa, ase2)]
i = 0
for a, b, c in u._vectorize_and_bind(lp, dest):
assert a == ops.UploadAction.Upload
assert b == lp
assert c == ase
i += 1
assert i == 1
# sub-test no object write
sa.can_write_object = False
dest = [(sa, ase), (sa, ase2)]
with pytest.raises(RuntimeError):
a, b, c = next(u._vectorize_and_bind(lp, dest))
sa.can_write_object = True
# stripe vectorization multi-slice
u._spec.options.mode = azmodels.StorageModes.Block
u._spec.options.vectored_io.stripe_chunk_size_bytes = 5
u._check_for_existing_remote = mock.MagicMock()
u._check_for_existing_remote.return_value = None
dest = [(sa, ase), (sa, ase2)]
i = 0
for a, b, c in u._vectorize_and_bind(lp, dest):
assert a == ops.UploadAction.Upload
assert b != lp
assert b.parent_path == lp.parent_path
assert b.relative_path == lp.relative_path
assert not b.use_stdin
if i == 0:
assert b.view.fd_start == 0
assert b.view.fd_end == 5
assert b.view.slice_num == 0
else:
assert b.view.fd_start == 5
assert b.view.fd_end == 9
assert b.view.slice_num == 1
assert b.view.mode == u._spec.options.vectored_io.distribution_mode
assert c != ase
assert c.from_local
i += 1
assert i == 2
# sub-test no object write
sa.can_write_object = False
dest = [(sa, ase), (sa, ase2)]
with pytest.raises(RuntimeError):
a, b, c = next(u._vectorize_and_bind(lp, dest))
sa.can_write_object = True
# replication single target
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Replica
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.CheckMd5
dest = [(sa, ase)]
a, b, c = next(u._vectorize_and_bind(lp, dest))
assert a == ops.UploadAction.CheckMd5
assert b == lp
assert c == ase
assert c.replica_targets is None
# sub-test no object write
sa.can_write_object = False
dest = [(sa, ase)]
with pytest.raises(RuntimeError):
a, b, c = next(u._vectorize_and_bind(lp, dest))
sa.can_write_object = True
# replication multi-target md5
dest = [(sa, ase), (sa, ase2)]
a, b, c = next(u._vectorize_and_bind(lp, dest))
assert a == ops.UploadAction.CheckMd5
assert b == lp
assert c == ase
assert c.replica_targets is None
# replication multi-target upload
u._spec.options.delete_extraneous_destination = True
u._check_upload_conditions.return_value = ops.UploadAction.Upload
a, b, c = next(u._vectorize_and_bind(lp, dest))
assert a == ops.UploadAction.Upload
assert b == lp
assert c == ase
assert len(c.replica_targets) == 1
assert c.replica_targets[0] == ase2
@mock.patch('blobxfer.operations.resume.UploadResumeManager')
@mock.patch('blobxfer.operations.md5.LocalFileMd5Offload')
def test_run(lfmo, urm, tmpdir):
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._initialize_disk_threads = mock.MagicMock()
u._initialize_transfer_threads = mock.MagicMock()
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
u._general_options.concurrency.md5_processes = 1
u._general_options.concurrency.crypto_processes = 1
u._general_options.resume_file = 'resume'
u._spec.options.overwrite = False
u._spec.options.store_file_properties.md5 = True
u._spec.skip_on.md5_match = True
u._spec.options.rsa_public_key = 'abc'
u._spec.options.chunk_size_bytes = 0
u._spec.options.one_shot_bytes = 0
u._spec.options.delete_only = False
# check rename failure
u._spec.sources.can_rename.return_value = False
u._spec.options.rename = True
with pytest.raises(RuntimeError):
u._run()
u._upload_terminate = True
assert urm.call_count == 0
assert lfmo.call_count == 0
assert lfmo.initialize_check_thread.call_count == 0
assert u._initialize_disk_threads.call_count == 0
assert u._initialize_transfer_threads.call_count == 0
# check dupe
u._spec.sources.can_rename.return_value = False
u._spec.options.rename = False
ase = mock.MagicMock()
ase.client.primary_endpoint = 'ep'
ase._client.primary_endpoint = 'ep'
ase.path = 'asepath'
ase.mode = azmodels.StorageModes.Block
ase.is_encrypted = False
ase.replica_targets = None
ase.container = 'cont'
ase.name = 'name'
ase.size = 10
ase2 = mock.MagicMock()
ase2.client.primary_endpoint = 'ep2'
ase2._client.primary_endpoint = 'ep2'
ase2.path = 'asepath2'
ase2.mode = azmodels.StorageModes.Block
ase2.is_encrypted = False
ase2.replica_targets = None
ase2.container = 'cont2'
ase2.name = 'name2'
ase2.size = 10
sa = mock.MagicMock()
sa.name = 'name'
sa.endpoint = 'ep'
sa.block_blob_client.primary_endpoint = 'pep'
tmpdir.join('a').write('z' * 10)
lp = mock.MagicMock()
lp.relative_path = pathlib.Path('a')
lp.absolute_path = pathlib.Path(str(tmpdir.join('a')))
lp.view.fd_start = 0
lp.view.fd_end = 10
lp.use_stdin = False
lp.size = 10
lp.total_size = 10
u._generate_destination_for_source = mock.MagicMock()
with pytest.raises(RuntimeError):
u._generate_destination_for_source.return_value = [
(sa, ase), (sa, ase)
]
u._spec.sources.files.return_value = [lp]
u._run()
u._upload_terminate = True
assert urm.call_count == 1
assert lfmo.call_count == 1
assert u._md5_offload.initialize_check_thread.call_count == 1
assert u._initialize_disk_threads.call_count == 1
assert u._initialize_transfer_threads.call_count == 1
# mismatch exception raise
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Disabled
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Skip
u._generate_destination_for_source.return_value = [
(sa, ase)
]
u._spec.sources.files.return_value = [lp]
with pytest.raises(RuntimeError):
u._run()
u._upload_terminate = True
u._check_upload_conditions.return_value = ops.UploadAction.CheckMd5
with pytest.raises(RuntimeError):
u._run()
u._upload_terminate = True
# regular execution
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = False
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
u._general_options.concurrency.md5_processes = 1
u._general_options.concurrency.crypto_processes = 0
u._general_options.resume_file = 'resume'
u._spec.options.store_file_properties.md5 = True
u._spec.skip_on.md5_match = True
u._spec.options.rsa_public_key = None
u._spec.options.chunk_size_bytes = 0
u._spec.options.one_shot_bytes = 0
u._spec.sources.can_rename.return_value = False
u._spec.options.rename = False
u._spec.options.delete_only = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Replica
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Upload
u._generate_destination_for_source = mock.MagicMock()
u._generate_destination_for_source.return_value = [
(sa, ase), (sa, ase2)
]
u._spec.sources.files.return_value = [lp]
u._put_data = mock.MagicMock()
u._finalize_upload = mock.MagicMock()
u._upload_start_time = (
util.datetime_now() - datetime.timedelta(seconds=1)
)
u._run()
assert u._finalize_upload.call_count == 1
# regular execution, skip dry run
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = True
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
u._general_options.concurrency.md5_processes = 1
u._general_options.concurrency.crypto_processes = 0
u._general_options.resume_file = 'resume'
u._spec.options.overwrite = False
u._spec.options.store_file_properties.md5 = True
u._spec.skip_on.md5_match = True
u._spec.options.rsa_public_key = None
u._spec.options.chunk_size_bytes = 0
u._spec.options.one_shot_bytes = 0
u._spec.sources.can_rename.return_value = False
u._spec.options.rename = False
u._spec.options.delete_only = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Replica
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Skip
u._generate_destination_for_source = mock.MagicMock()
u._generate_destination_for_source.return_value = [
(sa, ase), (sa, ase2)
]
u._spec.sources.files.return_value = [lp]
u._put_data = mock.MagicMock()
u._finalize_upload = mock.MagicMock()
u._upload_start_time = (
util.datetime_now() - datetime.timedelta(seconds=1)
)
u._run()
assert u._finalize_upload.call_count == 0
# delete only
u._spec.options.delete_extraneous_destination = True
u._spec.options.delete_only = True
u._run()
assert u._finalize_upload.call_count == 0
# regular execution, upload dry run
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = True
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
u._general_options.concurrency.md5_processes = 1
u._general_options.concurrency.crypto_processes = 0
u._general_options.resume_file = 'resume'
u._spec.options.overwrite = False
u._spec.options.store_file_properties.md5 = True
u._spec.skip_on.md5_match = True
u._spec.options.rsa_public_key = None
u._spec.options.chunk_size_bytes = 0
u._spec.options.one_shot_bytes = 0
u._spec.sources.can_rename.return_value = False
u._spec.options.rename = False
u._spec.options.delete_only = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Replica
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Upload
u._generate_destination_for_source = mock.MagicMock()
u._generate_destination_for_source.return_value = [
(sa, ase), (sa, ase2)
]
u._spec.sources.files.return_value = [lp]
u._put_data = mock.MagicMock()
u._finalize_upload = mock.MagicMock()
u._upload_start_time = (
util.datetime_now() - datetime.timedelta(seconds=1)
)
u._run()
assert u._finalize_upload.call_count == 0
# exception raise
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._general_options.concurrency.disk_threads = 1
u._general_options.concurrency.transfer_threads = 1
u._general_options.concurrency.md5_processes = 1
u._general_options.concurrency.crypto_processes = 0
u._general_options.resume_file = 'resume'
u._spec.options.store_file_properties.md5 = True
u._spec.skip_on.md5_match = True
u._spec.options.rsa_public_key = None
u._spec.options.chunk_size_bytes = 0
u._spec.options.one_shot_bytes = 0
u._spec.sources.can_rename.return_value = False
u._spec.options.rename = False
u._spec.options.delete_only = False
u._spec.options.vectored_io.distribution_mode = \
models.VectoredIoDistributionMode.Disabled
u._check_upload_conditions = mock.MagicMock()
u._check_upload_conditions.return_value = ops.UploadAction.Upload
u._generate_destination_for_source = mock.MagicMock()
u._generate_destination_for_source.return_value = [
(sa, ase)
]
u._spec.sources.files.return_value = [lp]
with pytest.raises(RuntimeError):
u._process_upload_descriptor = mock.MagicMock()
u._process_upload_descriptor.side_effect = RuntimeError()
u._run()
u._upload_terminate = True
def test_start():
u = ops.Uploader(mock.MagicMock(), mock.MagicMock(), mock.MagicMock())
u._general_options.dry_run = False
u._spec.options.overwrite = False
u._spec.options.delete_only = False
u._wait_for_transfer_threads = mock.MagicMock()
u._wait_for_disk_threads = mock.MagicMock()
u._md5_offload = mock.MagicMock()
u._md5_offload.finalize_processes = mock.MagicMock()
u._crypto_offload = mock.MagicMock()
u._crypto_offload.finalize_processes = mock.MagicMock()
u._resume = mock.MagicMock()
u._run = mock.MagicMock()
# test keyboard interrupt
u._run.side_effect = KeyboardInterrupt()
with pytest.raises(KeyboardInterrupt):
u.start()
assert u._run.call_count == 1
assert u._wait_for_transfer_threads.call_count == 1
assert u._wait_for_disk_threads.call_count == 1
assert u._md5_offload.finalize_processes.call_count == 1
assert u._crypto_offload.finalize_processes.call_count == 1
assert u._resume.close.call_count == 1
# test other exception
u._run.side_effect = RuntimeError()
with pytest.raises(RuntimeError):
u.start()
assert u._run.call_count == 2
assert u._wait_for_transfer_threads.call_count == 2
assert u._wait_for_disk_threads.call_count == 2
assert u._md5_offload.finalize_processes.call_count == 2
assert u._crypto_offload.finalize_processes.call_count == 2
assert u._resume.close.call_count == 2
u._run.side_effect = RuntimeError()
with pytest.raises(RuntimeError):
u._wait_for_transfer_threads = mock.MagicMock(
side_effect=RuntimeError('oops'))
u._upload_terminate = True
u.start()
assert u._run.call_count == 3
assert u._wait_for_transfer_threads.call_count == 1
assert u._wait_for_disk_threads.call_count == 2
assert u._md5_offload.finalize_processes.call_count == 3
assert u._crypto_offload.finalize_processes.call_count == 3
assert u._resume.close.call_count == 3
# --- examples/agent2/serv/objects/tv.py (irmen/Pyro3, MIT license) ---
class tv(object):
    def __init__(self):
        pass

    def getName(self):
        return "TV"

    def getDescription(self):
        return "Full-HD LCD television"
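A minimal usage sketch of the object above, instantiated directly rather than served through a Pyro daemon (the standalone driver code here is illustrative, not part of the original example):

```python
class tv(object):
    def getName(self):
        return "TV"

    def getDescription(self):
        return "Full-HD LCD television"


# Exercise the object locally, the same way a remote Pyro client would
# call its methods once a proxy is obtained.
appliance = tv()
summary = "%s: %s" % (appliance.getName(), appliance.getDescription())
print(summary)
```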
37ed8d46128a02abe6ce06d3a910a76e24c6b8af | 20,407 | py | Python | pynlg/lexicon/feature/lexical/fr.py | roman-kutlak/pynlg | caea61e22bbdd38a09bd8e423e41749a70531e9f | [
"MIT"
] | null | null | null | pynlg/lexicon/feature/lexical/fr.py | roman-kutlak/pynlg | caea61e22bbdd38a09bd8e423e41749a70531e9f | [
"MIT"
] | null | null | null | pynlg/lexicon/feature/lexical/fr.py | roman-kutlak/pynlg | caea61e22bbdd38a09bd8e423e41749a70531e9f | [
"MIT"
] | null | null | null | # encoding: utf-8
"""Definition of french lexical features."""
from __future__ import unicode_literals
# * <p>
# * This feature gives the noun of the opposite gender corresponding to a
# * noun. For example, the feminine of <em>chien</em> is <em>chienne</em>.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>opposite_gender</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly form the noun
# * of the opposite gender corresponding to a noun. This feature will be
# * looked at first before any reference to lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Nouns.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
OPPOSITE_GENDER = "opposite_gender"
#
# * <p>
# * This feature gives the feminine singular form of a determiner or
# * adjective. For example, the feminine singular of <em>le</em> is
# * <em>la</em> and the feminine of <em>beau</em> is <em>belle</em>.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>feminine_singular</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * determiners and adjectives. This feature will be looked at first before
# * any reference to lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Determiners and adjectives.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
FEMININE_SINGULAR = "feminine_singular"
#
# * <p>
# * This feature gives the feminine plural form of an adjective. For example,
# * the feminine plural of <em>beau</em> is <em>belles</em>.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>feminine_plural</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * determiners and adjectives. This feature will be looked at first before
# * any reference to lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Adjectives.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
FEMININE_PLURAL = "feminine_plural"
#
# * <p>
# * This feature gives the form a masculine singular adjective takes when
# * placed in front of a word beginning with a vowel or a so-called mute 'h'
# * (not a so-called aspired 'h'). For example, the form of <em>beau</em> in
# * front of a vowel is <em>bel</em>.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>liaison</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * adjectives. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Adjectives only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
LIAISON = "liaison"
#
# * <p>
# * This flag determines if a word is subject to elision in front of a vowel.
# * For example, the elided form of <em>le</em> is <em>l'</em>.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>vowel_elision</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphophonology methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Many categories.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
VOWEL_ELISION = "vowel_elision"
#
# * <p>
# * This flag determines if a pronoun is a detached (from the verb) form
# * ("forme disjointe"). For example, "moi" is a detached form, but not "me".
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>detached</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The syntax and morphology methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Personal pronouns only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
DETACHED = "detached"
#
# * <p>
# * This flag determines if a word begins with a so-called aspired 'h'.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>aspired_h</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphophonology methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Many categories.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
ASPIRED_H = "aspired_h"
#
# * <p>
# * This flag determines if a word provokes a negation with only the "ne"
# * negation adverb (no "pas" or "plus") when it is the subject or complement
# * of a clause.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>ne_only_negation</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from the lexicon.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The syntax methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Many categories.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
NE_ONLY_NEGATION = "ne_only_negation"
#
# * <p>
# * This flag determines if a verb provokes verbal complement clitic rising
# * when used as a modal.
# * </p>
# * <p>
# * For example : "faire" has clitic rising ("je <bold>le</bold> fait voir")
# * but "vouloir" doesn't have clitic rising ("je veux <bold>le</bold> voir")
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>clitic_rising</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature and
# * can be set by the user.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The syntax processing methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Many categories.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
CLITIC_RISING = "clitic_rising"
#
# * <p>
# * This flag determines if the comma must be omitted before a coordination
# * conjunction or after a front modifier.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>no_comma</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature and
# * can be set by the user.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The orthography methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Conjunctions and words that are or can be front modifiers.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
NO_COMMA = "no_comma"
#
# * <p>
# * This flag determines if the coordination conjunction must be repeated
# * before each coordinate.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>repeated_conjunction</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature and
# * can be set by the user.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The orthography methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Conjunctions.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
REPEATED_CONJUNCTION = "repeated_conjunction"
#
# * <p>
# * This flag determines if an adjective is placed before the noun by
# * default ("antéposé") when added to a noun phrase with addModifier(...).
# * Example: "un beau chien" (preposed) vs. "un chien élancé" (postposed).
# * Most adjectives in French are postposed, but preposed adjectives occur
# * frequently.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>preposed</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from the lexicon and can be changed by the
# * user.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>addModifier(...)</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Adjectives.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
PREPOSED = "preposed"
#
# * <p>
# * This flag determines if a verb takes "être" as auxiliary instead of
# * "avoir".
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>auxiliary_etre</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The syntax processing methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
AUXILIARY_ETRE = "auxiliary_etre"
#
# * <p>
# * This flag determines if a verb can be used as a copula.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>copular</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>Boolean</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The information is read from Lexicons that support this feature.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The syntax processing methods.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>Boolean.FALSE</code>.</td>
# * </tr>
# * </table>
#
COPULAR = "copular"
#
# * <p>
# * These features give the indicative present form of a verb.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>present (person) (number)</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * verbs. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
PRESENT1S = "present1s"
PRESENT2S = "present2s"
PRESENT3S = "present3s"
PRESENT1P = "present1p"
PRESENT2P = "present2p"
PRESENT3P = "present3p"
#
# * <p>
# * These features give the imperative present form of a verb.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>imperative (person) (number)</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * verbs. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
IMPERATIVE2S = "imperative2s"
IMPERATIVE1P = "imperative1p"
IMPERATIVE2P = "imperative2p"
#
# * <p>
# * This feature gives the indicative simple future radical of a verb.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>future_radical</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * verbs. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
FUTURE_RADICAL = "future_radical"
#
# * <p>
# * These features give the indicative "imparfait" radical of a verb.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>imparfait_radical</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * verbs. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
IMPARFAIT_RADICAL = "imparfait_radical"
#
# * <p>
# * This feature gives the feminine past participle form of a verb.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>femininePastParticiple</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * verbs. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
FEMININE_PAST_PARTICIPLE = "feminine_past_participle"
#
# * <p>
# * This feature determines the type of a pronoun.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>pronoun_type</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>PronounType</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>The lexicon.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processing methods use the pronoun type to determine the
# * appropriate form for pronouns.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Pronouns.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code></td>
# * </tr>
# * </table>
#
PRONOUN_TYPE = "pronoun_type"
#
# * <p>
# * These features give the subjunctive present form of a verb.
# * </p>
# * <table border="1">
# * <tr>
# * <td><b>Feature name</b></td>
# * <td><em>subjunctive (person) (number)</em></td>
# * </tr>
# * <tr>
# * <td><b>Expected type</b></td>
# * <td><code>String</code></td>
# * </tr>
# * <tr>
# * <td><b>Created by</b></td>
# * <td>All supporting lexicons but can be set by the user for irregular
# * cases.</td>
# * </tr>
# * <tr>
# * <td><b>Used by</b></td>
# * <td>The morphology processor uses this feature to correctly inflect
# * verbs. This feature will be looked at first before any reference to
# * lexicons or morphology rules.</td>
# * </tr>
# * <tr>
# * <td><b>Applies to</b></td>
# * <td>Verbs only.</td>
# * </tr>
# * <tr>
# * <td><b>Default</b></td>
# * <td><code>null</code>.</td>
# * </tr>
# * </table>
#
SUBJUNCTIVE1S = "subjunctive1s"
SUBJUNCTIVE2S = "subjunctive2s"
SUBJUNCTIVE3S = "subjunctive3s"
SUBJUNCTIVE1P = "subjunctive1p"
SUBJUNCTIVE2P = "subjunctive2p"
SUBJUNCTIVE3P = "subjunctive3p"
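# A hedged sketch of how a morphology routine might consult the feature keys
# defined above: the comments say irregular forms stored under these keys are
# "looked at first before any reference to lexicons or morphology rules". The
# `entry` dict and `inflect` helper are illustrative, not pynlg's actual API;
# only the constant names and example forms ("beau"/"belle"/"bel") come from
# the file.
FEMININE_SINGULAR = "feminine_singular"
LIAISON = "liaison"

entry = {
    "base": "beau",
    FEMININE_SINGULAR: "belle",  # irregular feminine singular
    LIAISON: "bel",              # form used before a vowel or mute 'h'
}


def inflect(entry, feature, default=None):
    """Return the stored irregular form first, falling back to a default."""
    return entry.get(feature, default)


assert inflect(entry, FEMININE_SINGULAR) == "belle"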
| 26.537061 | 81 | 0.506199 | 2,747 | 20,407 | 3.7419 | 0.084092 | 0.049032 | 0.06129 | 0.08172 | 0.768849 | 0.745111 | 0.71719 | 0.702014 | 0.702014 | 0.702014 | 0 | 0.003417 | 0.254325 | 20,407 | 768 | 82 | 26.571615 | 0.672077 | 0.877689 | 0 | 0 | 0 | 0 | 0.227273 | 0.013468 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 0.029412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5321d8e2319f5aedcd3dd0ad0e36792debfa9c95 | 1,502 | py | Python | 001_search/search_procedures/linear.py | AndreiHondrari/petprojects | ecbb69daccd017539b70b29ba3a5297a8d38aca0 | [
"MIT"
] | 1 | 2018-12-13T20:41:57.000Z | 2018-12-13T20:41:57.000Z | 001_search/search_procedures/linear.py | AndreiHondrari/petprojects | ecbb69daccd017539b70b29ba3a5297a8d38aca0 | [
"MIT"
] | 1 | 2021-06-10T23:55:06.000Z | 2021-06-10T23:55:06.000Z | 001_search/search_procedures/linear.py | AndreiHondrari/petprojects | ecbb69daccd017539b70b29ba3a5297a8d38aca0 | [
"MIT"
] | 1 | 2018-12-13T13:30:56.000Z | 2018-12-13T13:30:56.000Z |
import time
from typing import List
from record import Record
def simple_search_short_text(
target_record: Record,
collection: List[Record]
) -> None:
print("#################################################################")
print(
"Simple search for short text: {}\n".format(target_record.short_text)
)
enumerated_collection = enumerate(collection)
count = 0
start = time.time()
for i, r in enumerated_collection:
if i % 100000 == 0:
print("Went through {} records.".format(i))
if r.short_text == target_record.short_text:
print("Found record: {}".format(r.id))
count += 1
print("\nSearch time: {:.2f}s".format(time.time() - start))
print("Counted: {}\n".format(count))
def simple_search_long_text(
target_record: Record,
collection: List[Record]
) -> None:
print("#################################################################")
print(
"Simple search for large text: {}\n".format(target_record.large_text)
)
enumerated_collection = enumerate(collection)
count = 0
start = time.time()
for i, r in enumerated_collection:
if i % 100000 == 0:
print("Went through {} records.".format(i))
if r.large_text == target_record.large_text:
print("Found record: {}".format(r.id))
count += 1
print("\nSearch time: {:.2f}s".format(time.time() - start))
print("Counted: {}\n".format(count))
| 27.309091 | 78 | 0.553262 | 170 | 1,502 | 4.758824 | 0.235294 | 0.088999 | 0.07911 | 0.051916 | 0.798517 | 0.741656 | 0.741656 | 0.741656 | 0.741656 | 0.741656 | 0 | 0.017346 | 0.232357 | 1,502 | 54 | 79 | 27.814815 | 0.684302 | 0 | 0 | 0.731707 | 0 | 0 | 0.231845 | 0.086609 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0 | 0.073171 | 0 | 0.121951 | 0.292683 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5322c3ce3e122c3f68413992af8291a9954c2d10 | 11,839 | py | Python | tests/test_slack_event_app.py | haeena/python-slack-events-api-asgi | 6f4f0b207707468d10f165ba90c775bb7defd00e | [
"MIT"
] | 2 | 2020-01-16T04:23:43.000Z | 2020-09-04T23:47:35.000Z | tests/test_slack_event_app.py | haeena/python-slack-events-api-asgi | 6f4f0b207707468d10f165ba90c775bb7defd00e | [
"MIT"
] | 2 | 2020-01-14T17:05:29.000Z | 2020-01-29T20:06:38.000Z | tests/test_slack_event_app.py | haeena/slackevent-responder | 6f4f0b207707468d10f165ba90c775bb7defd00e | [
"MIT"
] | null | null | null | import json
import time
from freezegun import freeze_time
from starlette.testclient import TestClient
from slackevent_responder import SlackEventApp
from .helpers.helpers import create_signature
def test_verify_signature(verify_signatures_fixture):
# setup
(
expected,
signing_secret,
timestamp,
data,
signature,
) = verify_signatures_fixture
app = SlackEventApp(slack_signing_secret=signing_secret)
# run
result = app.verify_signature(
timestamp=timestamp, request_body=data, signature=signature
)
# validate
assert result == expected
class TestEndpoint:
def test_get(self, app, signing_secret, slack_event_path):
# setup
client = TestClient(app)
# run
response = client.get(slack_event_path)
# validate
assert (
response.text == "These are not the slackbots you're looking for."
)
assert response.status_code == 404
def test_no_timestamp(self, app, signing_secret, slack_event_path):
# setup
client = TestClient(app)
headers = {}
# run
response = client.post(slack_event_path, headers=headers)
# validate
assert response.text == "Request doesn't contain timestamp header"
assert response.status_code == 403
@freeze_time("2013-08-14")
def test_invalid_before_timestamp(
self, app, signing_secret, slack_event_path
):
# setup
client = TestClient(app)
headers = {
"X-Slack-Request-Timestamp": str(int(time.time() - 60 * 5 - 1))
}
# run
response = client.post(slack_event_path, headers=headers)
# validate
assert response.text == "Invalid timestamp in request header"
assert response.status_code == 403
@freeze_time("2013-08-14")
def test_invalid_after_timestamp(
self, app, signing_secret, slack_event_path
):
# setup
client = TestClient(app)
headers = {
"X-Slack-Request-Timestamp": str(int(time.time() + 60 * 5 + 1))
}
# run
response = client.post(slack_event_path, headers=headers)
# validate
assert response.text == "Invalid timestamp in request header"
assert response.status_code == 403
@freeze_time("2013-08-14")
def test_invalid_signature(self, app, signing_secret, slack_event_path):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": "",
}
# run
response = client.post(slack_event_path, headers=headers)
# validate
assert response.text == "Invalid request signature"
assert response.status_code == 403
@freeze_time("2013-08-14")
def test_challenge(
self, app, signing_secret, slack_event_path, url_challenge_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = url_challenge_fixture
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
# run
response = client.post(slack_event_path, data=data, headers=headers)
# validate
assert response.text == json_data["challenge"]
assert response.status_code == 200
@freeze_time("2013-08-14")
def test_event(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
# run
response = client.post(slack_event_path, data=data, headers=headers)
# validate
assert response.text == ""
assert response.status_code == 200
assert "X-Slack-Powered-By" in response.headers
@freeze_time("2013-08-14")
def test_no_event(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
json_data.pop("event")
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
# run
response = client.post(slack_event_path, data=data, headers=headers)
# validate
assert response.text == "No event in request body"
assert response.status_code == 403
@freeze_time("2013-08-14")
def test_no_event_type(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
json_data["event"].pop("type")
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
# run
response = client.post(slack_event_path, data=data, headers=headers)
# validate
assert response.text == "No event in request body"
assert response.status_code == 403
class TestEventHandler:
def test_handlers_zero(self, app):
# setup
event_type = "something"
# run
handlers = app.handlers(event_type)
# validate
assert len(handlers) == 0
def test_handlers_one(self, app):
# setup
event_type = "something"
# run
@app.on(event_type)
def handler(event_data):
pass
handlers = app.handlers(event_type)
# validate
assert len(handlers) == 1
assert handlers[0] == handler
def test_handlers_multi(self, app):
# setup
event_type = "something"
# run
@app.on(event_type)
def handler1(event_data):
pass
@app.once(event_type)
async def handler2(event_data):
pass
handlers = app.handlers(event_type)
# validate
assert len(handlers) == 2
assert handler1 in handlers
assert handler2 in handlers
def test_remove_handler(self, app):
# setup
event_type = "something"
@app.on(event_type)
def handler1(event_data):
pass
@app.once(event_type)
def handler2(event_data):
pass
# run
app.remove_handler(event_type, handler1)
# validate
handlers = app.handlers(event_type)
assert len(handlers) == 1
assert handlers[0] == handler2
def test_remove_all_handlers_for_a_event(self, app):
# setup
event_type = "something"
@app.on(event_type)
def handler1(event_data):
pass
@app.once(event_type)
def handler2(event_data):
pass
# run
app.remove_all_handlers(event_type)
# validate
handlers = app.handlers(event_type)
assert len(handlers) == 0
def test_remove_all_handlers_for_all_event(self, app):
# setup
event_type1 = "foo"
@app.on(event_type1)
def handler1(event_data):
pass
event_type2 = "bar"
@app.once(event_type2)
def handler2(event_data):
pass
# run
app.remove_all_handlers()
# validate
handlers1 = app.handlers(event_type1)
assert len(handlers1) == 0
handlers2 = app.handlers(event_type2)
assert len(handlers2) == 0
@freeze_time("2013-08-14")
def test_on_sync(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
event_type = json_data["event"]["type"]
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
EVENT_DATA_IN_HANDLER = None
# run
@app.on(event_type)
def handler(event_data):
nonlocal EVENT_DATA_IN_HANDLER
EVENT_DATA_IN_HANDLER = event_data
client.post(slack_event_path, data=data, headers=headers)
# validate
assert EVENT_DATA_IN_HANDLER == json_data
@freeze_time("2013-08-14")
def test_on_async(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
event_type = json_data["event"]["type"]
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
EVENT_DATA_IN_HANDLER = None
# run
@app.on(event_type)
async def handler(event_data):
nonlocal EVENT_DATA_IN_HANDLER
EVENT_DATA_IN_HANDLER = event_data
client.post(slack_event_path, data=data, headers=headers)
# validate
assert EVENT_DATA_IN_HANDLER == json_data
@freeze_time("2013-08-14")
def test_once_sync(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
event_type = json_data["event"]["type"]
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
EVENT_DATA_IN_HANDLER = None
# run
@app.once(event_type)
def handler(event_data):
nonlocal EVENT_DATA_IN_HANDLER
EVENT_DATA_IN_HANDLER = event_data
client.post(slack_event_path, data=data, headers=headers)
# validate
assert EVENT_DATA_IN_HANDLER == json_data
@freeze_time("2013-08-14")
def test_once_async(
self, app, signing_secret, slack_event_path, reaction_event_fixture
):
# setup
client = TestClient(app)
timestamp = str(int(time.time()))
json_data = reaction_event_fixture
event_type = json_data["event"]["type"]
data = json.dumps(json_data)
signature = create_signature(signing_secret, timestamp, data)
headers = {
"X-Slack-Request-Timestamp": timestamp,
"X-Slack-Signature": signature,
}
EVENT_DATA_IN_HANDLER = None
# run
@app.once(event_type)
async def handler(event_data):
nonlocal EVENT_DATA_IN_HANDLER
EVENT_DATA_IN_HANDLER = event_data
client.post(slack_event_path, data=data, headers=headers)
# validate
assert EVENT_DATA_IN_HANDLER == json_data
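# The tests above lean on a `create_signature` helper; a likely implementation
# follows Slack's documented v0 signing scheme: HMAC-SHA256 over
# "v0:{timestamp}:{body}" keyed by the signing secret, prefixed with "v0=".
# This is a hedged sketch of that scheme, not necessarily the helper's exact
# code in .helpers.helpers.
import hashlib
import hmac


def create_signature(signing_secret, timestamp, body):
    """Compute a Slack-style v0 request signature for the given body."""
    basestring = "v0:{}:{}".format(timestamp, body).encode("utf-8")
    digest = hmac.new(signing_secret.encode("utf-8"), basestring, hashlib.sha256)
    return "v0=" + digest.hexdigest()


sig = create_signature("secret", "1234567890", "{}")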
# File: runner_master/runner/data/imglists/__init__.py (repo: bigvideoresearch/SCC, license: MIT)
from .list_imglist import *
# File: models/inception.py (repo: arp95/pytorch_image_classifier, license: MIT)
# header files
import torch
import torch.nn as nn
import torchvision
import numpy as np
# define InceptionNet network (remember input size: (224 x 224 x 3))
class InceptionNet(torch.nn.Module):

    def __init__(self, num_classes=2):
        super(InceptionNet, self).__init__()
        self.features = torch.nn.Sequential(
            # first block
            torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            torch.nn.BatchNorm2d(64),
            torch.nn.ReLU(inplace=True),
            torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            # second block
            torch.nn.Conv2d(64, 64, kernel_size=1),
            torch.nn.BatchNorm2d(64),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(64, 192, kernel_size=3),
            torch.nn.BatchNorm2d(192),
            torch.nn.ReLU(inplace=True),
            torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )

        # Each inception block runs four parallel branches whose outputs are
        # concatenated along the channel dimension in forward():
        #   branch 1: 1x1 conv
        #   branch 2: 1x1 reduce, then 3x3 conv
        #   branch 3: 1x1 reduce, then 5x5 conv
        #   branch 4: 3x3 max-pool, then 1x1 projection
        # (name, in, branch1, b2_reduce, b2_out, b3_reduce, b3_out, pool_proj)
        inception_config = [
            ('3a', 192, 64, 96, 128, 16, 32, 32),
            ('3b', 256, 128, 128, 192, 32, 96, 64),
            ('4a', 480, 192, 96, 208, 16, 48, 64),
            ('4b', 512, 160, 112, 224, 24, 64, 64),
            ('4c', 512, 128, 128, 256, 24, 64, 64),
            ('4d', 512, 112, 144, 288, 32, 64, 64),
            ('4e', 528, 256, 160, 320, 32, 128, 128),
            ('5a', 832, 256, 160, 320, 32, 128, 128),
            ('5b', 832, 384, 192, 384, 48, 128, 128),
        ]
        for name, c_in, b1, b2r, b2, b3r, b3, proj in inception_config:
            setattr(self, 'inception_{}_1'.format(name), torch.nn.Sequential(
                torch.nn.Conv2d(c_in, b1, kernel_size=1),
                torch.nn.BatchNorm2d(b1),
                torch.nn.ReLU(inplace=True)
            ))
            setattr(self, 'inception_{}_2'.format(name), torch.nn.Sequential(
                torch.nn.Conv2d(c_in, b2r, kernel_size=1),
                torch.nn.BatchNorm2d(b2r),
                torch.nn.ReLU(inplace=True),
                torch.nn.Conv2d(b2r, b2, kernel_size=3, padding=1),
                torch.nn.BatchNorm2d(b2),
                torch.nn.ReLU(inplace=True)
            ))
            setattr(self, 'inception_{}_3'.format(name), torch.nn.Sequential(
                torch.nn.Conv2d(c_in, b3r, kernel_size=1),
                torch.nn.BatchNorm2d(b3r),
                torch.nn.ReLU(inplace=True),
                torch.nn.Conv2d(b3r, b3, kernel_size=5, padding=2),
                torch.nn.BatchNorm2d(b3),
                torch.nn.ReLU(inplace=True)
            ))
            setattr(self, 'inception_{}_4'.format(name), torch.nn.Sequential(
                torch.nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                torch.nn.Conv2d(c_in, proj, kernel_size=1),
                torch.nn.BatchNorm2d(proj),
                torch.nn.ReLU(inplace=True)
            ))

        self.avgpool = torch.nn.AdaptiveAvgPool2d(7)
        self.classifier = torch.nn.Sequential(
            torch.nn.Dropout(0.4),
            torch.nn.Linear(1024 * 7 * 7, num_classes)
        )
        self.max_pooling = torch.nn.Sequential(
            torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )

    def _inception_block(self, name, x):
        # run the four branches of one inception block and concatenate their
        # outputs along the channel dimension
        branches = [
            getattr(self, 'inception_{}_{}'.format(name, index))(x)
            for index in (1, 2, 3, 4)
        ]
        return torch.cat(branches, 1)

    def forward(self, x):
        x = self.features(x)
        for name in ('3a', '3b'):
            x = self._inception_block(name, x)
        x = self.max_pooling(x)
        for name in ('4a', '4b', '4c', '4d', '4e'):
            x = self._inception_block(name, x)
        x = self.max_pooling(x)
        for name in ('5a', '5b'):
            x = self._inception_block(name, x)
        x = self.avgpool(x)
        x = x.view(x.shape[0], -1)
        x = self.classifier(x)
        return x
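As a quick sanity check on the architecture above: the concatenated output of each inception block (the sum of its four branch channel counts) must equal the input channel count of the following block, and the last block must produce the 1024 channels consumed by the classifier's `Linear(1024 * 7 * 7, num_classes)`. A stdlib-only sketch, with channel figures copied from the layer definitions in `__init__`:

```python
# name -> (in_channels, branch1_out, branch2_out, branch3_out, branch4_out)
BLOCKS = [
    ("3a", 192, 64, 128, 32, 32),
    ("3b", 256, 128, 192, 96, 64),
    ("4a", 480, 192, 208, 48, 64),
    ("4b", 512, 160, 224, 64, 64),
    ("4c", 512, 128, 256, 64, 64),
    ("4d", 512, 112, 288, 64, 64),
    ("4e", 528, 256, 320, 128, 128),
    ("5a", 832, 256, 320, 128, 128),
    ("5b", 832, 384, 384, 128, 128),
]


def out_channels(block):
    # concatenating the four branches adds their channel counts
    _name, _c_in, b1, b2, b3, b4 = block
    return b1 + b2 + b3 + b4


# every block feeds the next one (max-pooling does not change channel count)
for current, following in zip(BLOCKS, BLOCKS[1:]):
    assert out_channels(current) == following[1], (current[0], following[0])

# the final block matches the 1024 channels expected by the classifier
assert out_channels(BLOCKS[-1]) == 1024
```

This kind of table-driven check catches off-by-one channel mistakes without needing PyTorch installed.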
# File: graphql_compiler/tests/test_input_data.py (repo: gurer-kensho/graphql-compiler, license: Apache-2.0)
# Copyright 2017-present Kensho Technologies, LLC.
"""Common GraphQL test inputs and expected outputs."""
from collections import namedtuple

from graphql import GraphQLID, GraphQLInt, GraphQLList, GraphQLString

from ..compiler.compiler_frontend import OutputMetadata
from ..schema import GraphQLDate, GraphQLDateTime, GraphQLDecimal


CommonTestData = namedtuple(
    'CommonTestData',
    (
        'graphql_input',
        'expected_output_metadata',
        'expected_input_metadata',
        'type_equivalence_hints',
    ))
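Each helper function below returns one of these records. As a minimal, self-contained illustration of the shape of the record (the field values here are toy stand-ins, not real compiler metadata objects):

```python
from collections import namedtuple

CommonTestData = namedtuple(
    'CommonTestData',
    (
        'graphql_input',
        'expected_output_metadata',
        'expected_input_metadata',
        'type_equivalence_hints',
    ))

# toy record; a real test carries OutputMetadata and GraphQL type objects
record = CommonTestData(
    graphql_input='{ Animal { name @output(out_name: "animal_name") } }',
    expected_output_metadata={'animal_name': 'String'},
    expected_input_metadata={},
    type_equivalence_hints=None)

assert record.expected_input_metadata == {}
assert 'animal_name' in record.expected_output_metadata
```

Because it is a namedtuple, test code can unpack the four fields positionally or access them by name.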
def immediate_output():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def immediate_output_custom_scalars():
    graphql_input = '''{
        Animal {
            birthday @output(out_name: "birthday")
            net_worth @output(out_name: "net_worth")
        }
    }'''
    expected_output_metadata = {
        'birthday': OutputMetadata(type=GraphQLDate, optional=False),
        'net_worth': OutputMetadata(type=GraphQLDecimal, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def immediate_output_with_custom_scalar_filter():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            net_worth @filter(op_name: ">=", value: ["$min_worth"])
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'min_worth': GraphQLDecimal,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def multiple_filters():
    graphql_input = '''{
        Animal {
            name @filter(op_name: ">=", value: ["$lower_bound"])
                 @filter(op_name: "<", value: ["$upper_bound"])
                 @output(out_name: "animal_name")
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'lower_bound': GraphQLString,
        'upper_bound': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def traverse_and_output():
    graphql_input = '''{
        Animal {
            out_Animal_ParentOf {
                name @output(out_name: "parent_name")
            }
        }
    }'''
    expected_output_metadata = {
        'parent_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def optional_traverse_after_mandatory_traverse():
    graphql_input = '''{
        Animal {
            out_Animal_OfSpecies {
                name @output(out_name: "species_name")
            }
            out_Animal_ParentOf @optional {
                name @output(out_name: "child_name")
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=True),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def traverse_filter_and_output():
    graphql_input = '''{
        Animal {
            out_Animal_ParentOf @filter(op_name: "name_or_alias", value: ["$wanted"]) {
                name @output(out_name: "parent_name")
            }
        }
    }'''
    expected_output_metadata = {
        'parent_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def name_or_alias_filter_on_interface_type():
    graphql_input = '''{
        Animal {
            out_Entity_Related @filter(op_name: "name_or_alias", value: ["$wanted"]) {
                name @output(out_name: "related_entity")
            }
        }
    }'''
    expected_output_metadata = {
        'related_entity': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def output_source_and_complex_output():
    graphql_input = '''{
        Animal {
            name @filter(op_name: "=", value: ["$wanted"]) @output(out_name: "animal_name")
            out_Animal_ParentOf @output_source {
                name @output(out_name: "parent_name")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'parent_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def filter_on_optional_variable_equality():
    # The operand in the @filter directive originates from an optional block.
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf {
                out_Animal_FedAt @optional {
                    name @tag(tag_name: "child_fed_at_event")
                }
            }
            out_Animal_FedAt @output_source {
                name @filter(op_name: "=", value: ["%child_fed_at_event"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_on_optional_variable_name_or_alias():
    # The operand in the @filter directive originates from an optional block.
    graphql_input = '''{
        Animal {
            in_Animal_ParentOf @optional {
                name @tag(tag_name: "parent_name")
            }
            out_Animal_ParentOf @filter(op_name: "name_or_alias", value: ["%parent_name"])
                                @output_source {
                name @output(out_name: "animal_name")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_in_optional_block():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf @optional {
                name @filter(op_name: "=", value: ["$name"])
                     @output(out_name: "parent_name")
                uuid @output(out_name: "uuid")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'parent_name': OutputMetadata(type=GraphQLString, optional=True),
        'uuid': OutputMetadata(type=GraphQLID, optional=True),
    }
    expected_input_metadata = {
        'name': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def between_filter_on_simple_scalar():
    # The "between" filter emits different output depending on what the compared types are.
    # This test checks for correct code generation when the type is a simple scalar (a String).
    graphql_input = '''{
        Animal {
            name @filter(op_name: "between", value: ["$lower", "$upper"])
                 @output(out_name: "name")
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'lower': GraphQLString,
        'upper': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def between_filter_on_date():
    # The "between" filter emits different output depending on what the compared types are.
    # This test checks for correct code generation when the type is a custom scalar (Date).
    graphql_input = '''{
        Animal {
            birthday @filter(op_name: "between", value: ["$lower", "$upper"])
                     @output(out_name: "birthday")
        }
    }'''
    expected_output_metadata = {
        'birthday': OutputMetadata(type=GraphQLDate, optional=False),
    }
    expected_input_metadata = {
        'lower': GraphQLDate,
        'upper': GraphQLDate,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def between_filter_on_datetime():
    # The "between" filter emits different output depending on what the compared types are.
    # This test checks for correct code generation when the type is a custom scalar (DateTime).
    graphql_input = '''{
        Event {
            event_date @filter(op_name: "between", value: ["$lower", "$upper"])
                       @output(out_name: "event_date")
        }
    }'''
    expected_output_metadata = {
        'event_date': OutputMetadata(type=GraphQLDateTime, optional=False),
    }
    expected_input_metadata = {
        'lower': GraphQLDateTime,
        'upper': GraphQLDateTime,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def between_lowering_on_simple_scalar():
    # A pair of "<=" and ">=" filters on the same field is lowered into a single
    # BETWEEN clause. This test checks for correct code generation when the type
    # is a simple scalar (a String).
    graphql_input = '''{
        Animal {
            name @filter(op_name: "<=", value: ["$upper"])
                 @filter(op_name: ">=", value: ["$lower"])
                 @output(out_name: "name")
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'lower': GraphQLString,
        'upper': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def between_lowering_with_extra_filters():
    graphql_input = '''{
        Animal {
            name @filter(op_name: "<=", value: ["$upper"])
                 @filter(op_name: "has_substring", value: ["$substring"])
                 @filter(op_name: "in_collection", value: ["$fauna"])
                 @filter(op_name: ">=", value: ["$lower"])
                 @output(out_name: "name")
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'lower': GraphQLString,
        'upper': GraphQLString,
        'substring': GraphQLString,
        'fauna': GraphQLList(GraphQLString),
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def no_between_lowering_on_simple_scalar():
    # The following filters do not get lowered to a BETWEEN clause: the parameter
    # values are not provided at compile time, so the compiler has no way to tell
    # which of the two lower bounds is tighter.
    graphql_input = '''{
        Animal {
            name @filter(op_name: "<=", value: ["$upper"])
                 @filter(op_name: ">=", value: ["$lower0"])
                 @filter(op_name: ">=", value: ["$lower1"])
                 @output(out_name: "name")
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'lower0': GraphQLString,
        'lower1': GraphQLString,
        'upper': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def complex_optional_variables():
    # The operands in the @filter directives originate from an optional block.
    graphql_input = '''{
        Animal {
            out_Animal_ParentOf {
                out_Animal_FedAt @optional {
                    name @tag(tag_name: "child_fed_at_event")
                    event_date @tag(tag_name: "child_fed_at")
                               @output(out_name: "child_fed_at")
                }
                in_Animal_ParentOf {
                    out_Animal_FedAt @optional {
                        event_date @tag(tag_name: "other_parent_fed_at")
                                   @output(out_name: "other_parent_fed_at")
                    }
                }
            }
            in_Animal_ParentOf {
                out_Animal_FedAt {
                    name @filter(op_name: "=", value: ["%child_fed_at_event"])
                    event_date @output(out_name: "grandparent_fed_at")
                               @filter(op_name: "between",
                                       value: ["%other_parent_fed_at", "%child_fed_at"])
                }
            }
        }
    }'''
    expected_output_metadata = {
        'child_fed_at': OutputMetadata(type=GraphQLDateTime, optional=True),
        'other_parent_fed_at': OutputMetadata(type=GraphQLDateTime, optional=True),
        'grandparent_fed_at': OutputMetadata(type=GraphQLDateTime, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def complex_optional_variables_with_starting_filter():
    # The operands in the @filter directives originate from an optional block.
    graphql_input = '''{
        Animal {
            name @filter(op_name: "=", value: ["$animal_name"])
            out_Animal_ParentOf {
                out_Animal_FedAt @optional {
                    name @tag(tag_name: "child_fed_at_event")
                    event_date @tag(tag_name: "child_fed_at")
                               @output(out_name: "child_fed_at")
                }
                in_Animal_ParentOf {
                    out_Animal_FedAt @optional {
                        event_date @tag(tag_name: "other_parent_fed_at")
                                   @output(out_name: "other_parent_fed_at")
                    }
                }
            }
            in_Animal_ParentOf {
                out_Animal_FedAt {
                    name @filter(op_name: "=", value: ["%child_fed_at_event"])
                    event_date @output(out_name: "grandparent_fed_at")
                               @filter(op_name: "between",
                                       value: ["%other_parent_fed_at", "%child_fed_at"])
                }
            }
        }
    }'''
    expected_output_metadata = {
        'child_fed_at': OutputMetadata(type=GraphQLDateTime, optional=True),
        'other_parent_fed_at': OutputMetadata(type=GraphQLDateTime, optional=True),
        'grandparent_fed_at': OutputMetadata(type=GraphQLDateTime, optional=False),
    }
    expected_input_metadata = {
        'animal_name': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def simple_fragment():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Entity_Related {
                ... on Animal {
                    name @output(out_name: "related_animal_name")
                    out_Animal_OfSpecies {
                        name @output(out_name: "related_animal_species")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'related_animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'related_animal_species': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def simple_union():
    graphql_input = '''{
        Species {
            name @output(out_name: "species_name")
            out_Species_Eats {
                ... on Food {
                    name @output(out_name: "food_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'food_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_then_apply_fragment():
    graphql_input = '''{
        Species {
            name @filter(op_name: "in_collection", value: ["$species"])
                 @output(out_name: "species_name")
            out_Species_Eats {
                ... on Food {
                    name @output(out_name: "food_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'food_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'species': GraphQLList(GraphQLString),
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_then_apply_fragment_with_multiple_traverses():
    graphql_input = '''{
        Species {
            name @filter(op_name: "in_collection", value: ["$species"])
                 @output(out_name: "species_name")
            out_Species_Eats {
                ... on Food {
                    name @output(out_name: "food_name")
                    out_Entity_Related {
                        name @output(out_name: "entity_related_to_food")
                    }
                    in_Entity_Related {
                        name @output(out_name: "food_related_to_entity")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'food_name': OutputMetadata(type=GraphQLString, optional=False),
        'entity_related_to_food': OutputMetadata(type=GraphQLString, optional=False),
        'food_related_to_entity': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'species': GraphQLList(GraphQLString),
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def filter_on_fragment_in_union():
    graphql_input = '''{
        Species {
            name @output(out_name: "species_name")
            out_Species_Eats {
                ... on Food @filter(op_name: "name_or_alias", value: ["$wanted"]) {
                    name @output(out_name: "food_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'food_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def optional_on_union():
    graphql_input = '''{
        Species {
            name @output(out_name: "species_name")
            out_Species_Eats @optional {
                ... on Food {
                    name @output(out_name: "food_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'food_name': OutputMetadata(type=GraphQLString, optional=True),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def typename_output():
    graphql_input = '''{
        Animal {
            __typename @output(out_name: "base_cls")
            out_Animal_OfSpecies {
                __typename @output(out_name: "child_cls")
            }
        }
    }'''
    expected_output_metadata = {
        'base_cls': OutputMetadata(type=GraphQLString, optional=False),
        'child_cls': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def typename_filter():
    graphql_input = '''{
        Entity {
            __typename @filter(op_name: "=", value: ["$base_cls"])
            name @output(out_name: "entity_name")
        }
    }'''
    expected_output_metadata = {
        'entity_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'base_cls': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def simple_recurse():
    graphql_input = '''{
        Animal {
            out_Animal_ParentOf @recurse(depth: 1) {
                name @output(out_name: "relation_name")
            }
        }
    }'''
    expected_output_metadata = {
        'relation_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def traverse_then_recurse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ImportantEvent {
                ... on Event {
                    name @output(out_name: "important_event")
                }
            }
            out_Animal_ParentOf @recurse(depth: 2) {
                name @output(out_name: "ancestor_name")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'important_event': OutputMetadata(type=GraphQLString, optional=False),
        'ancestor_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_then_traverse_and_recurse():
    graphql_input = '''{
        Animal @filter(op_name: "name_or_alias", value: ["$animal_name_or_alias"]) {
            name @output(out_name: "animal_name")
            out_Animal_ImportantEvent {
                ... on Event {
                    name @output(out_name: "important_event")
                }
            }
            out_Animal_ParentOf @recurse(depth: 2) {
                name @output(out_name: "ancestor_name")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'important_event': OutputMetadata(type=GraphQLString, optional=False),
        'ancestor_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'animal_name_or_alias': GraphQLString
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def two_consecutive_recurses():
    graphql_input = '''{
        Animal @filter(op_name: "name_or_alias", value: ["$animal_name_or_alias"]) {
            name @output(out_name: "animal_name")
            out_Animal_ImportantEvent {
                ... on Event {
                    name @output(out_name: "important_event")
                }
            }
            out_Animal_ParentOf @recurse(depth: 2) {
                name @output(out_name: "ancestor_name")
            }
            in_Animal_ParentOf @recurse(depth: 2) {
                name @output(out_name: "descendent_name")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'important_event': OutputMetadata(type=GraphQLString, optional=False),
        'ancestor_name': OutputMetadata(type=GraphQLString, optional=False),
        'descendent_name': OutputMetadata(type=GraphQLString, optional=False)
    }
    expected_input_metadata = {
        'animal_name_or_alias': GraphQLString
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def recurse_within_fragment():
    graphql_input = '''{
        Food {
            name @output(out_name: "food_name")
            in_Entity_Related {
                ... on Animal {
                    name @output(out_name: "animal_name")
                    out_Animal_ParentOf @recurse(depth: 3) {
                        name @output(out_name: "relation_name")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'food_name': OutputMetadata(type=GraphQLString, optional=False),
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'relation_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_within_recurse():
    graphql_input = '''{
        Animal {
            out_Animal_ParentOf @recurse(depth: 3) {
                name @output(out_name: "relation_name")
                color @filter(op_name: "=", value: ["$wanted"])
            }
        }
    }'''
    expected_output_metadata = {
        'relation_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def recurse_with_immediate_type_coercion():
    graphql_input = '''{
        Animal {
            in_Entity_Related @recurse(depth: 4) {
                ... on Animal {
                    name @output(out_name: "name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def recurse_with_immediate_type_coercion_and_filter():
    graphql_input = '''{
        Animal {
            in_Entity_Related @recurse(depth: 4) {
                ... on Animal {
                    name @output(out_name: "name")
                    color @filter(op_name: "=", value: ["$color"])
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'color': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def in_collection_op_filter_with_variable():
    graphql_input = '''{
        Animal {
            name @filter(op_name: "in_collection", value: ["$wanted"])
                 @output(out_name: "animal_name")
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLList(GraphQLString)
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def in_collection_op_filter_with_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            alias @tag(tag_name: "aliases")
            out_Animal_ParentOf {
                name @filter(op_name: "in_collection", value: ["%aliases"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def in_collection_op_filter_with_optional_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @optional {
                alias @tag(tag_name: "parent_aliases")
            }
            out_Animal_ParentOf {
                name @filter(op_name: "in_collection", value: ["%parent_aliases"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def intersects_op_filter_with_variable():
    graphql_input = '''{
        Animal {
            alias @filter(op_name: "intersects", value: ["$wanted"])
            name @output(out_name: "animal_name")
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLList(GraphQLString)
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def intersects_op_filter_with_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            alias @tag(tag_name: "aliases")
            out_Animal_ParentOf {
                alias @filter(op_name: "intersects", value: ["%aliases"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def intersects_op_filter_with_optional_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @optional {
                alias @tag(tag_name: "parent_aliases")
            }
            out_Animal_ParentOf {
                alias @filter(op_name: "intersects", value: ["%parent_aliases"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def contains_op_filter_with_variable():
    graphql_input = '''{
        Animal {
            alias @filter(op_name: "contains", value: ["$wanted"])
            name @output(out_name: "animal_name")
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def contains_op_filter_with_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name") @tag(tag_name: "name")
            in_Animal_ParentOf {
                alias @filter(op_name: "contains", value: ["%name"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def contains_op_filter_with_optional_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @optional {
                name @tag(tag_name: "parent_name")
            }
            out_Animal_ParentOf {
                alias @filter(op_name: "contains", value: ["%parent_name"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def has_substring_op_filter():
    graphql_input = '''{
        Animal {
            name @filter(op_name: "has_substring", value: ["$wanted"])
                 @output(out_name: "animal_name")
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def has_substring_op_filter_with_optional_tag():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @optional {
                name @tag(tag_name: "parent_name")
            }
            out_Animal_ParentOf {
                name @filter(op_name: "has_substring", value: ["%parent_name"])
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def has_edge_degree_op_filter():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @filter(op_name: "has_edge_degree", value: ["$child_count"])
                               @output_source {
                name @output(out_name: "child_name")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'child_count': GraphQLInt,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def has_edge_degree_op_filter_with_optional():
    graphql_input = '''{
        Species {
            name @output(out_name: "species_name")
            in_Animal_OfSpecies {
                name @output(out_name: "parent_name")
                in_Animal_ParentOf @filter(op_name: "has_edge_degree", value: ["$child_count"])
                                   @optional {
                    name @output(out_name: "child_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'parent_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=True),
    }
    expected_input_metadata = {
        'child_count': GraphQLInt,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def has_edge_degree_op_filter_with_fold():
    graphql_input = '''{
        Species {
            name @output(out_name: "species_name")
            in_Animal_OfSpecies {
                name @output(out_name: "parent_name")
                in_Animal_ParentOf @filter(op_name: "has_edge_degree", value: ["$child_count"])
                                   @fold {
                    name @output(out_name: "child_names")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'species_name': OutputMetadata(type=GraphQLString, optional=False),
        'parent_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_names': OutputMetadata(type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {
        'child_count': GraphQLInt,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def fold_on_output_variable():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf @fold {
                name @output(out_name: "child_names_list")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_names_list': OutputMetadata(type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def fold_after_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf {
                out_Animal_ParentOf @fold {
                    name @output(out_name: "sibling_and_self_names_list")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'sibling_and_self_names_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def fold_and_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @fold {
                out_Animal_ParentOf {
                    name @output(out_name: "sibling_and_self_names_list")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'sibling_and_self_names_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def fold_and_deep_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @fold {
                out_Animal_ParentOf {
                    out_Animal_OfSpecies {
                        name @output(out_name: "sibling_and_self_species_list")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'sibling_and_self_species_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def traverse_and_fold_and_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf {
                out_Animal_ParentOf @fold {
                    out_Animal_OfSpecies {
                        name @output(out_name: "sibling_and_self_species_list")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'sibling_and_self_species_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def multiple_outputs_in_same_fold():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf @fold {
                name @output(out_name: "child_names_list")
                uuid @output(out_name: "child_uuids_list")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_names_list': OutputMetadata(type=GraphQLList(GraphQLString), optional=False),
        'child_uuids_list': OutputMetadata(type=GraphQLList(GraphQLID), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def multiple_outputs_in_same_fold_and_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @fold {
                out_Animal_ParentOf {
                    name @output(out_name: "sibling_and_self_names_list")
                    uuid @output(out_name: "sibling_and_self_uuids_list")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'sibling_and_self_names_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
        'sibling_and_self_uuids_list': OutputMetadata(
            type=GraphQLList(GraphQLID), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def multiple_folds():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf @fold {
                name @output(out_name: "child_names_list")
                uuid @output(out_name: "child_uuids_list")
            }
            in_Animal_ParentOf @fold {
                name @output(out_name: "parent_names_list")
                uuid @output(out_name: "parent_uuids_list")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_names_list': OutputMetadata(type=GraphQLList(GraphQLString), optional=False),
        'child_uuids_list': OutputMetadata(type=GraphQLList(GraphQLID), optional=False),
        'parent_names_list': OutputMetadata(type=GraphQLList(GraphQLString), optional=False),
        'parent_uuids_list': OutputMetadata(type=GraphQLList(GraphQLID), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def multiple_folds_and_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf @fold {
                in_Animal_ParentOf {
                    name @output(out_name: "spouse_and_self_names_list")
                    uuid @output(out_name: "spouse_and_self_uuids_list")
                }
            }
            in_Animal_ParentOf @fold {
                out_Animal_ParentOf {
                    name @output(out_name: "sibling_and_self_names_list")
                    uuid @output(out_name: "sibling_and_self_uuids_list")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'spouse_and_self_names_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
        'spouse_and_self_uuids_list': OutputMetadata(
            type=GraphQLList(GraphQLID), optional=False),
        'sibling_and_self_names_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
        'sibling_and_self_uuids_list': OutputMetadata(
            type=GraphQLList(GraphQLID), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def fold_date_and_datetime_fields():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ParentOf @fold {
                birthday @output(out_name: "child_birthdays_list")
            }
            out_Animal_FedAt @fold {
                event_date @output(out_name: "fed_at_datetimes_list")
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_birthdays_list': OutputMetadata(type=GraphQLList(GraphQLDate), optional=False),
        'fed_at_datetimes_list': OutputMetadata(
            type=GraphQLList(GraphQLDateTime), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def coercion_to_union_base_type_inside_fold():
    # Given type_equivalence_hints = { Event: EventOrBirthEvent },
    # the coercion should be optimized away as a no-op.
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Animal_ImportantEvent @fold {
                ... on Event {
                    name @output(out_name: "important_events")
                }
            }
        }
    }'''
    type_equivalence_hints = {
        'Event': 'EventOrBirthEvent'
    }
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'important_events': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=type_equivalence_hints)


def no_op_coercion_inside_fold():
    # The type where the coercion is applied is already Entity, so the coercion is a no-op.
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            out_Entity_Related @fold {
                ... on Entity {
                    name @output(out_name: "related_entities")
                }
            }
        }
    }'''
    type_equivalence_hints = {
        'Event': 'EventOrBirthEvent'
    }
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'related_entities': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=type_equivalence_hints)


def no_op_coercion_with_eligible_subpath():
    # This test case has a no-op coercion and a preferred location inside an
    # eligible location. The no-op must be optimized away, or it will cause
    # problems when hiding the eligible non-preferred location.
    graphql_input = '''{
        Animal {
            out_Animal_ParentOf {
                ... on Animal {
                    out_Animal_ParentOf {
                        name @output(out_name: "animal_name")
                    }
                    out_Entity_Related {
                        ... on Entity {
                            name @filter(op_name: "in_collection", value: ["$entity_names"])
                        }
                    }
                }
            }
        }
    }'''
    type_equivalence_hints = {}
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
    }
    expected_input_metadata = {
        'entity_names': GraphQLList(GraphQLString)
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=type_equivalence_hints)
def filter_within_fold_scope():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            out_Animal_ParentOf @fold {
                name @filter(op_name: "=", value: ["$desired"]) @output(out_name: "child_list")
                description @output(out_name: "child_descriptions")
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'child_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
        'child_descriptions': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {
        'desired': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def filter_on_fold_scope():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            out_Animal_ParentOf @fold
                                @filter(op_name: "name_or_alias", value: ["$desired"]) {
                name @output(out_name: "child_list")
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'child_list': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {
        'desired': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def coercion_on_interface_within_fold_scope():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            out_Entity_Related @fold {
                ... on Animal {
                    name @output(out_name: "related_animals")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'related_animals': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def coercion_on_interface_within_fold_traversal():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @fold {
                out_Entity_Related {
                    ... on Animal {
                        out_Animal_OfSpecies {
                            name @output(out_name: "related_animal_species")
                        }
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'related_animal_species': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def coercion_on_union_within_fold_scope():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            out_Animal_ImportantEvent @fold {
                ... on BirthEvent {
                    name @output(out_name: "birth_events")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'birth_events': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def coercion_filters_and_multiple_outputs_within_fold_scope():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            out_Entity_Related @fold {
                ... on Animal {
                    name @filter(op_name: "has_substring", value: ["$substring"])
                         @output(out_name: "related_animals")
                    birthday @filter(op_name: "<=", value: ["$latest"])
                             @output(out_name: "related_birthdays")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'related_animals': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
        'related_birthdays': OutputMetadata(
            type=GraphQLList(GraphQLDate), optional=False),
    }
    expected_input_metadata = {
        'substring': GraphQLString,
        'latest': GraphQLDate,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def coercion_filters_and_multiple_outputs_within_fold_traversal():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            in_Animal_ParentOf @fold {
                out_Entity_Related {
                    ... on Animal {
                        name @filter(op_name: "has_substring", value: ["$substring"])
                             @output(out_name: "related_animals")
                        birthday @filter(op_name: "<=", value: ["$latest"])
                                 @output(out_name: "related_birthdays")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'related_animals': OutputMetadata(
            type=GraphQLList(GraphQLString), optional=False),
        'related_birthdays': OutputMetadata(
            type=GraphQLList(GraphQLDate), optional=False),
    }
    expected_input_metadata = {
        'substring': GraphQLString,
        'latest': GraphQLDate,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def optional_and_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
            in_Animal_ParentOf @optional {
                name @output(out_name: "child_name")
                in_Animal_ParentOf {
                    name @output(out_name: "grandchild_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=True),
        'grandchild_name': OutputMetadata(type=GraphQLString, optional=True),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def optional_and_traverse_after_filter():
    graphql_input = '''{
        Animal {
            name @output(out_name: "name")
                 @filter(op_name: "has_substring", value: ["$wanted"])
            in_Animal_ParentOf @optional {
                name @output(out_name: "child_name")
                in_Animal_ParentOf {
                    name @output(out_name: "grandchild_name")
                }
            }
        }
    }'''
    expected_output_metadata = {
        'name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=True),
        'grandchild_name': OutputMetadata(type=GraphQLString, optional=True),
    }
    expected_input_metadata = {
        'wanted': GraphQLString,
    }

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def optional_and_deep_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf @optional {
                name @output(out_name: "child_name")
                out_Animal_ParentOf {
                    name @output(out_name: "spouse_and_self_name")
                    out_Animal_OfSpecies {
                        name @output(out_name: "spouse_species")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=True),
        'spouse_and_self_name': OutputMetadata(type=GraphQLString, optional=True),
        'spouse_species': OutputMetadata(type=GraphQLString, optional=True),
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)


def traverse_and_optional_and_traverse():
    graphql_input = '''{
        Animal {
            name @output(out_name: "animal_name")
            in_Animal_ParentOf {
                name @output(out_name: "child_name")
                out_Animal_ParentOf @optional {
                    name @output(out_name: "spouse_and_self_name")
                    out_Animal_OfSpecies {
                        name @output(out_name: "spouse_and_self_species")
                    }
                }
            }
        }
    }'''
    expected_output_metadata = {
        'animal_name': OutputMetadata(type=GraphQLString, optional=False),
        'child_name': OutputMetadata(type=GraphQLString, optional=False),
        'spouse_and_self_name': OutputMetadata(type=GraphQLString, optional=True),
        'spouse_and_self_species': OutputMetadata(type=GraphQLString, optional=True)
    }
    expected_input_metadata = {}

    return CommonTestData(
        graphql_input=graphql_input,
        expected_output_metadata=expected_output_metadata,
        expected_input_metadata=expected_input_metadata,
        type_equivalence_hints=None)
def multiple_optional_traversals_with_starting_filter():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
@filter(op_name: "has_substring", value: ["$wanted"])
in_Animal_ParentOf @optional {
name @output(out_name: "child_name")
out_Animal_ParentOf {
name @output(out_name: "spouse_and_self_name")
}
}
out_Animal_ParentOf @optional {
name @output(out_name: "parent_name")
out_Animal_OfSpecies {
name @output(out_name: "parent_species")
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'child_name': OutputMetadata(type=GraphQLString, optional=True),
'spouse_and_self_name': OutputMetadata(type=GraphQLString, optional=True),
'parent_name': OutputMetadata(type=GraphQLString, optional=True),
'parent_species': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {
'wanted': GraphQLString,
}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
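The `$wanted` operand above is a runtime parameter, which is why it appears in `expected_input_metadata`. A rough sketch (not the compiler's actual parser, which walks the parsed AST rather than the raw text) of how such `$`-prefixed names can be collected from a query:

```python
import re

def runtime_parameters(graphql_query):
    # Collect the names of all $-prefixed filter operands in the query
    # text. Simplified illustration only.
    return set(re.findall(r'\$(\w+)', graphql_query))

query = '''{
    Animal {
        name @output(out_name: "animal_name")
             @filter(op_name: "has_substring", value: ["$wanted"])
    }
}'''
```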
def optional_traversal_and_optional_without_traversal():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
@filter(op_name: "has_substring", value: ["$wanted"])
in_Animal_ParentOf @optional {
name @output(out_name: "child_name")
}
out_Animal_ParentOf @optional {
name @output(out_name: "parent_name")
out_Animal_OfSpecies {
name @output(out_name: "parent_species")
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'child_name': OutputMetadata(type=GraphQLString, optional=True),
'parent_name': OutputMetadata(type=GraphQLString, optional=True),
'parent_species': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {
'wanted': GraphQLString,
}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def coercion_on_interface_within_optional_traversal():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
in_Animal_ParentOf @optional {
out_Entity_Related {
... on Animal {
out_Animal_OfSpecies {
name @output(out_name: "related_animal_species")
}
}
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'related_animal_species': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def filter_on_optional_traversal_equality():
# The operand in the @filter directive originates from an optional block.
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
out_Animal_ParentOf {
out_Animal_ParentOf @optional {
out_Animal_FedAt {
name @tag(tag_name: "grandparent_fed_at_event")
}
}
}
out_Animal_FedAt @output_source {
name @filter(op_name: "=", value: ["%grandparent_fed_at_event"])
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
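In contrast with `$`-prefixed runtime parameters, `%`-prefixed operands like `%grandparent_fed_at_event` refer to values captured with `@tag` inside the same query, so they never show up in `expected_input_metadata`. A hedged sketch of separating the two operand kinds from the query text (again, the real compiler resolves tags against the AST):

```python
import re

def classify_operands(graphql_query):
    # Split quoted filter operands into runtime parameters ($...) and
    # tag references (%...). Text-based simplification for illustration.
    params = set(re.findall(r'"\$(\w+)"', graphql_query))
    tags = set(re.findall(r'"%(\w+)"', graphql_query))
    return params, tags

query = '{ Animal { name @filter(op_name: "=", value: ["%grandparent_fed_at_event"]) } }'
```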
def filter_on_optional_traversal_name_or_alias():
# The operand in the @filter directive originates from an optional block.
graphql_input = '''{
Animal {
in_Animal_ParentOf @optional {
in_Animal_ParentOf {
name @tag(tag_name: "grandchild_name")
}
}
out_Animal_ParentOf @filter(op_name: "name_or_alias", value: ["%grandchild_name"])
@output_source {
name @output(out_name: "parent_name")
}
}
}'''
expected_output_metadata = {
'parent_name': OutputMetadata(type=GraphQLString, optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def complex_optional_traversal_variables():
# The operands in the @filter directives originate from an optional block.
graphql_input = '''{
Animal {
name @filter(op_name: "=", value: ["$animal_name"])
out_Animal_ParentOf {
out_Animal_FedAt @optional {
name @tag(tag_name: "parent_fed_at_event")
event_date @tag(tag_name: "parent_fed_at")
@output(out_name: "parent_fed_at")
}
in_Animal_ParentOf @optional {
out_Animal_FedAt {
event_date @tag(tag_name: "other_child_fed_at")
@output(out_name: "other_child_fed_at")
}
}
}
in_Animal_ParentOf {
out_Animal_FedAt {
name @filter(op_name: "=", value: ["%parent_fed_at_event"])
event_date @output(out_name: "grandchild_fed_at")
@filter(op_name: "between",
value: ["%other_child_fed_at", "%parent_fed_at"])
}
}
}
}'''
expected_output_metadata = {
'parent_fed_at': OutputMetadata(type=GraphQLDateTime, optional=True),
'other_child_fed_at': OutputMetadata(type=GraphQLDateTime, optional=True),
'grandchild_fed_at': OutputMetadata(type=GraphQLDateTime, optional=False),
}
expected_input_metadata = {
'animal_name': GraphQLString,
}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def simple_optional_recurse():
graphql_input = '''{
Animal {
name @output(out_name: "name")
in_Animal_ParentOf @optional {
name @output(out_name: "child_name")
out_Animal_ParentOf @recurse(depth: 3) {
name @output(out_name: "self_and_ancestor_name")
}
}
}
}'''
expected_output_metadata = {
'name': OutputMetadata(type=GraphQLString, optional=False),
'child_name': OutputMetadata(type=GraphQLString, optional=True),
'self_and_ancestor_name': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
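`@recurse(depth: 3)` expands the `out_Animal_ParentOf` edge up to three hops and includes the starting vertex itself, which is why the output is named `self_and_ancestor_name`. A toy illustration of that depth-bounded expansion over an assumed child-to-parent mapping (names are made up for the example):

```python
def recurse_names(start, parent_of, depth):
    # Return the start vertex plus everything reachable within `depth`
    # hops along the parent edge -- the "self and ancestors" set.
    names, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {parent_of[v] for v in frontier if v in parent_of}
        names |= frontier
    return names

# Hypothetical four-generation chain: each animal maps to its parent.
parents = {'puppy': 'dog', 'dog': 'hound', 'hound': 'elder'}
```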
def multiple_traverse_within_optional():
graphql_input = '''{
Animal {
name @output(out_name: "name")
in_Animal_ParentOf @optional {
name @output(out_name: "child_name")
in_Animal_ParentOf {
name @output(out_name: "grandchild_name")
}
out_Animal_FedAt {
name @output(out_name: "child_feeding_time")
}
}
}
}'''
expected_output_metadata = {
'name': OutputMetadata(type=GraphQLString, optional=False),
'child_name': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_name': OutputMetadata(type=GraphQLString, optional=True),
'child_feeding_time': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def optional_and_fold():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
in_Animal_ParentOf @optional {
name @output(out_name: "parent_name")
}
out_Animal_ParentOf @fold {
name @output(out_name: "child_names_list")
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'parent_name': OutputMetadata(type=GraphQLString, optional=True),
'child_names_list': OutputMetadata(
type=GraphQLList(GraphQLString), optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def fold_and_optional():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
out_Animal_ParentOf @fold {
name @output(out_name: "child_names_list")
}
in_Animal_ParentOf @optional {
name @output(out_name: "parent_name")
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'parent_name': OutputMetadata(type=GraphQLString, optional=True),
'child_names_list': OutputMetadata(
type=GraphQLList(GraphQLString), optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def optional_traversal_and_fold_traversal():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
in_Animal_ParentOf @optional {
in_Animal_ParentOf {
name @output(out_name: "grandparent_name")
}
}
out_Animal_ParentOf @fold {
out_Animal_ParentOf {
name @output(out_name: "grandchild_names_list")
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'grandparent_name': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_names_list': OutputMetadata(
type=GraphQLList(GraphQLString), optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def fold_traversal_and_optional_traversal():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
out_Animal_ParentOf @fold {
out_Animal_ParentOf {
name @output(out_name: "grandchild_names_list")
}
}
in_Animal_ParentOf @optional {
in_Animal_ParentOf {
name @output(out_name: "grandparent_name")
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'grandparent_name': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_names_list': OutputMetadata(
type=GraphQLList(GraphQLString), optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def between_lowering():
graphql_input = '''{
Animal {
uuid @filter(op_name: "between", value: ["$uuid_lower", "$uuid_upper"])
name @output(out_name: "animal_name")
birthday @filter(op_name: ">=", value: ["$earliest_modified_date"])
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
}
expected_input_metadata = {
'uuid_lower': GraphQLID,
'uuid_upper': GraphQLID,
'earliest_modified_date': GraphQLDate,
}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
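The test name suggests the `between` filter is expected to be lowered into a pair of inclusive range comparisons by a later compilation stage. One plausible sketch of that rewrite, assuming a simple `(field, op, operand)` triple representation (not the compiler's real IR):

```python
def lower_between(field, operands):
    # Rewrite `field between [lo, hi]` into two inclusive comparisons.
    lo, hi = operands
    return [(field, '>=', lo), (field, '<=', hi)]
```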
def coercion_and_filter_with_tag():
graphql_input = '''{
Animal {
name @output(out_name: "origin") @tag(tag_name: "related")
out_Entity_Related {
... on Animal {
name @filter(op_name: "has_substring", value: ["%related"])
@output(out_name: "related_name")
}
}
}
}'''
expected_output_metadata = {
'origin': OutputMetadata(type=GraphQLString, optional=False),
'related_name': OutputMetadata(type=GraphQLString, optional=False),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def nested_optional_and_traverse():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
in_Animal_ParentOf @optional {
name @output(out_name: "child_name")
out_Animal_ParentOf @optional {
name @output(out_name: "spouse_and_self_name")
out_Animal_OfSpecies {
name @output(out_name: "spouse_species")
}
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'child_name': OutputMetadata(type=GraphQLString, optional=True),
'spouse_and_self_name': OutputMetadata(type=GraphQLString, optional=True),
'spouse_species': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def complex_nested_optionals():
graphql_input = '''{
Animal {
name @output(out_name: "animal_name")
in_Animal_ParentOf @optional {
name @output(out_name: "child_name")
in_Animal_ParentOf @optional {
name @output(out_name: "grandchild_name")
out_Animal_OfSpecies {
name @output(out_name: "grandchild_species")
}
}
in_Entity_Related @optional {
... on Animal {
name @output(out_name: "grandchild_relation_name")
out_Animal_OfSpecies {
name @output(out_name: "grandchild_relation_species")
}
}
}
}
out_Animal_ParentOf @optional {
name @output(out_name: "parent_name")
out_Animal_ParentOf @optional {
name @output(out_name: "grandparent_name")
out_Animal_OfSpecies {
name @output(out_name: "grandparent_species")
}
}
}
}
}'''
expected_output_metadata = {
'animal_name': OutputMetadata(type=GraphQLString, optional=False),
'child_name': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_name': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_species': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_relation_name': OutputMetadata(type=GraphQLString, optional=True),
'grandchild_relation_species': OutputMetadata(type=GraphQLString, optional=True),
'parent_name': OutputMetadata(type=GraphQLString, optional=True),
'grandparent_name': OutputMetadata(type=GraphQLString, optional=True),
'grandparent_species': OutputMetadata(type=GraphQLString, optional=True),
}
expected_input_metadata = {}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=None)
def recursive_field_type_is_subtype_of_parent_field():
"""Ensure that recursion is allowed along an edge linked to a supertype of the parent field."""
graphql_input = '''{
BirthEvent {
out_Event_RelatedEvent @recurse(depth:2) {
... on Event {
name @output(out_name: "related_event_name")
}
}
}
}'''
expected_output_metadata = {
'related_event_name': OutputMetadata(type=GraphQLString, optional=False),
}
expected_input_metadata = {}
type_equivalence_hints = {
'Event': 'EventOrBirthEvent',
}
return CommonTestData(
graphql_input=graphql_input,
expected_output_metadata=expected_output_metadata,
expected_input_metadata=expected_input_metadata,
type_equivalence_hints=type_equivalence_hints)
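Each helper above returns a self-describing test case, so a consuming test can cross-check the query text against the declared metadata. A hedged sketch of such a consistency check, using a stand-in for `CommonTestData`:

```python
from collections import namedtuple

# Stand-in for the real CommonTestData container.
CommonTestData = namedtuple(
    'CommonTestData',
    ('graphql_input', 'expected_output_metadata',
     'expected_input_metadata', 'type_equivalence_hints'))

def outputs_declared_in_query(test_data):
    # Every expected output name should literally appear as an
    # @output(out_name: "...") argument in the query text.
    return all(('"%s"' % name) in test_data.graphql_input
               for name in test_data.expected_output_metadata)

case = CommonTestData(
    graphql_input='{ Animal { name @output(out_name: "animal_name") } }',
    expected_output_metadata={'animal_name': None},
    expected_input_metadata={},
    type_equivalence_hints=None)
```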
| 34.051703 | 99 | 0.617486 | 7,903 | 84,959 | 6.230798 | 0.031254 | 0.066772 | 0.122416 | 0.11149 | 0.937472 | 0.927501 | 0.908412 | 0.893323 | 0.875858 | 0.856972 | 0 | 0.000316 | 0.291906 | 84,959 | 2,494 | 100 | 34.065357 | 0.818215 | 0.023282 | 0 | 0.690684 | 0 | 0 | 0.41562 | 0.026668 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041762 | false | 0 | 0.007802 | 0 | 0.091326 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7268a55bf53deb3d19141c4bce374dd6edbf6288 | 34 | py | Python | pywren_ibm_cloud/storage/backends/swift/__init__.py | gerardparis/pywren-ibm-cloud | ca69bed54f5bd5157bcda961b86dbfcfecf3c54a | [
"Apache-2.0"
] | 158 | 2020-09-16T13:22:03.000Z | 2022-03-28T20:01:31.000Z | pywren_ibm_cloud/storage/backends/swift/__init__.py | gerardparis/pywren-ibm-cloud | ca69bed54f5bd5157bcda961b86dbfcfecf3c54a | [
"Apache-2.0"
] | 256 | 2018-05-20T13:01:51.000Z | 2020-09-16T09:09:54.000Z | pywren_ibm_cloud/storage/backends/swift/__init__.py | gerardparis/pywren-ibm-cloud | ca69bed54f5bd5157bcda961b86dbfcfecf3c54a | [
"Apache-2.0"
] | 48 | 2020-09-19T15:29:53.000Z | 2022-03-23T17:08:24.000Z | from .swift import StorageBackend
| 17 | 33 | 0.852941 | 4 | 34 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
729d4cb1c21a110df79e9f2d123023fc91afc06b | 857 | py | Python | users/forms.py | sandeepagrawal8875/DjangoMultipleUser | 81cfc4f679b29df777a0a36db524c9defac01a0b | [
"MIT"
] | null | null | null | users/forms.py | sandeepagrawal8875/DjangoMultipleUser | 81cfc4f679b29df777a0a36db524c9defac01a0b | [
"MIT"
] | null | null | null | users/forms.py | sandeepagrawal8875/DjangoMultipleUser | 81cfc4f679b29df777a0a36db524c9defac01a0b | [
"MIT"
] | null | null | null | from django import forms
from django.contrib.auth.forms import UserCreationForm
from .models import User, Profile
class TeacherSignUpForm(UserCreationForm):
class Meta(UserCreationForm.Meta):
model = User
def save(self, commit=True):
user = super().save(commit=False)
user.is_teacher = True
if commit:
user.save()
profile = Profile.objects.create(user=user)
profile.save()
return user
class StudentSignUpForm(UserCreationForm):
class Meta(UserCreationForm.Meta):
model = User
def save(self, commit=True):
user = super().save(commit=False)
user.is_student = True
if commit:
user.save()
profile = Profile.objects.create(user=user)
profile.save()
        return user
| 27.645161 | 55 | 0.604434 | 90 | 857 | 5.733333 | 0.322222 | 0.063953 | 0.096899 | 0.158915 | 0.70155 | 0.70155 | 0.70155 | 0.70155 | 0.70155 | 0.70155 | 0 | 0 | 0.308051 | 857 | 31 | 56 | 27.645161 | 0.870152 | 0 | 0 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.12 | 0 | 0.44 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
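Both sign-up forms follow the same `commit=False` pattern: set the role flag on the unsaved user, then persist it and attach a fresh `Profile` in one step. A framework-free sketch of that flow with stub classes (the real code relies on Django's ORM, so the names below are illustrative only):

```python
class StubUser:
    # Minimal stand-in for the Django User model.
    def __init__(self):
        self.is_teacher = False
        self.saved = False

    def save(self):
        self.saved = True

class StubProfile:
    # Minimal stand-in for Profile.objects.create(...).
    created = []

    def __init__(self, user):
        self.user = user
        StubProfile.created.append(self)

def sign_up_teacher(user, commit=True):
    # Mirror of TeacherSignUpForm.save(): flag the role first, then
    # persist the user and create its profile.
    user.is_teacher = True
    if commit:
        user.save()
        StubProfile(user)
    return user

user = sign_up_teacher(StubUser())
```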
72baf9b4cae39be0dde51eeef3632cdc901743ca | 7,412 | py | Python | utils/scripts/OOOlevelGen/src/levels/level_2_3.py | fullscreennl/bullettime | 8967449cdf926aaed6bb7ec217d92e0689fb0c3c | [
"MIT"
] | null | null | null | utils/scripts/OOOlevelGen/src/levels/level_2_3.py | fullscreennl/bullettime | 8967449cdf926aaed6bb7ec217d92e0689fb0c3c | [
"MIT"
] | null | null | null | utils/scripts/OOOlevelGen/src/levels/level_2_3.py | fullscreennl/bullettime | 8967449cdf926aaed6bb7ec217d92e0689fb0c3c | [
"MIT"
] | null | null | null | import LevelBuilder
from sprites import *
def render(name,bg):
lb = LevelBuilder.LevelBuilder(name+".plist",background=bg)
lb.addObject(Hero.HeroSprite(x=16, y=16,width=32,height=32))
#lb.addObject(Beam.BeamSprite(x=164, y=19,width=127,height=14,angle='30',restitution=0.2,static='true',friction=0.5,density=20 ).setName('Beam'))
#lb.addObject(Enemy.EnemySprite(x=408, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=608, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=880, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=1128, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=1508, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=1700, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=1856, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Teleporter.TeleporterSprite(x=2769, y=120, level_id='leveldata/level_2'))
#lb.addObject(Enemy.EnemySprite(x=2032, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=2221, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=2413, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=2605, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
#lb.addObject(Enemy.EnemySprite(x=2761, y=28,width=32,height=32,angle='0',restitution=0.2,static='false',friction=0.5,density=5 ))
lb.addObject(Crate.CrateSprite(x=660,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Crate.CrateSprite(x=700,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Crate.CrateSprite(x=1660,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Crate.CrateSprite(x=1700,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=1500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=1500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=1500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(Pickup.PickupSprite(x=1500,y=100,width=32, height=32, static='false',angle=45))
lb.addObject(BulletTimePickup.BulletTimePickupSprite(x=500,y=200,width=32, height=32, static='false',angle=0))
#lb.addObject(BulletTimePickup.BulletTimePickupSprite(x=800,y=200,width=32, height=32, static='false',angle=0))
#lb.addObject(BulletTimePickup.BulletTimePickupSprite(x=1500,y=200,width=32, height=32, static='false',angle=0))
#lb.addObject(Enemy.EnemySprite(x=500, y=200,width=128,height=128,angle='0',restitution=0.2,static='false',friction=0.5,density=5,classname='BlobSprite',firstframe='monsterblob.png' ))
lb.addObject(Enemy.EnemySprite(x=1500, y=200,width=128,height=128,angle='0',shape='rect',restitution=0.2,static='false',friction=0.5,density=5,classname='SquareBlobSprite',firstframe='square_monsterblob.png' ))
lb.addObject(Enemy.EnemySprite(x=2000, y=200,width=60,height=80,angle='0',shape='rect',restitution=0.2,static='false',friction=0.5,density=20,classname='SquareBlobSprite',firstframe='square_monsterblob.png' ))
lb.addObject(Enemy.EnemySprite(x=2200, y=200,width=100,height=80,angle='0',shape='rect',restitution=0.2,static='false',friction=0.5,density=20,classname='SquareBlobSprite',firstframe='square_monsterblob.png' ))
#lb.addObject(Enemy.EnemySprite(x=2000, y=200,width=70,height=80,angle='0',shape='rect',restitution=0.2,static='false',friction=0.5,density=20,classname='SquareBlobSprite',firstframe='square_monsterblob.png' ))
"""
#lb.addObject(Enemy.EnemySprite(x=1000, y=200,width=140,height=140,angle='0',restitution=0.2,static='false',friction=0.5,density=3,classname='PumpkinBomberSprite',firstframe='pumpkin_bomber.png' ).setName('BomberSprite'))
lb.addObject(PumpkinBomber.PumpkinBomberSprite(x=1000, y=200))
lb.addObject(Enemy.EnemySprite(x=800, y=200,width=50,height=50,angle='0',restitution=0.2,static='false',friction=0.5,density=5,classname='BlobSprite',firstframe='monsterblob.png' ))
#lb.addObject(Beam.BeamSprite(x=1344, y=118,width=546,height=14,angle='34',restitution=0.2,static='true',friction=0.5,density=20 ).setName('Beam'))
lb.addObject(Enemy.EnemySprite(x=0, y=0,width=32,height=32,angle='0',restitution=0.7,static='false',friction=0.5,density=1,spawnEvent='onPumpkinBomberShoot', classname='PumpkinSprite', firstframe='pumpkin.png' ))
lb.addObject(Enemy.EnemySprite(x=1800, y=40,width=80,height=80,angle='0',restitution=0.7,static='false',friction=0.5,density=1, classname='PumpkinSprite', firstframe='pumpkin.png'))
lb.addObject(Enemy.EnemySprite(x=1800, y=80,width=32,height=32,angle='0',restitution=0.7,static='false',friction=0.5,density=1, classname='PumpkinSprite', firstframe='pumpkin.png'))
lb.addObject(Enemy.EnemySprite(x=1800, y=120,width=32,height=32,angle='0',restitution=0.7,static='false',friction=0.5,density=1, classname='PumpkinSprite', firstframe='pumpkin.png'))
lb.addObject(Contacts.Contact(body1='EnemyBullet',body2='Hero',event_name='onDamage'))
lb.addObject(Contacts.Contact(body1='EnemyBullet',body2='lbullet',event_name='onBulletHit'))
lb.addObject(Contacts.Contact(body1='EnemyBullet',body2='rbullet',event_name='onBulletHit'))
lb.addObject(Contacts.Contact(body1='beamPumpkinBomber',body2='lbullet',event_name='onBulletHit'))
lb.addObject(Contacts.Contact(body1='beamPumpkinBomber',body2='rbullet',event_name='onBulletHit'))
lb.addObject(Teleporter.TeleporterSprite(x=2834, y=155, level_id='leveldata/level_2'))
"""
lb.addObject(ZoomTrigger.ZoomTriggerSprite(x=300-115-50,y=250,width=100,height=500,zoom_fact=1.0))
lb.addObject(ZoomTrigger.ZoomTriggerSprite(x=300,y=320-60,width=128,height=100,zoom_fact=0.1666))
lb.addObject(ZoomTrigger.ZoomTriggerSprite(x=300+115+50,y=250,width=100,height=500,zoom_fact=1.0))
lb.addObject(WatchtowerVisual.WatchtowerVisualSprite(x=300, y=92,width=128,height=235-50,angle='0',restitution=0.2,static='true',friction=0.5,density=20,firstframe='watchtower.png' ))
lb.addObject(Bullet.BulletSprite(x=0, y=0,width=10,height=10,angle='0',restitution=0.5,static='false',friction=0.5,density=3,spawnEvent='onShoot' ))
lb.addObject(Teleporter.TeleporterSprite(level_id='leveldata/menu'))
    lb.render()
| 96.25974 | 225 | 0.740151 | 1,150 | 7,412 | 4.754783 | 0.124348 | 0.110644 | 0.073702 | 0.08504 | 0.855523 | 0.809071 | 0.803219 | 0.757315 | 0.749634 | 0.749634 | 0 | 0.105043 | 0.063681 | 7,412 | 77 | 226 | 96.25974 | 0.682853 | 0.32218 | 0 | 0.285714 | 0 | 0 | 0.087203 | 0.022051 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
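The `render` function above is a straight builder pipeline: construct a `LevelBuilder`, queue sprite objects with `addObject`, then write the plist. A hypothetical minimal builder capturing that shape (class and key names are stand-ins, not the project's actual API):

```python
class MiniLevelBuilder:
    # Hypothetical stand-in for LevelBuilder: collect objects, then render.
    def __init__(self, name, background):
        self.name, self.background = name, background
        self.objects = []

    def addObject(self, obj):
        self.objects.append(obj)
        return obj

    def render(self):
        # The real builder serializes to a .plist file; here we just
        # return the accumulated level description as a dict.
        return {'name': self.name, 'background': self.background,
                'objects': list(self.objects)}

lb = MiniLevelBuilder('level_2_3.plist', background='bg')
lb.addObject({'type': 'Hero', 'x': 16, 'y': 16})
lb.addObject({'type': 'Crate', 'x': 660, 'y': 100})
level = lb.render()
```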
72d2e77ed402d990f5ede8aa84871d9d1e1ba1cd | 12,640 | py | Python | MobilityPatterns/IntersectTest.py | jadoona81/RL_Experiments | a9dd705f90930da47390c0317b6c45f09cd1c847 | [
"Apache-2.0"
] | null | null | null | MobilityPatterns/IntersectTest.py | jadoona81/RL_Experiments | a9dd705f90930da47390c0317b6c45f09cd1c847 | [
"Apache-2.0"
] | null | null | null | MobilityPatterns/IntersectTest.py | jadoona81/RL_Experiments | a9dd705f90930da47390c0317b6c45f09cd1c847 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Feb 3 21:17:13 2021
@author: HG19230
"""
import math
def IntersectTest(xDim, yDim, Point_A, Point_B, LorC, direction, ListLine, ListCol):
    # Point_A: the current point
    # Point_B: the next point; direction is 1 for positive, 0 for negative
# result = 10000;
# min = 10000;
# max = -10000;
# dim1 = 50;
# dim2 = 50;
    result = 100000  # sentinel: no intersection found yet (was 10000 in the MATLAB original)
    minimum = 100000  # distance to the closest candidate intersection so far
# if LorC %on a line
    print("LorC " + str(LorC))
    print("direction " + str(direction))
if LorC==1: #################### on a line
#if direct %positive
# if Point_A(1) > Point_B(1) % by def Point_A(x) < Point_B(x)
# intersection = -1; % error
# return;
# end
# for i = 1 : length(ListCol)
# if (ListCol(i) - Point_A(1) < min) && (ListCol(i) > Point_A(1)) % look for intersection "after" Point_A(x) so ListCol(i) should be > Point_A(1)
# min = ListCol(i) - Point_A(1);
# result = ListCol(i); %X of next intersection
# end
# end
# if result < Point_B(1) && result ~= 10000
# intersection = [1, result, Point_A(2), 1];
# else
# if Point_B(1) > dim1
# Point_B(1) = dim1 - (Point_B(1) - dim1);
# Point_A(1) = dim1;
# for i = 1 : length(ListCol)
# if (Point_A(1) - ListCol(i) < min) && (Point_A(1) > ListCol(i))
# min = Point_A(1) - ListCol(i);
# result = ListCol(i); %X of next intersection
# end
# end
# if result > Point_B(1)
# intersection = [1, result, Point_A(2), 0];
# else
# intersection = [0, Point_B(1), Point_B(2), 0]; %no intersection
# end
# else
# intersection = [0, Point_B(1), Point_B(2), 1]; %no intersection
# end
# end
if direction==1: # positive
if Point_A[0] > Point_B[0]: # by def Point_A(x) < Point_B(x)
#intersection = -1 #error
return -1
for i in range(len(ListCol)):
if (ListCol[i] - Point_A[0] < minimum) and (ListCol[i] > Point_A[0]): #% look for intersection "after" Point_A(x) so ListCol(i) should be > Point_A(1)
                    minimum = ListCol[i] - Point_A[0]
result = ListCol[i]# %X of next intersection
            if result < Point_B[0] and result != 100000:
intersection = [1, result, Point_A[1], 1]
return intersection
else:
if Point_B[0] > xDim:
Point_B[0] = xDim - (Point_B[0] - xDim)
Point_A[0] = xDim
for i in range(len(ListCol)):
if (Point_A[0] - ListCol[i] < minimum) and (Point_A[0] > ListCol[i]):
minimum = Point_A[0] - ListCol[i]
result = ListCol[i]#X of next intersection
if result > Point_B[0]:
intersection = [1, result, Point_A[1], 0]
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 0]#no intersection
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 1]#no intersection
return intersection
# else %direct negative
# if Point_B(1) > Point_A(1) % by def Point_A(x) > Point_B(x)
# intersection = -1; % error
# return;
# end
# for i = 1 : length(ListCol)
# if (Point_A(1) - ListCol(i) < min) && (Point_A(1) > ListCol(i))
# min = Point_A(1) - ListCol(i);
# result = ListCol(i); %X of next intersection
# end
# end
# if result > Point_B(1) && result ~= 10000
# intersection = [1, result, Point_A(2), 0];
# else
# if Point_B(1) < 0
# Point_B(1) = abs(Point_B(1));
# Point_A(1) = 0;
# for i = 1 : length(ListCol)
# if (ListCol(i) - Point_A(1) < min) && (Point_A(1) < ListCol(i))
# min = ListCol(i) - Point_A(1);
# result = ListCol(i); %X of next intersection
# end
# end
# if result < Point_B(1) && result ~= 10000
# intersection = [1, result, Point_A(2), 1];
# else
# intersection = [0, Point_B(1), Point_B(2), 1]; %no intersection
# end
# else
# intersection = [0, Point_B(1), Point_B(2), 0]; %no intersection
# end
# end
# end
else: # negative
print('suspicious case')
print('Point B')
print(Point_B)
if Point_B[0] > Point_A[0]: # by def Point_A(x) > Point_B(x)
return -1
for i in range(len(ListCol)):
if (Point_A[0] - ListCol[i] < minimum) and (Point_A[0] > ListCol[i]):
minimum = Point_A[0] - ListCol[i]
result = ListCol[i]#X of next intersection
print('result: '+ str(result))
            if result > Point_B[0] and result != 100000:
print('subcase 1')
intersection = [1, result, Point_A[1], 0]
return intersection
else:
if Point_B[0] < 0:
Point_B[0] = abs(Point_B[0])
Point_A[0] = 0
for i in range(len(ListCol)):
if (ListCol[i] - Point_A[0] < minimum) and (Point_A[0] < ListCol[i]):
minimum = ListCol[i] - Point_A[0]
result = ListCol[i]#X of next intersection
                    if result < Point_B[0] and result != 100000:
intersection = [1, result, Point_A[1], 1]
print('subcase 2')
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 1]#no intersection
print('subcase 3')
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 0]#no intersection
print('subcase 4')
return intersection
else:#################### on a column
# else %on a column
# if direct %positive
# if Point_A(2) > Point_B(2) % by def Point_A(2) < Point_B(2)
# intersection = -1; % error
# return;
# end
# for i = 1 : length(ListLine)
# if (ListLine(i) - Point_A(2) < min) && (ListLine(i) > Point_A(2))
# min = ListLine(i) - Point_A(2);
# result = ListLine(i); %X of next intersection
# end
# end
# if result < Point_B(2) && result ~= 10000
# intersection = [1, Point_A(1), result, 1];
# else
# if Point_B(2) > dim2
# Point_B(2) = dim2 - (Point_B(2) - dim2);
# Point_A(2) = dim2;
# for i = 1 : length(ListLine)
# if (Point_A(2) - ListLine(i) < min) && (Point_A(2) > ListLine(i))
# min = Point_A(2) - ListLine(i);
# result = ListLine(i); %Y of next intersection
# end
# end
# if result > Point_B(2)
# intersection = [1, Point_A(1), result, 0];
# else
# intersection = [0, Point_B(1), Point_B(2), 0]; %no intersection
# end
# else
# intersection = [0, Point_B(1), Point_B(2), 1]; %no intersection
# end
# end
if direction==1:
if Point_A[1] > Point_B[1]:# by def Point_A(2) < Point_B(2):
return -1
for i in range(len(ListLine)):
if (ListLine[i] - Point_A[1] < minimum) and (ListLine[i] > Point_A[1]):
minimum = ListLine[i] - Point_A[1]
result = ListLine[i]#X of next intersection
            if result < Point_B[1] and result != 100000:
intersection = [1, Point_A[0], result, 1]
return intersection
else:
if Point_B[1] > yDim:
Point_B[1] = yDim - (Point_B[1] - yDim)
Point_A[1] = yDim
for i in range(len(ListLine)):
if (Point_A[1] - ListLine[i] < minimum) and (Point_A[1] > ListLine[i]):
minimum = Point_A[1] - ListLine[i]
result = ListLine[i]#Y of next intersection
if result > Point_B[1]:
intersection = [1, Point_A[0], result, 0]
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 0]#no intersection
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 1]#no intersection
return intersection
# else %direct negative
# if Point_B(2) > Point_A(2) % by def Point_A(y) > Point_B(y)
# intersection = -1; % error
# return;
# end
# for i = 1 : length(ListLine)
# if (Point_A(2) - ListLine(i) < min) && (Point_A(2) > ListLine(i))
# min = Point_A(2) - ListLine(i);
# result = ListLine(i); %Y of next intersection
# end
# end
# if result > Point_B(2) && result ~= 10000
# intersection = [1, Point_A(1), result, 0];
# else
# if Point_B(2) < 0
# Point_B(2) = abs(Point_B(2));
# Point_A(2) = 0;
# for i = 1 : length(ListLine)
# if (ListLine(i) - Point_A(2) < min) && (Point_A(2) < ListLine(i))
# min = ListLine(i) - Point_A(2);
# result = ListLine(i); %X of next intersection
# end
# end
# if result < Point_B(2) && result ~= 10000
# intersection = [1, Point_A(1), result, 1];
# else
# intersection = [0, Point_B(1), Point_B(2), 1]; %no intersection
# end
# else
# intersection = [0, Point_B(1), Point_B(2), 0]; %no intersection
# end
# end
# end
# end
else: #direction negative
if Point_B[1] > Point_A[1]: # by def Point_A(y) > Point_B(y)
return -1
for i in range(len(ListLine)):
if (Point_A[1] - ListLine[i] < minimum) and (Point_A[1] > ListLine[i]):
minimum = Point_A[1] - ListLine[i]
result = ListLine[i]#Y of next intersection
if result > Point_B[1] and not result == 10000:
intersection = [1, Point_A[0], result, 0]
return intersection
else:
if Point_B[1] < 0:
Point_B[1] = abs(Point_B[1])
Point_A[1] = 0
for i in range(len(ListLine)):
if (ListLine[i] - Point_A[1] < minimum) and (Point_A[1] < ListLine[i]):
minimum = ListLine[i] - Point_A[1]
result = ListLine[i]#X of next intersection
if result < Point_B[1] and not result == 10000:
intersection = [1, Point_A[0], result, 1]
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 1]#no intersection
return intersection
else:
intersection = [0, Point_B[0], Point_B[1], 0]#no intersection
return intersection
print("testing intersection"+ str(intersection))
return intersection | 41.854305 | 166 | 0.436155 | 1,438 | 12,640 | 3.706537 | 0.057719 | 0.10469 | 0.055159 | 0.027017 | 0.879925 | 0.874484 | 0.852908 | 0.829456 | 0.801313 | 0.754597 | 0 | 0.056101 | 0.444383 | 12,640 | 302 | 167 | 41.854305 | 0.702834 | 0.492484 | 0 | 0.650794 | 0 | 0 | 0.015968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007937 | false | 0 | 0.007937 | 0 | 0.18254 | 0.087302 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
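The four scans above all repeat the same search: among the grid lines in `ListLine`, find the one closest to the current point on the required side, keeping `10000` as a "not found" sentinel. A minimal standalone sketch of that search (the name `next_grid_line` is illustrative, not part of the original code):

```python
def next_grid_line(lines, start, sentinel=10000):
    """Return the value in `lines` closest to `start` on the strictly
    greater side, or `sentinel` when no such line exists."""
    best_gap = sentinel
    result = sentinel
    for v in lines:
        # keep the candidate with the smallest positive gap to `start`
        if v > start and v - start < best_gap:
            best_gap = v - start
            result = v
    return result

# The line at 5 is the nearest one strictly beyond 3; nothing lies beyond 9.
print(next_grid_line([2, 5, 9], 3))   # 5
print(next_grid_line([2, 5, 9], 9))   # 10000
```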
# 8.py (abphilip-codes/Codechef_DSA, MIT license)
# https://www.codechef.com/CCSTART2/problems/FINDMELI
n, k = map(int, input().split())
if k in map(int, input().split()): print("1")
else: print("-1")
# tmglow/nn/modules/flowAffine.py (zabaras/deep-turbulence, MIT license)
'''
=====
Distributed by: Notre Dame SCAI Lab (MIT License)
- Associated publication:
url: http://aimsciences.org//article/id/3a9f3d14-3421-4947-a45f-a9cc74edd097
doi: https://dx.doi.org/10.3934/fods.2020019
github: https://github.com/zabaras/deep-turbulence
=====
'''
import sys
sys.path.append(".")
from nn.modules.denseBlock import DenseBlock, NoNormDenseBlock
from nn.modules.convLSTM import ResidLSTMBlock
from nn.modules.flowUtils import Conv2dZeros
import torch
import torch.nn as nn
import torch.nn.functional as F
class AffineCouplingLayer(nn.Module):
"""Conditional invertable affine coupling layer.
See Fig. 7 of paper: https://arxiv.org/abs/2006.04731
:param in_features: Number of input feature channels
:type in_features: int
:param cond_features: Number of conditional feature channels
:type cond_features: int
:note: For more information see "NICE: Non-linear Independent
Components Estimation" by Dihn et al. https://arxiv.org/abs/1410.8516
:note: Compare with the Feistel cipher, which functions very similarly.
"""
def __init__(self, in_features, cond_features):
"""Constructor method
"""
super(AffineCouplingLayer, self).__init__()
# assert in_features % 2 == 0, '# input features must be evenly split,'\
# 'but got {} features'.format(in_features)
if in_features % 2 == 0:
in_channels = in_features // 2 + cond_features
out_channels = in_features
else:
# chunks will be (2, 1) if in_features == 3
in_channels = in_features // 2 + 1 + cond_features
out_channels = in_features - 1
# Initialize coupling network (Dense Block)
num_layers = 2
growth_rate = 1
self.coupling_nn = nn.Sequential()
self.coupling_nn.add_module('dense_block', NoNormDenseBlock(num_layers, in_channels,
growth_rate=growth_rate, drop_rate=0., bottleneck=False))
self.coupling_nn.add_module('relu1', nn.ReLU(inplace=True))
self.coupling_nn.add_module('zero_conv', Conv2dZeros(in_channels + growth_rate*num_layers, out_channels))
self.softsign = nn.Softsign()
def forward(self, x, cond):
"""Forward pass
:param x: [B, in_features, H, W] input feature tensor
:type x: torch.Tensor
:param cond: [B, cond_features, H, W] input feature tensor
:type cond: torch.Tensor
:returns:
- y: [B, in_features, H, W] Output feature tensor
- logdet: log-determinant of the affine layer
:rtype: (torch.Tensor, torch.Tensor)
"""
# Split in the channel dimension
# last chunk is smaller if not odd number of channels
x1, x2 = x.chunk(2, 1)
h = self.coupling_nn(torch.cat((x1, cond), 1))
shift = h[:, 0::2]
scale = (2*self.softsign(h[:, 1::2])).exp()
x2 = x2 + shift
x2 = x2 * scale
logdet = torch.abs(scale).log().view(x.shape[0], -1).sum(1)
return torch.cat((x1, x2), 1), logdet
def reverse(self, y, cond):
"""Backward pass
:param y: [B, in_features, H, W] input feature tensor
:type y: torch.Tensor
:param cond: [B, cond_features, H, W] input feature tensor
:type cond: torch.Tensor
:returns:
- x: [B, in_features, H, W] Output feature tensor
- logdet: log-determinant of the affine layer
:rtype: (torch.Tensor, torch.Tensor)
"""
# Split in the channel dimension
y1, y2 = y.chunk(2, 1)
h = self.coupling_nn(torch.cat((y1, cond), 1))
shift = h[:, 0::2]
scale = (2*self.softsign(h[:, 1::2])).exp()
y2 = y2 / scale
y2 = y2 - shift
logdet = torch.abs(scale).log().view(y.shape[0], -1).sum(1)
return torch.cat((y1, y2), 1), logdet
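The forward/reverse pair above is invertible by construction: forward applies shift-then-scale to one half of the channels, and reverse undoes the two steps in the opposite order. A pure-Python sketch of that round trip on a single value (the shift and scale numbers below are arbitrary examples, not outputs of the real coupling network):

```python
import math

def couple_forward(x2, shift, scale):
    # mirror of: x2 = x2 + shift; x2 = x2 * scale
    return (x2 + shift) * scale

def couple_reverse(y2, shift, scale):
    # mirror of: y2 = y2 / scale; y2 = y2 - shift
    return y2 / scale - shift

shift = 0.25
scale = math.exp(2 * 0.33)      # scale = exp(2 * softsign(h)) is always > 0
y2 = couple_forward(0.7, shift, scale)
x2 = couple_reverse(y2, shift, scale)
assert abs(x2 - 0.7) < 1e-12    # exact inverse up to float rounding
logdet = math.log(abs(scale))   # per-element log-determinant contribution
```

Because only an elementwise shift and scale are applied, the Jacobian is diagonal and its log-determinant is just the sum of `log|scale|` over elements, which is what the layer accumulates.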
class LSTMAffineCouplingLayer(nn.Module):
"""Conditional LSTM invertable affine coupling layer.
See Fig. 7 of paper: https://arxiv.org/abs/2006.04731
:param in_features: Number of input feature channels
:type in_features: int
:param cond_features: Number of conditional feature channels
:type cond_features: int
:param rec_features: Number of recurrent feature channels,
output from :class:`nn.modules.convLSTM.ResidLSTMBlock`
:type rec_features: int
:note: For more information see "NICE: Non-linear Independent
Components Estimation" by Dihn et al. https://arxiv.org/abs/1410.8516
:note: Compare with the Feistel cipher, which functions very similarly.
"""
def __init__(self, in_features, cond_features, rec_features):
"""Constructor method
"""
super(LSTMAffineCouplingLayer, self).__init__()
# assert in_features % 2 == 0, '# input features must be evenly split,'\
# 'but got {} features'.format(in_features)
if in_features % 2 == 0:
in_channels = in_features // 2 + cond_features
out_channels = in_features
else:
# chunks will be (2, 1) if in_features == 3
in_channels = in_features // 2 + 1 + cond_features
out_channels = in_features - 1
# Initialize coupling network (Dense Block)
num_layers = 2
growth_rate = 1
# LSTM block
self.resid_lstm = ResidLSTMBlock(in_channels, rec_features, in_channels, kernel_size=(3,3))
self.dense_nn = nn.Sequential()
self.dense_nn.add_module('dense_block', NoNormDenseBlock(num_layers, in_channels,
growth_rate=growth_rate, drop_rate=0., bottleneck=False))
self.dense_nn.add_module('relu1', nn.ReLU(inplace=True))
# Output convolution
num_feat = in_channels + growth_rate*num_layers
self.out_conv = nn.Sequential()
self.out_conv.add_module('zero_conv', Conv2dZeros(num_feat, out_channels))
self.softsign = nn.Softsign()
def forward(self, x, cond, rec_states=None):
"""Forward pass
:param x: [B, in_features, H, W] input feature tensor
:type x: torch.Tensor
:param cond: [B, cond_features, H, W] input feature tensor
:type cond: torch.Tensor
:param rec_states: tuple of LSTM states (hidden state, cell state), defaults to None
:type rec_states: tuple, optional
:returns:
- y: [B, in_features, H, W] Output feature tensor
- logdet: log-determinant of the affine layer
- states_out: tuple of LSTM (cell, hidden) states
:rtype: (torch.Tensor, torch.Tensor, tuple)
"""
# Split in the channel dimension
x1, x2 = x.chunk(2, 1)
if(rec_states is None):
out, h_next, c_next = self.resid_lstm(torch.cat((x1, cond), 1), None)
# Store lstm states for next time-step and flow conditions
states_out = (h_next, c_next)
else:
out, h_next, c_next = self.resid_lstm(torch.cat((x1, cond), 1), rec_states)
# Store lstm states for next time-step and flow conditions
states_out = (h_next, c_next)
out = self.dense_nn(out)
h = self.out_conv(out)
shift = h[:, 0::2]
scale = (2*self.softsign(h[:, 1::2])).exp()
x2 = x2 + shift
x2 = x2 * scale
logdet = torch.abs(scale).log().view(x.shape[0], -1).sum(1)
return torch.cat((x1, x2), 1), logdet, states_out
def reverse(self, y, cond, rec_states=None):
"""Backward pass
:param y: [B, in_features, H, W] input feature tensor
:type y: torch.Tensor
:param cond: [B, in_features, H, W] input feature tensor
:type cond: torch.Tensor
:param rec_states: tuple of LSTM states (hidden state, cell state), defaults to None
:type rec_states: tuple, optional
:returns:
- x: [B, in_features, H, W] Output feature tensor
- logdet: log-determinant of the affine layer
- states_out: tuple of LSTM (cell, hidden) states
:rtype: (torch.Tensor, torch.Tensor, tuple)
"""
# Split in the channel dimension
y1, y2 = y.chunk(2, 1)
if(rec_states is None):
out, h_next, c_next = self.resid_lstm(torch.cat((y1, cond), 1), None)
# Store lstm states for next time-step and flow conditions
states_out = (h_next, c_next)
else:
out, h_next, c_next = self.resid_lstm(torch.cat((y1, cond), 1), rec_states)
# Store lstm states for next time-step and flow conditions
states_out = (h_next, c_next)
out = self.dense_nn(out)
h = self.out_conv(out)
shift = h[:, 0::2]
scale = (2*self.softsign(h[:, 1::2])).exp()
y2 = y2 / scale
y2 = y2 - shift
logdet = torch.abs(scale).log().view(y.shape[0], -1).sum(1)
return torch.cat((y1, y2), 1), logdet, states_out | 38.055085 | 113 | 0.60962 | 1,207 | 8,981 | 4.400994 | 0.168186 | 0.058358 | 0.02259 | 0.020331 | 0.8157 | 0.788215 | 0.775414 | 0.775414 | 0.762989 | 0.752636 | 0 | 0.029733 | 0.277252 | 8,981 | 236 | 114 | 38.055085 | 0.78863 | 0.431466 | 0 | 0.589474 | 0 | 0 | 0.011056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063158 | false | 0 | 0.073684 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# Src/Version/version/version_1_0_0.py (23233/sproxy, MIT license)
# just an example
def run(mc, last_version, update_version, cur_version):
    print("nothing to do", last_version, update_version, cur_version, __file__)


if __name__ == '__main__':
    pass
# runtime/pika/spec.py (KevenLi/Flowy, Apache-2.0 license)
] | null | null | null | # ***** BEGIN LICENSE BLOCK *****
#
# For copyright and licensing please refer to COPYING.
#
# ***** END LICENSE BLOCK *****
# NOTE: Autogenerated code by codegen.py, do not edit
import struct
from pika import amqp_object
from pika import data
PROTOCOL_VERSION = (0, 9, 1)
PORT = 5672
ACCESS_REFUSED = 403
CHANNEL_ERROR = 504
COMMAND_INVALID = 503
CONNECTION_FORCED = 320
CONTENT_TOO_LARGE = 311
FRAME_BODY = 3
FRAME_END = 206
FRAME_END_SIZE = 1
FRAME_ERROR = 501
FRAME_HEADER = 2
FRAME_HEADER_SIZE = 7
FRAME_HEARTBEAT = 8
FRAME_MAX_SIZE = 131072
FRAME_METHOD = 1
FRAME_MIN_SIZE = 4096
INTERNAL_ERROR = 541
INVALID_PATH = 402
NOT_ALLOWED = 530
NOT_FOUND = 404
NOT_IMPLEMENTED = 540
NO_CONSUMERS = 313
NO_ROUTE = 312
PRECONDITION_FAILED = 406
REPLY_SUCCESS = 200
RESOURCE_ERROR = 506
RESOURCE_LOCKED = 405
SYNTAX_ERROR = 502
UNEXPECTED_FRAME = 505
class Connection(amqp_object.Class):
INDEX = 0x000A # 10
NAME = 'Connection'
class Start(amqp_object.Method):
INDEX = 0x000A000A # 10, 10; 655370
NAME = 'Connection.Start'
def __init__(self, version_major=0, version_minor=9, server_properties=None, mechanisms='PLAIN', locales='en_US'):
self.version_major = version_major
self.version_minor = version_minor
self.server_properties = server_properties
self.mechanisms = mechanisms
self.locales = locales
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.version_major = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.version_minor = struct.unpack_from('B', encoded, offset)[0]
offset += 1
(self.server_properties, offset) = data.decode_table(encoded, offset)
length = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.mechanisms = encoded[offset:offset + length].decode('utf8')
try:
self.mechanisms = str(self.mechanisms)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.locales = encoded[offset:offset + length].decode('utf8')
try:
self.locales = str(self.locales)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('B', self.version_major))
pieces.append(struct.pack('B', self.version_minor))
data.encode_table(pieces, self.server_properties)
assert isinstance(self.mechanisms, basestring),\
'A non-bytestring value was supplied for self.mechanisms'
value = self.mechanisms.encode('utf-8') if isinstance(self.mechanisms, unicode) else self.mechanisms
pieces.append(struct.pack('>I', len(value)))
pieces.append(value)
assert isinstance(self.locales, basestring),\
'A non-bytestring value was supplied for self.locales'
value = self.locales.encode('utf-8') if isinstance(self.locales, unicode) else self.locales
pieces.append(struct.pack('>I', len(value)))
pieces.append(value)
return pieces
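The decode/encode pair above follows AMQP's two string encodings: a long string is a 4-byte big-endian length (`'>I'`) followed by the bytes, and a short string (used elsewhere in this module) is the same with a 1-byte length (`'B'`). A round-trip sketch of the long-string form using only `struct` (helper names are illustrative, not pika API):

```python
import struct

def encode_longstr(text):
    # 4-byte big-endian length prefix, then the UTF-8 payload
    payload = text.encode('utf-8')
    return struct.pack('>I', len(payload)) + payload

def decode_longstr(encoded, offset=0):
    # read the prefix, then slice exactly that many bytes
    length = struct.unpack_from('>I', encoded, offset)[0]
    offset += 4
    return encoded[offset:offset + length].decode('utf-8'), offset + length

wire = encode_longstr('PLAIN AMQPLAIN')
value, end = decode_longstr(wire)
print(value, end)   # PLAIN AMQPLAIN 18
```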
class StartOk(amqp_object.Method):
INDEX = 0x000A000B # 10, 11; 655371
NAME = 'Connection.StartOk'
def __init__(self, client_properties=None, mechanism='PLAIN', response=None, locale='en_US'):
self.client_properties = client_properties
self.mechanism = mechanism
self.response = response
self.locale = locale
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
(self.client_properties, offset) = data.decode_table(encoded, offset)
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.mechanism = encoded[offset:offset + length].decode('utf8')
try:
self.mechanism = str(self.mechanism)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.response = encoded[offset:offset + length].decode('utf8')
try:
self.response = str(self.response)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.locale = encoded[offset:offset + length].decode('utf8')
try:
self.locale = str(self.locale)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
data.encode_table(pieces, self.client_properties)
assert isinstance(self.mechanism, basestring),\
'A non-bytestring value was supplied for self.mechanism'
value = self.mechanism.encode('utf-8') if isinstance(self.mechanism, unicode) else self.mechanism
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.response, basestring),\
'A non-bytestring value was supplied for self.response'
value = self.response.encode('utf-8') if isinstance(self.response, unicode) else self.response
pieces.append(struct.pack('>I', len(value)))
pieces.append(value)
assert isinstance(self.locale, basestring),\
'A non-bytestring value was supplied for self.locale'
value = self.locale.encode('utf-8') if isinstance(self.locale, unicode) else self.locale
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
return pieces
class Secure(amqp_object.Method):
INDEX = 0x000A0014 # 10, 20; 655380
NAME = 'Connection.Secure'
def __init__(self, challenge=None):
self.challenge = challenge
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
length = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.challenge = encoded[offset:offset + length].decode('utf8')
try:
self.challenge = str(self.challenge)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.challenge, basestring),\
'A non-bytestring value was supplied for self.challenge'
value = self.challenge.encode('utf-8') if isinstance(self.challenge, unicode) else self.challenge
pieces.append(struct.pack('>I', len(value)))
pieces.append(value)
return pieces
class SecureOk(amqp_object.Method):
INDEX = 0x000A0015 # 10, 21; 655381
NAME = 'Connection.SecureOk'
def __init__(self, response=None):
self.response = response
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
length = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.response = encoded[offset:offset + length].decode('utf8')
try:
self.response = str(self.response)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.response, basestring),\
'A non-bytestring value was supplied for self.response'
value = self.response.encode('utf-8') if isinstance(self.response, unicode) else self.response
pieces.append(struct.pack('>I', len(value)))
pieces.append(value)
return pieces
class Tune(amqp_object.Method):
INDEX = 0x000A001E # 10, 30; 655390
NAME = 'Connection.Tune'
def __init__(self, channel_max=0, frame_max=0, heartbeat=0):
self.channel_max = channel_max
self.frame_max = frame_max
self.heartbeat = heartbeat
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.channel_max = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
self.frame_max = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.heartbeat = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.channel_max))
pieces.append(struct.pack('>I', self.frame_max))
pieces.append(struct.pack('>H', self.heartbeat))
return pieces
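Tune's fixed-size fields land on the wire as big-endian integers: `'>H'` is an unsigned 16-bit short and `'>I'` an unsigned 32-bit long, so the three arguments occupy exactly 8 bytes. A quick check of that layout (the values are defaults defined in this module):

```python
import struct

# channel_max ('>H'), frame_max ('>I'), heartbeat ('>H'): 2 + 4 + 2 = 8 bytes
payload = struct.pack('>H', 0) + struct.pack('>I', 131072) + struct.pack('>H', 0)
print(len(payload))                    # 8
print(struct.unpack('>HIH', payload))  # (0, 131072, 0)
```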
class TuneOk(amqp_object.Method):
INDEX = 0x000A001F # 10, 31; 655391
NAME = 'Connection.TuneOk'
def __init__(self, channel_max=0, frame_max=0, heartbeat=0):
self.channel_max = channel_max
self.frame_max = frame_max
self.heartbeat = heartbeat
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
self.channel_max = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
self.frame_max = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.heartbeat = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.channel_max))
pieces.append(struct.pack('>I', self.frame_max))
pieces.append(struct.pack('>H', self.heartbeat))
return pieces
class Open(amqp_object.Method):
INDEX = 0x000A0028 # 10, 40; 655400
NAME = 'Connection.Open'
def __init__(self, virtual_host='/', capabilities='', insist=False):
self.virtual_host = virtual_host
self.capabilities = capabilities
self.insist = insist
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.virtual_host = encoded[offset:offset + length].decode('utf8')
try:
self.virtual_host = str(self.virtual_host)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.capabilities = encoded[offset:offset + length].decode('utf8')
try:
self.capabilities = str(self.capabilities)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.insist = (bit_buffer & (1 << 0)) != 0
return self
def encode(self):
pieces = list()
assert isinstance(self.virtual_host, basestring),\
'A non-bytestring value was supplied for self.virtual_host'
value = self.virtual_host.encode('utf-8') if isinstance(self.virtual_host, unicode) else self.virtual_host
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.capabilities, basestring),\
'A non-bytestring value was supplied for self.capabilities'
value = self.capabilities.encode('utf-8') if isinstance(self.capabilities, unicode) else self.capabilities
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.insist:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class OpenOk(amqp_object.Method):
INDEX = 0x000A0029 # 10, 41; 655401
NAME = 'Connection.OpenOk'
def __init__(self, known_hosts=''):
self.known_hosts = known_hosts
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.known_hosts = encoded[offset:offset + length].decode('utf8')
try:
self.known_hosts = str(self.known_hosts)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.known_hosts, basestring),\
'A non-bytestring value was supplied for self.known_hosts'
value = self.known_hosts.encode('utf-8') if isinstance(self.known_hosts, unicode) else self.known_hosts
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
return pieces
class Close(amqp_object.Method):
INDEX = 0x000A0032 # 10, 50; 655410
NAME = 'Connection.Close'
def __init__(self, reply_code=None, reply_text='', class_id=None, method_id=None):
self.reply_code = reply_code
self.reply_text = reply_text
self.class_id = class_id
self.method_id = method_id
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.reply_text = encoded[offset:offset + length].decode('utf8')
try:
self.reply_text = str(self.reply_text)
except UnicodeEncodeError:
pass
offset += length
self.class_id = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
self.method_id = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.reply_code))
assert isinstance(self.reply_text, basestring),\
'A non-bytestring value was supplied for self.reply_text'
value = self.reply_text.encode('utf-8') if isinstance(self.reply_text, unicode) else self.reply_text
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
pieces.append(struct.pack('>H', self.class_id))
pieces.append(struct.pack('>H', self.method_id))
return pieces
class CloseOk(amqp_object.Method):
INDEX = 0x000A0033 # 10, 51; 655411
NAME = 'Connection.CloseOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Channel(amqp_object.Class):
INDEX = 0x0014 # 20
NAME = 'Channel'
class Open(amqp_object.Method):
INDEX = 0x0014000A # 20, 10; 1310730
NAME = 'Channel.Open'
def __init__(self, out_of_band=''):
self.out_of_band = out_of_band
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.out_of_band = encoded[offset:offset + length].decode('utf8')
try:
self.out_of_band = str(self.out_of_band)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.out_of_band, basestring),\
'A non-bytestring value was supplied for self.out_of_band'
value = self.out_of_band.encode('utf-8') if isinstance(self.out_of_band, unicode) else self.out_of_band
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
return pieces
class OpenOk(amqp_object.Method):
INDEX = 0x0014000B # 20, 11; 1310731
NAME = 'Channel.OpenOk'
def __init__(self, channel_id=''):
self.channel_id = channel_id
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
length = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.channel_id = encoded[offset:offset + length].decode('utf8')
try:
self.channel_id = str(self.channel_id)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.channel_id, basestring),\
'A non-bytestring value was supplied for self.channel_id'
value = self.channel_id.encode('utf-8') if isinstance(self.channel_id, unicode) else self.channel_id
pieces.append(struct.pack('>I', len(value)))
pieces.append(value)
return pieces
class Flow(amqp_object.Method):
INDEX = 0x00140014 # 20, 20; 1310740
NAME = 'Channel.Flow'
def __init__(self, active=None):
self.active = active
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.active = (bit_buffer & (1 << 0)) != 0
return self
def encode(self):
pieces = list()
bit_buffer = 0
if self.active:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class FlowOk(amqp_object.Method):
INDEX = 0x00140015 # 20, 21; 1310741
NAME = 'Channel.FlowOk'
def __init__(self, active=None):
self.active = active
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.active = (bit_buffer & (1 << 0)) != 0
return self
def encode(self):
pieces = list()
bit_buffer = 0
if self.active:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class Close(amqp_object.Method):
INDEX = 0x00140028 # 20, 40; 1310760
NAME = 'Channel.Close'
def __init__(self, reply_code=None, reply_text='', class_id=None, method_id=None):
self.reply_code = reply_code
self.reply_text = reply_text
self.class_id = class_id
self.method_id = method_id
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.reply_text = encoded[offset:offset + length].decode('utf8')
try:
self.reply_text = str(self.reply_text)
except UnicodeEncodeError:
pass
offset += length
self.class_id = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
self.method_id = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.reply_code))
assert isinstance(self.reply_text, basestring),\
'A non-bytestring value was supplied for self.reply_text'
value = self.reply_text.encode('utf-8') if isinstance(self.reply_text, unicode) else self.reply_text
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
pieces.append(struct.pack('>H', self.class_id))
pieces.append(struct.pack('>H', self.method_id))
return pieces
class CloseOk(amqp_object.Method):
INDEX = 0x00140029 # 20, 41; 1310761
NAME = 'Channel.CloseOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Access(amqp_object.Class):
INDEX = 0x001E # 30
NAME = 'Access'
class Request(amqp_object.Method):
INDEX = 0x001E000A # 30, 10; 1966090
NAME = 'Access.Request'
def __init__(self, realm='/data', exclusive=False, passive=True, active=True, write=True, read=True):
self.realm = realm
self.exclusive = exclusive
self.passive = passive
self.active = active
self.write = write
self.read = read
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.realm = encoded[offset:offset + length].decode('utf8')
try:
self.realm = str(self.realm)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exclusive = (bit_buffer & (1 << 0)) != 0
self.passive = (bit_buffer & (1 << 1)) != 0
self.active = (bit_buffer & (1 << 2)) != 0
self.write = (bit_buffer & (1 << 3)) != 0
self.read = (bit_buffer & (1 << 4)) != 0
return self
def encode(self):
pieces = list()
assert isinstance(self.realm, basestring),\
'A non-bytestring value was supplied for self.realm'
value = self.realm.encode('utf-8') if isinstance(self.realm, unicode) else self.realm
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.exclusive:
bit_buffer = bit_buffer | (1 << 0)
if self.passive:
bit_buffer = bit_buffer | (1 << 1)
if self.active:
bit_buffer = bit_buffer | (1 << 2)
if self.write:
bit_buffer = bit_buffer | (1 << 3)
if self.read:
bit_buffer = bit_buffer | (1 << 4)
pieces.append(struct.pack('B', bit_buffer))
return pieces
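Boolean fields are packed bit-by-bit into a single octet in field order, exactly as Request.encode/decode above do for exclusive through read. A compact sketch of the same packing (helper names are illustrative, not pika API):

```python
def pack_bits(*flags):
    # each boolean occupies one bit, lowest bit first
    bit_buffer = 0
    for i, flag in enumerate(flags):
        if flag:
            bit_buffer |= 1 << i
    return bit_buffer

def unpack_bits(bit_buffer, count):
    # recover `count` booleans from the packed octet
    return [(bit_buffer & (1 << i)) != 0 for i in range(count)]

# Access.Request defaults: exclusive=False, passive/active/write/read=True
packed = pack_bits(False, True, True, True, True)
print(bin(packed))             # 0b11110
print(unpack_bits(packed, 5))  # [False, True, True, True, True]
```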
class RequestOk(amqp_object.Method):
INDEX = 0x001E000B # 30, 11; 1966091
NAME = 'Access.RequestOk'
def __init__(self, ticket=1):
self.ticket = ticket
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
return pieces
class Exchange(amqp_object.Class):
INDEX = 0x0028 # 40
NAME = 'Exchange'
class Declare(amqp_object.Method):
INDEX = 0x0028000A # 40, 10; 2621450
NAME = 'Exchange.Declare'
def __init__(self, ticket=0, exchange=None, type='direct', passive=False, durable=False, auto_delete=False, internal=False, nowait=False, arguments=None):
self.ticket = ticket
self.exchange = exchange
self.type = type
self.passive = passive
self.durable = durable
self.auto_delete = auto_delete
self.internal = internal
self.nowait = nowait
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exchange = encoded[offset:offset + length].decode('utf8')
try:
self.exchange = str(self.exchange)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.type = encoded[offset:offset + length].decode('utf8')
try:
self.type = str(self.type)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.passive = (bit_buffer & (1 << 0)) != 0
self.durable = (bit_buffer & (1 << 1)) != 0
self.auto_delete = (bit_buffer & (1 << 2)) != 0
self.internal = (bit_buffer & (1 << 3)) != 0
self.nowait = (bit_buffer & (1 << 4)) != 0
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.exchange, basestring),\
'A non-bytestring value was supplied for self.exchange'
value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.type, basestring),\
'A non-bytestring value was supplied for self.type'
value = self.type.encode('utf-8') if isinstance(self.type, unicode) else self.type
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.passive:
bit_buffer = bit_buffer | (1 << 0)
if self.durable:
bit_buffer = bit_buffer | (1 << 1)
if self.auto_delete:
bit_buffer = bit_buffer | (1 << 2)
if self.internal:
bit_buffer = bit_buffer | (1 << 3)
if self.nowait:
bit_buffer = bit_buffer | (1 << 4)
pieces.append(struct.pack('B', bit_buffer))
data.encode_table(pieces, self.arguments)
return pieces
class DeclareOk(amqp_object.Method):
INDEX = 0x0028000B # 40, 11; 2621451
NAME = 'Exchange.DeclareOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Delete(amqp_object.Method):
INDEX = 0x00280014 # 40, 20; 2621460
NAME = 'Exchange.Delete'
def __init__(self, ticket=0, exchange=None, if_unused=False, nowait=False):
self.ticket = ticket
self.exchange = exchange
self.if_unused = if_unused
self.nowait = nowait
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exchange = encoded[offset:offset + length].decode('utf8')
try:
self.exchange = str(self.exchange)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.if_unused = (bit_buffer & (1 << 0)) != 0
self.nowait = (bit_buffer & (1 << 1)) != 0
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.exchange, basestring),\
'A non-bytestring value was supplied for self.exchange'
value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.if_unused:
bit_buffer = bit_buffer | (1 << 0)
if self.nowait:
bit_buffer = bit_buffer | (1 << 1)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class DeleteOk(amqp_object.Method):
INDEX = 0x00280015 # 40, 21; 2621461
NAME = 'Exchange.DeleteOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Bind(amqp_object.Method):
INDEX = 0x0028001E # 40, 30; 2621470
NAME = 'Exchange.Bind'
def __init__(self, ticket=0, destination=None, source=None, routing_key='', nowait=False, arguments=None):
self.ticket = ticket
self.destination = destination
self.source = source
self.routing_key = routing_key
self.nowait = nowait
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.destination = encoded[offset:offset + length].decode('utf8')
try:
self.destination = str(self.destination)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.source = encoded[offset:offset + length].decode('utf8')
try:
self.source = str(self.source)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.routing_key = encoded[offset:offset + length].decode('utf8')
try:
self.routing_key = str(self.routing_key)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.nowait = (bit_buffer & (1 << 0)) != 0
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.destination, basestring),\
'A non-bytestring value was supplied for self.destination'
value = self.destination.encode('utf-8') if isinstance(self.destination, unicode) else self.destination
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.source, basestring),\
'A non-bytestring value was supplied for self.source'
value = self.source.encode('utf-8') if isinstance(self.source, unicode) else self.source
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.routing_key, basestring),\
'A non-bytestring value was supplied for self.routing_key'
value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.nowait:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
data.encode_table(pieces, self.arguments)
return pieces
class BindOk(amqp_object.Method):
INDEX = 0x0028001F # 40, 31; 2621471
NAME = 'Exchange.BindOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Unbind(amqp_object.Method):
INDEX = 0x00280028 # 40, 40; 2621480
NAME = 'Exchange.Unbind'
def __init__(self, ticket=0, destination=None, source=None, routing_key='', nowait=False, arguments=None):
self.ticket = ticket
self.destination = destination
self.source = source
self.routing_key = routing_key
self.nowait = nowait
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.destination = encoded[offset:offset + length].decode('utf8')
try:
self.destination = str(self.destination)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.source = encoded[offset:offset + length].decode('utf8')
try:
self.source = str(self.source)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.routing_key = encoded[offset:offset + length].decode('utf8')
try:
self.routing_key = str(self.routing_key)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.nowait = (bit_buffer & (1 << 0)) != 0
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.destination, basestring),\
'A non-bytestring value was supplied for self.destination'
value = self.destination.encode('utf-8') if isinstance(self.destination, unicode) else self.destination
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.source, basestring),\
'A non-bytestring value was supplied for self.source'
value = self.source.encode('utf-8') if isinstance(self.source, unicode) else self.source
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.routing_key, basestring),\
'A non-bytestring value was supplied for self.routing_key'
value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.nowait:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
data.encode_table(pieces, self.arguments)
return pieces
class UnbindOk(amqp_object.Method):
INDEX = 0x00280033 # 40, 51; 2621491
NAME = 'Exchange.UnbindOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Queue(amqp_object.Class):
INDEX = 0x0032 # 50
NAME = 'Queue'
class Declare(amqp_object.Method):
INDEX = 0x0032000A # 50, 10; 3276810
NAME = 'Queue.Declare'
def __init__(self, ticket=0, queue='', passive=False, durable=False, exclusive=False, auto_delete=False, nowait=False, arguments=None):
self.ticket = ticket
self.queue = queue
self.passive = passive
self.durable = durable
self.exclusive = exclusive
self.auto_delete = auto_delete
self.nowait = nowait
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.passive = (bit_buffer & (1 << 0)) != 0
self.durable = (bit_buffer & (1 << 1)) != 0
self.exclusive = (bit_buffer & (1 << 2)) != 0
self.auto_delete = (bit_buffer & (1 << 3)) != 0
self.nowait = (bit_buffer & (1 << 4)) != 0
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.passive:
bit_buffer = bit_buffer | (1 << 0)
if self.durable:
bit_buffer = bit_buffer | (1 << 1)
if self.exclusive:
bit_buffer = bit_buffer | (1 << 2)
if self.auto_delete:
bit_buffer = bit_buffer | (1 << 3)
if self.nowait:
bit_buffer = bit_buffer | (1 << 4)
pieces.append(struct.pack('B', bit_buffer))
data.encode_table(pieces, self.arguments)
return pieces
class DeclareOk(amqp_object.Method):
INDEX = 0x0032000B # 50, 11; 3276811
NAME = 'Queue.DeclareOk'
def __init__(self, queue=None, message_count=None, consumer_count=None):
self.queue = queue
self.message_count = message_count
self.consumer_count = consumer_count
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
self.message_count = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.consumer_count = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
return self
def encode(self):
pieces = list()
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
pieces.append(struct.pack('>I', self.message_count))
pieces.append(struct.pack('>I', self.consumer_count))
return pieces
class Bind(amqp_object.Method):
INDEX = 0x00320014 # 50, 20; 3276820
NAME = 'Queue.Bind'
def __init__(self, ticket=0, queue='', exchange=None, routing_key='', nowait=False, arguments=None):
self.ticket = ticket
self.queue = queue
self.exchange = exchange
self.routing_key = routing_key
self.nowait = nowait
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exchange = encoded[offset:offset + length].decode('utf8')
try:
self.exchange = str(self.exchange)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.routing_key = encoded[offset:offset + length].decode('utf8')
try:
self.routing_key = str(self.routing_key)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.nowait = (bit_buffer & (1 << 0)) != 0
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.exchange, basestring),\
'A non-bytestring value was supplied for self.exchange'
value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.routing_key, basestring),\
'A non-bytestring value was supplied for self.routing_key'
value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.nowait:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
data.encode_table(pieces, self.arguments)
return pieces
class BindOk(amqp_object.Method):
INDEX = 0x00320015 # 50, 21; 3276821
NAME = 'Queue.BindOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Purge(amqp_object.Method):
INDEX = 0x0032001E # 50, 30; 3276830
NAME = 'Queue.Purge'
def __init__(self, ticket=0, queue='', nowait=False):
self.ticket = ticket
self.queue = queue
self.nowait = nowait
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.nowait = (bit_buffer & (1 << 0)) != 0
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.nowait:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class PurgeOk(amqp_object.Method):
INDEX = 0x0032001F # 50, 31; 3276831
NAME = 'Queue.PurgeOk'
def __init__(self, message_count=None):
self.message_count = message_count
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
self.message_count = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>I', self.message_count))
return pieces
class Delete(amqp_object.Method):
INDEX = 0x00320028 # 50, 40; 3276840
NAME = 'Queue.Delete'
def __init__(self, ticket=0, queue='', if_unused=False, if_empty=False, nowait=False):
self.ticket = ticket
self.queue = queue
self.if_unused = if_unused
self.if_empty = if_empty
self.nowait = nowait
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.if_unused = (bit_buffer & (1 << 0)) != 0
self.if_empty = (bit_buffer & (1 << 1)) != 0
self.nowait = (bit_buffer & (1 << 2)) != 0
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.if_unused:
bit_buffer = bit_buffer | (1 << 0)
if self.if_empty:
bit_buffer = bit_buffer | (1 << 1)
if self.nowait:
bit_buffer = bit_buffer | (1 << 2)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class DeleteOk(amqp_object.Method):
INDEX = 0x00320029 # 50, 41; 3276841
NAME = 'Queue.DeleteOk'
def __init__(self, message_count=None):
self.message_count = message_count
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
self.message_count = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>I', self.message_count))
return pieces
class Unbind(amqp_object.Method):
INDEX = 0x00320032 # 50, 50; 3276850
NAME = 'Queue.Unbind'
def __init__(self, ticket=0, queue='', exchange=None, routing_key='', arguments=None):
self.ticket = ticket
self.queue = queue
self.exchange = exchange
self.routing_key = routing_key
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exchange = encoded[offset:offset + length].decode('utf8')
try:
self.exchange = str(self.exchange)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.routing_key = encoded[offset:offset + length].decode('utf8')
try:
self.routing_key = str(self.routing_key)
except UnicodeEncodeError:
pass
offset += length
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.exchange, basestring),\
'A non-bytestring value was supplied for self.exchange'
value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.routing_key, basestring),\
'A non-bytestring value was supplied for self.routing_key'
value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
data.encode_table(pieces, self.arguments)
return pieces
class UnbindOk(amqp_object.Method):
INDEX = 0x00320033 # 50, 51; 3276851
NAME = 'Queue.UnbindOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Basic(amqp_object.Class):
INDEX = 0x003C # 60
NAME = 'Basic'
class Qos(amqp_object.Method):
INDEX = 0x003C000A # 60, 10; 3932170
NAME = 'Basic.Qos'
def __init__(self, prefetch_size=0, prefetch_count=0, global_=False):
self.prefetch_size = prefetch_size
self.prefetch_count = prefetch_count
self.global_ = global_
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.prefetch_size = struct.unpack_from('>I', encoded, offset)[0]
offset += 4
self.prefetch_count = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.global_ = (bit_buffer & (1 << 0)) != 0
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>I', self.prefetch_size))
pieces.append(struct.pack('>H', self.prefetch_count))
bit_buffer = 0
if self.global_:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class QosOk(amqp_object.Method):
INDEX = 0x003C000B # 60, 11; 3932171
NAME = 'Basic.QosOk'
def __init__(self):
pass
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
return self
def encode(self):
pieces = list()
return pieces
class Consume(amqp_object.Method):
INDEX = 0x003C0014 # 60, 20; 3932180
NAME = 'Basic.Consume'
def __init__(self, ticket=0, queue='', consumer_tag='', no_local=False, no_ack=False, exclusive=False, nowait=False, arguments=None):
self.ticket = ticket
self.queue = queue
self.consumer_tag = consumer_tag
self.no_local = no_local
self.no_ack = no_ack
self.exclusive = exclusive
self.nowait = nowait
self.arguments = arguments or {}  # avoid a shared mutable default argument
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.queue = encoded[offset:offset + length].decode('utf8')
try:
self.queue = str(self.queue)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.consumer_tag = encoded[offset:offset + length].decode('utf8')
try:
self.consumer_tag = str(self.consumer_tag)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.no_local = (bit_buffer & (1 << 0)) != 0
self.no_ack = (bit_buffer & (1 << 1)) != 0
self.exclusive = (bit_buffer & (1 << 2)) != 0
self.nowait = (bit_buffer & (1 << 3)) != 0
(self.arguments, offset) = data.decode_table(encoded, offset)
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.queue, basestring),\
'A non-bytestring value was supplied for self.queue'
value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.consumer_tag, basestring),\
'A non-bytestring value was supplied for self.consumer_tag'
value = self.consumer_tag.encode('utf-8') if isinstance(self.consumer_tag, unicode) else self.consumer_tag
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.no_local:
bit_buffer = bit_buffer | (1 << 0)
if self.no_ack:
bit_buffer = bit_buffer | (1 << 1)
if self.exclusive:
bit_buffer = bit_buffer | (1 << 2)
if self.nowait:
bit_buffer = bit_buffer | (1 << 3)
pieces.append(struct.pack('B', bit_buffer))
data.encode_table(pieces, self.arguments)
return pieces
class ConsumeOk(amqp_object.Method):
INDEX = 0x003C0015 # 60, 21; 3932181
NAME = 'Basic.ConsumeOk'
def __init__(self, consumer_tag=None):
self.consumer_tag = consumer_tag
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.consumer_tag = encoded[offset:offset + length].decode('utf8')
try:
self.consumer_tag = str(self.consumer_tag)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.consumer_tag, basestring),\
'A non-bytestring value was supplied for self.consumer_tag'
value = self.consumer_tag.encode('utf-8') if isinstance(self.consumer_tag, unicode) else self.consumer_tag
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
return pieces
class Cancel(amqp_object.Method):
INDEX = 0x003C001E # 60, 30; 3932190
NAME = 'Basic.Cancel'
def __init__(self, consumer_tag=None, nowait=False):
self.consumer_tag = consumer_tag
self.nowait = nowait
@property
def synchronous(self):
return True
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.consumer_tag = encoded[offset:offset + length].decode('utf8')
try:
self.consumer_tag = str(self.consumer_tag)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.nowait = (bit_buffer & (1 << 0)) != 0
return self
def encode(self):
pieces = list()
assert isinstance(self.consumer_tag, basestring),\
'A non-bytestring value was supplied for self.consumer_tag'
value = self.consumer_tag.encode('utf-8') if isinstance(self.consumer_tag, unicode) else self.consumer_tag
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.nowait:
bit_buffer = bit_buffer | (1 << 0)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class CancelOk(amqp_object.Method):
INDEX = 0x003C001F # 60, 31; 3932191
NAME = 'Basic.CancelOk'
def __init__(self, consumer_tag=None):
self.consumer_tag = consumer_tag
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.consumer_tag = encoded[offset:offset + length].decode('utf8')
try:
self.consumer_tag = str(self.consumer_tag)
except UnicodeEncodeError:
pass
offset += length
return self
def encode(self):
pieces = list()
assert isinstance(self.consumer_tag, basestring),\
'A non-bytestring value was supplied for self.consumer_tag'
value = self.consumer_tag.encode('utf-8') if isinstance(self.consumer_tag, unicode) else self.consumer_tag
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
return pieces
class Publish(amqp_object.Method):
INDEX = 0x003C0028 # 60, 40; 3932200
NAME = 'Basic.Publish'
def __init__(self, ticket=0, exchange='', routing_key='', mandatory=False, immediate=False):
self.ticket = ticket
self.exchange = exchange
self.routing_key = routing_key
self.mandatory = mandatory
self.immediate = immediate
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
self.ticket = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exchange = encoded[offset:offset + length].decode('utf8')
try:
self.exchange = str(self.exchange)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.routing_key = encoded[offset:offset + length].decode('utf8')
try:
self.routing_key = str(self.routing_key)
except UnicodeEncodeError:
pass
offset += length
bit_buffer = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.mandatory = (bit_buffer & (1 << 0)) != 0
self.immediate = (bit_buffer & (1 << 1)) != 0
return self
def encode(self):
pieces = list()
pieces.append(struct.pack('>H', self.ticket))
assert isinstance(self.exchange, basestring),\
'A non-bytestring value was supplied for self.exchange'
value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
assert isinstance(self.routing_key, basestring),\
'A non-bytestring value was supplied for self.routing_key'
value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
pieces.append(struct.pack('B', len(value)))
pieces.append(value)
bit_buffer = 0
if self.mandatory:
bit_buffer = bit_buffer | (1 << 0)
if self.immediate:
bit_buffer = bit_buffer | (1 << 1)
pieces.append(struct.pack('B', bit_buffer))
return pieces
class Return(amqp_object.Method):
INDEX = 0x003C0032 # 60, 50; 3932210
NAME = 'Basic.Return'
def __init__(self, reply_code=None, reply_text='', exchange=None, routing_key=None):
self.reply_code = reply_code
self.reply_text = reply_text
self.exchange = exchange
self.routing_key = routing_key
@property
def synchronous(self):
return False
def decode(self, encoded, offset=0):
self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
offset += 2
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.reply_text = encoded[offset:offset + length].decode('utf8')
try:
self.reply_text = str(self.reply_text)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.exchange = encoded[offset:offset + length].decode('utf8')
try:
self.exchange = str(self.exchange)
except UnicodeEncodeError:
pass
offset += length
length = struct.unpack_from('B', encoded, offset)[0]
offset += 1
self.routing_key = encoded[offset:offset + length].decode('utf8')
try:
self.routing_key = str(self.routing_key)
except UnicodeEncodeError:
pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, basestring),\
                'A non-bytestring value was supplied for self.reply_text'
            value = self.reply_text.encode('utf-8') if isinstance(self.reply_text, unicode) else self.reply_text
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            assert isinstance(self.exchange, basestring),\
                'A non-bytestring value was supplied for self.exchange'
            value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            assert isinstance(self.routing_key, basestring),\
                'A non-bytestring value was supplied for self.routing_key'
            value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            return pieces

    class Deliver(amqp_object.Method):

        INDEX = 0x003C003C  # 60, 60; 3932220
        NAME = 'Basic.Deliver'

        def __init__(self, consumer_tag=None, delivery_tag=None, redelivered=False, exchange=None, routing_key=None):
            self.consumer_tag = consumer_tag
            self.delivery_tag = delivery_tag
            self.redelivered = redelivered
            self.exchange = exchange
            self.routing_key = routing_key

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.consumer_tag = encoded[offset:offset + length].decode('utf8')
            try:
                self.consumer_tag = str(self.consumer_tag)
            except UnicodeEncodeError:
                pass
            offset += length
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.redelivered = (bit_buffer & (1 << 0)) != 0
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.exchange = encoded[offset:offset + length].decode('utf8')
            try:
                self.exchange = str(self.exchange)
            except UnicodeEncodeError:
                pass
            offset += length
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.routing_key = encoded[offset:offset + length].decode('utf8')
            try:
                self.routing_key = str(self.routing_key)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, basestring),\
                'A non-bytestring value was supplied for self.consumer_tag'
            value = self.consumer_tag.encode('utf-8') if isinstance(self.consumer_tag, unicode) else self.consumer_tag
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.redelivered:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            assert isinstance(self.exchange, basestring),\
                'A non-bytestring value was supplied for self.exchange'
            value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            assert isinstance(self.routing_key, basestring),\
                'A non-bytestring value was supplied for self.routing_key'
            value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            return pieces

    class Get(amqp_object.Method):

        INDEX = 0x003C0046  # 60, 70; 3932230
        NAME = 'Basic.Get'

        def __init__(self, ticket=0, queue='', no_ack=False):
            self.ticket = ticket
            self.queue = queue
            self.no_ack = no_ack

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.queue = encoded[offset:offset + length].decode('utf8')
            try:
                self.queue = str(self.queue)
            except UnicodeEncodeError:
                pass
            offset += length
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.no_ack = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, basestring),\
                'A non-bytestring value was supplied for self.queue'
            value = self.queue.encode('utf-8') if isinstance(self.queue, unicode) else self.queue
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            bit_buffer = 0
            if self.no_ack:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class GetOk(amqp_object.Method):

        INDEX = 0x003C0047  # 60, 71; 3932231
        NAME = 'Basic.GetOk'

        def __init__(self, delivery_tag=None, redelivered=False, exchange=None, routing_key=None, message_count=None):
            self.delivery_tag = delivery_tag
            self.redelivered = redelivered
            self.exchange = exchange
            self.routing_key = routing_key
            self.message_count = message_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.redelivered = (bit_buffer & (1 << 0)) != 0
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.exchange = encoded[offset:offset + length].decode('utf8')
            try:
                self.exchange = str(self.exchange)
            except UnicodeEncodeError:
                pass
            offset += length
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.routing_key = encoded[offset:offset + length].decode('utf8')
            try:
                self.routing_key = str(self.routing_key)
            except UnicodeEncodeError:
                pass
            offset += length
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.redelivered:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            assert isinstance(self.exchange, basestring),\
                'A non-bytestring value was supplied for self.exchange'
            value = self.exchange.encode('utf-8') if isinstance(self.exchange, unicode) else self.exchange
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            assert isinstance(self.routing_key, basestring),\
                'A non-bytestring value was supplied for self.routing_key'
            value = self.routing_key.encode('utf-8') if isinstance(self.routing_key, unicode) else self.routing_key
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            pieces.append(struct.pack('>I', self.message_count))
            return pieces

    class GetEmpty(amqp_object.Method):

        INDEX = 0x003C0048  # 60, 72; 3932232
        NAME = 'Basic.GetEmpty'

        def __init__(self, cluster_id=''):
            self.cluster_id = cluster_id

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.cluster_id = encoded[offset:offset + length].decode('utf8')
            try:
                self.cluster_id = str(self.cluster_id)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.cluster_id, basestring),\
                'A non-bytestring value was supplied for self.cluster_id'
            value = self.cluster_id.encode('utf-8') if isinstance(self.cluster_id, unicode) else self.cluster_id
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
            return pieces

    class Ack(amqp_object.Method):

        INDEX = 0x003C0050  # 60, 80; 3932240
        NAME = 'Basic.Ack'

        def __init__(self, delivery_tag=0, multiple=False):
            self.delivery_tag = delivery_tag
            self.multiple = multiple

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.multiple = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.multiple:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class Reject(amqp_object.Method):

        INDEX = 0x003C005A  # 60, 90; 3932250
        NAME = 'Basic.Reject'

        def __init__(self, delivery_tag=None, requeue=True):
            self.delivery_tag = delivery_tag
            self.requeue = requeue

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.requeue = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.requeue:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class RecoverAsync(amqp_object.Method):

        INDEX = 0x003C0064  # 60, 100; 3932260
        NAME = 'Basic.RecoverAsync'

        def __init__(self, requeue=False):
            self.requeue = requeue

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.requeue = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.requeue:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class Recover(amqp_object.Method):

        INDEX = 0x003C006E  # 60, 110; 3932270
        NAME = 'Basic.Recover'

        def __init__(self, requeue=False):
            self.requeue = requeue

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.requeue = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.requeue:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class RecoverOk(amqp_object.Method):

        INDEX = 0x003C006F  # 60, 111; 3932271
        NAME = 'Basic.RecoverOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Nack(amqp_object.Method):

        INDEX = 0x003C0078  # 60, 120; 3932280
        NAME = 'Basic.Nack'

        def __init__(self, delivery_tag=0, multiple=False, requeue=True):
            self.delivery_tag = delivery_tag
            self.multiple = multiple
            self.requeue = requeue

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.multiple = (bit_buffer & (1 << 0)) != 0
            self.requeue = (bit_buffer & (1 << 1)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.multiple:
                bit_buffer = bit_buffer | (1 << 0)
            if self.requeue:
                bit_buffer = bit_buffer | (1 << 1)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces
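The `multiple`/`requeue` booleans above are folded into a single octet, one bit per flag, LSB first. A standalone sketch of that bit-buffer scheme, using only `struct` (the helper names `pack_bits`/`unpack_bits` are hypothetical, not part of this module):

```python
import struct


def pack_bits(*flags):
    # Pack up to 8 booleans into one octet, least-significant bit first,
    # mirroring the bit_buffer loop in the encode() methods above.
    bit_buffer = 0
    for i, flag in enumerate(flags):
        if flag:
            bit_buffer |= (1 << i)
    return struct.pack('B', bit_buffer)


def unpack_bits(encoded, count, offset=0):
    # Inverse: read one octet and expand `count` booleans from it.
    bit_buffer = struct.unpack_from('B', encoded, offset)[0]
    return [(bit_buffer & (1 << i)) != 0 for i in range(count)]


# Basic.Nack packs multiple (bit 0) and requeue (bit 1) this way:
encoded = pack_bits(True, False)  # multiple=True, requeue=False
assert unpack_bits(encoded, 2) == [True, False]
```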
class Tx(amqp_object.Class):

    INDEX = 0x005A  # 90
    NAME = 'Tx'

    class Select(amqp_object.Method):

        INDEX = 0x005A000A  # 90, 10; 5898250
        NAME = 'Tx.Select'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class SelectOk(amqp_object.Method):

        INDEX = 0x005A000B  # 90, 11; 5898251
        NAME = 'Tx.SelectOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Commit(amqp_object.Method):

        INDEX = 0x005A0014  # 90, 20; 5898260
        NAME = 'Tx.Commit'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class CommitOk(amqp_object.Method):

        INDEX = 0x005A0015  # 90, 21; 5898261
        NAME = 'Tx.CommitOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Rollback(amqp_object.Method):

        INDEX = 0x005A001E  # 90, 30; 5898270
        NAME = 'Tx.Rollback'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class RollbackOk(amqp_object.Method):

        INDEX = 0x005A001F  # 90, 31; 5898271
        NAME = 'Tx.RollbackOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Confirm(amqp_object.Class):

    INDEX = 0x0055  # 85
    NAME = 'Confirm'

    class Select(amqp_object.Method):

        INDEX = 0x0055000A  # 85, 10; 5570570
        NAME = 'Confirm.Select'

        def __init__(self, nowait=False):
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.nowait:
                bit_buffer = bit_buffer | (1 << 0)
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class SelectOk(amqp_object.Method):

        INDEX = 0x0055000B  # 85, 11; 5570571
        NAME = 'Confirm.SelectOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces
class BasicProperties(amqp_object.Properties):

    CLASS = Basic
    INDEX = 0x003C  # 60
    NAME = 'BasicProperties'

    FLAG_CONTENT_TYPE = (1 << 15)
    FLAG_CONTENT_ENCODING = (1 << 14)
    FLAG_HEADERS = (1 << 13)
    FLAG_DELIVERY_MODE = (1 << 12)
    FLAG_PRIORITY = (1 << 11)
    FLAG_CORRELATION_ID = (1 << 10)
    FLAG_REPLY_TO = (1 << 9)
    FLAG_EXPIRATION = (1 << 8)
    FLAG_MESSAGE_ID = (1 << 7)
    FLAG_TIMESTAMP = (1 << 6)
    FLAG_TYPE = (1 << 5)
    FLAG_USER_ID = (1 << 4)
    FLAG_APP_ID = (1 << 3)
    FLAG_CLUSTER_ID = (1 << 2)

    def __init__(self, content_type=None, content_encoding=None, headers=None, delivery_mode=None, priority=None, correlation_id=None, reply_to=None, expiration=None, message_id=None, timestamp=None, type=None, user_id=None, app_id=None, cluster_id=None):
        self.content_type = content_type
        self.content_encoding = content_encoding
        self.headers = headers
        self.delivery_mode = delivery_mode
        self.priority = priority
        self.correlation_id = correlation_id
        self.reply_to = reply_to
        self.expiration = expiration
        self.message_id = message_id
        self.timestamp = timestamp
        self.type = type
        self.user_id = user_id
        self.app_id = app_id
        self.cluster_id = cluster_id

    def decode(self, encoded, offset=0):
        flags = 0
        flagword_index = 0
        while True:
            partial_flags = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            flags = flags | (partial_flags << (flagword_index * 16))
            if not (partial_flags & 1):
                break
            flagword_index += 1
        if flags & BasicProperties.FLAG_CONTENT_TYPE:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.content_type = encoded[offset:offset + length].decode('utf8')
            try:
                self.content_type = str(self.content_type)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.content_type = None
        if flags & BasicProperties.FLAG_CONTENT_ENCODING:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.content_encoding = encoded[offset:offset + length].decode('utf8')
            try:
                self.content_encoding = str(self.content_encoding)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.content_encoding = None
        if flags & BasicProperties.FLAG_HEADERS:
            (self.headers, offset) = data.decode_table(encoded, offset)
        else:
            self.headers = None
        if flags & BasicProperties.FLAG_DELIVERY_MODE:
            self.delivery_mode = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
        else:
            self.delivery_mode = None
        if flags & BasicProperties.FLAG_PRIORITY:
            self.priority = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
        else:
            self.priority = None
        if flags & BasicProperties.FLAG_CORRELATION_ID:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.correlation_id = encoded[offset:offset + length].decode('utf8')
            try:
                self.correlation_id = str(self.correlation_id)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.correlation_id = None
        if flags & BasicProperties.FLAG_REPLY_TO:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.reply_to = encoded[offset:offset + length].decode('utf8')
            try:
                self.reply_to = str(self.reply_to)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.reply_to = None
        if flags & BasicProperties.FLAG_EXPIRATION:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.expiration = encoded[offset:offset + length].decode('utf8')
            try:
                self.expiration = str(self.expiration)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.expiration = None
        if flags & BasicProperties.FLAG_MESSAGE_ID:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.message_id = encoded[offset:offset + length].decode('utf8')
            try:
                self.message_id = str(self.message_id)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.message_id = None
        if flags & BasicProperties.FLAG_TIMESTAMP:
            self.timestamp = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
        else:
            self.timestamp = None
        if flags & BasicProperties.FLAG_TYPE:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.type = encoded[offset:offset + length].decode('utf8')
            try:
                self.type = str(self.type)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.type = None
        if flags & BasicProperties.FLAG_USER_ID:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.user_id = encoded[offset:offset + length].decode('utf8')
            try:
                self.user_id = str(self.user_id)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.user_id = None
        if flags & BasicProperties.FLAG_APP_ID:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.app_id = encoded[offset:offset + length].decode('utf8')
            try:
                self.app_id = str(self.app_id)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.app_id = None
        if flags & BasicProperties.FLAG_CLUSTER_ID:
            length = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.cluster_id = encoded[offset:offset + length].decode('utf8')
            try:
                self.cluster_id = str(self.cluster_id)
            except UnicodeEncodeError:
                pass
            offset += length
        else:
            self.cluster_id = None
        return self

    def encode(self):
        pieces = list()
        flags = 0
        if self.content_type is not None:
            flags = flags | BasicProperties.FLAG_CONTENT_TYPE
            assert isinstance(self.content_type, basestring),\
                'A non-bytestring value was supplied for self.content_type'
            value = self.content_type.encode('utf-8') if isinstance(self.content_type, unicode) else self.content_type
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.content_encoding is not None:
            flags = flags | BasicProperties.FLAG_CONTENT_ENCODING
            assert isinstance(self.content_encoding, basestring),\
                'A non-bytestring value was supplied for self.content_encoding'
            value = self.content_encoding.encode('utf-8') if isinstance(self.content_encoding, unicode) else self.content_encoding
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.headers is not None:
            flags = flags | BasicProperties.FLAG_HEADERS
            data.encode_table(pieces, self.headers)
        if self.delivery_mode is not None:
            flags = flags | BasicProperties.FLAG_DELIVERY_MODE
            pieces.append(struct.pack('B', self.delivery_mode))
        if self.priority is not None:
            flags = flags | BasicProperties.FLAG_PRIORITY
            pieces.append(struct.pack('B', self.priority))
        if self.correlation_id is not None:
            flags = flags | BasicProperties.FLAG_CORRELATION_ID
            assert isinstance(self.correlation_id, basestring),\
                'A non-bytestring value was supplied for self.correlation_id'
            value = self.correlation_id.encode('utf-8') if isinstance(self.correlation_id, unicode) else self.correlation_id
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.reply_to is not None:
            flags = flags | BasicProperties.FLAG_REPLY_TO
            assert isinstance(self.reply_to, basestring),\
                'A non-bytestring value was supplied for self.reply_to'
            value = self.reply_to.encode('utf-8') if isinstance(self.reply_to, unicode) else self.reply_to
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.expiration is not None:
            flags = flags | BasicProperties.FLAG_EXPIRATION
            assert isinstance(self.expiration, basestring),\
                'A non-bytestring value was supplied for self.expiration'
            value = self.expiration.encode('utf-8') if isinstance(self.expiration, unicode) else self.expiration
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.message_id is not None:
            flags = flags | BasicProperties.FLAG_MESSAGE_ID
            assert isinstance(self.message_id, basestring),\
                'A non-bytestring value was supplied for self.message_id'
            value = self.message_id.encode('utf-8') if isinstance(self.message_id, unicode) else self.message_id
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.timestamp is not None:
            flags = flags | BasicProperties.FLAG_TIMESTAMP
            pieces.append(struct.pack('>Q', self.timestamp))
        if self.type is not None:
            flags = flags | BasicProperties.FLAG_TYPE
            assert isinstance(self.type, basestring),\
                'A non-bytestring value was supplied for self.type'
            value = self.type.encode('utf-8') if isinstance(self.type, unicode) else self.type
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.user_id is not None:
            flags = flags | BasicProperties.FLAG_USER_ID
            assert isinstance(self.user_id, basestring),\
                'A non-bytestring value was supplied for self.user_id'
            value = self.user_id.encode('utf-8') if isinstance(self.user_id, unicode) else self.user_id
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.app_id is not None:
            flags = flags | BasicProperties.FLAG_APP_ID
            assert isinstance(self.app_id, basestring),\
                'A non-bytestring value was supplied for self.app_id'
            value = self.app_id.encode('utf-8') if isinstance(self.app_id, unicode) else self.app_id
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        if self.cluster_id is not None:
            flags = flags | BasicProperties.FLAG_CLUSTER_ID
            assert isinstance(self.cluster_id, basestring),\
                'A non-bytestring value was supplied for self.cluster_id'
            value = self.cluster_id.encode('utf-8') if isinstance(self.cluster_id, unicode) else self.cluster_id
            pieces.append(struct.pack('B', len(value)))
            pieces.append(value)
        flag_pieces = list()
        while True:
            remainder = flags >> 16
            partial_flags = flags & 0xFFFE
            if remainder != 0:
                partial_flags |= 1
            flag_pieces.append(struct.pack('>H', partial_flags))
            flags = remainder
            if not flags:
                break
        return flag_pieces + pieces
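The flag loop at the end of `BasicProperties.encode` (and the matching loop at the top of `decode`) implements AMQP's chained 16-bit property-flag words: bit 0 of each word signals that another word follows. A standalone sketch of just that chaining, with hypothetical helper names and no dependency on this module:

```python
import struct


def encode_flags(flags):
    # Split a property-flag integer into big-endian 16-bit words; bit 0 of
    # each word is the continuation bit saying another word follows.
    pieces = []
    while True:
        remainder = flags >> 16
        partial = flags & 0xFFFE
        if remainder != 0:
            partial |= 1
        pieces.append(struct.pack('>H', partial))
        flags = remainder
        if not flags:
            break
    return b''.join(pieces)


def decode_flags(encoded, offset=0):
    # Inverse: rebuild the flag integer from chained 16-bit words and
    # return it together with the offset past the flag section.
    flags, word = 0, 0
    while True:
        partial = struct.unpack_from('>H', encoded, offset)[0]
        offset += 2
        flags |= partial << (word * 16)
        if not (partial & 1):
            break
        word += 1
    return flags, offset
```

For the single-word case that covers all of `BasicProperties`' flags, `encode_flags(1 << 15)` yields `b'\x80\x00'` and decodes back to the same value.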
methods = {
    0x000A000A: Connection.Start,
    0x000A000B: Connection.StartOk,
    0x000A0014: Connection.Secure,
    0x000A0015: Connection.SecureOk,
    0x000A001E: Connection.Tune,
    0x000A001F: Connection.TuneOk,
    0x000A0028: Connection.Open,
    0x000A0029: Connection.OpenOk,
    0x000A0032: Connection.Close,
    0x000A0033: Connection.CloseOk,
    0x0014000A: Channel.Open,
    0x0014000B: Channel.OpenOk,
    0x00140014: Channel.Flow,
    0x00140015: Channel.FlowOk,
    0x00140028: Channel.Close,
    0x00140029: Channel.CloseOk,
    0x001E000A: Access.Request,
    0x001E000B: Access.RequestOk,
    0x0028000A: Exchange.Declare,
    0x0028000B: Exchange.DeclareOk,
    0x00280014: Exchange.Delete,
    0x00280015: Exchange.DeleteOk,
    0x0028001E: Exchange.Bind,
    0x0028001F: Exchange.BindOk,
    0x00280028: Exchange.Unbind,
    0x00280033: Exchange.UnbindOk,
    0x0032000A: Queue.Declare,
    0x0032000B: Queue.DeclareOk,
    0x00320014: Queue.Bind,
    0x00320015: Queue.BindOk,
    0x0032001E: Queue.Purge,
    0x0032001F: Queue.PurgeOk,
    0x00320028: Queue.Delete,
    0x00320029: Queue.DeleteOk,
    0x00320032: Queue.Unbind,
    0x00320033: Queue.UnbindOk,
    0x003C000A: Basic.Qos,
    0x003C000B: Basic.QosOk,
    0x003C0014: Basic.Consume,
    0x003C0015: Basic.ConsumeOk,
    0x003C001E: Basic.Cancel,
    0x003C001F: Basic.CancelOk,
    0x003C0028: Basic.Publish,
    0x003C0032: Basic.Return,
    0x003C003C: Basic.Deliver,
    0x003C0046: Basic.Get,
    0x003C0047: Basic.GetOk,
    0x003C0048: Basic.GetEmpty,
    0x003C0050: Basic.Ack,
    0x003C005A: Basic.Reject,
    0x003C0064: Basic.RecoverAsync,
    0x003C006E: Basic.Recover,
    0x003C006F: Basic.RecoverOk,
    0x003C0078: Basic.Nack,
    0x005A000A: Tx.Select,
    0x005A000B: Tx.SelectOk,
    0x005A0014: Tx.Commit,
    0x005A0015: Tx.CommitOk,
    0x005A001E: Tx.Rollback,
    0x005A001F: Tx.RollbackOk,
    0x0055000A: Confirm.Select,
    0x0055000B: Confirm.SelectOk
}

props = {
    0x003C: BasicProperties
}


def has_content(methodNumber):
    if methodNumber == Basic.Publish.INDEX:
        return True
    if methodNumber == Basic.Return.INDEX:
        return True
    if methodNumber == Basic.Deliver.INDEX:
        return True
    if methodNumber == Basic.GetOk.INDEX:
        return True
    return False
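The keys of the `methods` table above follow a fixed layout: each 32-bit `INDEX` is the AMQP class id in the high 16 bits and the method id in the low 16 bits (hence comments like `# 60, 60; 3932220`). A hypothetical helper, independent of this module, showing the arithmetic:

```python
def method_index(class_id, method_id):
    # AMQP method index: class id in the high word, method id in the low word.
    return (class_id << 16) | method_id


def split_index(index):
    # Inverse: recover (class_id, method_id) from a 32-bit method index.
    return index >> 16, index & 0xFFFF


# Cross-check against indices used above:
assert method_index(60, 60) == 0x003C003C   # Basic.Deliver
assert method_index(90, 10) == 0x005A000A   # Tx.Select
assert split_index(0x0055000B) == (85, 11)  # Confirm.SelectOk
```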


# ==== blog/admin.py (repo: noraj/ciphersuite.info, MIT license) ====
from django.contrib import admin
from .models import *
admin.site.register(Post)
admin.site.register(Category)
admin.site.register(Tag)


# ==== test.py (repo: Vassago55/normal_sort, MIT license) ====
import math
print(int(math.ceil(0.1 / 2)))


# ==== sugartensor/sg_layer.py (repo: DevashishJoshi/sugartensor, MIT license) ====
from __future__ import absolute_import
import sugartensor as tf
__author__ = 'namju.kim@kakaobrain.com'
#
# neural network layers
#
# noinspection PyUnusedLocal
@tf.sg_layer_func
def sg_bypass(tensor, opt):
r"""Returns the input tensor itself.
Args:
tensor: A `Tensor` (automatically passed by decorator).
opt:
bn: Boolean. If True, batch normalization is applied.
ln: Boolean. If True, layer normalization is applied.
dout: A float of range [0, 100). A dropout rate. Default is 0.
act: A name of activation function. e.g., `sigmoid`, `tanh`, etc.
Returns:
The same tensor as `tensor`.
"""
return tensor
@tf.sg_layer_func
def sg_dense(tensor, opt):
r"""Applies a full connection.
Args:
tensor: A 2-D tensor (automatically passed by decorator).
opt:
in_dim: An `integer`. The size of input dimension.
dim: An `integer`. The size of output dimension.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# parameter initialize
w = tf.sg_initializer.he_uniform('W', (opt.in_dim, opt.dim),
regularizer=opt.regularizer)#, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
# apply transform
out = tf.matmul(tensor, w) + b
return out
@tf.sg_layer_func
def sg_conv(tensor, opt):
r"""Applies a 2-D convolution.
Args:
tensor: A 4-D `Tensor` (automatically passed by decorator).
opt:
size: A tuple/list of positive integers of length 2 representing `[kernel height, kernel width]`.
Can be an integer if both values are the same.
If not specified, (3, 3) is set implicitly.
stride: A tuple/list of positive integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is `[1, a, b, 1]`.
If the length is 4, i.e., (a, b, c, d), the stride is `[a, b, c, d]`.
Can be an integer. If the length is a, the stride is `[1, a, a, 1]`.
Default value is [1, 1, 1, 1].
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=(3, 3), stride=(1, 1, 1, 1), pad='SAME')
opt.size = opt.size if isinstance(opt.size, (tuple, list)) else [opt.size, opt.size]
opt.stride = opt.stride if isinstance(opt.stride, (tuple, list)) else [1, opt.stride, opt.stride, 1]
opt.stride = [1, opt.stride[0], opt.stride[1], 1] if len(opt.stride) == 2 else opt.stride
# parameter initialize
w = tf.sg_initializer.he_uniform('W', (opt.size[0], opt.size[1], opt.in_dim, opt.dim),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
# apply convolution
out = tf.nn.conv2d(tensor, w, strides=opt.stride, padding=opt.pad) + b
return out
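For reference, the spatial output size implied by the `pad` option above can be sketched independently of TensorFlow (standard `SAME`/`VALID` arithmetic; the helper name is hypothetical, not part of this module):

```python
import math


def conv_out_size(in_size, kernel, stride, pad):
    # Output length of one spatial dimension for TF-style conv padding:
    # SAME pads so output depends only on stride; VALID uses no padding.
    if pad == 'SAME':
        return int(math.ceil(in_size / float(stride)))
    elif pad == 'VALID':
        return int(math.ceil((in_size - kernel + 1) / float(stride)))
    raise ValueError('pad must be SAME or VALID')


assert conv_out_size(28, 3, 1, 'SAME') == 28
assert conv_out_size(28, 3, 1, 'VALID') == 26
assert conv_out_size(28, 3, 2, 'SAME') == 14
```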
@tf.sg_layer_func
def sg_conv1d(tensor, opt):
r"""Applies a 1-D convolution.
Args:
tensor: A 3-D `Tensor` (automatically passed by decorator).
opt:
size: A positive `integer` representing `[kernel width]`.
If not specified, 2 is set implicitly.
stride: A positive `integer`. The number of entries by which
the filter is moved right at each step.
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=2, stride=1, pad='SAME')
# parameter tf.sg_initializer
w = tf.sg_initializer.he_uniform('W', (opt.size, opt.in_dim, opt.dim),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
# apply convolution
out = tf.nn.conv1d(tensor, w, stride=opt.stride, padding=opt.pad) + b
return out
@tf.sg_layer_func
def sg_aconv(tensor, opt):
r"""Applies a 2-D atrous (or dilated) convolution.
Args:
tensor: A 4-D `Tensor` (automatically passed by decorator).
opt:
size: A tuple/list of positive integers of length 2 representing `[kernel height, kernel width]`.
Can be an integer if both values are the same.
If not specified, (3, 3) is set automatically.
rate: A positive integer. The stride with which we sample input values across
the `height` and `width` dimensions. Default is 2.
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=(3, 3), rate=2, pad='SAME')
opt.size = opt.size if isinstance(opt.size, (tuple, list)) else [opt.size, opt.size]
# parameter tf.sg_initializer
w = tf.sg_initializer.he_uniform('W', (opt.size[0], opt.size[1], opt.in_dim, opt.dim),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
# apply convolution
out = tf.nn.atrous_conv2d(tensor, w, rate=opt.rate, padding=opt.pad) + b
return out
@tf.sg_layer_func
def sg_aconv1d(tensor, opt):
r"""Applies 1-D atrous (or dilated) convolution.
Args:
tensor: A 3-D `Tensor` (automatically passed by decorator).
opt:
causal: Boolean. If True, zeros are padded before the time axis such that
each activation unit doesn't have receptive neurons beyond the equivalent time step.
size: A positive `integer` representing `[kernel width]`. As a default it is set to 2
if causal is True, 3 otherwise.
rate: A positive `integer`. The dilation rate: input values are sampled every
`rate` positions along the time dimension. Default is 1.
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=(2 if opt.causal else 3), rate=1, pad='SAME')
# parameter tf.sg_initializer
w = tf.sg_initializer.he_uniform('W', (1, opt.size, opt.in_dim, opt.dim),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
if opt.causal:
# pre-padding for causality
if opt.pad == 'SAME':
pad_len = (opt.size - 1) * opt.rate # padding size
x = tf.pad(tensor, [[0, 0], [pad_len, 0], [0, 0]]).sg_expand_dims(axis=1)
else:
x = tensor.sg_expand_dims(axis=1)
# apply 2d convolution
out = tf.nn.atrous_conv2d(x, w, rate=opt.rate, padding='VALID') + b
else:
# apply 2d convolution
out = tf.nn.atrous_conv2d(tensor.sg_expand_dims(axis=1),
w, rate=opt.rate, padding=opt.pad) + b
# reduce dimension
# noinspection PyUnresolvedReferences
out = out.sg_squeeze(axis=1)
return out
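The causal branch above pre-pads `(size - 1) * rate` zeros so a VALID convolution preserves the sequence length while never looking ahead. A pure-Python sanity check of that arithmetic (illustration only, not library code):

```python
def causal_output_len(in_len, size, rate):
    pad_len = (size - 1) * rate           # zeros prepended before the time axis
    padded = in_len + pad_len
    return padded - (size - 1) * rate     # length left after a VALID atrous conv

# the output length always matches the input length, whatever size and rate are
```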
@tf.sg_layer_func
def sg_upconv(tensor, opt):
r"""Applies a up convolution (or convolution transpose).
Args:
tensor: A 4-D `Tensor` (automatically passed by decorator).
opt:
size: A tuple/list of integers of length 2 representing `[kernel height, kernel width]`.
Can be an integer if both values are the same.
If not specified, (4, 4) is set implicitly.
stride: A tuple/list of integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is `[1, a, b, 1]`.
If the length is 4, i.e., (a, b, c, d), the stride is `[a, b, c, d]`.
Can be an integer `a`, in which case the stride is `[1, a, a, 1]`.
Default value is [1, 2, 2, 1].
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=(4, 4), stride=(1, 2, 2, 1), pad='SAME')
opt.size = opt.size if isinstance(opt.size, (tuple, list)) else [opt.size, opt.size]
opt.stride = opt.stride if isinstance(opt.stride, (tuple, list)) else [1, opt.stride, opt.stride, 1]
opt.stride = [1, opt.stride[0], opt.stride[1], 1] if len(opt.stride) == 2 else opt.stride
# parameter tf.sg_initializer
w = tf.sg_initializer.he_uniform('W', (opt.size[0], opt.size[1], opt.dim, opt.in_dim),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
# tedious shape handling for conv2d_transpose
shape = tensor.get_shape().as_list()
if opt.pad == "SAME":
out_shape = [tf.shape(tensor)[0], shape[1] * opt.stride[1], shape[2] * opt.stride[2], opt.dim]
else:
out_shape = [tf.shape(tensor)[0], (shape[1] - 1) * opt.stride[1] + opt.size[0],
(shape[2] - 1) * opt.stride[2] + opt.size[1], opt.dim]
# apply convolution
out = tf.nn.conv2d_transpose(tensor, w, output_shape=tf.stack(out_shape),
strides=opt.stride, padding=opt.pad) + b
# resetting the shape is needed because conv2d_transpose() erases all shape information.
# noinspection PyUnresolvedReferences
out.set_shape([None, out_shape[1], out_shape[2], opt.dim])
return out
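The "tedious shape handling" above reduces to the standard transposed-convolution output lengths. A small sketch (not library code) of the per-axis rule:

```python
def upconv_out_len(in_len, size, stride, pad):
    # per-axis output length of a transposed convolution, as computed above
    if pad == 'SAME':
        return in_len * stride
    return (in_len - 1) * stride + size   # 'VALID'
```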
@tf.sg_layer_func
def sg_upconv1d(tensor, opt):
r"""Applies 1-D a up convolution (or convolution transpose).
Args:
tensor: A 3-D `Tensor` (automatically passed by decorator).
opt:
size: A positive `integer` representing `[kernel width]`. Default is 4.
stride: A positive `integer` representing the stride dimension. Default is 2.
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=4, stride=2, pad='SAME')
opt.size = [opt.size, 1]
opt.stride = [1, opt.stride, 1, 1]
# parameter tf.sg_initializer
w = tf.sg_initializer.he_uniform('W', (opt.size[0], opt.size[1], opt.dim, opt.in_dim),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary) if opt.bias else 0
# make 4-D tensor
tensor = tensor.sg_expand_dims(axis=2)
# tedious shape handling for conv2d_transpose
shape = tensor.get_shape().as_list()
if opt.pad == "SAME":
out_shape = [tf.shape(tensor)[0], shape[1] * opt.stride[1], shape[2] * opt.stride[2], opt.dim]
else:
out_shape = [tf.shape(tensor)[0], (shape[1] - 1) * opt.stride[1] + opt.size[0],
(shape[2] - 1) * opt.stride[2] + opt.size[1], opt.dim]
# apply convolution
out = tf.nn.conv2d_transpose(tensor, w, output_shape=tf.stack(out_shape),
strides=opt.stride, padding=opt.pad) + b
# resetting the shape is needed because conv2d_transpose() erases all shape information.
# noinspection PyUnresolvedReferences
out.set_shape([None, out_shape[1], out_shape[2], opt.dim])
# squeeze
out = out.sg_squeeze(axis=2)
return out
@tf.sg_layer_func
def sg_espcn(tensor, opt):
r"""Applies a 2-D efficient sub pixel convolution.
(see [Shi et al. 2016](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf)
Args:
tensor: A 4-D `Tensor` (automatically passed by decorator).
opt:
size: A tuple/list of positive integers of length 2 representing `[kernel height, kernel width]`.
Can be an integer if both values are the same.
If not specified, (3, 3) is set implicitly.
stride: A tuple/list of positive integers of length 2 or 4 representing stride dimensions.
If the length is 2, i.e., (a, b), the stride is `[1, a, b, 1]`.
If the length is 4, i.e., (a, b, c, d), the stride is `[a, b, c, d]`.
Can be an integer `a`, in which case the stride is `[1, a, a, 1]`.
Default value is [1, 1, 1, 1].
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
pad: Either `SAME` (Default) or `VALID`.
bias: Boolean. If True, biases are added.
factor: A positive integer. The upscaling factor applied by the periodic shuffle. Default is 2.
regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor` with the same type as `tensor`.
"""
# default options
opt += tf.sg_opt(size=(3, 3), stride=(1, 1, 1, 1), pad='SAME', factor=2)
opt.size = opt.size if isinstance(opt.size, (tuple, list)) else [opt.size, opt.size]
opt.stride = opt.stride if isinstance(opt.stride, (tuple, list)) else [1, opt.stride, opt.stride, 1]
opt.stride = [1, opt.stride[0], opt.stride[1], 1] if len(opt.stride) == 2 else opt.stride
# parameter initialize
w = tf.sg_initializer.he_uniform('W', (opt.size[0], opt.size[1], opt.in_dim, opt.dim * opt.factor * opt.factor),
regularizer=opt.regularizer, summary=opt.summary)
b = tf.sg_initializer.constant('b', opt.dim * opt.factor * opt.factor, summary=opt.summary) if opt.bias else 0
# apply convolution
out = tf.nn.conv2d(tensor, w, strides=opt.stride, padding=opt.pad) + b
# apply periodic shuffle
out = out.sg_periodic_shuffle(factor=opt.factor)
return out
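The periodic-shuffle step rearranges channels into space: `(H, W, C*factor^2)` becomes `(H*factor, W*factor, C)`. A NumPy sketch of that depth-to-space move (assumed channel layout; not the library implementation):

```python
import numpy as np

def periodic_shuffle(x, factor):
    h, w, c = x.shape
    cc = c // (factor * factor)
    x = x.reshape(h, w, factor, factor, cc)        # split channels into an r x r block
    x = x.transpose(0, 2, 1, 3, 4)                 # interleave blocks with the spatial axes
    return x.reshape(h * factor, w * factor, cc)
```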
#
# RNN layers
#
def sg_emb(**kwargs):
r"""Returns a look-up table for embedding.
kwargs:
name: A name for the layer.
emb: A 2-D array (optional). If given, it is used as the embedding matrix.
If None, a matrix is initialized from `voca_size` and `dim`, so the
resulting tensor has the shape `[voca_size, dim]`.
Note that its first row is filled with 0's, associated with padding.
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
voca_size: A positive integer. The size of vocabulary.
summary: If True, summaries are added. The default is True.
Returns:
A 2-D `Tensor` of float32.
"""
opt = tf.sg_opt(kwargs)
assert opt.name is not None, 'name is mandatory.'
if opt.emb is None:
# initialize embedding matrix
assert opt.voca_size is not None, 'voca_size is mandatory.'
assert opt.dim is not None, 'dim is mandatory.'
w = tf.sg_initializer.he_uniform(opt.name, (opt.voca_size - 1, opt.dim), summary=opt.summary)
else:
# use given embedding matrix
w = tf.sg_initializer.external(opt.name, value=opt.emb, summary=opt.summary)
# the 1st row should stay zero and must not be updated by backprop, because it embeds zero padding.
emb = tf.concat([tf.zeros((1, opt.dim), dtype=tf.sg_floatx), w], 0)
return emb
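A NumPy sketch (illustration only) of the zero-row trick above: the trainable matrix has `voca_size - 1` rows, and a frozen all-zero row is prepended so index 0 always embeds padding as zeros.

```python
import numpy as np

def make_emb(w):
    # prepend a constant zero row; index 0 then always maps to the zero vector
    return np.concatenate([np.zeros((1, w.shape[1])), w], axis=0)
```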
# layer normalization for rnn
def _ln_rnn(x, gamma, beta):
r"""Applies layer normalization.
Normalizes the last dimension of the tensor `x`.
Args:
x: A `Tensor`.
gamma: A constant `Tensor`. Scale parameter. Default is 1.
beta: A constant `Tensor`. Offset parameter. Default is 0.
Returns:
A `Tensor` with the same shape as `x`.
"""
# calc layer mean, variance for final axis
mean, variance = tf.nn.moments(x, axes=[len(x.get_shape()) - 1], keep_dims=True)
# apply layer normalization
x = (x - mean) / tf.sqrt(variance + tf.sg_eps)
# apply parameter
return gamma * x + beta
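Numerically, `_ln_rnn` standardizes each vector along the last axis before scaling and shifting. The same computation in NumPy (a sketch, not the library code):

```python
import numpy as np

def ln_last_axis(x, gamma=1.0, beta=0.0, eps=1e-8):
    # normalize the last axis to zero mean and unit variance, then scale/shift
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```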
@tf.sg_rnn_layer_func
def sg_rnn(tensor, opt):
r"""Applies a simple rnn.
Args:
tensor: A 3-D `Tensor` (automatically passed by decorator).
opt:
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
bias: Boolean. If True, biases are added.
ln: Boolean. If True, layer normalization is applied.
init_state: A 2-D `Tensor`. If None, the initial state is set to zeros.
last_only: Boolean. If True, the outputs in the last time step are returned.
mask: A Boolean 2-D `Tensor` or None (default).
Values at False elements are excluded from the calculation,
so the outputs at those locations become 0.
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor`. If last_only is True, the output tensor has shape [batch size, dim].
Otherwise, [batch size, time steps, dim].
"""
# layer normalization
# noinspection PyPep8
ln = lambda v: _ln_rnn(v, gamma, beta) if opt.ln else v
# step function
def step(hh, x):
# simple rnn
y = ln(tf.matmul(x, w) + tf.matmul(hh, u) + (b if opt.bias else 0))
return y
# parameter initialize
w = tf.sg_initializer.orthogonal('W', (opt.in_dim, opt.dim), summary=opt.summary)
u = tf.sg_initializer.identity('U', opt.dim, summary=opt.summary)
if opt.bias:
b = tf.sg_initializer.constant('b', opt.dim, summary=opt.summary)
# layer normalization parameters
if opt.ln:
# offset, scale parameter
beta = tf.sg_initializer.constant('beta', opt.dim, summary=opt.summary)
gamma = tf.sg_initializer.constant('gamma', opt.dim, value=1, summary=opt.summary)
# initial state
init_h = opt.init_state if opt.init_state is not None \
else tf.zeros((tensor.get_shape().as_list()[0], opt.dim), dtype=tf.sg_floatx)
# do rnn loop
h, out = init_h, []
for i in range(tensor.get_shape().as_list()[1]):
# apply step func
h = step(h, tensor[:, i, :])
# save result
out.append(h.sg_expand_dims(axis=1))
# merge tensor
out = tf.concat(out, 1)
# apply mask
if opt.mask is None:
if opt.last_only:
return out[:, -1, :]
else:
return out
else:
# apply mask
out *= opt.mask.sg_expand_dims(axis=2).sg_float()
if opt.last_only:
# calc sequence length using given mask
seq_len = opt.mask.sg_int().sg_sum(axis=1)
# get last output
rev = tf.reverse_sequence(out, seq_len, seq_axis=1)
return rev[:, 0, :]
else:
return out
@tf.sg_rnn_layer_func
def sg_gru(tensor, opt):
r"""Applies a GRU.
Args:
tensor: A 3-D `Tensor` (automatically passed by decorator).
opt:
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
bias: Boolean. If True, biases are added.
ln: Boolean. If True, layer normalization is applied.
init_state: A 2-D `Tensor`. If None, the initial state is set to zeros.
last_only: Boolean. If True, the outputs in the last time step are returned.
mask: A Boolean 2-D `Tensor` or None (default).
Values at False elements are excluded from the calculation,
so the outputs at those locations become 0.
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor`. If last_only is True, the output tensor has shape [batch size, dim].
Otherwise, [batch size, time steps, dim].
"""
# layer normalization
# noinspection PyPep8
ln = lambda v: _ln_rnn(v, gamma, beta) if opt.ln else v
# step func
def step(hh, x):
# update gate
z = tf.sigmoid(ln(tf.matmul(x, w_z) + tf.matmul(hh, u_z) + (b_z if opt.bias else 0)))
# reset gate
r = tf.sigmoid(ln(tf.matmul(x, w_r) + tf.matmul(hh, u_r) + (b_r if opt.bias else 0)))
# h_hat
h_hat = tf.tanh(ln(tf.matmul(x, w_h) + tf.matmul(r * hh, u_h) + (b_h if opt.bias else 0)))
# final output
y = (1. - z) * h_hat + z * hh
return y
# parameter initialize
w_z = tf.sg_initializer.orthogonal('W_z', (opt.in_dim, opt.dim), summary=opt.summary)
u_z = tf.sg_initializer.identity('U_z', opt.dim, summary=opt.summary)
w_r = tf.sg_initializer.orthogonal('W_r', (opt.in_dim, opt.dim), summary=opt.summary)
u_r = tf.sg_initializer.identity('U_r', opt.dim, summary=opt.summary)
w_h = tf.sg_initializer.orthogonal('W_h', (opt.in_dim, opt.dim), summary=opt.summary)
u_h = tf.sg_initializer.identity('U_h', opt.dim, summary=opt.summary)
if opt.bias:
b_z = tf.sg_initializer.constant('b_z', opt.dim, summary=opt.summary)
b_r = tf.sg_initializer.constant('b_r', opt.dim, summary=opt.summary)
b_h = tf.sg_initializer.constant('b_h', opt.dim, summary=opt.summary)
# layer normalization parameters
if opt.ln:
# offset, scale parameter
beta = tf.sg_initializer.constant('beta', opt.dim, summary=opt.summary)
gamma = tf.sg_initializer.constant('gamma', opt.dim, value=1, summary=opt.summary)
# initial state
init_h = opt.init_state if opt.init_state is not None \
else tf.zeros((tensor.get_shape().as_list()[0], opt.dim), dtype=tf.sg_floatx)
# do rnn loop
h, out = init_h, []
for i in range(tensor.get_shape().as_list()[1]):
# apply step function
h = step(h, tensor[:, i, :])
# save result
# noinspection PyUnresolvedReferences
out.append(h.sg_expand_dims(axis=1))
# merge tensor
out = tf.concat(out, 1)
# apply mask
if opt.mask is None:
if opt.last_only:
return out[:, -1, :]
else:
return out
else:
# apply mask
out *= opt.mask.sg_expand_dims(axis=2).sg_float()
if opt.last_only:
# calc sequence length using given mask
seq_len = opt.mask.sg_int().sg_sum(axis=1)
# get last output
rev = tf.reverse_sequence(out, seq_len, seq_axis=1)
return rev[:, 0, :]
else:
return out
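Stripped of layer normalization and bias, the GRU `step` above is three matmuls and a convex blend between the candidate state and the previous state. A NumPy sketch (hypothetical weights, not the library code):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(hh, x, w_z, u_z, w_r, u_r, w_h, u_h):
    z = sigmoid(x @ w_z + hh @ u_z)             # update gate
    r = sigmoid(x @ w_r + hh @ u_r)             # reset gate
    h_hat = np.tanh(x @ w_h + (r * hh) @ u_h)   # candidate state
    return (1.0 - z) * h_hat + z * hh           # blend old state and candidate
```

With all-zero weights the gates sit at 0.5 and the candidate at 0, so the new state is exactly half the old one, which makes a handy smoke test.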
@tf.sg_rnn_layer_func
def sg_lstm(tensor, opt):
r"""Applies an LSTM.
Args:
tensor: A 3-D `Tensor` (automatically passed by decorator).
opt:
in_dim: A positive `integer`. The size of input dimension.
dim: A positive `integer`. The size of output dimension.
bias: Boolean. If True, biases are added.
ln: Boolean. If True, layer normalization is applied.
init_state: A 2-D `Tensor`. If None, the initial state is set to zeros.
last_only: Boolean. If True, the outputs in the last time step are returned.
mask: A Boolean 2-D `Tensor` or None (default).
Values at False elements are excluded from the calculation,
so the outputs at those locations become 0.
summary: If True, summaries are added. The default is True.
Returns:
A `Tensor`. If last_only is True, the output tensor has shape [batch size, dim].
Otherwise, [batch size, time steps, dim].
"""
# layer normalization
# noinspection PyPep8
ln = lambda v: _ln_rnn(v, gamma, beta) if opt.ln else v
# step func
def step(hh, cc, x):
# forget gate
f = tf.sigmoid(ln(tf.matmul(x, w_f) + tf.matmul(hh, u_f) + (b_f if opt.bias else 0)))
# input gate
ii = tf.sigmoid(ln(tf.matmul(x, w_i) + tf.matmul(hh, u_i) + (b_i if opt.bias else 0)))
# new cell value
c_new = tf.tanh(ln(tf.matmul(x, w_c) + tf.matmul(hh, u_c) + (b_c if opt.bias else 0)))
# out gate
o = tf.sigmoid(ln(tf.matmul(x, w_o) + tf.matmul(hh, u_o) + (b_o if opt.bias else 0)))
# cell update
cell = f * cc + ii * c_new
# final output
y = o * tf.tanh(cell)
return y, cell
# parameter initialize
w_i = tf.sg_initializer.orthogonal('W_i', (opt.in_dim, opt.dim), summary=opt.summary)
u_i = tf.sg_initializer.identity('U_i', opt.dim, summary=opt.summary)
w_f = tf.sg_initializer.orthogonal('W_f', (opt.in_dim, opt.dim), summary=opt.summary)
u_f = tf.sg_initializer.identity('U_f', opt.dim, summary=opt.summary)
w_o = tf.sg_initializer.orthogonal('W_o', (opt.in_dim, opt.dim), summary=opt.summary)
u_o = tf.sg_initializer.identity('U_o', opt.dim, summary=opt.summary)
w_c = tf.sg_initializer.orthogonal('W_c', (opt.in_dim, opt.dim), summary=opt.summary)
u_c = tf.sg_initializer.identity('U_c', opt.dim, summary=opt.summary)
if opt.bias:
b_i = tf.sg_initializer.constant('b_i', opt.dim, summary=opt.summary)
b_f = tf.sg_initializer.constant('b_f', opt.dim, summary=opt.summary)
b_o = tf.sg_initializer.constant('b_o', opt.dim, value=1, summary=opt.summary)
b_c = tf.sg_initializer.constant('b_c', opt.dim, summary=opt.summary)
# layer normalization parameters
if opt.ln:
# offset, scale parameter
beta = tf.sg_initializer.constant('beta', opt.dim, summary=opt.summary)
gamma = tf.sg_initializer.constant('gamma', opt.dim, value=1, summary=opt.summary)
# initial state
init_h = opt.init_state if opt.init_state is not None \
else tf.zeros((tensor.get_shape().as_list()[0], opt.dim), dtype=tf.sg_floatx)
# do rnn loop
h, c, out = init_h, init_h, []
for i in range(tensor.get_shape().as_list()[1]):
# apply step function
h, c = step(h, c, tensor[:, i, :])
# save result
out.append(h.sg_expand_dims(axis=1))
# merge tensor
out = tf.concat(out, 1)
# apply mask
if opt.mask is None:
if opt.last_only:
return out[:, -1, :]
else:
return out
else:
# apply mask
out *= opt.mask.sg_expand_dims(axis=2).sg_float()
if opt.last_only:
# calc sequence length using given mask
seq_len = opt.mask.sg_int().sg_sum(axis=1)
# get last output
rev = tf.reverse_sequence(out, seq_len, seq_axis=1)
return rev[:, 0, :]
else:
return out
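All three recurrent layers share the masked last-step selection: the sequence length is the mask sum, and `reverse_sequence` makes the last valid step land at index 0. The same selection written directly in NumPy (sketch only):

```python
import numpy as np

def last_valid_output(out, mask):
    seq_len = mask.astype(int).sum(axis=1)             # per-example sequence length
    return out[np.arange(out.shape[0]), seq_len - 1]   # last valid time step per example
```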
# ==== tests/test_dealer.py (repo: ccr5/BlackJack, license: MIT) ====
from classes.dealer import Dealer
from classes.cards import Cards
class TestDealer():
d = Dealer()
c = Cards()
def test_check_winner(self):
hand_1 = [self.c.matrix[0], self.c.matrix[9]]
hand_2 = [self.c.matrix[0], self.c.matrix[7]]
assert self.d.check_winner(hand_1, hand_2) == ('h1', 21, 19)
hand_1 = [self.c.matrix[0], self.c.matrix[7]]
hand_2 = [self.c.matrix[0], self.c.matrix[7]]
assert self.d.check_winner(hand_1, hand_2) == ('draw', 19, 19)
hand_1 = [self.c.matrix[0], self.c.matrix[7]]
hand_2 = [self.c.matrix[9], self.c.matrix[7], self.c.matrix[9]]
assert self.d.check_winner(hand_1, hand_2) == ('h1', 19, 28)
hand_1 = [self.c.matrix[9], self.c.matrix[7], self.c.matrix[9]]
hand_2 = [self.c.matrix[9], self.c.matrix[7], self.c.matrix[9]]
assert self.d.check_winner(hand_1, hand_2) == ('no win', 28, 28)
# ==== tests/test_pyscanner.py (repo: YaokaiYang-assaultmaster/py3PortScanner, license: Apache-2.0) ====
import unittest
import collections
from unittest.mock import Mock, patch
import socket
from socket import error as socket_error
from concurrent import futures
from os import sys, path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
from pyportscanner import pyscanner
from pyportscanner.etc.service_port import ServicePort
@patch('pyportscanner.pyscanner.socket', autospec=True)
@patch('pyportscanner.pyscanner.read_input', autospec=True)
class PortScannerTest(unittest.TestCase):
def setUp(self):
self.target_ports = [80, 443]
self.thread_limit = 100
self.timeout = 10
port_80 = ServicePort('HTTP', 80, 'TCP', 0.1)
port_443 = ServicePort('TLS', 443, 'TCP', 0.09)
self.mock_port_list = {
80: port_80,
443: port_443,
}
self.test_ip = 'test_ip_address'
self.test_host = 'http://test_domain.com'
self.test_domain = 'test_domain.com'
def test_init_func(self, mock_read_input, mock_socket):
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
self.assertIsNotNone(scanner)
self.assertEqual(scanner.get_target_ports(), self.target_ports)
self.assertEqual(scanner.thread_limit, self.thread_limit)
self.assertEqual(scanner.timeout_val, self.timeout)
def test_extract_list_success(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
result = scanner.extract_list(2)
self.assertEqual(result, [80, 443])
result = scanner.extract_list(1)
self.assertEqual(result, [80])
def test_extract_list_error(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
self.assertRaises(ValueError, scanner.extract_list, -1)
def test_get_target_ports(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
result = scanner.get_target_ports()
self.assertIsNotNone(result)
self.assertEqual(result, self.target_ports)
def test_get_top_k_ports(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
result = scanner.get_top_k_ports(2)
self.assertIsNotNone(result)
self.assertEqual(result, self.target_ports)
def test_scan_input_ip_success(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_socket.inet_aton.return_value = None
mock_socket.gethostbyname.return_value = self.test_ip
mock_scan_results = {
80: 'OPEN',
443: 'CLOSE',
}
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
# private instance methods in python are name mangled
# see https://docs.python.org/3.5/tutorial/classes.html#private-variables
scanner._PortScanner__scan_ports = Mock(return_value=mock_scan_results)
result = scanner.scan(self.test_ip)
self.assertEqual(result, mock_scan_results)
mock_socket.gethostbyname.assert_called_once_with(self.test_ip)
def test_scan_os_error_success(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_socket.inet_aton.side_effect = OSError
mock_socket.gethostbyname.return_value = self.test_ip
mock_scan_results = {
80: 'OPEN',
443: 'CLOSE',
}
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
# private instance methods in python are name mangled
# see https://docs.python.org/3.5/tutorial/classes.html#private-variables
scanner._PortScanner__scan_ports = Mock(return_value=mock_scan_results)
result = scanner.scan(self.test_host)
self.assertEqual(result, mock_scan_results)
scanner._PortScanner__scan_ports.assert_called_once_with(self.test_ip, '')
mock_socket.gethostbyname.assert_called_once_with(self.test_domain)
def test_scan_socket_error_success(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_socket.inet_aton.side_effect = socket_error
mock_socket.gethostbyname.return_value = self.test_ip
mock_scan_results = {
80: 'OPEN',
443: 'CLOSE',
}
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
# private instance methods in python are name mangled
# see https://docs.python.org/3.5/tutorial/classes.html#private-variables
scanner._PortScanner__scan_ports = Mock(return_value=mock_scan_results)
result = scanner.scan(self.test_host)
self.assertEqual(result, mock_scan_results)
scanner._PortScanner__scan_ports.assert_called_once_with(self.test_ip, '')
mock_socket.gethostbyname.assert_called_once_with(self.test_domain)
def test_scan_server_unknown(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_socket.gethostbyname.side_effect = socket_error
mock_socket.inet_aton.side_effect = socket_error
mock_scan_results = {
80: 'OPEN',
443: 'CLOSE',
}
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
# private instance methods in python are name mangled
# see https://docs.python.org/3.5/tutorial/classes.html#private-variables
scanner._PortScanner__scan_ports = Mock(return_value=mock_scan_results)
result = scanner.scan(self.test_host)
self.assertEqual(result, {})
scanner._PortScanner__scan_ports.assert_not_called()
mock_socket.gethostbyname.assert_called_once_with(self.test_domain)
@patch('pyportscanner.pyscanner.concurrent.futures.ThreadPoolExecutor', autospec=True)
def test_scan_ports_success(self, mock_executor, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_future1 = Mock(spec=futures.Future)
mock_future1.done.return_value = True
mock_future1.result.return_value = (80, 'OPEN')
mock_future2 = Mock(spec=futures.Future)
mock_future2.done.side_effect = [False, True]
mock_future2.result.return_value = (443, 'OPEN')
mock_executor.return_value.__enter__.return_value.submit.side_effect = [mock_future1, mock_future2]
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
result = scanner._PortScanner__scan_ports(self.test_ip, '')
self.assertEqual(result, {80: 'OPEN', 443: 'OPEN'})
@patch('pyportscanner.pyscanner.concurrent.futures.ThreadPoolExecutor', autospec=True)
def test_scan_ports_exception(self, mock_executor, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_future1 = Mock(spec=futures.Future)
mock_future1.done.return_value = True
mock_future1.result.return_value = (80, 'OPEN')
mock_future2 = Mock(spec=futures.Future)
mock_future2.done.side_effect = [False, True]
mock_future2.result.side_effect = socket_error
mock_executor.return_value.__enter__.return_value.submit.side_effect = [mock_future1, mock_future2]
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
result = scanner._PortScanner__scan_ports(self.test_ip, '')
self.assertEqual(result, {80: 'OPEN', 443: 'CLOSE'})
@patch('pyportscanner.pyscanner.concurrent.futures.ThreadPoolExecutor', autospec=True)
def test_scan_ports_thread_limit(self, mock_executor, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_future1 = Mock(spec=futures.Future)
mock_future1.done.return_value = True
mock_future1.result.return_value = (80, 'OPEN')
mock_future2 = Mock(spec=futures.Future)
mock_future2.done.side_effect = [False, True]
mock_future2.result.side_effect = socket_error
mock_executor.return_value.__enter__.return_value.submit.side_effect = [mock_future1, mock_future2]
scanner = pyscanner.PortScanner(self.target_ports, 1, self.timeout)
result = scanner._PortScanner__scan_ports(self.test_ip, '')
self.assertEqual(result, {80: 'OPEN', 443: 'CLOSE'})
def test_check_futures_success(self, mock_read_input, mock_socket):
mock_read_input.return_value = self.mock_port_list
mock_future1 = Mock(spec=futures.Future)
mock_future1.done.return_value = True
mock_future1.result.return_value = (80, 'OPEN')
mock_future2 = Mock(spec=futures.Future)
mock_future2.done.return_value = False
test_futures = collections.deque()
test_futures.append(mock_future1)
test_futures.append(mock_future2)
test_output = {
80: 'CLOSE',
443: 'CLOSE',
}
scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
        scanner._PortScanner__check_futures(test_output, test_futures)
        self.assertEqual(len(test_futures), 1)
        self.assertEqual(test_output, {80: 'OPEN', 443: 'CLOSE'})

    @patch('pyportscanner.pyscanner.platform', autospec=True)
    def test_TCP_connect_open(self, mock_platform, mock_read_input, mock_socket):
        test_message = 'test_message_djiqojiocn'
        mock_platform.system.return_value = 'Linux'
        mock_tcp_socket = Mock(spec=socket.socket)
        mock_tcp_socket.setsockopt.return_value = None
        mock_tcp_socket.settimeout.return_value = None
        # assume the port is open
        mock_tcp_socket.connect_ex.return_value = 0
        mock_tcp_socket.sendall.return_value = None
        mock_tcp_socket.close.return_value = None
        mock_udp_socket = Mock(spec=socket.socket)
        mock_udp_socket.sendto.return_value = None
        mock_socket.socket.side_effect = [mock_tcp_socket, mock_udp_socket]
        scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
        result = scanner._PortScanner__TCP_connect(self.test_ip, 80, test_message)
        self.assertEqual(result, (80, 'OPEN'))
        mock_tcp_socket.connect_ex.assert_called_once_with((self.test_ip, 80))
        mock_tcp_socket.sendall.assert_called_once_with(test_message.encode('utf8'))
        mock_tcp_socket.close.assert_called_once_with()
        mock_tcp_socket.settimeout.assert_called_once_with(self.timeout)
        mock_udp_socket.sendto.assert_called_once_with(test_message.encode('utf8'), (self.test_ip, 80))
        mock_udp_socket.close.assert_called_once_with()

    @patch('pyportscanner.pyscanner.platform', autospec=True)
    def test_TCP_connect_close(self, mock_platform, mock_read_input, mock_socket):
        test_message = 'test_message_djiqojiocn'
        mock_platform.system.return_value = 'Linux'
        mock_tcp_socket = Mock(spec=socket.socket)
        mock_tcp_socket.setsockopt.return_value = None
        mock_tcp_socket.settimeout.return_value = None
        # assume the port is closed
        mock_tcp_socket.connect_ex.return_value = 1
        mock_tcp_socket.sendall.return_value = None
        mock_tcp_socket.close.return_value = None
        mock_udp_socket = Mock(spec=socket.socket)
        mock_udp_socket.sendto.return_value = None
        mock_socket.socket.side_effect = [mock_tcp_socket, mock_udp_socket]
        scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
        result = scanner._PortScanner__TCP_connect(self.test_ip, 80, test_message)
        self.assertEqual(result, (80, 'CLOSE'))
        mock_tcp_socket.connect_ex.assert_called_once_with((self.test_ip, 80))
        mock_tcp_socket.sendall.assert_not_called()
        mock_tcp_socket.close.assert_called_once_with()
        mock_tcp_socket.settimeout.assert_called_once_with(self.timeout)
        mock_udp_socket.sendto.assert_called_once_with(test_message.encode('utf8'), (self.test_ip, 80))
        mock_udp_socket.close.assert_called_once_with()

    @patch('pyportscanner.pyscanner.platform', autospec=True)
    def test_TCP_connect_socket_error(self, mock_platform, mock_read_input, mock_socket):
        test_message = 'test_message_djiqojiocn'
        mock_platform.system.return_value = 'Linux'
        mock_tcp_socket = Mock(spec=socket.socket)
        mock_tcp_socket.setsockopt.return_value = None
        mock_tcp_socket.settimeout.return_value = None
        # assume connect_ex raises a socket error, which is reported as a closed port
        mock_tcp_socket.connect_ex.side_effect = socket_error
        mock_tcp_socket.sendall.return_value = None
        mock_tcp_socket.close.return_value = None
        mock_udp_socket = Mock(spec=socket.socket)
        mock_udp_socket.sendto.return_value = None
        mock_socket.socket.side_effect = [mock_tcp_socket, mock_udp_socket]
        scanner = pyscanner.PortScanner(self.target_ports, self.thread_limit, self.timeout)
        result = scanner._PortScanner__TCP_connect(self.test_ip, 80, test_message)
        self.assertEqual(result, (80, 'CLOSE'))
        mock_tcp_socket.connect_ex.assert_called_once_with((self.test_ip, 80))
        mock_tcp_socket.close.assert_called_once_with()
        mock_tcp_socket.settimeout.assert_called_once_with(self.timeout)
        mock_udp_socket.sendto.assert_called_once_with(test_message.encode('utf8'), (self.test_ip, 80))
        mock_udp_socket.close.assert_called_once_with()
fe6d11f309b8a48c32063e3dbd7660fc7f430805 | 16,234 | py | Python | i3Deep/eval_and_plot/uncertainty_coverage.py | Karol-G/nnUNet | a30bdbd64254c94c515ee03617173eb217eea505 | [
"Apache-2.0"
] | 2 | 2022-03-18T12:49:28.000Z | 2022-03-24T14:39:20.000Z | i3Deep/eval_and_plot/uncertainty_coverage.py | Karol-G/nnUNet | a30bdbd64254c94c515ee03617173eb217eea505 | [
"Apache-2.0"
] | null | null | null | i3Deep/eval_and_plot/uncertainty_coverage.py | Karol-G/nnUNet | a30bdbd64254c94c515ee03617173eb217eea505 | [
"Apache-2.0"
] | null | null | null |
import numpy as np
from collections import defaultdict
from i3Deep import utils
from sklearn.metrics import confusion_matrix
from tqdm import tqdm
import pickle
import multiprocessing as mp
from functools import partial
import matplotlib.pyplot as plt
import argparse
from matplotlib.lines import Line2D

def comp_uncertainty_coverage(uncertainty_dir, prediction_dir, gt_dir, thresholds, parallel=True, resize=True, target_shape=(256, 256, 50)):
    uncertainty_dir = utils.fix_path(uncertainty_dir)
    prediction_dir = utils.fix_path(prediction_dir)
    gt_dir = utils.fix_path(gt_dir)
    uncertainty_filenames = utils.load_filenames(uncertainty_dir)
    prediction_filenames = utils.load_filenames(prediction_dir)
    gt_filenames = utils.load_filenames(gt_dir)
    U_threshold = defaultdict(lambda: defaultdict(int))
    if parallel:
        pool = mp.Pool(processes=10)
    if not parallel:
        results = []
        for i in tqdm(range(len(uncertainty_filenames)), desc="Case"):
            U_threshold_single = comp_uncertainty_coverage_single(i, uncertainty_filenames, prediction_filenames, gt_filenames, thresholds, resize, target_shape)
            results.append(U_threshold_single)
    else:
        results = pool.map(partial(comp_uncertainty_coverage_single, uncertainty_filenames=uncertainty_filenames, prediction_filenames=prediction_filenames,
                                   gt_filenames=gt_filenames, thresholds=thresholds, resize=resize, target_shape=target_shape), range(len(uncertainty_filenames)))
    results = np.asarray(results)
    U_t_threshold = results[:, 0]
    U_f_threshold = results[:, 1]
    M = results[:, 2, 0]
    M = np.sum(M)
    for i in range(len(uncertainty_filenames)):
        for j, threshold in enumerate(thresholds):
            U_threshold[threshold]["U_t"] += U_t_threshold[i][j]
            U_threshold[threshold]["U_f"] += U_f_threshold[i][j]
    if parallel:
        pool.close()
        pool.join()
    U_t, U_f = 0, 0
    UC_threshold = []
    for threshold in thresholds:
        U_t += U_threshold[threshold]["U_t"]
        U_f += U_threshold[threshold]["U_f"]
    for threshold in thresholds:
        # UC_threshold.append((U_threshold[threshold]["U_t"]**2) / (U_threshold[threshold]["U_f"] * U_t))
        UC_threshold.append((U_threshold[threshold]["U_t"] / U_threshold[threshold]["U_f"]) / (U_threshold[threshold]["U_t"] / M))
    UC = np.sum(UC_threshold)
    return {"UC": UC, "UC_threshold": UC_threshold, "Thresholds": thresholds}

def comp_uncertainty_coverage_single(i, uncertainty_filenames, prediction_filenames, gt_filenames, thresholds, resize, target_shape):
    uncertainty = utils.load_nifty(uncertainty_filenames[i])[0]
    prediction = utils.load_nifty(prediction_filenames[i])[0]
    ground_truth = utils.load_nifty(gt_filenames[i])[0]
    if resize:
        uncertainty = utils.interpolate(uncertainty, target_shape, mask=False)
        prediction = utils.interpolate(prediction, target_shape, mask=True)
        ground_truth = utils.interpolate(ground_truth, target_shape, mask=True)
    prediction = np.rint(prediction).flatten().astype(int)
    ground_truth = np.rint(ground_truth).flatten().astype(int)
    prediction, ground_truth = guard_input(prediction, ground_truth)
    missclassification = comp_missclassification(prediction, ground_truth)
    M = np.sum(missclassification)
    U_t_threshold, U_f_threshold = [], []
    for threshold in thresholds:
        U_t, U_f = comp_uncertainty_true_false(threshold, uncertainty, missclassification)
        U_t_threshold.append(U_t)
        U_f_threshold.append(U_f)
    return U_t_threshold, U_f_threshold, [M] * len(thresholds)

def comp_uncertainty_true_false(threshold, _uncertainty, missclassification):
    uncertainty = (_uncertainty > threshold).flatten().astype(int)
    uncertainty, missclassification = guard_input(uncertainty, missclassification)
    _, U_f, _, U_t = confusion_matrix(missclassification, uncertainty).ravel()
    U_t, U_f = float(U_t), float(U_f)
    return U_t, U_f

def comp_missclassification(prediction, ground_truth):
    missclassification = np.zeros_like(prediction)
    missclassification[prediction != ground_truth] = 1
    return missclassification

def guard_input(uncertainty, ground_truth):
    """The metrics F1, MCC and AP all have rare edge cases that result in a NaN output.
    For example, if all ground truth labels and binarized uncertainties are positive, there are no true negatives
    and no false negatives, which leads to a square root of zero for the MCC score; similar degenerate cases exist
    for the other metrics.
    This method changes the value of 4 pixels in the ground truth and the binarized uncertainty to ensure that
    there is always at least one TP, TN, FP and FN.
    The influence of this modification on the metric results is negligible."""
    uncertainty[0] = 1
    uncertainty[1] = 0
    uncertainty[2] = 1
    uncertainty[3] = 0
    ground_truth[0] = 1
    ground_truth[1] = 0
    ground_truth[2] = 0
    ground_truth[3] = 1
    return uncertainty, ground_truth

def plot_results_combined(load_dir):
    def load(uncertainty_quantification, uncertainty_measure):
        with open(load_dir + 'UC_{}_{}.pkl'.format(uncertainty_quantification, uncertainty_measure), 'rb') as handle:
            results = pickle.load(handle)
        return {"UC_threshold": results["UC_threshold"], "Thresholds": results["Thresholds"]}

    fig, ax = plt.subplots()
    results = load("ensemble", "bhattacharyya_coefficient")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="r", label="UC")
    legend_ensemble = Line2D([0, 1], [0, 1], linestyle='-', color="r")
    results = load("mcdo", "bhattacharyya_coefficient")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="g", label="UC")
    legend_mcdo = Line2D([0, 1], [0, 1], linestyle='-', color="g")
    results = load("tta", "bhattacharyya_coefficient")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="b", label="UC")
    legend_tta = Line2D([0, 1], [0, 1], linestyle='-', color="b")
    ax.legend([legend_ensemble, legend_mcdo, legend_tta], ["Ensemble", "MC-Dropout", "TTA"], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
    plt.ylim(0, 0.1)
    plt.title("Uncertainty coverage (Bhattacharyya coefficient)")
    plt.savefig(load_dir + "UC_BC.png", bbox_inches='tight')
    plt.clf()

    fig, ax = plt.subplots()
    results = load("ensemble", "predictive_entropy")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="r", label="UC")
    legend_ensemble = Line2D([0, 1], [0, 1], linestyle='-', color="r")
    results = load("mcdo", "predictive_entropy")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="g", label="UC")
    legend_mcdo = Line2D([0, 1], [0, 1], linestyle='-', color="g")
    results = load("tta", "predictive_entropy")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="b", label="UC")
    legend_tta = Line2D([0, 1], [0, 1], linestyle='-', color="b")
    ax.legend([legend_ensemble, legend_mcdo, legend_tta], ["Ensemble", "MC-Dropout", "TTA"], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
    plt.ylim(0, 0.1)
    plt.title("Uncertainty coverage (Predictive entropy)")
    plt.savefig(load_dir + "UC_E.png", bbox_inches='tight')
    plt.clf()

    fig, ax = plt.subplots()
    results = load("ensemble", "predictive_variance")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="r", label="UC")
    legend_ensemble = Line2D([0, 1], [0, 1], linestyle='-', color="r")
    results = load("mcdo", "predictive_variance")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="g", label="UC")
    legend_mcdo = Line2D([0, 1], [0, 1], linestyle='-', color="g")
    results = load("tta", "predictive_variance")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="b", label="UC")
    legend_tta = Line2D([0, 1], [0, 1], linestyle='-', color="b")
    ax.legend([legend_ensemble, legend_mcdo, legend_tta], ["Ensemble", "MC-Dropout", "TTA"], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
    plt.ylim(0, 0.1)
    plt.title("Uncertainty coverage (Predictive variance)")
    plt.savefig(load_dir + "UC_V.png", bbox_inches='tight')
    plt.clf()

def plot_results_combined2(load_dir):
    def load(uncertainty_quantification, uncertainty_measure):
        with open(load_dir + 'UC_{}_{}.pkl'.format(uncertainty_quantification, uncertainty_measure), 'rb') as handle:
            results = pickle.load(handle)
        return {"UC_threshold": results["UC_threshold"], "Thresholds": results["Thresholds"]}

    fig, ax = plt.subplots()
    results = load("ensemble", "bhattacharyya_coefficient")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="r", label="UC")
    legend_BC = Line2D([0, 1], [0, 1], linestyle='-', color="r")
    results = load("ensemble", "predictive_entropy")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="g", label="UC")
    legend_entropy = Line2D([0, 1], [0, 1], linestyle='-', color="g")
    results = load("ensemble", "predictive_variance")
    plt.plot(results["Thresholds"], results["UC_threshold"], linestyle="-", color="b", label="UC")
    legend_variance = Line2D([0, 1], [0, 1], linestyle='-', color="b")
    ax.legend([legend_BC, legend_entropy, legend_variance], ["BC", "Entropy", "Variance"], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
    # plt.ylim(0, 0.1)
    plt.title("Uncertainty coverage")
    plt.savefig(load_dir + "UC.png", bbox_inches='tight')
    plt.clf()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-t", "--task", help="Task", required=True)
    parser.add_argument("-s", "--set", help="val/test", required=True)
    args = parser.parse_args()

    thresholds = np.arange(0.0, 1.0, 0.03)
    uqs = ["ensemble"]
    ums = ["bhattacharyya_coefficient", "predictive_entropy", "predictive_variance"]
    task = args.task

    for uq in uqs:
        for um in ums:
            base_path = "/gris/gris-f/homelv/kgotkows/datasets/nnUnet_datasets/nnUNet_raw_data/nnUNet_raw_data/" + task + "/"
            uncertainty_dir = base_path + "refinement_" + args.set + "/uncertainties/" + uq + "/" + um + "/"
            prediction_dir = base_path + "refinement_" + args.set + "/basic_predictions/"
            gt_dir = base_path + "refinement_" + args.set + "/labels/"
            save_dir = base_path + "refinement_" + args.set + "/"
            name = save_dir + "uncertainty_evaluation/UC_" + uq + "_" + um

            results = comp_uncertainty_coverage(uncertainty_dir, prediction_dir, gt_dir, thresholds)
            print("UQ: {}, UM: {}, UC: {}".format(uq, um, results["UC"]))

            with open(name + ".pkl", 'wb') as handle:
                pickle.dump(results, handle, protocol=pickle.HIGHEST_PROTOCOL)

    plot_results_combined2("/gris/gris-f/homelv/kgotkows/datasets/nnUnet_datasets/nnUNet_raw_data/nnUNet_raw_data/" + task + "/refinement_" + args.set + "/uncertainty_evaluation/")
fe6e270d93fbac00624f576c61bbcc3165d9b8d1 | 130 | py | Python | optics/diffraction/__init__.py | bodokaiser/optics | fb1b95e0c5c54fd74f00e112151051e9d0b4f906 | [
"Apache-2.0"
] | null | null | null | optics/diffraction/__init__.py | bodokaiser/optics | fb1b95e0c5c54fd74f00e112151051e9d0b4f906 | [
"Apache-2.0"
] | null | null | null | optics/diffraction/__init__.py | bodokaiser/optics | fb1b95e0c5c54fd74f00e112151051e9d0b4f906 | [
"Apache-2.0"
] | null | null | null |
from optics.diffraction.urey import Diffraction as UreyDiffraction
from optics.diffraction.li import Diffraction as LiDiffraction
227d8fda0d01ada1e42c681dac5bed15c3a650ca | 4,128 | py | Python | cvlog/html_template.py | vinooniv/opencv-log | a35df38ded4f90488efe99a0e60ff028e8477024 | [
"MIT"
] | null | null | null | cvlog/html_template.py | vinooniv/opencv-log | a35df38ded4f90488efe99a0e60ff028e8477024 | [
"MIT"
] | null | null | null | cvlog/html_template.py | vinooniv/opencv-log | a35df38ded4f90488efe99a0e60ff028e8477024 | [
"MIT"
] | null | null | null | CONTENT_START = '<body><div class="log-menu"><img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAA7EAAAOxAGVKw4bAAACYUlEQVRYhc2XP2jWQBjGf+/Hx2eQCiJOgiLiIB2kOIiDiHQQERfnLn4grrqIopODCKKDjrq4WBFpRfEfOgkqFQQ3cXBwcPgQaRW0ltrmcciFxjPJXeKH+EBILnf33u8uT95cjBJJ6gBHgD6wGei5qk6hWVpyfg0cM7P3ZXGjJKkn6ZHaa07S3tjxrATgHHDaFd8BD4F5N8Nld04LXVJgDXAcWO3uLQCHzOxxm9nPuZm8kJQ06HvNW4kfkg6E+nW88jZgrbu+ZGYLDfhnvXICTIUgfIBi2Q8Y0lLJvRxifyxAUWlNXSxADnFH0nhTgKqAVfpZU5cAdyWNNQFoqpBfRoDbvrGHCXCfLBF9AD4CA+ATv3tpK7C9MoKkscJrtGtYZJIOF+LuKdYNcwXq9LKqohvqKakLnAU2kQHnR5nSwjELnDGzb9QYug4gfw13s5Kam+oVMEnNKx1cAWAGuE62AjHtIZvxZ+CJK7cHcOm4HzlwY/0rE1YqxoQJcJlsYxJqvwRMmdnVoQEAO4GjsQGBUcAHaOWB/PHMABcpN6G/LVsEHkRhRgAAYGaLwIkmQZvovzZhB0DSCHAT2FJoX5cJITPjAJgws0FbgFw7gIMR7XyNAuNkmfCvAJ4DJ4GNDQG+ANPuuvJRBx+BmaXAhYaDR6vOhMM0aOVE/UGKCWP9EAHiYklKJH11O5dnknrhXsGYHUm3XMxlSRuK9WW/ZueBU674FrgHfGdlUxHaruer2gVWAfvI0jnApJlNhIgTSU/b/5tW6o2kdf54f6yAg+iSfYD6ZAmorSHzhHQDuGJm836DXyc+jBXxLgUYAAAAAElFTkSuQmCC" alt=""> </div><div class="log-detail"> <div id="log-detail"> <h2 id="ld_title"></h2> <h3 id="ld_head1" style="margin-top: 35px;"></h3> <div id="ld_log_data"></div><h3 id="ld_head2" style="margin-top: 35px;"></h3> <div><pre id="stack_trace"></pre></div></div></div><div class="log-list"> <h1 class="title"> Logs</h1>'
CONTENT_END = '</div></body></html>'
NO_DATA_CONTENT = '<div id="no-data">No data logged</div>'
SCRIPT = '<script>var selected_id; function show_data(id){var menu=document.getElementById(id); data=JSON.parse(menu.getAttribute("data")); document.getElementById("ld_title").innerHTML=data["time_stamp"]; document.getElementById("ld_head2").innerHTML="Stack Trace:"; document.getElementById("stack_trace").innerHTML=data["long_stack"]; document.getElementById("ld_log_data").innerHTML=menu.getAttribute("logdata"); document.getElementById("ld_head1").innerHTML="Log Detail:"; if(selected_id && selected_id !=id){var prev=document.getElementById(selected_id); prev.className="log-item"}menu.className="log-item selected"; selected_id=id}</script>'
STYLE = '<style>body{width: 100%; height: 100%; margin: 0px; font-family:Arial, Helvetica, sans-serif;}.log-menu{height: 100%;width: 3%;float: left;background-color: #32324a;text-align: center;}.log-menu>img{height: 25px; padding-top: 25px;}.log-list{width:30%;height: 100%;float: left; background-color: #f3f3f3; overflow: scroll;}.log-detail{width:67%;height: 100%;float: right; overflow: scroll;}.log-detail>div{padding: 30px 15px;height: 100%;}h2{font-family: Trebuchet MS, Lucida Grande, Lucida Sans Unicode, Lucida Sans, Tahoma, sans-serif;font-size: 22px;font-style: normal;font-variant: normal;font-weight: 700;line-height: 20px; color: #333;}.log-item{padding: 5px 15px;border-bottom: 1px solid #ddd; margin-left: 10px;}.title{padding: 5px 20px;}.show{display: block;}.hide{display: none;}.log-detail img{max-width:100%; margin-top: 10px; border: 1px solid #888;}.stack{font-style: italic; margin-bottom: 10px;}.selected{margin-left: 6px;background-color: white;border-top-left-radius: 10px;border-bottom-left-radius: 10px;box-shadow: -2px 2px 4px #ddd;border-bottom: none;}h1{font-family: Trebuchet MS, Lucida Grande, Lucida Sans Unicode, Lucida Sans, Tahoma, sans-serif; font-size: 30px; font-style: normal; font-variant: normal; font-weight: 700; line-height: 26.4px;}h3{font-family: Trebuchet MS, Lucida Grande, Lucida Sans Unicode, Lucida Sans, Tahoma, sans-serif; font-size: 14px; font-style: normal; font-variant: normal; font-weight: 700; line-height: 15.4px;}p{font-size: 14px; font-style: normal; font-variant: normal; font-weight: 400; line-height: 20px;}span.level{color: #fff; border: 1px solid transparent; padding: 2px 12px;float: right; margin: -5px 5px;border-radius: 5px;}.info{background-color: #039BE5;}.trace{background-color: #757575;}.error{background-color: #e53935;}pre{font-size: 13px;line-height: 1.3em;padding: 0px;text-align: justify;width: 100%;padding-left: 5px;white-space: pre-wrap;background: #fafafa;}#no-data{margin: 10px;border: 1px solid transparent;padding: 5px;border-radius: 5px;background: #FFE082;color: #555;}</style>'
a3a83189cf46cf487d564b9c739f5688d1f0f2f6 | 234 | py | Python | website/moneybird_accounting/admin/project_admin.py | JobDoesburg/landolfio | 4cbf31c2e6f93745f5aa0d20893bf20f3acecc6e | [
"MIT"
] | 1 | 2021-02-24T14:33:09.000Z | 2021-02-24T14:33:09.000Z | website/moneybird_accounting/admin/project_admin.py | JobDoesburg/landolfio | 4cbf31c2e6f93745f5aa0d20893bf20f3acecc6e | [
"MIT"
] | 2 | 2022-01-13T04:03:38.000Z | 2022-03-12T01:03:10.000Z | website/moneybird_accounting/admin/project_admin.py | JobDoesburg/landolfio | 4cbf31c2e6f93745f5aa0d20893bf20f3acecc6e | [
"MIT"
] | null | null | null |
from django.contrib import admin
from import_export.admin import ImportExportModelAdmin
from moneybird_accounting.models.project import Project


@admin.register(Project)
class MoneybirdProductAdmin(ImportExportModelAdmin):
    pass
4308dec1cffc1029fe00e5685a2b14452c918edb | 2,818 | py | Python | tests/validators/test_in_range.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | 18 | 2017-01-09T22:17:49.000Z | 2022-01-24T20:46:42.000Z | tests/validators/test_in_range.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | 84 | 2017-04-11T13:47:49.000Z | 2022-03-21T20:12:57.000Z | tests/validators/test_in_range.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | 43 | 2017-01-05T20:40:26.000Z | 2022-03-31T19:18:02.000Z | """Test the in_range validator."""
# first-party
from tcex.validators import ValidationError, in_range


class TestInRange:
    """Test the in_range validator."""

    @staticmethod
    def test_happy_path_string():
        """Happy path test with a string"""
        validator = in_range(0, 100)
        try:
            validator(10, 'test_arg', 'Test Arg')
        except ValidationError:
            assert False, 'in_range threw exception when it should have passed'

    @staticmethod
    def test_happy_path_string_array():
        """Happy path test with a string array"""
        validator = in_range(0, 100)
        try:
            validator([10, 15, 20], 'test_arg', 'Test Arg')
        except ValidationError:
            assert False, 'in_range threw exception when it should have passed'

    @staticmethod
    def test_boundaries():
        """Test values on the min and max boundaries"""
        validator = in_range(0, 100)
        try:
            validator([0, 100], 'test_arg', 'Test Arg')
        except ValidationError:
            assert False, 'in_range threw exception when it should have passed'

    @staticmethod
    def test_fail_lower_string():
        """Test failure because of lower bound."""
        validator = in_range(0, 100)
        try:
            validator(-1, 'test_arg', 'Test Arg')
            assert False, 'Validator should have failed!'
        except ValidationError as v:
            assert (
                v.message == '"Test Arg" (test_arg) is not between 0 and 100.'
            ), 'Validator failed with incorrect message'

    @staticmethod
    def test_fail_upper_string():
        """Test failure because of upper bound."""
        validator = in_range(0, 100)
        try:
            validator(101, 'test_arg', 'Test Arg')
            assert False, 'Validator should have failed!'
        except ValidationError as v:
            assert (
                v.message == '"Test Arg" (test_arg) is not between 0 and 100.'
            ), 'Validator failed with incorrect message'

    @staticmethod
    def test_fail_string_array():
        """Test failure in an array."""
        validator = in_range(0, 100)
        try:
            validator([101, 100], 'test_arg', 'Test Arg')
            assert False, 'Validator should have failed!'
        except ValidationError as v:
            assert (
                v.message == '"Test Arg" (test_arg) is not between 0 and 100.'
            ), 'Validator failed with incorrect message'

    @staticmethod
    def test_allow_none_true():
        """Test allow_none."""
        validator = in_range(0, 100, allow_none=True)
        try:
            validator([None, 100], 'test_arg', 'Test Arg')
        except ValidationError:
            assert False, 'in_range threw exception when it should have passed'
4323a53ba66a6117c76a3c43d918496d41084317 | 29 | py | Python | empiricaldist/__init__.py | dbready/empiricaldist | 1d3dcfdcb68d4524e18722d87eb106cc34b7f4a4 | [
"BSD-3-Clause"
] | 81 | 2019-07-16T18:01:54.000Z | 2022-03-11T09:14:00.000Z | empiricaldist/__init__.py | AllenDowney/empyre | 6e071092bdf4312345e4e0b0b3bf11eef3c926f5 | [
"BSD-3-Clause"
] | 18 | 2019-04-10T14:47:16.000Z | 2019-04-11T14:24:44.000Z | empiricaldist/__init__.py | AllenDowney/empyre | 6e071092bdf4312345e4e0b0b3bf11eef3c926f5 | [
"BSD-3-Clause"
] | 22 | 2020-01-09T04:32:51.000Z | 2022-02-28T04:18:43.000Z | from .empiricaldist import *